<div dir="ltr">Thanks, I saw that. ASM converges slower than SOR :(</div><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Jul 3, 2016 at 6:50 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
Here is the problem:

    if (!sz) {
      IS nis;
      /* the empty IS must go in the is_local (fourth) argument, not in is */
      ierr = ISCreateGeneral(PETSC_COMM_SELF, 0, NULL, PETSC_COPY_VALUES, &nis);CHKERRQ(ierr);
      ierr = PCASMSetLocalSubdomains(subpc, 1, NULL, &nis);CHKERRQ(ierr);
      ierr = ISDestroy(&nis);CHKERRQ(ierr);
    }

It was setting the first IS when it should have been setting the second.
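For reference, my annotation of the signature (see the PCASMSetLocalSubdomains man page for the authoritative description):

    /* pc         - the PCASM preconditioner context
       n          - the number of subdomains on this process
       is[]       - index sets defining the (overlapping) subdomains; may be NULL
       is_local[] - index sets defining the non-overlapping local parts,
                    the array asm.c tests as osm->is_local before the
                    collective VecScatterCreate() */
    PetscErrorCode PCASMSetLocalSubdomains(PC pc, PetscInt n, IS is[], IS is_local[]);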
When I run after this change I get reasonable results:

  0 KSP Residual norm 2.469493285658e+02
  1 KSP Residual norm 1.335901394170e+01
  2 KSP Residual norm 5.623002716761e-01
  3 KSP Residual norm 5.201650164984e-02
  4 KSP Residual norm 3.763050190737e-03
  5 KSP Residual norm 3.024410926322e-04
Linear solve converged due to CONVERGED_RTOL iterations 5
  0 KSP Residual norm 2.469493285658e-03
  1 KSP Residual norm 1.335901394170e-04
  2 KSP Residual norm 5.623002716761e-06
  3 KSP Residual norm 5.201650164983e-07
  4 KSP Residual norm 3.763050190737e-08
  5 KSP Residual norm 3.024410926322e-09
Linear solve converged due to CONVERGED_RTOL iterations 5
  0 KSP Residual norm 2.469493285658e-08
  1 KSP Residual norm 1.335901394170e-09
  2 KSP Residual norm 5.623002716761e-11
  3 KSP Residual norm 5.201650164984e-12
  4 KSP Residual norm 3.763050190737e-13
  5 KSP Residual norm 3.024410926322e-14

<div class="HOEnZb"><div class="h5"><br>
<br>
<br>
> On Jul 3, 2016, at 10:40 AM, Mark Adams <mfadams@lbl.gov> wrote:
>
> Barry,
>
> GAMG must be setting up ASM incorrectly on empty processors when using the aggregates. In asm.c I see that osm->is_local is NULL for empty processors, and this causes a problem when this branch is taken by fewer than 8 of the 8 processes:
>
>   if (osm->is_local) {   // asm.c:282
>     ...
>     ierr = VecScatterCreate(vec,osm->is_local[i],osm->y_local[i],isl,&osm->prolongation[i]);CHKERRQ(ierr);
>
> I get the error "MPI_Allreduce() called in different locations (code lines) on different processors" because "vec" here is global: the VecScatterCreate() is collective, but only the nonempty processes reach it. This gamg.c code must not be correct:
>
>   if (!sz) {
>     IS is;
>     ierr = ISCreateGeneral(PETSC_COMM_SELF, 0, NULL, PETSC_COPY_VALUES, &is);CHKERRQ(ierr);
>     ierr = PCASMSetLocalSubdomains(subpc, 1, &is, NULL);CHKERRQ(ierr);
>     ierr = ISDestroy(&is);CHKERRQ(ierr);
>   } else {
>     ...
>   }
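>
> A standalone sketch of the failure mode (my construction, not PETSc or GAMG source; a debug build typically reports the "different locations" error, an optimized build just hangs):
>
>   #include <petscvec.h>
>
>   int main(int argc, char **argv)
>   {
>     Vec            x;
>     PetscMPIInt    rank;
>     PetscErrorCode ierr;
>
>     ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
>     ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
>     ierr = VecCreateMPI(PETSC_COMM_WORLD, 10, PETSC_DECIDE, &x);CHKERRQ(ierr);
>     if (rank) {                                      /* rank-dependent branch */
>       PetscReal nrm;
>       ierr = VecNorm(x, NORM_2, &nrm);CHKERRQ(ierr); /* collective on x's comm;
>                                                         rank 0 never joins it */
>     }
>     ierr = VecDestroy(&x);CHKERRQ(ierr);
>     ierr = PetscFinalize();
>     return 0;
>   }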
>
> I will look into why osm->is_local seems to be NULL here when it should hold an empty IS.
>
> Mark