<div dir="ltr">And I guess I am really doing two things here.<div><br></div><div>1) The solver that I am intending to use is SuperLU. I believe Barry got LU working in OMP threads a few years ago. My problems now are in Krylov. I could live with what I have now and just get Sherry to make SuperLU_dist not use MPI in serial. SuperLU does hang now.</div><div><br></div><div>2) While I am doing this grab low hanging fruit and expand this model to work with Krylov. GMRES has more problems but it looks like Richardson/cuSparse-ILU only has a problem that convergence testing is hosed.</div><div><br></div><div>I am open to other models, I have a specific problem and would like, as much as possible, to contribute to PETSc along the way.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jan 21, 2021 at 12:01 PM Mark Adams <<a href="mailto:mfadams@lbl.gov">mfadams@lbl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jan 21, 2021 at 11:25 AM Jed Brown <<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> writes:<br>
<br>
> Yes, the problem is that each KSP solver is running in an OMP thread<br>
<br>
There can be more or fewer splits than OMP_NUM_THREADS. Each thread is still calling blocking operations.<br>
<br>
This is a concurrency problem, not a parallel efficiency problem. It can be solved with async interfaces </blockquote><div><br></div><div>I don't know how to do that. I want a GPU solver, probably SuperLU, and am starting with cuSparse ILU to get something running.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">or by making as many threads as splits and ensuring that you don't spin (lest contention kill performance). </blockquote><div><br></div><div>I don't get correctness with Richardson with more than one OMP thread currently. This is on IBM with GNU.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">OpenMP is pretty orthogonal and probably not a good fit.<br></blockquote><div><br></div><div>Do you have an alternative?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> (So at this point it only works for SELF, and it's Landau, so it is all I need.) It looks like MPI reductions called with a comm_self are not thread-safe (e.g., they could say: this is one proc, so just copy send --> recv, but they don't).<br>
><br>
> On Thu, Jan 21, 2021 at 10:46 AM Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>
><br>
>> On Thu, Jan 21, 2021 at 10:34 AM Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> wrote:<br>
>><br>
>>> It looks like PETSc is just too clever for me. I am trying to get a<br>
>>> different MPI_Comm into each block, but PETSc is thwarting me:<br>
>>><br>
>><br>
>> It looks like you are using SELF. Is that what you want? Do you want a<br>
>> bunch of comms with the same group, but independent somehow? I am confused.<br>
>><br>
>> Matt<br>
>><br>
>><br>
>>> if (jac->use_openmp) {<br>
>>>   ierr = KSPCreate(MPI_COMM_SELF,&ilink->ksp);CHKERRQ(ierr);<br>
>>>   PetscPrintf(PETSC_COMM_SELF,"In PCFieldSplitSetFields_FieldSplit with -------------- link: %p. Comms %p %p\n",ilink,PetscObjectComm((PetscObject)pc),PetscObjectComm((PetscObject)ilink->ksp));<br>
>>> } else {<br>
>>>   ierr = KSPCreate(PetscObjectComm((PetscObject)pc),&ilink->ksp);CHKERRQ(ierr);<br>
>>> }<br>
>>><br>
>>> produces:<br>
>>><br>
>>> In PCFieldSplitSetFields_FieldSplit with -------------- link: 0x7e9cb4f0. Comms 0x660c6ad0 0x660c6ad0<br>
>>> In PCFieldSplitSetFields_FieldSplit with -------------- link: 0x7e88f7d0. Comms 0x660c6ad0 0x660c6ad0<br>
>>><br>
>>> How can I work around this?<br>
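A minimal sketch of one possible workaround (untested, and only a guess, not code from this thread): PETSc caches a single inner communicator on each outer comm, so every KSPCreate(MPI_COMM_SELF,...) ends up sharing the same comm; duplicating the communicator per split would give each KSP its own. Variable names follow the snippet above, and the duplicated comm would have to be stored somewhere (e.g. on ilink) and freed after the KSP is destroyed:<br>
<br>
if (jac->use_openmp) {<br>
  MPI_Comm scomm; /* hypothetical per-split communicator */<br>
  ierr = MPI_Comm_dup(MPI_COMM_SELF,&scomm);CHKERRQ(ierr); /* a fresh outer comm gets a fresh inner PETSc comm */<br>
  ierr = KSPCreate(scomm,&ilink->ksp);CHKERRQ(ierr);<br>
  /* keep scomm and MPI_Comm_free() it after the KSP is destroyed */<br>
} else {<br>
  ierr = KSPCreate(PetscObjectComm((PetscObject)pc),&ilink->ksp);CHKERRQ(ierr);<br>
}<br>
<br>
Distinct communicators also give each thread its own tag space, so collectives issued from different threads cannot collide (assuming MPI was initialized with MPI_THREAD_MULTIPLE).<br>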
>>><br>
>>><br>
>>> On Thu, Jan 21, 2021 at 7:41 AM Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> wrote:<br>
>>><br>
>>>><br>
>>>><br>
>>>> On Wed, Jan 20, 2021 at 6:21 PM Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>> wrote:<br>
>>>><br>
>>>>><br>
>>>>><br>
>>>>> On Jan 20, 2021, at 3:09 PM, Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> wrote:<br>
>>>>><br>
>>>>> So I put in a temporary hack to get the first Fieldsplit apply to NOT<br>
>>>>> use OMP and it sort of works.<br>
>>>>><br>
>>>>> Preonly/lu is fine. GMRES calls vector creates/dups in every solve so<br>
>>>>> that is a big problem.<br>
>>>>><br>
>>>>><br>
>>>>> It should definitely not be creating vectors in every solve. But it<br>
>>>>> does do lazy allocation of the needed restart vectors, which may make<br>
>>>>> it look like it is creating vectors in every solve. You can<br>
>>>>> use -ksp_gmres_preallocate to force it to create all the restart<br>
>>>>> vectors up front at KSPSetUp().<br>
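For reference, a minimal sketch of the programmatic equivalent (ksp here stands in for whichever inner KSP is being configured; on the command line it would be -ksp_gmres_preallocate with the appropriate split prefix, e.g. -fieldsplit_0_ksp_gmres_preallocate):<br>
<br>
ierr = KSPSetType(ksp,KSPGMRES);CHKERRQ(ierr);<br>
ierr = KSPGMRESSetPreAllocateVectors(ksp);CHKERRQ(ierr); /* allocate all restart vectors at KSPSetUp() instead of lazily during the solves */<br>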
>>>>><br>
>>>><br>
>>>> Well, I run the first solve w/o OMP and I see Vec dups in cuSparse Vecs<br>
>>>> in the 2nd solve.<br>
>>>><br>
>>>><br>
>>>>><br>
>>>>> Why is creating vectors "at every solve" a problem? It is not<br>
>>>>> thread-safe, I guess?<br>
>>>>><br>
>>>><br>
>>>> It dies when it looks at the options database, in a free inside<br>
>>>> PetscOptionsEnd_Private to be exact (see the stack trace below).<br>
>>>><br>
>>>> ======= Backtrace: =========<br>
>>>> /lib64/libc.so.6(cfree+0x4a0)[0x200021839be0]<br>
>>>><br>
>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.014(PetscFreeAlign+0x4c)[0x2000002a368c]<br>
>>>><br>
>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.014(PetscOptionsEnd_Private+0xf4)[0x2000002e53f0]<br>
>>>><br>
>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.014(+0x7c6c28)[0x2000008b6c28]<br>
>>>><br>
>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.014(VecCreate_SeqCUDA+0x11c)[0x20000052c510]<br>
>>>><br>
>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.014(VecSetType+0x670)[0x200000549664]<br>
>>>><br>
>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.014(VecCreateSeqCUDA+0x150)[0x20000052c0b0]<br>
>>>><br>
>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.014(+0x43c198)[0x20000052c198]<br>
>>>><br>
>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.014(VecDuplicate+0x44)[0x200000542168]<br>
>>>><br>
>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.014(VecDuplicateVecs_Default+0x148)[0x200000543820]<br>
>>>><br>
>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.014(VecDuplicateVecs+0x54)[0x2000005425f4]<br>
>>>><br>
>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib/libpetsc.so.3.014(KSPCreateVecs+0x4b4)[0x2000016f0aec]<br>
>>>><br>
>>>><br>
>>>><br>
>>>>><br>
>>>>> Richardson works except the convergence test gets confused, presumably<br>
>>>>> because MPI reductions with PETSC_COMM_SELF are not thread-safe.<br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> One fix for the norms might be to create each subdomain solver with a<br>
>>>>> different communicator.<br>
>>>>><br>
>>>>><br>
>>>>> Yes, you could do that. It might actually be the correct thing to do;<br>
>>>>> if you have multiple threads calling MPI reductions on the same<br>
>>>>> communicator, that would be a problem. Each KSP should get a new MPI_Comm.<br>
>>>>><br>
>>>><br>
>>>> OK. I will only do this.<br>
>>>><br>
>>>><br>
>><br>
>> --<br>
>> What most experimenters take for granted before they begin their<br>
>> experiments is infinitely more interesting than any results to which their<br>
>> experiments lead.<br>
>> -- Norbert Wiener<br>
>><br>
>> <a href="https://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>
>> <<a href="http://www.cse.buffalo.edu/~knepley/" rel="noreferrer" target="_blank">http://www.cse.buffalo.edu/~knepley/</a>><br>
>><br>
</blockquote></div></div>
</blockquote></div>