<div dir="ltr"><div dir="ltr"><div dir="ltr">If you want a processor independent solve with mumps use '-pc_type lu' and if you are configured with MUMPS it will give you a parallel LU solve. And don't use any overlap in DM. If want a local LU with global 'asm' or 'bjacobi' then you have an iterative solver and use something like -ksp_type gmres.</div><div dir="ltr"><br></div><div>Mark</div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Jul 23, 2023 at 11:50 AM 袁煕 <<a href="mailto:yuanxi@advancesoft.jp">yuanxi@advancesoft.jp</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi,<div><br></div><div>I used PETSc to assemble a FEM stiff matrix of an overlapped (overlap=2) DMPlex and used the MUMPS solver to solve it. But I got different solution by using 1 CPU and MPI parallel computation. I am wondering if I missed some necessary step or setting during my implementation.</div><div><br></div><div> My calling process like follows</div><div><br></div><div>1. Generate matrix pkij from DMPlex dm_mesh</div><div><pre style="box-sizing:border-box;font-family:ui-monospace,SFMono-Regular,"SF Mono",Menlo,Consolas,"Liberation Mono",monospace;font-size:13.6px;margin-top:0.5rem;margin-bottom:16px;padding:16px;overflow:hidden auto;line-height:1.45;color:rgb(31,35,40);border-radius:6px">```
Mark

On Sun, Jul 23, 2023 at 11:50 AM 袁煕 <yuanxi@advancesoft.jp> wrote:

> Hi,
>
> I used PETSc to assemble a FEM stiffness matrix on an overlapped (overlap=2) DMPlex and used the MUMPS solver to solve it. But I get different solutions with 1 CPU and with MPI parallel computation. I am wondering if I missed some necessary step or setting in my implementation.
>
> My calling sequence is as follows:
>
> 1. Generate matrix pkij from DMPlex dm_mesh
>
> ```
> PetscCallA(DMCreateMatrix(dm_mesh, pkij, ierr))
> PetscCallA(MatSetBlockSize(pkij, 1, ierr))
> if (overlap > 0) call MatSetOption(pkij, MAT_IGNORE_OFF_PROC_ENTRIES, PETSC_TRUE, ierr)
> PetscCallA(MatSetFromOptions(pkij, ierr))
>
> PetscCallA(MatGetLocalSize(pkij, m, n, ierr))
> PetscCallA(MatGetSize(pkij, gm, gn, ierr))
> PetscCallA(MatGetBlockSize(pkij, bs, ierr))
> ```
>
> 2. Do matrix preallocation
>
> ```
> !
> ! Create a preallocator matrix with sizes (local and global) matching pkij.
> !
> PetscCallA(MatCreate(PETSC_COMM_WORLD, preallocator, ierr))
> PetscCallA(MatSetType(preallocator, MATPREALLOCATOR, ierr))
> PetscCallA(MatSetSizes(preallocator, m, n, gm, gn, ierr))
> PetscCallA(MatSetBlockSize(preallocator, bs, ierr))
> PetscCallA(MatSetUp(preallocator, ierr))
>
> .......
>
> PetscCallA(MatPreallocatorPreallocate(preallocator, PETSC_TRUE, pkij, ierr))
> PetscCallA(MatDestroy(preallocator, ierr))
> ```
>
> 3. Assemble matrix pkij by calling MatSetValue for all elements, including the overlapped ones; in outline it looks like the sketch below.
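> A minimal sketch of this step (get_element_dofs and compute_element_matrix are placeholder names for my actual FEM routines, MAXNEN is a placeholder constant, and the indices in eidx are global PETSc indices; I insert with ADD_VALUES so contributions to shared entries accumulate):
>
> ```
> PetscInt    :: e, nen
> PetscInt    :: eidx(MAXNEN)
> PetscScalar :: emat(MAXNEN*MAXNEN)
>
> ! Loop over all local elements, including those in the overlap region,
> ! and add each element matrix into pkij
> do e = 1, nelem
>    call get_element_dofs(e, nen, eidx)
>    call compute_element_matrix(e, emat)
>    PetscCallA(MatSetValues(pkij, nen, eidx, nen, eidx, emat, ADD_VALUES, ierr))
> end do
>
> ! Complete the assembly before handing pkij to the solver
> PetscCallA(MatAssemblyBegin(pkij, MAT_FINAL_ASSEMBLY, ierr))
> PetscCallA(MatAssemblyEnd(pkij, MAT_FINAL_ASSEMBLY, ierr))
> ```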
>
> In the above implementation I used MatSetOption(pkij, MAT_IGNORE_OFF_PROC_ENTRIES, PETSC_TRUE, ierr). Is that correct, or are other options needed as well?
>
> Thanks for your help.