<div dir="ltr"><div dir="ltr">On Tue, Feb 15, 2022 at 4:13 PM Bojan Niceno <<a href="mailto:bojan.niceno.scientist@gmail.com">bojan.niceno.scientist@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Dear PETSc users,<div><br></div><div><br></div><div>I have an in-house computational fluid dynamics (CFD) solver, written in Fortran 2008, parallelized with MPI with its own home-grown suite of linear solvers. The code is unstructured, performs domain decomposition with METIS and all communication buffers, I mean connectivity between processors, has been properly worked out.</div><div><br></div><div>A couple of weeks back, I decided to try out the PETSc suite of solvers. After some initial setbacks, I managed to compile my code with PETSc and have the sequential version working fine :-)</div><div><br></div><div>I have essentially using the following PETSc routines to get the code solving linear systems with PETSc: </div><div><br></div><div><i> </i>I set up the working space as follows:</div><div><br></div><div><font face="monospace"> call PetscInitialize(PETSC_NULL_CHARACTER, Pet % petsc_err)<br></font></div><div><font face="monospace" color="#000000"> call MatCreateSeqAij(PETSC_COMM_SELF, ...<br></font></div><div><font face="monospace" color="#000000"> </font><font face="monospace" color="#ff0000">call MatSeqAijSetColumnIndices(....</font></div><div><font face="monospace" color="#000000"> call VecCreateSeq(PETSC_COMM_SELF, ... ! Create Vector x</font></div><div><font face="monospace" color="#000000"> call VecCreateSeq(PETSC_COMM_SELF, ... ! Create Vector b</font></div><div><font face="monospace"> call KSPCreate(PETSC_COMM_SELF, ...</font></div><div><font face="monospace"><br></font></div><div><font face="arial, sans-serif">Then in order to solve a system, I do:</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="monospace"> call MatSetValue(Pet % petsc_A, ! Inside a loop through matrix entries<br> i-PETSC_ONE, <br> k-PETSC_ONE, ...</font></div><font face="monospace"> call MatAssemblyBegin(Pet % petsc_A, MAT_FINAL_ASSEMBLY, Pet % petsc_err)<br> call MatAssemblyEnd (Pet % petsc_A, MAT_FINAL_ASSEMBLY, Pet % petsc_err)<br></font><div><font face="monospace"> <br></font></div><div><font face="monospace"> call VecSetValue(Pet % petsc_x, ... ! Fill up x</font></div><div><div><font face="monospace"> call VecSetValue(Pet % petsc_b, ... ! Fill up b</font></div><font face="monospace"><div><font face="monospace"><br></font></div> call KSPSetType(Pet % petsc_ksp ... ! Set solver<br></font></div><div><font face="monospace"> call KSPGetPC(Pet % petsc_ksp, ... ! get preconditioner context</font></div><font face="monospace"> call PCSetType(Pet % petsc_pc, ... ! Set preconditioner</font><div><font face="monospace"><br> call KSPSetFromOptions(Pet % petsc_ksp, Pet % petsc_err) <br> call KSPSetUp (Pet % petsc_ksp, Pet % petsc_err) <br></font><div><font face="monospace"> </font></div><div><font face="monospace"> ! Finally solve</font></div><div><font face="monospace"> call KSPSolve(Pet % petsc_ksp, ...</font></div><div><font face="monospace"><br></font></div><div><font face="arial, sans-serif">Once this was up and running, I thought that in order to have the parallel version I will merely have to replace the "Seq" versions of the above functions, with their parallel counterparts. 
> I was expecting to find a parallel counterpart of MatSeqAIJSetColumnIndices,
> but it doesn't seem to exist. I have found non-Seq versions of some of the
> other functions (MatCreateAIJ for MatCreateSeqAIJ, VecCreateMPI for
> VecCreateSeq), but nothing like MatAIJSetColumnIndices, which surprised me
> a bit, because I do have this information in my code.
>
> Is there a parallel counterpart of this function, and if there is none,
> what should it be replaced with? I understand that I will have to provide
> the numbers of non-zeros in the buffers (o_nnz), which is not a big issue,
> but how to provide the column information in the parallel version is not
> clear to me. In a nutshell, I would need a hint on which of the above
> functions can remain the same in parallel, and which should be replaced,
> and with what.

The reason that we do not have the same function is that the parallel storage
format stores the diagonal and off-diagonal blocks separately, so you cannot
just copy in the indices. If you really need this, we could add something that
takes the data you have in parallel and produces a nonzero structure, but it
could not share your array.

More generally, I would say that functions which intrusively touch the internal
data structure, like MatSeqAIJSetColumnIndices(), are not to be preferred, but
we do have them because some users want them. If I were doing the kind of
conversion that you are, I would instead use MatPreallocator. This lets you run
your assembly loop twice: the first pass just inserts values so that the matrix
can be preallocated, and the second pass inserts them again into the
preallocated matrix (see the sketches below). The insertion time is usually
small compared to the time spent computing the values. I have used this in a
few places in my FEM code and it works efficiently.

Alternatively, you could just call MatXAIJSetPreallocation
(https://petsc.org/main/docs/manualpages/Mat/MatXAIJSetPreallocation.html)
with your preallocation data and let the column indices be calculated. That is
usually a very small overhead.

  Thanks,

     Matt

> Cheers,
>
>     Bojan
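For illustration, here is a minimal sketch of the two-pass MatPreallocator
approach described above, written against the PETSc Fortran interface. The
names prealloc, A, n_loc, row, col and zero are placeholders standing in for
the existing assembly loop, not code from the original message:

  ! Assumes:  #include <petsc/finclude/petscmat.h>  and  use petscmat
  Mat            :: prealloc, A
  PetscInt       :: n_loc            ! number of rows owned by this rank (placeholder)
  PetscInt       :: row, col
  PetscScalar    :: zero
  PetscErrorCode :: ierr

  zero = 0.0

  ! Pass 1: run the usual assembly loop on a MATPREALLOCATOR matrix.
  ! Only the sparsity pattern is recorded; the values are ignored.
  call MatCreate(PETSC_COMM_WORLD, prealloc, ierr)
  call MatSetType(prealloc, MATPREALLOCATOR, ierr)
  call MatSetSizes(prealloc, n_loc, n_loc, PETSC_DETERMINE, PETSC_DETERMINE, ierr)
  call MatSetUp(prealloc, ierr)
  ! ... same loop over matrix entries as in the sequential code ...
  call MatSetValue(prealloc, row-PETSC_ONE, col-PETSC_ONE, zero, INSERT_VALUES, ierr)
  ! ... end of loop ...
  call MatAssemblyBegin(prealloc, MAT_FINAL_ASSEMBLY, ierr)
  call MatAssemblyEnd  (prealloc, MAT_FINAL_ASSEMBLY, ierr)

  ! Use the recorded pattern to preallocate the real parallel AIJ matrix,
  ! then discard the preallocator.
  call MatCreate(PETSC_COMM_WORLD, A, ierr)
  call MatSetType(A, MATAIJ, ierr)
  call MatSetSizes(A, n_loc, n_loc, PETSC_DETERMINE, PETSC_DETERMINE, ierr)
  call MatPreallocatorPreallocate(prealloc, PETSC_TRUE, A, ierr)
  call MatDestroy(prealloc, ierr)

  ! Pass 2: run the same assembly loop again, now inserting the actual
  ! values into A, followed by MatAssemblyBegin/End as before.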
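The second option, supplying the per-row counts and letting PETSc work out the
column structure during insertion, might look like the sketch below. It uses
MatCreateAIJ (already mentioned in the question), which takes the same
d_nnz/o_nnz data that MatXAIJSetPreallocation accepts; the variable names and
the way the counts are obtained are again placeholders:

  ! Assumes:  #include <petsc/finclude/petscmat.h>  and  use petscmat
  Mat                   :: A
  PetscInt              :: n_loc                 ! rows owned by this rank (placeholder)
  PetscInt, allocatable :: d_nnz(:), o_nnz(:)    ! per-row nonzero counts
  PetscErrorCode        :: ierr

  ! d_nnz(i): nonzeros in local row i whose columns belong to this rank
  ! o_nnz(i): nonzeros in local row i whose columns belong to other ranks
  ! (both come from the existing grid connectivity / buffer information)
  allocate(d_nnz(n_loc), o_nnz(n_loc))
  ! ... fill d_nnz and o_nnz from the connectivity ...

  call MatCreateAIJ(PETSC_COMM_WORLD,                  &
                    n_loc, n_loc,                      &  ! local rows and columns
                    PETSC_DETERMINE, PETSC_DETERMINE,  &  ! global sizes
                    PETSC_DEFAULT_INTEGER, d_nnz,      &  ! diagonal-block counts
                    PETSC_DEFAULT_INTEGER, o_nnz,      &  ! off-diagonal-block counts
                    A, ierr)

  ! Rows are then filled exactly as in the sequential case (MatSetValue with
  ! global indices), and PETSc builds the column structure of the two blocks
  ! during MatAssemblyBegin/End.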
--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/