<div dir="ltr"><div dir="ltr">On Thu, Feb 17, 2022 at 7:01 AM Bojan Niceno <<a href="mailto:bojan.niceno.scientist@gmail.com">bojan.niceno.scientist@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Dear all,<div><br></div><div>I am coupling my unstructured CFD solver with PETSc. At this moment, sequential version is working fine, but I obviously want to migrate to MPI parallel. My code is MPI parallel since ages.</div><div><br></div><div>Anyhow, as a part of the migration to parallel, I changed the matrix type from MATSEQAIJ to MATMPIAIJ. The code compiled, but when I executed it one processor, I received an error message that combination of matrix format does not support BICG solver and PCILU preconditoner. I took a look at the compatibility matrix (<a href="https://petsc.org/release/overview/linear_solve_table/#preconditioners" target="_blank">https://petsc.org/release/overview/linear_solve_table/#preconditioners</a>) and noticed that MATMPIAIJ supports only MKL CParadiso preconditioner which seems to belong to Intel.</div><div><br></div><div>I did some more reading and realised that I should probably continue with MATAIJ (which should work in sequential and parallel), but I am wondering why would there even be MATMPIAJ if it supports only one third-party preconditioner?</div></div></blockquote><div><br></div><div>1) MATAIJ is not a concrete type, it just creates MATSEQAIJ in serial and MATMPIAIJ in parallel</div><div><br></div><div>2) MATMPIAIJ supports many parallel direct solvers (see the end of <a href="https://petsc.org/main/docs/manual/ksp/">https://petsc.org/main/docs/manual/ksp/</a>), including</div><div><br></div><div> MUMPS</div><div> SuperLU_dist</div><div> Hypre (Euclid)</div><div> CPardiso</div><div><br></div><div>There are also parallel AMG solvers, parallel DD solvers, and Krylov solvers.</div><div><br></div><div>The complaint you got said that a serial LU was being used with a parallel matrix type, so using AIJ is the right solution.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div> Cheers,</div><div><br></div><div> Bojan Niceno</div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>