<div dir="ltr">The matrix only requires 175MB (upper triangular part). Not sure where the other extra memory comes from for np > 1.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 22, 2022 at 1:21 PM Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Tue, Mar 22, 2022 at 4:16 PM Sam Guo <<a href="mailto:sam.guo@cd-adapco.com" target="_blank">sam.guo@cd-adapco.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Here is one memory comparison (memory in MB)<div><table style="border-collapse:collapse;margin:5px 0px 5px 2px;width:auto;color:rgb(23,43,77);font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Oxygen,Ubuntu,"Fira Sans","Droid Sans","Helvetica Neue",sans-serif;font-size:14px"><tbody><tr><th style="border:1px solid rgb(193,199,208);background:rgb(244,245,247);padding:3px 4px;text-align:center"> </th><th style="border:1px solid rgb(193,199,208);background:rgb(244,245,247);padding:3px 4px;text-align:center">np=1</th><th style="border:1px solid rgb(193,199,208);background:rgb(244,245,247);padding:3px 4px;text-align:center">np=2</th><th style="border:1px solid rgb(193,199,208);background:rgb(244,245,247);padding:3px 4px;text-align:center">np=4</th><th style="border:1px solid rgb(193,199,208);background:rgb(244,245,247);padding:3px 4px;text-align:center">np=8</th><th style="border:1px solid rgb(193,199,208);background:rgb(244,245,247);padding:3px 4px;text-align:center">np=16</th></tr><tr><td style="border:1px solid rgb(193,199,208);padding:3px 4px">shell</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">1614</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">1720</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">1874</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">1673</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">1248</td></tr><tr><td style="border:1px solid rgb(193,199,208);padding:3px 4px">PETSc(using full matrix)</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">2108</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">2260</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">2364</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">2215</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">1734</td></tr><tr><td style="border:1px solid rgb(193,199,208);padding:3px 4px">PETSc(using symmetric matrix)</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">1750</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">2100</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">2189</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">2094</td><td style="border:1px solid rgb(193,199,208);padding:3px 4px">1727</td></tr></tbody></table>Those are the total water mark memory added. 
On Tue, Mar 22, 2022 at 1:10 PM Barry Smith <bsmith@petsc.dev> wrote:

Sam,

MUMPS is a direct solver; as such, it needs much more memory than the original matrix (stored as a PETSc matrix) in order to store the factored matrix. The savings you would get by not having a PETSc copy and a MUMPS copy of the matrix in memory at the same time are unlikely to be significant. Do you have memory-footprint measurements indicating that dropping the PETSc copy of the matrix would allow you to run measurably larger simulations?

  Barry

On Mar 22, 2022, at 3:58 PM, Sam Guo <sam.guo@cd-adapco.com> wrote:

The reason I want to use a shell matrix is to reduce the memory footprint. If I create a PETSc matrix and use MUMPS, I understand PETSc will create another copy of the matrix for MUMPS. Is there any way to avoid that extra MUMPS copy?

On Tue, Mar 22, 2022 at 12:53 PM Sam Guo <sam.guo@cd-adapco.com> wrote:

Barry,
  Thanks for the illustration. Is there an easy way to mimic that implementation using a shell matrix? I have been studying how sMvctx is created and it seems fairly involved.

Thanks,
Sam

On Mon, Mar 21, 2022 at 2:48 PM Barry Smith <bsmith@petsc.dev> wrote:

On Mar 21, 2022, at 4:36 PM, Sam Guo <sam.guo@cd-adapco.com> wrote:

Barry,
  Thanks. Could you elaborate? I am trying to implement the matrix-vector multiplication for a symmetric matrix using a shell matrix.

Consider, with three ranks,

    ( a )   ( A   B   D ) ( x )
    ( b ) = ( B'  C   E ) ( y )
    ( c )   ( D'  E'  F ) ( w )

Only the blocks without the ' are stored on a rank; for example, B is stored on rank 0.

Rank 0 computes A x and keeps it in a. Rank 1 computes C y and keeps it in b. Rank 2 computes F w and keeps it in c.

Rank 0 computes B'x and D'x. It puts the nonzero entries of these products, as well as the values of x, into slvec0.

Rank 1 computes E'y and puts the nonzero entries, as well as the values of y, into slvec0.

Rank 2 puts the values of w needed by the other ranks into slvec0.

Rank 0 does B y_h + D w_h, where the y_h and w_h values come from slvec1, and adds the result to a.

Rank 1 takes the B'x from slvec1 and adds it to b; it then computes E w_h, where the w_h values are pulled from slvec1, and adds that to b.

Rank 2 takes the D'x and E'y from slvec1 and adds them to c.
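[Editor's note: a rough sketch, not from the thread, of how those steps could be mirrored inside a user shell matrix, relating to the question above about mimicking the implementation. UserSymCtx, UserSymMult, and the packed0/packed1/scatter members are hypothetical names, the local blocks are assumed to have been built by the user, and the packing and final product against the ghost values are left as comments because they depend on how the scatter (the analogue of sMvctx) is constructed, which is the involved part.]

#include <petscmat.h>

/* Hypothetical user context for a shell matrix that applies a symmetric
   matrix stored as a local diagonal block plus the local upper
   off-diagonal block. */
typedef struct {
  Mat        Adiag;     /* local diagonal block (e.g. SEQSBAIJ)              */
  Mat        Boffd;     /* local upper off-diagonal block (e.g. SEQAIJ)      */
  Vec        xloc,yloc; /* sequential views of the local parts of x and y,
                           created with VecCreateSeqWithArray(...,NULL,...)  */
  Vec        packed0;   /* holds x_local and Boffd' * x_local, to be sent    */
  Vec        packed1;   /* receives ghost x values and remote products       */
  VecScatter scatter;   /* single scatter packed0 -> packed1 (ADD_VALUES)    */
} UserSymCtx;

static PetscErrorCode UserSymMult(Mat S,Vec x,Vec y)
{
  UserSymCtx        *ctx;
  const PetscScalar *xa;
  PetscScalar       *ya;
  PetscErrorCode    ierr;

  ierr = MatShellGetContext(S,&ctx);CHKERRQ(ierr);
  ierr = VecGetArrayRead(x,&xa);CHKERRQ(ierr);
  ierr = VecGetArray(y,&ya);CHKERRQ(ierr);
  ierr = VecPlaceArray(ctx->xloc,xa);CHKERRQ(ierr);
  ierr = VecPlaceArray(ctx->yloc,ya);CHKERRQ(ierr);

  /* 1. local diagonal block: y_local = Adiag * x_local (no communication) */
  ierr = MatMult(ctx->Adiag,ctx->xloc,ctx->yloc);CHKERRQ(ierr);

  /* 2. pack x_local and Boffd' * x_local into packed0 (omitted), then one
        scatter ships both pieces to the ranks that need them              */
  ierr = VecScatterBegin(ctx->scatter,ctx->packed0,ctx->packed1,ADD_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx->scatter,ctx->packed0,ctx->packed1,ADD_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);

  /* 3. y_local += Boffd * x_ghost plus the received transpose products
        (the multadd step in MatMult_MPISBAIJ); omitted                    */

  ierr = VecResetArray(ctx->xloc);CHKERRQ(ierr);
  ierr = VecResetArray(ctx->yloc);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(x,&xa);CHKERRQ(ierr);
  ierr = VecRestoreArray(y,&ya);CHKERRQ(ierr);
  return 0;
}

It would be hooked up with MatCreateShell(PETSC_COMM_WORLD,mlocal,mlocal,M,M,&ctx,&S) followed by MatShellSetOperation(S,MATOP_MULT,(void (*)(void))UserSymMult).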
On Mon, Mar 21, 2022 at 12:56 PM Barry Smith <bsmith@petsc.dev> wrote:

The "trick" is that even though "more" communication is needed to complete the product, that communication can still be done in a single VecScatter instead of two separate VecScatter calls. We simply pack both pieces of information that need to be sent into a single vector:
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_line1111">1111: </a> <a href="https://petsc.org/main/docs/manualpages/Vec/VecGetArray.html#VecGetArray" target="_blank">VecGetArray</a>(a->slvec0,&from);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_line1112">1112: </a> <a href="https://petsc.org/main/docs/manualpages/Vec/VecGetArrayRead.html#VecGetArrayRead" target="_blank">VecGetArrayRead</a>(xx,&x);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_line1114">1114: </a> <a href="https://petsc.org/main/docs/manualpages/Sys/PetscArraycpy.html#PetscArraycpy" target="_blank">PetscArraycpy</a>(from,x,bs*mbs);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_line1115">1115: </a> <a href="https://petsc.org/main/docs/manualpages/Vec/VecRestoreArray.html#VecRestoreArray" target="_blank">VecRestoreArray</a>(a->slvec0,&from);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_line1116">1116: </a> <a href="https://petsc.org/main/docs/manualpages/Vec/VecRestoreArrayRead.html#VecRestoreArrayRead" target="_blank">VecRestoreArrayRead</a>(xx,&x);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_line1118">1118: </a> <a href="https://petsc.org/main/docs/manualpages/PetscSF/VecScatterBegin.html#VecScatterBegin" target="_blank">VecScatterBegin</a>(a->sMvctx,a->slvec0,a->slvec1,<a href="https://petsc.org/main/docs/manualpages/Sys/ADD_VALUES.html#ADD_VALUES" target="_blank">ADD_VALUES</a>,<a href="https://petsc.org/main/docs/manualpages/Vec/SCATTER_FORWARD.html#SCATTER_FORWARD" target="_blank">SCATTER_FORWARD</a>);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_line1119">1119: </a> <a href="https://petsc.org/main/docs/manualpages/PetscSF/VecScatterEnd.html#VecScatterEnd" target="_blank">VecScatterEnd</a>(a->sMvctx,a->slvec0,a->slvec1,<a href="https://petsc.org/main/docs/manualpages/Sys/ADD_VALUES.html#ADD_VALUES" target="_blank">ADD_VALUES</a>,<a href="https://petsc.org/main/docs/manualpages/Vec/SCATTER_FORWARD.html#SCATTER_FORWARD" target="_blank">SCATTER_FORWARD</a>);</pre><div>If you create two symmetric matrices, one with SBAIJ and one with BAIJ and compare the time to do the product you will find that the SBAIJ is not significantly slower but does save memory.</div><div><br></div><div><br><blockquote type="cite"><div>On Mar 21, 2022, at 3:26 PM, Sam Guo <<a href="mailto:sam.guo@cd-adapco.com" target="_blank">sam.guo@cd-adapco.com</a>> wrote:</div><br><div><div dir="ltr">Using following example from the
<span style="font-family:"Times New Roman";font-size:inherit">MatCreateSBAIJ documentation</span><div><pre> 0 1 2 3 4 5 6 7 8 9 10 11
--------------------------
row 3 |. . . d d d o o o o o o
row 4 |. . . d d d o o o o o o
row 5 |. . . d d d o o o o o o
--------------------------

On a processor that owns rows 3, 4, and 5, information from rows 0-2 is still needed. Is it that the processor that owns rows 0-2 will apply B symmetrically (i.e., its transpose) and then send the result to the processor that owns rows 3-5?

On Mon, Mar 21, 2022 at 12:14 PM Mark Adams <mfadams@lbl.gov> wrote:

PETSc stores parallel matrices as two serial matrices: one for the diagonal (d or A) block and one for the rest (o or B).
I would guess that for symmetric matrices it has a symmetric matrix for the diagonal block and a full AIJ matrix for the (upper) off-diagonal block.
So the multtranspose is applying B symmetrically. This lower off-diagonal and the diagonal block can be done without communication.
Then the off-processor values are collected, and the upper off-diagonal is applied.

On Mon, Mar 21, 2022 at 2:35 PM Sam Guo <sam.guo@cd-adapco.com> wrote:

I am most interested in how the lower triangular part is redistributed. It seems that SBAIJ saves memory but requires more communication than BAIJ.

On Mon, Mar 21, 2022 at 11:27 AM Sam Guo <sam.guo@cd-adapco.com> wrote:

Mark, thanks for the quick response. I am more interested in the parallel implementation of MatMult for SBAIJ. I found the following:
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1095">1095: </a>{
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1096">1096: </a> Mat_MPISBAIJ *a = (Mat_MPISBAIJ*)A->data;
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1097">1097: </a> <a href="https://petsc.org/main/docs/manualpages/Sys/PetscErrorCode.html#PetscErrorCode" target="_blank">PetscErrorCode</a> ierr;
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1098">1098: </a> <a href="https://petsc.org/main/docs/manualpages/Sys/PetscInt.html#PetscInt" target="_blank">PetscInt</a> mbs=a->mbs,bs=A->rmap->bs;
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1099">1099: </a> <a href="https://petsc.org/main/docs/manualpages/Sys/PetscScalar.html#PetscScalar" target="_blank">PetscScalar</a> *from;
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1100">1100: </a> const <a href="https://petsc.org/main/docs/manualpages/Sys/PetscScalar.html#PetscScalar" target="_blank">PetscScalar</a> *x;
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1103">1103: </a> <font color="#B22222">/* diagonal part */</font>
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1104">1104: </a> (*a->A->ops->mult)(a->A,xx,a->slvec1a);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1105">1105: </a> <a href="https://petsc.org/main/docs/manualpages/Vec/VecSet.html#VecSet" target="_blank">VecSet</a>(a->slvec1b,0.0);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1107">1107: </a> <font color="#B22222">/* subdiagonal part */</font>
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1108">1108: </a> (*a->B->ops->multtranspose)(a->B,xx,a->slvec0b);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1110">1110: </a> <font color="#B22222">/* copy x into the vec slvec0 */</font>
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1111">1111: </a> <a href="https://petsc.org/main/docs/manualpages/Vec/VecGetArray.html#VecGetArray" target="_blank">VecGetArray</a>(a->slvec0,&from);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1112">1112: </a> <a href="https://petsc.org/main/docs/manualpages/Vec/VecGetArrayRead.html#VecGetArrayRead" target="_blank">VecGetArrayRead</a>(xx,&x);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1114">1114: </a> <a href="https://petsc.org/main/docs/manualpages/Sys/PetscArraycpy.html#PetscArraycpy" target="_blank">PetscArraycpy</a>(from,x,bs*mbs);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1115">1115: </a> <a href="https://petsc.org/main/docs/manualpages/Vec/VecRestoreArray.html#VecRestoreArray" target="_blank">VecRestoreArray</a>(a->slvec0,&from);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1116">1116: </a> <a href="https://petsc.org/main/docs/manualpages/Vec/VecRestoreArrayRead.html#VecRestoreArrayRead" target="_blank">VecRestoreArrayRead</a>(xx,&x);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1118">1118: </a> <a href="https://petsc.org/main/docs/manualpages/PetscSF/VecScatterBegin.html#VecScatterBegin" target="_blank">VecScatterBegin</a>(a->sMvctx,a->slvec0,a->slvec1,<a href="https://petsc.org/main/docs/manualpages/Sys/ADD_VALUES.html#ADD_VALUES" target="_blank">ADD_VALUES</a>,<a href="https://petsc.org/main/docs/manualpages/Vec/SCATTER_FORWARD.html#SCATTER_FORWARD" target="_blank">SCATTER_FORWARD</a>);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1119">1119: </a> <a href="https://petsc.org/main/docs/manualpages/PetscSF/VecScatterEnd.html#VecScatterEnd" target="_blank">VecScatterEnd</a>(a->sMvctx,a->slvec0,a->slvec1,<a href="https://petsc.org/main/docs/manualpages/Sys/ADD_VALUES.html#ADD_VALUES" target="_blank">ADD_VALUES</a>,<a href="https://petsc.org/main/docs/manualpages/Vec/SCATTER_FORWARD.html#SCATTER_FORWARD" target="_blank">SCATTER_FORWARD</a>);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1120">1120: </a> <font color="#B22222">/* supperdiagonal part */</font>
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1121">1121: </a> (*a->B->ops->multadd)(a->B,a->slvec1b,a->slvec1a,yy);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1122">1122: </a> <font color="#4169E1">return</font>(0);
<a name="m_-4112353598509699690_m_-3296622922807368344_m_-6721792415453726673_m_-6880554628075345529_m_-6002104729051566256_m_-1423813778787502823_m_-1097518819297141460_m_2758397486421008689_m_-8016313848878634063_line1123">1123: </a>}</pre><pre width="80"> I try to understand the algorithm.</pre><pre width="80"><br></pre><pre width="80">Thanks,</pre><pre width="80">Sam</pre></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Mar 21, 2022 at 11:14 AM Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">This code looks fine to me and the code is in src/mat/impls/sbaij/seq/sbaij2.c</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Mar 21, 2022 at 2:02 PM Sam Guo <<a href="mailto:sam.guo@cd-adapco.com" target="_blank">sam.guo@cd-adapco.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><span style="font-family:"Times New Roman";font-size:inherit">Dear PETSc dev team,</span><div><span style="font-family:"Times New Roman";font-size:inherit"> The documentation about MatCreateSBAIJ has following</span></div><div><span style="font-family:"Times New Roman";font-size:inherit">"It is recommended that one use the </span><a href="https://petsc.org/main/docs/manualpages/Mat/MatCreate.html#MatCreate" style="font-family:"Times New Roman";font-size:inherit" target="_blank">MatCreate</a><span style="font-family:"Times New Roman";font-size:inherit">(), </span><a href="https://petsc.org/main/docs/manualpages/Mat/MatSetType.html#MatSetType" style="font-family:"Times New Roman";font-size:inherit" target="_blank">MatSetType</a><span style="font-family:"Times New Roman";font-size:inherit">() and/or </span><a href="https://petsc.org/main/docs/manualpages/Mat/MatSetFromOptions.html#MatSetFromOptions" style="font-family:"Times New Roman";font-size:inherit" target="_blank">MatSetFromOptions</a><span style="font-family:"Times New Roman";font-size:inherit">(), MatXXXXSetPreallocation() paradigm instead of this routine directly. [MatXXXXSetPreallocation() is, for example, </span><a href="https://petsc.org/main/docs/manualpages/Mat/MatSeqAIJSetPreallocation.html#MatSeqAIJSetPreallocation" style="font-family:"Times New Roman";font-size:inherit" target="_blank">MatSeqAIJSetPreallocation</a><span style="font-family:"Times New Roman";font-size:inherit">]"</span><br></div><div><span style="font-family:"Times New Roman";font-size:inherit"> I currently call </span>MatCreateSBAIJ directly as follows:</div><div>MatCreateSBAIJ (with d_nnz and o_nnz)<br>MatSetValues (to add row by row)</div><div>MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);</div><div>MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);<br>MatSetOption(A, MAT_SYMMETRIC, PETSC_TRUE);<span style="font-family:"Times New Roman";font-size:inherit"><br></span></div><div><br></div><div> Two questions:</div><div> (1) I am wondering whether what I am doing is the most efficient. </div><div><br></div><div> (2) I try to find out how the matrix vector multiplication is implemented in PETSc for SBAIJ storage.</div><div><br></div><div>Thanks,</div><div>Sam</div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>