Use VecCreate(), VecSetSizes(), VecSetType() and MatCreate(), MatSetSizes(), and MatSetType() instead of the convenience functions VecCreateMPICUDA() and MatCreateShell().
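[A minimal sketch (untested) of that piecewise creation sequence. The local sizes nloc_left/nloc_right, the context pointer ctx, and the callback MyMatMult are hypothetical placeholders; the point of building the objects in pieces is presumably that the local sizes (or layouts) can be set explicitly before setup.]

    #include <petscmat.h>

    /* hypothetical matvec callback; only the creation calls below mirror the suggestion */
    extern PetscErrorCode MyMatMult(Mat, Vec, Vec);

    PetscErrorCode CreateObjects(PetscInt nloc_left, PetscInt nloc_right, void *ctx,
                                 Vec *right, Vec *left, Mat *M)
    {
      PetscFunctionBeginUser;
      /* vectors built in pieces; VecSetType(v, VECCUDA) takes the place of VecCreateMPICUDA() */
      PetscCall(VecCreate(PETSC_COMM_WORLD, right));
      PetscCall(VecSetSizes(*right, nloc_right, PETSC_DETERMINE));
      PetscCall(VecSetType(*right, VECCUDA));

      PetscCall(VecCreate(PETSC_COMM_WORLD, left));
      PetscCall(VecSetSizes(*left, nloc_left, PETSC_DETERMINE));
      PetscCall(VecSetType(*left, VECCUDA));

      /* shell matrix built in pieces; MatSetType(M, MATSHELL) takes the place of MatCreateShell() */
      PetscCall(MatCreate(PETSC_COMM_WORLD, M));
      PetscCall(MatSetSizes(*M, nloc_left, nloc_right, PETSC_DETERMINE, PETSC_DETERMINE));
      PetscCall(MatSetType(*M, MATSHELL));
      PetscCall(MatShellSetContext(*M, ctx));
      PetscCall(MatShellSetOperation(*M, MATOP_MULT, (void (*)(void))MyMatMult));
      PetscCall(MatSetUp(*M));
      PetscFunctionReturn(PETSC_SUCCESS);
    }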
On Sep 19, 2023, at 8:44 PM, Sreeram R Venkat <srvenkat@utexas.edu> wrote:

Thank you for your reply.

Let's call this matrix M:

( A B C D )
( E F G H )
( I J K L )

Now, instead of doing KSP with just M, what if I want M^T M? In this case, the matvec implementation would be as follows:

- the same partitioning of the blocks A, B, ..., L among the 12 MPI ranks
- the matvec looks like:

      ( a )         ( w )
      ( b ) = M^T M ( x )
      ( c )         ( y )
      ( d )         ( z )

- w, x, y, z are stored on ranks A, B, C, D (as before)
- a, b, c, d are now also stored on ranks A, B, C, D

Based on your message, I believe using a PetscLayout with local sizes (number of columns of A, number of columns of B, number of columns of C, number of columns of D, 0, 0, 0, 0, 0, 0, 0, 0) for both the (a,b,c,d) and the (w,x,y,z) vectors should work.

I see there are functions VecSetLayout() and MatSetLayouts() to set the PetscLayouts of the vectors and the matrix. But when I create the vectors (I need VecCreateMPICUDA) or the matrix shell (with MatCreateShell), I need to pass the local and global sizes, and I'm not sure what to do there.

Thanks,
Sreeram
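[Because M^T M maps the (w,x,y,z) layout back to itself, the shell's left and right layouts coincide, which avoids the KSP restriction Barry raises below. A sketch (untested) of the corresponding MATOP_MULT callback, assuming the forward operator M is itself available as a Mat with working MatMult and MatMultTranspose; the names MTMCtx and MTM_Mult are made up.]

    #include <petscmat.h>

    /* hypothetical context: the forward operator M and a work vector
       shaped like its range, i.e. like (a,b,c) */
    typedef struct {
      Mat M;
      Vec work;
    } MTMCtx;

    static PetscErrorCode MTM_Mult(Mat A, Vec x, Vec y)
    {
      MTMCtx *ctx;

      PetscFunctionBeginUser;
      PetscCall(MatShellGetContext(A, &ctx));
      PetscCall(MatMult(ctx->M, x, ctx->work));          /* work = M x      */
      PetscCall(MatMultTranspose(ctx->M, ctx->work, y)); /* y    = M^T work */
      PetscFunctionReturn(PETSC_SUCCESS);
    }

[A KSP solver could then be attached to a shell with this callback, since its input and output vectors share one layout.]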
On Tue, Sep 19, 2023, 7:13 PM Barry Smith <bsmith@petsc.dev> wrote:

The PetscLayout local sizes for the PETSc (a,b,c) vector: (0, 0, 0, number of rows of D, 0, 0, 0, number of rows of H, 0, 0, 0, number of rows of L).

The PetscLayout local sizes for the PETSc (w,x,y,z) vector: (number of columns of A, number of columns of B, number of columns of C, number of columns of D, 0, 0, 0, 0, 0, 0, 0, 0).

The left and right layouts of the shell matrix need to match the two above.

There is a huge problem, though: KSP is written assuming that the left vector layout is the same as the right vector layout, so it can do dot products MPI rank by MPI rank without needing to send individual vector values around.

I don't think it makes sense to use PETSc with such vector decompositions as you would like.

Barry
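[For concreteness, a sketch (untested) of how each rank might compute those local sizes, assuming ranks 0..11 hold blocks A..L in row-major order; the arrays nrows[] and ncols[] holding the block dimensions are hypothetical.]

    #include <petscsys.h>

    /* nrows[i] = rows of block row i (A..D, E..H, I..L);
       ncols[j] = columns of block column j */
    static void LocalSizes(const PetscInt nrows[3], const PetscInt ncols[4],
                           PetscInt *nloc_left, PetscInt *nloc_right)
    {
      PetscMPIInt rank;
      MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
      *nloc_right = (rank < 4) ? ncols[rank] : 0;          /* ranks A..D own w,x,y,z */
      *nloc_left  = (rank % 4 == 3) ? nrows[rank / 4] : 0; /* ranks D,H,L own a,b,c  */
    }

[These would then be the local sizes passed to VecSetSizes() and the m/n arguments of MatSetSizes() in the creation sketch above.]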
On Sep 19, 2023, at 7:44 PM, Sreeram R Venkat <srvenkat@utexas.edu> wrote:

With the example you have given, here is what I would like to do:

- 12 MPI ranks
- each rank has one block (rank 0 has A, rank 1 has B, ..., rank 11 has L); to make the rest of this easier, I'll refer to the rank containing block A as "rank A", and so on
- ranks A, B, C, and D have w, x, y, z respectively; the first step of the custom matvec implementation broadcasts w to rank E and rank I (similarly, x is broadcast to rank F and rank J ...)
- at the end of the matvec computation, ranks D, H, and L have a, b, and c respectively

Thanks,
Sreeram
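[A rough sketch (untested) of that communication pattern, assuming column communicators grouping {A,E,I}, {B,F,J}, ... with the top block row as rank 0 in each, row communicators grouping {A,B,C,D}, {E,F,G,H}, {I,J,K,L}, real double-precision scalars, and a placeholder local_gemv() for the per-block product.]

    #include <mpi.h>

    /* hypothetical per-block dense product: y = Block * x */
    void local_gemv(const double *block, const double *x, double *y);

    /* One matvec on the 3x4 grid. xloc holds this column's piece of (w,x,y,z),
       valid on the top block row; yloc receives a/b/c on ranks D/H/L. */
    void matvec_2d(const double *block, double *xloc, int ncols_local,
                   double *ypart, double *yloc, int nrows_local,
                   MPI_Comm row_comm, MPI_Comm col_comm)
    {
      int row_size;
      MPI_Comm_size(row_comm, &row_size);

      /* 1. broadcast the input piece down each column (w -> E,I; x -> F,J; ...) */
      MPI_Bcast(xloc, ncols_local, MPI_DOUBLE, 0, col_comm);

      /* 2. each rank applies its own block */
      local_gemv(block, xloc, ypart);

      /* 3. sum the partial products across the block row; the last rank of
            the row (D, H, or L) ends up with a, b, or c */
      MPI_Reduce(ypart, yloc, nrows_local, MPI_DOUBLE, MPI_SUM, row_size - 1, row_comm);
    }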
On Tue, Sep 19, 2023 at 6:23 PM Barry Smith <bsmith@petsc.dev> wrote:

    ( a )   ( A B C D ) ( w )
    ( b ) = ( E F G H ) ( x )
    ( c )   ( I J K L ) ( y )
                        ( z )

I have no idea what "The input vector is partitioned across each row, and the output vector is partitioned across each column" means.

Anyways, the shell matrix needs to live on MPI_COMM_WORLD, as do both the (a,b,c) and (w,x,y,z) vectors.

Now, how many MPI ranks do you want to do the computation on? 12? Do you want one matrix block A .. L on each rank?

Do you want the (a,b,c) vector spread over all ranks? What about the (w,x,y,z) vector?

Barry
On Sep 19, 2023, at 4:42 PM, Sreeram R Venkat <srvenkat@utexas.edu> wrote:

I have a custom implementation of a matrix-vector product that inherently relies on a 2D processor partitioning of the matrix. That is, if the matrix looks like

    A B C D
    E F G H
    I J K L

in block form, we use 12 processors, each holding one block. The input vector is partitioned across each row of processors, and the output vector is partitioned across each column.

Each processor has 3 communicators: the WORLD_COMM, a ROW_COMM, and a COL_COMM. The ROW/COL communicators are used to do reductions over rows/columns of processors.

With this setup, I am a bit confused about how to set up the matrix shell. The MatCreateShell() function only accepts one communicator. If I give it WORLD_COMM, the local/global sizes won't match, since PETSc will try to multiply local_size * total_processors instead of local_size * processors_per_row (or per column). I have gotten around this temporarily by giving it ROW_COMM instead. What I think happens is that a different MatShell is created on each row, but when computing the matvec, they all work together.

However, if I try to use KSP (CG) with this setup (giving ROW_COMM as the communicator), the process hangs. I believe this is due to the partitioning of the input/output vectors: the matvec itself is fine, but the inner products and other steps of CG fail. In fact, if I restrict to the case where I have only one row of processors, I am able to use KSP successfully.

Is there a way to use KSP with this 2D partitioning setup when there are multiple rows of processors? I'd also prefer to work with one global MatShell object instead of the one object per row I have now.

Thanks for your help,
Sreeram
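[For reference, a sketch (untested) of how ROW_COMM/COL_COMM as described above are typically carved out of WORLD_COMM, assuming ranks 0..11 sit on the 3x4 block grid in row-major order; the mapping is an assumption, not taken from the original message.]

    #include <mpi.h>

    int rank;
    MPI_Comm row_comm, col_comm;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* 3 row communicators of 4 ranks: {A,B,C,D}, {E,F,G,H}, {I,J,K,L} */
    MPI_Comm_split(MPI_COMM_WORLD, rank / 4, rank % 4, &row_comm);
    /* 4 column communicators of 3 ranks: {A,E,I}, {B,F,J}, {C,G,K}, {D,H,L} */
    MPI_Comm_split(MPI_COMM_WORLD, rank % 4, rank / 4, &col_comm);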