Perhaps if you provide a brief summary of what you would like to do, we may have ideas on how to achieve it.

  Barry

Note that MATNEST does require that all matrices live on all the MPI processes within the original communicator. That is, if the original communicator has ranks 0, 1, and 2, you cannot have a matrix inside MATNEST that lives only on ranks 1 and 2; you could, however, give it 0 rows on rank 0 so that it effectively lives only on ranks 1 and 2 (though its communicator still contains all three ranks). A small sketch of this is at the very bottom of this email, below the quoted messages.

On Mar 17, 2023, at 12:14 PM, Berger Clement <clement.berger@ens-lyon.fr> wrote:

It would be possible in the case I showed you, but in mine that would actually be quite complicated; isn't there any other workaround? To be clear, I am not required to use the MATNEST format; it's just that I think the other formats wouldn't work.
---
Clément BERGER
ENS de Lyon
On 2023-03-17 15:48, Barry Smith wrote:
<blockquote type="cite" style="padding: 0 0.4em; border-left: #1010ff 2px solid; margin: 0"><!-- html ignored --><!-- head ignored --><!-- meta ignored -->
<div> </div>
You may be able to mimic what you want by not using PETSC_DECIDE, but instead computing up front how many rows of each matrix you want stored on each MPI process. You can use 0 on certain MPI processes for certain matrices if you don't want any rows of that particular matrix stored on that particular MPI process.
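For concreteness, here is a minimal sketch of that idea (my adaptation of the snippet from your first message, not code from this thread): the local row counts are chosen explicitly instead of PETSC_DECIDE. The particular split, which assumes exactly two MPI ranks and puts all of A on rank 0 and all of B on rank 1, is only an illustration; any local sizes that sum to the global size will do, including 0 rows on some ranks.

#include <petsc/finclude/petscmat.h>
program nest_explicit_layout
  use petscmat
  implicit none

  Mat            :: A, B, C
  PetscInt       :: mA, mB, nglob, nb
  PetscMPIInt    :: rank
  PetscScalar    :: two, one
  PetscErrorCode :: ierr

  call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
  call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierr)

  ! Compute the row layout up front instead of passing PETSC_DECIDE.
  ! Illustrative split for exactly two ranks: rank 0 holds all 4 rows of A
  ! and none of B, rank 1 the reverse.
  nglob = 4
  nb    = 2
  two   = 2.0
  one   = 1.0
  if (rank == 0) then
     mA = 4
     mB = 0
  else
     mA = 0
     mB = 4
  end if

  ! Each matrix is still created on the full PETSC_COMM_WORLD communicator,
  ! only the local sizes differ from the PETSC_DECIDE version.
  call MatCreateConstantDiagonal(PETSC_COMM_WORLD, mA, mA, nglob, nglob, two, A, ierr)
  call MatCreateConstantDiagonal(PETSC_COMM_WORLD, mB, mB, nglob, nglob, one, B, ierr)
  call MatCreateNest(PETSC_COMM_WORLD, nb, PETSC_NULL_INTEGER, nb, PETSC_NULL_INTEGER, &
                     (/A, PETSC_NULL_MAT, PETSC_NULL_MAT, B/), C, ierr)

  call MatDestroy(C, ierr)
  call MatDestroy(B, ierr)
  call MatDestroy(A, ierr)
  call PetscFinalize(ierr)
end program nest_explicit_layout

If I understand the MATNEST ordering correctly (each process contributes its local rows of every block, in block order), this particular layout makes the global ordering simply all of A's rows followed by all of B's rows.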
  Barry
On Mar 17, 2023, at 10:10 AM, Berger Clement <clement.berger@ens-lyon.fr> wrote:
<div style="font-size: 10pt; font-family: Verdana,Geneva,sans-serif;"><p>Dear all,</p><p>I want to construct a matrix by blocs, each block having different sizes and partially stored by multiple processors. If I am not mistaken, the right way to do so is by using the MATNEST type. However, the following code</p><p>Call MatCreateConstantDiagonal(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,4,4,2.0E0_wp,A,ierr)<br>Call MatCreateConstantDiagonal(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,4,4,1.0E0_wp,B,ierr)<br>Call MatCreateNest(PETSC_COMM_WORLD,2,PETSC_NULL_INTEGER,2,PETSC_NULL_INTEGER,(/A,PETSC_NULL_MAT,PETSC_NULL_MAT,B/),C,ierr)</p><p>does not generate the same matrix depending on the number of processors. It seems that it starts by everything owned by the first proc for A and B, then goes on to the second proc and so on (I hope I am being clear).</p><p>Is it possible to change that ?</p><p>Note that I am coding in fortran if that has ay consequence.</p><p>Thank you,</p><p>Sincerely,</p>
--
Clément BERGER
ENS de Lyon