[petsc-users] Create a nest not aligned by processors

Berger Clement clement.berger at ens-lyon.fr
Fri Mar 17 12:19:43 CDT 2023


I have a matrix with four different blocks (2 rows by 2 columns). The
blocks have different sizes because they correspond to different
physical variables. One of the blocks has the particularity that it
must be updated at each iteration. This update is performed by
replacing it with a product of several matrices that depend on the
result of the previous iteration. Note that these intermediate matrices
are not square (because they also correspond to other types of
variables), and that they must be completely refilled by hand (i.e.
they are not the result of some simple linear operations). Finally, I
use this final block matrix to solve multiple linear systems (with
different right-hand sides), so for now I use MUMPS, as only the first
solve takes time (but I might change that).
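
The update step is roughly the following (a simplified sketch, not my actual code; E and F stand for the non-square intermediate matrices, and Bnew for the recomputed block):

Mat :: E,F,Bnew
PetscErrorCode :: ierr
! ... E and F are refilled by hand from the previous iterate ...
! recompute the block as the product E * F at each iteration
Call MatMatMult(E,F,MAT_INITIAL_MATRIX,PETSC_DEFAULT_REAL,Bnew,ierr)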

Given this setting, I create each type of variable separately, fill the
different matrices, and create different nests of vectors / matrices
for my operations. When the time comes to use KSPSolve, I call
MatConvert on my matrix to get a MATAIJ matrix compatible with MUMPS,
and I also copy the few vector data I need from my nests into a regular
Vec. I solve, then get my data back into my nests and carry on with the
operations needed for my updates.
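
Concretely, the solve step looks roughly like this (a minimal sketch with placeholder names C_nest, C_aij, b, x; error checking omitted). The point is that the first KSPSolve triggers the MUMPS factorization, and the later right-hand sides reuse it:

Mat :: C_nest,C_aij
KSP :: ksp
PC :: pc
Vec :: x,b
PetscErrorCode :: ierr

! convert the MATNEST into a monolithic AIJ matrix that MUMPS accepts
Call MatConvert(C_nest,MATAIJ,MAT_INITIAL_MATRIX,C_aij,ierr)
! direct solve: KSPPREONLY + PCLU with MUMPS as the factorization package
Call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
Call KSPSetOperators(ksp,C_aij,C_aij,ierr)
Call KSPSetType(ksp,KSPPREONLY,ierr)
Call KSPGetPC(ksp,pc,ierr)
Call PCSetType(pc,PCLU,ierr)
Call PCFactorSetMatSolverType(pc,MATSOLVERMUMPS,ierr)
! the first solve factorizes; subsequent solves reuse the factors
Call KSPSolve(ksp,b,x,ierr)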

Is that clear? I don't know if I provided too many details or not
enough.

Thank you

---
Clément BERGER
ENS de Lyon 

On 2023-03-17 17:34, Barry Smith wrote:

> Perhaps you could provide a brief summary of what you would like to do, and we may have ideas on how to achieve it.  
> 
> Barry 
> 
> Note: MATNEST does require that all matrices live on all the MPI processes within the original communicator. That is, if the original communicator has ranks 0, 1, and 2, you cannot have a matrix inside the MATNEST that lives only on ranks 1 and 2. You could, however, give it 0 rows on rank 0, so that it effectively lives only on ranks 1 and 2 (though its communicator still contains all three ranks).
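> 
> A rough sketch of that trick (untested), for a matrix meant to effectively live only on ranks 1 and 2 of a 3-rank communicator:
> 
> PetscMPIInt :: rank
> PetscInt :: mloc
> Mat :: A
> PetscErrorCode :: ierr
> 
> Call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
> ! 0 local rows on rank 0; the matrix still lives on the full communicator
> mloc = 4
> if (rank == 0) mloc = 0
> Call MatCreateAIJ(PETSC_COMM_WORLD,mloc,mloc,PETSC_DETERMINE,PETSC_DETERMINE,5,PETSC_NULL_INTEGER,5,PETSC_NULL_INTEGER,A,ierr)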
> 
> On Mar 17, 2023, at 12:14 PM, Berger Clement <clement.berger at ens-lyon.fr> wrote: 
> 
> It would be possible in the case I showed you, but in mine it would actually be quite complicated. Isn't there any other workaround? To be clear, I am not tied to the MATNEST format; it's just that I think the other formats wouldn't work.
> 
> ---
> Clément BERGER
> ENS de Lyon 
> 
> On 2023-03-17 15:48, Barry Smith wrote: 
> You may be able to mimic what you want by not using PETSC_DECIDE, but instead computing up front how many rows of each matrix you want stored on each MPI process. You can use 0 on certain MPI processes for certain matrices if you don't want any rows of that particular matrix stored on that particular MPI process. 
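> 
> For instance, with your diagonal example on two ranks, something along these lines (a sketch, untested) puts all of A on rank 0 and all of B on rank 1, so the nest ordering matches the sequential one:
> 
> PetscMPIInt :: rank
> PetscInt :: mA,mB
> Call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
> ! all 4 rows of A on rank 0, all 4 rows of B on rank 1
> mA = merge(4,0,rank == 0)
> mB = merge(4,0,rank == 1)
> Call MatCreateConstantDiagonal(PETSC_COMM_WORLD,mA,mA,4,4,2.0E0_wp,A,ierr)
> Call MatCreateConstantDiagonal(PETSC_COMM_WORLD,mB,mB,4,4,1.0E0_wp,B,ierr)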
> 
> Barry 
> 
> On Mar 17, 2023, at 10:10 AM, Berger Clement <clement.berger at ens-lyon.fr> wrote: 
> 
> Dear all, 
> 
> I want to construct a matrix by blocks, each block having a different size and being partially stored by multiple processors. If I am not mistaken, the right way to do so is by using the MATNEST type. However, the following code 
> 
> Call MatCreateConstantDiagonal(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,4,4,2.0E0_wp,A,ierr)
> Call MatCreateConstantDiagonal(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,4,4,1.0E0_wp,B,ierr)
> Call MatCreateNest(PETSC_COMM_WORLD,2,PETSC_NULL_INTEGER,2,PETSC_NULL_INTEGER,(/A,PETSC_NULL_MAT,PETSC_NULL_MAT,B/),C,ierr) 
> 
> does not generate the same matrix depending on the number of processors. It seems that the global ordering starts with everything owned by the first process for both A and B, then moves on to the second process, and so on (I hope I am being clear). For instance, on 2 processes the diagonal of C comes out as (2,2,1,1,2,2,1,1) instead of the (2,2,2,2,1,1,1,1) obtained on 1 process. 
> 
> Is it possible to change that? 
> 
> Note that I am coding in Fortran, if that has any consequence. 
> 
> Thank you, 
> 
> Sincerely,
> 
> -- 
> Clément BERGER
> ENS de Lyon