[petsc-users] Create a nest not aligned by processors

Berger Clement clement.berger at ens-lyon.fr
Fri Mar 17 11:14:11 CDT 2023


That would be possible in the case I showed you, but in my actual problem it
would be quite complicated. Isn't there any other workaround? I should point
out that I am not tied to the MATNEST format; it's just that I think the
other formats wouldn't work.

---
Clément BERGER
ENS de Lyon 

On 2023-03-17 15:48, Barry Smith wrote:

> You may be able to mimic what you want by not using PETSC_DECIDE but instead computing up front how many rows of each matrix you want stored on each MPI process. You can use 0 on certain MPI processes for certain matrices if you don't want any rows of that particular matrix stored on that particular MPI process. 
> 
> Barry 
> 
>> On Mar 17, 2023, at 10:10 AM, Berger Clement <clement.berger at ens-lyon.fr> wrote: 
>> 
>> Dear all, 
>> 
>> I want to construct a matrix by blocks, each block having a different size and being partially stored by multiple processes. If I am not mistaken, the right way to do so is by using the MATNEST type. However, the following code 
>> 
>> Call MatCreateConstantDiagonal(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,4,4,2.0E0_wp,A,ierr)
>> Call MatCreateConstantDiagonal(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,4,4,1.0E0_wp,B,ierr)
>> Call MatCreateNest(PETSC_COMM_WORLD,2,PETSC_NULL_INTEGER,2,PETSC_NULL_INTEGER,(/A,PETSC_NULL_MAT,PETSC_NULL_MAT,B/),C,ierr) 
>> 
>> does not generate the same matrix depending on the number of processes. It seems that the global ordering starts with everything owned by the first process for A and B, then moves on to the second process, and so on (I hope I am being clear). 
>> 
>> Is it possible to change that? 
>> 
>> Note that I am coding in Fortran, if that makes any difference. 
>> 
>> Thank you, 
>> 
>> Sincerely,
>> 
>> -- 
>> Clément BERGER
>> ENS de Lyon
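
A minimal sketch of the workaround Barry describes above, assuming exactly two MPI ranks and the same 4x4 constant-diagonal blocks as in the quoted code. The program name, the variables locA/locB, and the choice of putting all of A on rank 0 and all of B on rank 1 are illustrative and not part of the original thread: each rank passes an explicit local size instead of PETSC_DECIDE, so with this split the MATNEST global ordering matches the one obtained on a single process (all rows of A, then all rows of B).

    Program nest_layout_sketch
    #include <petsc/finclude/petscmat.h>
      Use petscmat
      Implicit None

      Mat            :: A, B, C, D
      PetscInt       :: locA, locB, n, nb
      PetscScalar    :: one, two
      PetscMPIInt    :: rank
      PetscErrorCode :: ierr

      Call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
      Call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierr)

      n   = 4        ! global size of each diagonal block
      nb  = 2        ! number of blocks per row/column of the nest
      two = 2.0
      one = 1.0

      ! Decide up front how many rows/columns of each block this rank owns:
      ! here all of A lives on rank 0 and all of B on rank 1, so the nest
      ! orders all rows of A before all rows of B regardless of the split
      ! PETSC_DECIDE would have chosen.
      If (rank == 0) Then
         locA = n
         locB = 0
      Else
         locA = 0
         locB = n
      End If

      Call MatCreateConstantDiagonal(PETSC_COMM_WORLD, locA, locA, n, n, two, A, ierr)
      Call MatCreateConstantDiagonal(PETSC_COMM_WORLD, locB, locB, n, n, one, B, ierr)
      ! PETSC_NULL_INTEGER for the IS arguments follows the quoted code;
      ! depending on the PETSc version PETSC_NULL_IS may be required instead.
      Call MatCreateNest(PETSC_COMM_WORLD, nb, PETSC_NULL_INTEGER, nb, PETSC_NULL_INTEGER, &
                         (/A, PETSC_NULL_MAT, PETSC_NULL_MAT, B/), C, ierr)

      ! Convert to AIJ and print, to inspect the resulting global ordering.
      Call MatConvert(C, MATAIJ, MAT_INITIAL_MATRIX, D, ierr)
      Call MatView(D, PETSC_VIEWER_STDOUT_WORLD, ierr)

      Call MatDestroy(D, ierr)
      Call MatDestroy(C, ierr)
      Call MatDestroy(B, ierr)
      Call MatDestroy(A, ierr)
      Call PetscFinalize(ierr)
    End Program nest_layout_sketch

In a real application the locA/locB values would be computed from the desired distribution of each block rather than hard-coded per rank; the MatConvert/MatView lines at the end are only there as one way to check that the assembled global matrix no longer depends on the number of processes.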