[petsc-users] Distributing already assembled stiffness matrix

Jaganathan, Srikrishna srikrishna.jaganathan at fau.de
Wed Oct 18 07:18:26 CDT 2017


Thanks for your response; it's helpful.

I do have a few more questions; most of my matrices are in compressed
row storage (CSR) format.

1) When creating the matrix sequentially, I just used
MatCreateSeqAIJWithArrays, but the MPI counterpart,
MatCreateMPIAIJWithArrays, is quite confusing to use. I don't
understand how to decide on the local rows (it would be really helpful
if there were an example).
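
For concreteness, here is roughly how I understand the parallel call
should look (an untested sketch on my part: the ownership split is just
a guess, and ia/ja/a stand for the CSR arrays of only the locally owned
rows, with 0-based row offsets and global column indices):

    #include <petscmat.h>

    Mat            A;
    PetscInt       m = PETSC_DECIDE;  /* local rows, let PETSc pick  */
    PetscInt       N = 1000;          /* global rows (example value) */
    PetscInt      *ia, *ja;           /* CSR of the m local rows     */
    PetscScalar   *a;
    PetscErrorCode ierr;

    /* Find out how many of the N rows this rank owns, so the local
       CSR arrays can be built for exactly those rows. */
    ierr = PetscSplitOwnership(PETSC_COMM_WORLD, &m, &N);CHKERRQ(ierr);
    /* ... fill ia (length m+1, ia[0] = 0), ja and a here ... */
    ierr = MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD, m, PETSC_DECIDE,
                                     N, N, ia, ja, a, &A);CHKERRQ(ierr);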

2) When I tried using MatSetValues, it doesn't seem to use the same
indexing as the compressed row storage format. What type of indexing
should be used when MatSetValues is called from rank 0 for CSR
matrices?
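
To make the question concrete, this is the kind of loop I have been
trying, assuming 0-based CSR arrays ia/ja/a holding the whole matrix on
rank 0 and an already created parallel matrix A (please correct me if
the indexing is wrong):

    /* One CSR row at a time: MatSetValues expects global row and
       column indices, so pass the row number i together with the
       slice of ja/a that belongs to row i. */
    for (PetscInt i = 0; i < N; i++) {
      PetscInt ncols = ia[i+1] - ia[i];
      ierr = MatSetValues(A, 1, &i, ncols, ja + ia[i], a + ia[i],
                          INSERT_VALUES);CHKERRQ(ierr);
    }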

On 2017-10-18 13:33, Jed Brown wrote:
> Easiest is to assemble into a distributed matrix from rank 0.  So
> instead of calling MatCreate using PETSC_COMM_SELF, use a parallel
> communicator (like PETSC_COMM_WORLD).  It is fine if only rank 0 calls
> MatSetValues, but all processes must call MatAssemblyBegin/End.
> 
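To check that I follow: the overall pattern would then be something
like the sketch below (my reconstruction of this suggestion, untested)?

    Mat            A;
    PetscMPIInt    rank;
    PetscErrorCode ierr;

    ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr); /* not SELF */
    ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
    ierr = MatSetFromOptions(A);CHKERRQ(ierr);
    ierr = MatSetUp(A);CHKERRQ(ierr);
    if (!rank) {
      /* only rank 0 inserts entries, e.g. the MatSetValues loop above */
    }
    /* but every rank must take part in assembly */
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
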
> "Jaganathan, Srikrishna" <srikrishna.jaganathan at fau.de> writes:
> 
>> Hello,
>> 
>> 
>> I have been trying to distribute an already existing stiffness
>> matrix in my FEM code to a PETSc parallel matrix object, but I am
>> unable to find any documentation regarding it. It was quite
>> straightforward to create a sequential PETSc matrix object, and
>> everything was working as intended. I have read some of the user
>> comments in the mailing lists regarding similar situations, and most
>> of the time the suggested solution is to create the stiffness matrix
>> from the mesh in distributed format. Since it is a little difficult
>> in my case to pass the mesh data into the code, is there any way to
>> distribute an already existing stiffness matrix?
>> 
>> Thanks and Regards
>> 
>> Srikrishna Jaganathan

