[petsc-users] Distributing already assembled stiffness matrix
Mark Adams
mfadams at lbl.gov
Thu Oct 19 07:41:16 CDT 2017
On Wed, Oct 18, 2017 at 8:18 AM, Jaganathan, Srikrishna <srikrishna.jaganathan at fau.de> wrote:
> Thanks for your response, it's helpful.
>
> I do have a few more questions; most of my matrices are in compressed row
> storage format.
>
> 1) When I was creating the matrix sequentially, I just used
> MatCreateSeqAIJWithArrays, but the MPI version of it is quite confusing
> to use. I don't understand how to decide on the local rows (it would be
> really helpful if there were an example).
>
You don't use local row indices (you can, but you don't want to). The code
does not change. As Jed says, don't use MatCreateSeqAIJWithArrays; just use
MatCreate. This is from ksp ex56.c:
/* create stiffness matrix */
ierr = MatCreate(comm,&Amat);CHKERRQ(ierr);
ierr = MatSetSizes(Amat,m,m,M,M);CHKERRQ(ierr);
if (!test_late_bs) {
  ierr = MatSetBlockSize(Amat,3);CHKERRQ(ierr);
}
ierr = MatSetType(Amat,MATAIJ);CHKERRQ(ierr);
ierr = MatSeqAIJSetPreallocation(Amat,0,d_nnz);CHKERRQ(ierr);
ierr = MatMPIAIJSetPreallocation(Amat,0,d_nnz,0,o_nnz);CHKERRQ(ierr);
You can preallocate with an estimate (an upper bound on the number of nonzeros
per row). See the documentation for MatMPIAIJSetPreallocation (Google it).
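For example, if you do not know how to pick the local row sizes, you can pass
PETSC_DECIDE and let PETSc split the rows for you. A rough sketch (the matrix
name A, the global size M, and the bound of 50 nonzeros per row are just
placeholders for illustration, not taken from ex56):

  Mat            A;
  PetscErrorCode ierr;
  PetscInt       M = 1000;  /* global number of rows/columns of your stiffness matrix */

  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  /* PETSC_DECIDE: let PETSc choose how many rows each process owns */
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,M,M);CHKERRQ(ierr);
  ierr = MatSetType(A,MATAIJ);CHKERRQ(ierr);
  /* 50 is an assumed upper bound on nonzeros per row; adjust it to your discretization */
  ierr = MatSeqAIJSetPreallocation(A,50,NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A,50,NULL,50,NULL);CHKERRQ(ierr);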
You then run your assembly code on one processor (rank 0) and PETSc will
distribute the matrix. You just add values as (i,j,value) entries with
MatSetValues (Google it). You will find that it is very simple.
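To give an idea, here is a sketch of that insertion step, assuming rank 0 holds
your CRS data as 0-based row pointers ia, column indices ja (PetscInt), and
values val (PetscScalar), with M the global row count from the creation step
above; these names are only for illustration:

  PetscMPIInt rank;
  PetscInt    i;

  ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
  if (!rank) {  /* only rank 0 inserts; it owns the serial CRS arrays */
    for (i=0; i<M; i++) {
      PetscInt ncols = ia[i+1] - ia[i];
      /* one call per row: global row i, its global column indices and values */
      ierr = MatSetValues(A,1,&i,ncols,&ja[ia[i]],&val[ia[i]],INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  /* every process must call the assembly routines, even if it inserted nothing;
     this is where PETSc ships the entries to the processes that own the rows */
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

Note that the indices you pass to MatSetValues are plain global (row, column)
indices of the full matrix, not the CRS arrays themselves.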
Now, this simple way will just chop your domain up in a dumb way. If you
have a regular grid then you will get a 1D partitioning, which will work
for a while. Otherwise you can partition this matrix, but that is another
story. You want to start with this simple way anyway.
>
> 2) When I tried using MatSetValues, it doesn't seem to use the same
> indexing as the compressed row storage format. What type of indexing should
> be used when MatSetValues is called from rank 0 for CRS matrices?
>
>
> On 2017-10-18 13:33, Jed Brown wrote:
>
>> Easiest is to assemble into a distributed matrix from rank 0. So
>> instead of calling MatCreate using PETSC_COMM_SELF, use a parallel
>> communicator (like PETSC_COMM_WORLD). It is fine if only rank 0 calls
>> MatSetValues, but all processes must call MatAssemblyBegin/End.
>>
>> "Jaganathan, Srikrishna" <srikrishna.jaganathan at fau.de> writes:
>>
>>> Hello,
>>>
>>> I have been trying to distribute an already existing stiffness matrix in
>>> my FEM code to a PETSc parallel matrix object, but I am unable to find
>>> any documentation regarding it. It was quite straightforward to create a
>>> sequential PETSc matrix object and everything was working as intended. I
>>> have read some of the user comments in the mailing lists regarding a
>>> similar situation, and most of the time the solution suggested is to
>>> create the stiffness matrix from the mesh in a distributed format. Since
>>> it is a little difficult in my case to pass the mesh data into the code,
>>> is there any way to distribute an already existing stiffness matrix?
>>>
>>> Thanks and Regards
>>>
>>> Srikrishna Jaganathan
>>>
>>