[petsc-users] FEM Matrix Assembly of Overlapped Mesh

Mark Adams mfadams at lbl.gov
Sun Jul 23 11:41:23 CDT 2023


If you want a processor-count-independent solve with MUMPS, use '-pc_type lu';
if PETSc is configured with MUMPS, that gives you a parallel LU solve. In that
case, don't use any overlap in the DM. If you want a local LU inside a global
'asm' or 'bjacobi' preconditioner, then you have an iterative solver and should
use something like -ksp_type gmres.
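
For example, the run-time options might look like the following (a sketch; the
executable name and rank count are placeholders):

```
# Parallel direct solve with MUMPS (no DM overlap needed):
mpiexec -n 4 ./app -pc_type lu -pc_factor_mat_solver_type mumps

# Iterative solve: GMRES with local LU factorizations under ASM:
mpiexec -n 4 ./app -ksp_type gmres -pc_type asm -sub_pc_type lu
```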

Mark

On Sun, Jul 23, 2023 at 11:50 AM 袁煕 <yuanxi at advancesoft.jp> wrote:

> Hi,
>
> I used PETSc to assemble the FEM stiffness matrix of an overlapped (overlap=2)
> DMPlex and used the MUMPS solver to solve it. But I get different solutions
> with 1 CPU and with MPI parallel computation. I am wondering if I missed
> a necessary step or setting in my implementation.
>
>     My calling sequence is as follows:
>
> 1. Generate matrix pkij from DMPlex dm_mesh
>
> ```
>       PetscCallA( DMCreateMatrix(dm_mesh, pkij, ierr) )
>       PetscCallA( MatSetBlockSize(pkij, 1, ierr))
>       if( overlap>0 ) call MatSetOption(pkij,MAT_IGNORE_OFF_PROC_ENTRIES,PETSC_TRUE,ierr)
>       PetscCallA( MatSetFromOptions(pkij, ierr))
>
>       PetscCallA(MatGetLocalSize(pkij, m, n, ierr));
>       PetscCallA(MatGetSize(pkij, gm, gn, ierr));
>       PetscCallA(MatGetBlockSize(pkij, bs, ierr));
> ```
>
> 2. Do Matrix preallocation
>
> ```
>       PetscCallA(MatCreate(PETSC_COMM_WORLD, preallocator, ierr));
>       PetscCallA(MatSetType(preallocator, MATPREALLOCATOR, ierr));
>       PetscCallA(MatSetSizes(preallocator, m, n, gm, gn, ierr));
>       PetscCallA(MatSetBlockSize(preallocator, bs, ierr));
>       PetscCallA(MatSetUp(preallocator, ierr));
>
>       .......
>
>       PetscCallA( MatPreallocatorPreallocate(preallocator, PETSC_TRUE, pkij, ierr) )
>       PetscCallA( MatDestroy(preallocator,ierr) )
> ```
>
> 3. Assemble matrix pkij by calling MatSetValue for all overlapped elements.
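>
> A minimal sketch of that assembly step (the element loop, `ke`, `idx`, and
> `nen` are hypothetical placeholders for the element stiffness, the global DOF
> indices, and the DOF count per element):
>
> ```
>       do e = 1, num_local_elements   ! loop includes the overlap elements
>          ! ... compute element stiffness ke and global indices idx ...
>          PetscCallA( MatSetValues(pkij, nen, idx, nen, idx, ke, ADD_VALUES, ierr) )
>       end do
>       PetscCallA( MatAssemblyBegin(pkij, MAT_FINAL_ASSEMBLY, ierr) )
>       PetscCallA( MatAssemblyEnd(pkij, MAT_FINAL_ASSEMBLY, ierr) )
> ```
>
> Note that with MAT_IGNORE_OFF_PROC_ENTRIES set, contributions to rows owned
> by other ranks are dropped rather than communicated, so each rank must
> compute the complete value of every locally owned row itself.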
>
> In the implementation above I used MatSetOption(pkij, MAT_IGNORE_OFF_PROC_ENTRIES, PETSC_TRUE, ierr). Is that correct? Or are other options needed?
>
> Thanks.
>
>