[petsc-users] Possibilities to VecScatter to a sparse Vector-Format

Junchao Zhang junchao.zhang at gmail.com
Thu Sep 30 09:27:01 CDT 2021


On Thu, Sep 30, 2021 at 8:39 AM Hannes Phil Niklas Brandt <
s6hsbran at uni-bonn.de> wrote:

> Hello,
>
>
>
>
>
> I intend to compute a parallel Matrix-Vector-Product (via MPI) and
> therefore would like to scatter the entries of the input MPI-Vec v to a
> local vector containing all entries relevant to the current process.
>
>
>
> To achieve this I tried defining a VecScatter, which scatters from v to a
> sequential Vec v_seq (each process has its own copy of v_seq). However,
> storing v_seq (which has one entry for each global row, and therefore
> contains a large number of zero entries) may demand too much storage space
> (compared to my data-sparse matrix storage format).
>
 What you describe is exactly what PETSc's MatMult does. It builds a
VecScatter object (aij->Mvctx) and a local vector (aij->lvec), and it does
not communicate or store unneeded remote entries. The code is at
https://gitlab.com/petsc/petsc/-/blob/main/src/mat/impls/aij/mpi/mmaij.c#L9
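The same pattern can also be built by hand when needed. Below is a minimal, untested sketch of gathering only the required remote entries of an MPI Vec into a compact sequential Vec; the function name and the `needed` index array are hypothetical, and the error-handling style follows the classic `ierr`/`CHKERRQ` convention:

```c
#include <petscvec.h>

/* Sketch: scatter only the entries of the global Vec v whose global
 * indices are listed in needed[] into a compact sequential Vec v_local
 * of length nneeded (no zero padding up to the global size). */
PetscErrorCode GatherNeededEntries(Vec v, PetscInt nneeded,
                                   const PetscInt *needed, Vec *v_local)
{
  PetscErrorCode ierr;
  IS             from;  /* global indices this process actually needs */
  VecScatter     ctx;

  /* Compact local vector: one slot per needed entry, not per global row */
  ierr = VecCreateSeq(PETSC_COMM_SELF, nneeded, v_local);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_SELF, nneeded, needed,
                         PETSC_COPY_VALUES, &from);CHKERRQ(ierr);
  /* Passing NULL for the destination IS means indices 0..nneeded-1 */
  ierr = VecScatterCreate(v, from, *v_local, NULL, &ctx);CHKERRQ(ierr);
  ierr = VecScatterBegin(ctx, v, *v_local, INSERT_VALUES,
                         SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx, v, *v_local, INSERT_VALUES,
                       SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = ISDestroy(&from);CHKERRQ(ierr);
  ierr = VecScatterDestroy(&ctx);CHKERRQ(ierr);
  return 0;
}
```

If the matrix is created as MATMPIAIJ, none of this is necessary: MatMult sets up the equivalent scatter and local vector internally.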


> I am interested in possibilities to scatter v to a sparse Vec type to
> avoid storing unnecessarily large amounts of zero entries. Is there a
> sparse vector format in PETSc compatible with the VecScatter procedure, or
> is there another efficient way to compute matrix-vector products without
> using large amounts of storage space on each process?
>
>
>
>
>
> Best Regards
>
> Hannes Brandt
>
>

