[petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process

Junchao Zhang junchao.zhang at gmail.com
Thu Jul 1 09:38:29 CDT 2021


Peder,
  PETSCSF_PATTERN_ALLTOALL only supports MPI_Alltoall (not MPI_Alltoallv), and
is only used internally by PETSc in a few places.
  I suggest you go with Matt's approach. Once it solves your problem, you can
distill an example that demonstrates the communication pattern. Then we can
see how to support that efficiently in PETSc.
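
  For reference, here is a minimal sketch of that kind of scatter: selected
entries of a parallel Vec are pulled into a sequential Vec that lives on one
chosen (possibly non-zero) rank. This is presumably the pattern behind Matt's
link, but the helper name and its arguments below are only illustrative, not
taken from that code:

#include <petscvec.h>

/* Illustrative sketch: gather the entries of parallel Vec x listed in idx
   (global indices, length n; only the values passed on rank "target" matter,
   other ranks just get an empty sequential Vec). */
static PetscErrorCode GatherToRank(Vec x, PetscMPIInt target, PetscInt n, const PetscInt idx[], Vec *seq)
{
  PetscErrorCode ierr;
  PetscMPIInt    rank;
  IS             is;
  VecScatter     scat;

  PetscFunctionBeginUser;
  ierr = MPI_Comm_rank(PetscObjectComm((PetscObject)x), &rank);CHKERRQ(ierr);
  if (rank != target) n = 0;                      /* only the target rank receives data */
  ierr = VecCreateSeq(PETSC_COMM_SELF, n, seq);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PetscObjectComm((PetscObject)x), n, idx, PETSC_COPY_VALUES, &is);CHKERRQ(ierr);
  /* NULL destination IS means "fill the sequential Vec contiguously, 0..n-1" */
  ierr = VecScatterCreate(x, is, *seq, NULL, &scat);CHKERRQ(ierr);
  ierr = VecScatterBegin(scat, x, *seq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(scat, x, *seq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterDestroy(&scat);CHKERRQ(ierr);
  ierr = ISDestroy(&is);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Calling a helper like this once per target rank, in a loop over all ranks, is
what produces the P gather-like scatters Jed describes below.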

  Thanks.
--Junchao Zhang


On Thu, Jul 1, 2021 at 7:42 AM Jed Brown <jed at jedbrown.org> wrote:

> Peder Jørgensgaard Olesen <pjool at mek.dtu.dk> writes:
>
> > Each process is assigned an indexed subset of the tasks (the tasks are
> > of constant size), and, for each task index, the relevant data is
> > scattered as a SEQVEC to the process (this is done for all processes in
> > each step, using an adaptation of the code in Matt's link). This way each
> > process receives just the data it needs to complete the task. While I'm
> > currently working with data sets of very moderate size, I'll eventually
> > need to handle something rather more massive, so I want to economize
> > memory where possible and give each process only the data it needs.
>
> From the sounds of it, this pattern ultimately boils down to MPI_Gather
> being called P times, where P is the size of the communicator. This will
> work okay when P is small, but it's much less efficient than calling
> MPI_Alltoall (or MPI_Alltoallv), which you can do by creating one PetscSF
> that ships the needed data to each task, using PETSCSF_PATTERN_ALLTOALL.
> You can see an example here:
>
>
> https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/tests/ex3.c#L93-151
>
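
For concreteness, here is a minimal sketch of the PETSCSF_PATTERN_ALLTOALL
usage Jed describes, along the lines of the linked ex3.c but not copied from
it. The data values are made up, and the trailing MPI_Op argument to
PetscSFBcastBegin/End assumes a recent PETSc (3.15 or later); older versions
omit it:

#include <petscsf.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  PetscSF        sf;
  PetscLayout    layout;
  PetscMPIInt    rank, size;
  PetscScalar   *rootdata, *leafdata;
  PetscInt       i;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);CHKERRQ(ierr);

  /* The ALLTOALL pattern gives every rank one root and one leaf per rank,
     so a single PetscSFBcast maps onto MPI_Alltoall (fixed-size blocks). */
  ierr = PetscLayoutCreate(PETSC_COMM_WORLD, &layout);CHKERRQ(ierr);
  ierr = PetscLayoutSetLocalSize(layout, size);CHKERRQ(ierr);
  ierr = PetscLayoutSetUp(layout);CHKERRQ(ierr);
  ierr = PetscSFCreate(PETSC_COMM_WORLD, &sf);CHKERRQ(ierr);
  ierr = PetscSFSetGraphWithPattern(sf, layout, PETSCSF_PATTERN_ALLTOALL);CHKERRQ(ierr);
  ierr = PetscSFSetUp(sf);CHKERRQ(ierr);

  ierr = PetscMalloc2(size, &rootdata, size, &leafdata);CHKERRQ(ierr);
  for (i = 0; i < size; i++) rootdata[i] = 100.0*rank + i; /* value this rank contributes for rank i */

  /* After the broadcast, leafdata on each rank holds one value from every rank */
  ierr = PetscSFBcastBegin(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE);CHKERRQ(ierr);
  ierr = PetscSFBcastEnd(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE);CHKERRQ(ierr);

  ierr = PetscFree2(rootdata, leafdata);CHKERRQ(ierr);
  ierr = PetscSFDestroy(&sf);CHKERRQ(ierr);
  ierr = PetscLayoutDestroy(&layout);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Because this pattern corresponds to MPI_Alltoall, every rank exchanges exactly
one fixed-size unit with every other rank; variable-sized exchanges in the
spirit of MPI_Alltoallv would instead need a general graph set with
PetscSFSetGraph.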