[petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process
Junchao Zhang
junchao.zhang at gmail.com
Fri Jul 2 21:42:48 CDT 2021
Peder,
Your example scatters a parallel vector to a sequential vector on one
rank, a pattern like MPI_Gatherv.
I would like to see how you scatter parallel vectors to sequential vectors on
every rank.
--Junchao Zhang
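For context, the single-rank pattern discussed here is usually built with
VecScatterCreate and an index set listing the wanted global indices. Below is
a minimal sketch of that pattern, not Peder's attached code; the helper name
GatherToRank, the target-rank argument, and the choice to gather the whole
vector are illustrative assumptions.

#include <petscvec.h>

/* Illustrative sketch (hypothetical helper): gather all entries of a
   parallel Vec into a sequential Vec on one chosen rank. */
static PetscErrorCode GatherToRank(Vec par, PetscMPIInt target, Vec *seq)
{
  PetscErrorCode ierr;
  MPI_Comm       comm;
  PetscMPIInt    rank;
  PetscInt       N, nlocal;
  IS             is;
  VecScatter     scatter;

  PetscFunctionBeginUser;
  ierr = PetscObjectGetComm((PetscObject)par, &comm);CHKERRQ(ierr);
  MPI_Comm_rank(comm, &rank);
  ierr = VecGetSize(par, &N);CHKERRQ(ierr);

  /* The target rank asks for every global entry; all other ranks ask for none */
  nlocal = (rank == target) ? N : 0;
  ierr = VecCreateSeq(PETSC_COMM_SELF, nlocal, seq);CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_SELF, nlocal, 0, 1, &is);CHKERRQ(ierr);

  /* NULL for the destination IS means: fill the sequential Vec in order */
  ierr = VecScatterCreate(par, is, *seq, NULL, &scatter);CHKERRQ(ierr);
  ierr = VecScatterBegin(scatter, par, *seq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(scatter, par, *seq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);

  ierr = ISDestroy(&is);CHKERRQ(ierr);
  ierr = VecScatterDestroy(&scatter);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

To give each rank only a subset instead of the whole vector, the stride IS
would be replaced by an ISCreateGeneral holding that rank's needed global
indices, which is essentially the per-task scatter Peder describes further
down the thread.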
On Fri, Jul 2, 2021 at 4:07 AM Peder Jørgensgaard Olesen <pjool at mek.dtu.dk>
wrote:
> Matt's method seems to work well, though instead of editing the actual
> function I put the relevant parts directly into my code. I made the small
> example attached here.
>
>
> I might look into Star Forests at some point, though they're not really
> touched upon in the manual (I will probably take a look at your paper,
> https://arxiv.org/abs/2102.13018).
>
>
> Med venlig hilsen / Best regards
>
> Peder
> ------------------------------
> *From:* Junchao Zhang <junchao.zhang at gmail.com>
> *Sent:* 1 July 2021 16:38:29
> *To:* Jed Brown
> *Cc:* Peder Jørgensgaard Olesen; petsc-users at mcs.anl.gov
> *Subject:* Re: [petsc-users] Scatter parallel Vec to sequential Vec on
> non-zeroth process
>
> Peder,
> PETSCSF_PATTERN_ALLTOALL only supports MPI_Alltoall (not MPI_Alltoallv), and
> is only used internally by PETSc in a few places.
> I suggest you go with Matt's approach. Once it solves your problem, you
> can distill an example that demonstrates the communication pattern. Then we
> can see how to support it efficiently in PETSc.
>
> Thanks.
> --Junchao Zhang
>
>
> On Thu, Jul 1, 2021 at 7:42 AM Jed Brown <jed at jedbrown.org> wrote:
>
>> Peder Jørgensgaard Olesen <pjool at mek.dtu.dk> writes:
>>
>> > Each process is assigned an indexed subset of the tasks (the tasks are
>> of constant size), and, for each task index, the relevant data is scattered
>> to a sequential Vec (VECSEQ) on the process (this is done for all processes
>> in each step, using an adaptation of the code in Matt's link). This way each
>> process receives just the data it needs to complete the task. While I'm
>> currently working with very moderate-size data sets, I'll eventually need to
>> handle something rather more massive, so I want to economize on memory where
>> possible and give each process only the data it needs.
>>
>> From the sounds of it, this pattern ultimately boils down to MPI_Gather
>> being called P times, where P is the size of the communicator. This will
>> work okay when P is small, but it's much less efficient than calling
>> MPI_Alltoall (or MPI_Alltoallv), which you can do by creating one PetscSF
>> with PETSCSF_PATTERN_ALLTOALL that ships the needed data to each task. You
>> can see an example here (a minimal sketch also follows the quoted thread
>> below):
>>
>>
>> https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/tests/ex3.c#L93-151
>>
>
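To make Jed's all-to-all suggestion a bit more concrete, here is a minimal
sketch of a PetscSF built with PETSCSF_PATTERN_ALLTOALL. It is not the linked
ex3.c: the buffer names and the one-scalar-per-rank-pair exchange are
assumptions, and the MPI_REPLACE argument to PetscSFBcastBegin/End assumes
PETSc 3.15 or later.

#include <petscis.h>
#include <petscsf.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  PetscSF        sf;
  PetscLayout    map;
  PetscMPIInt    size, rank;
  PetscScalar   *rootdata, *leafdata;
  PetscInt       i;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  MPI_Comm_size(PETSC_COMM_WORLD, &size);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  /* One root (send slot) and one leaf (receive slot) per remote rank,
     as in MPI_Alltoall with a count of 1 */
  ierr = PetscMalloc2(size, &rootdata, size, &leafdata);CHKERRQ(ierr);
  for (i = 0; i < size; i++) rootdata[i] = (PetscScalar)rank;

  ierr = PetscLayoutCreate(PETSC_COMM_WORLD, &map);CHKERRQ(ierr);
  ierr = PetscLayoutSetLocalSize(map, size);CHKERRQ(ierr);
  ierr = PetscLayoutSetUp(map);CHKERRQ(ierr);

  ierr = PetscSFCreate(PETSC_COMM_WORLD, &sf);CHKERRQ(ierr);
  ierr = PetscSFSetGraphWithPattern(sf, map, PETSCSF_PATTERN_ALLTOALL);CHKERRQ(ierr);
  ierr = PetscSFSetUp(sf);CHKERRQ(ierr);

  /* Exchange: afterwards leafdata[j] on each rank holds the value rank j sent to it */
  ierr = PetscSFBcastBegin(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE);CHKERRQ(ierr);
  ierr = PetscSFBcastEnd(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE);CHKERRQ(ierr);

  ierr = PetscSFDestroy(&sf);CHKERRQ(ierr);
  ierr = PetscLayoutDestroy(&map);CHKERRQ(ierr);
  ierr = PetscFree2(rootdata, leafdata);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

As Junchao notes above, this pattern gives MPI_Alltoall semantics only, i.e.
a fixed unit per rank pair; variable-sized exchanges (MPI_Alltoallv-style)
would need a general PetscSF graph built with PetscSFSetGraph instead.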