<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</head>
<body>
<style type="text/css" style="display:none;"><!-- P {margin-top:0;margin-bottom:0;} --></style>
<div id="divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Helvetica,sans-serif;" dir="ltr">
<p>I agree, using the loop you describe would definitely not be a clever way of doing it, nor is it at all what I was going for. The code with Matt's method indeed does what I needed. I'd be happy if it could be further optimized.</p>
<p><br>
</p>
<p>Med venlig hilsen / Best regards</p>
<p>Peder</p>
</div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Junchao Zhang <junchao.zhang@gmail.com><br>
<b>Sent:</b> 4 July 2021 05:36:29<br>
<b>To:</b> Peder Jørgensgaard Olesen<br>
<b>Cc:</b> Jed Brown; petsc-users@mcs.anl.gov<br>
<b>Subject:</b> Re: Sv: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process</font>
<div> </div>
</div>
<div>
<div dir="ltr"><span style="color:rgb(0,0,0);font-family:Calibri,Helvetica,sans-serif;font-size:16px">VecScatterCreateToAll() scatters the MPI vector to a sequential vector on every rank (as if each rank has a duplicate of the same sequential vector).</span>
<div><span style="color:rgb(0,0,0);font-family:Calibri,Helvetica,sans-serif;font-size:16px">If the sample code you provided is what you want, it is fine, and we just need to implement a minor optimization in PETSc to make it efficient. But if you want to put the scatter in a loop as follows, then it is very bad code.</span></div>
<div><span style="color:rgb(0,0,0);font-family:Calibri,Helvetica,sans-serif;font-size:16px"><br>
</span></div>
<div><span style="color:rgb(0,0,0);font-size:16px"><font face="monospace">for (p=0; p<size; p++) {</font></span></div>
<div>
<pre class="gmail-aLF-aPX-K0-aPE" style="margin-top:0px;margin-bottom:0px;white-space:pre-wrap;color:rgb(0,0,0);font-size:14px">  loc_tgt_size = 0;
  if (rank == p) {
    loc_tgt_size = n;
  }
  ierr = VecCreateSeq(PETSC_COMM_SELF, loc_tgt_size, &tgt_vec); CHKERRQ(ierr);
  ierr = VecZeroEntries(tgt_vec); CHKERRQ(ierr);
  // Scatter the source vector to the target vector on one process
  ierr = ISCreateStride(PETSC_COMM_SELF, loc_tgt_size, 0, 1, &is); CHKERRQ(ierr);
  ierr = VecScatterCreate(src_vec, is, tgt_vec, is, &sctx); CHKERRQ(ierr);
  ierr = VecScatterBegin(sctx, src_vec, tgt_vec, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
  ierr = VecScatterEnd(sctx, src_vec, tgt_vec, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
</pre>
<font face="monospace"> ...</font></div>
<div><span style="color:rgb(0,0,0);font-size:16px"><font face="monospace">}</font></span></div>
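For comparison, the all-ranks scatter that VecScatterCreateToAll() provides, described above, can be sketched roughly as follows. This is an untested sketch, not a definitive implementation; the function name `GatherToAll` and the variable names are illustrative only, and error handling is abbreviated.

```c
#include <petscvec.h>

/* Sketch (illustrative names, assumes src_vec is an assembled MPI Vec):
   gather the parallel vector into an identical sequential Vec on every rank. */
PetscErrorCode GatherToAll(Vec src_vec)
{
  VecScatter     sctx;
  Vec            tgt_vec;  /* full-length sequential copy, one per rank */
  PetscErrorCode ierr;

  /* Create the scatter and the sequential target vector in one call */
  ierr = VecScatterCreateToAll(src_vec, &sctx, &tgt_vec); CHKERRQ(ierr);
  ierr = VecScatterBegin(sctx, src_vec, tgt_vec, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
  ierr = VecScatterEnd(sctx, src_vec, tgt_vec, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
  /* ... use tgt_vec ... */
  ierr = VecScatterDestroy(&sctx); CHKERRQ(ierr);
  ierr = VecDestroy(&tgt_vec); CHKERRQ(ierr);
  return 0;
}
```

Unlike the per-rank loop, the scatter context here is created once and the communication is done in a single collective operation.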
<div><span style="color:rgb(0,0,0);font-family:Calibri,Helvetica,sans-serif;font-size:16px"> </span>
<div>
<div>
<div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature">
<div dir="ltr">--Junchao Zhang</div>
</div>
</div>
<br>
</div>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Sat, Jul 3, 2021 at 1:27 PM Peder Jørgensgaard Olesen <<a href="mailto:pjool@mek.dtu.dk">pjool@mek.dtu.dk</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div id="gmail-m_5608889046999411376divtagdefaultwrapper" style="font-size:12pt;color:rgb(0,0,0);font-family:Calibri,Helvetica,sans-serif" dir="ltr">
<p><span style="font-size:12pt">Yeah, scattering a parallel vector to a sequential one on a single rank was exactly what I wanted to do (apologies if I didn't phrase that clearly). A code like the one I shared does just what I needed, replacing size-1 with the desired target rank in the if-statement.</span><br>
</p>
<p><br>
</p>
<p>Isn't what you describe what VecScatterCreateToAll is for?</p>
<p><br>
</p>
<p><br>
</p>
<p>Med venlig hilsen / Best regards</p>
<p>Peder</p>
</div>
<hr style="display:inline-block;width:98%">
<div id="gmail-m_5608889046999411376divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Junchao Zhang <<a href="mailto:junchao.zhang@gmail.com" target="_blank">junchao.zhang@gmail.com</a>><br>
<b>Sent:</b> 3 July 2021 04:42:48<br>
<b>To:</b> Peder Jørgensgaard Olesen<br>
<b>Cc:</b> Jed Brown; <a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><br>
<b>Subject:</b> Re: Sv: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process</font>
<div> </div>
</div>
<div>
<div dir="ltr">Peder,
<div> Your example scatters a parallel vector to a sequential vector on one rank. It is a pattern like MPI_Gatherv.</div>
<div> I want to see how you scatter parallel vectors to sequential vectors on every rank.</div>
<div><br>
</div>
<div>--Junchao Zhang<br>
</div>
<div><br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Fri, Jul 2, 2021 at 4:07 AM Peder Jørgensgaard Olesen <<a href="mailto:pjool@mek.dtu.dk" target="_blank">pjool@mek.dtu.dk</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div id="gmail-m_5608889046999411376gmail-m_-6084066952781686015divtagdefaultwrapper" style="font-size:12pt;color:rgb(0,0,0);font-family:Calibri,Helvetica,sans-serif" dir="ltr">
<p>Matt's method seems to work well, though instead of editing the actual function I put the relevant parts directly into my code. I made the small example attached here.<br>
</p>
<p><br>
</p>
<p>I might look into Star Forests at some point, though it's not really touched upon in the manual (I will probably take a look at your paper,
<a href="https://arxiv.org/abs/2102.13018" id="gmail-m_5608889046999411376gmail-m_-6084066952781686015LPlnk80864" target="_blank">
https://arxiv.org/abs/2102.13018</a>).<br>
</p>
<p><br>
</p>
<p>Med venlig hilsen / Best regards</p>
<p>Peder<br>
</p>
</div>
<hr style="display:inline-block;width:98%">
<div id="gmail-m_5608889046999411376gmail-m_-6084066952781686015divRplyFwdMsg" dir="ltr">
<font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Junchao Zhang <<a href="mailto:junchao.zhang@gmail.com" target="_blank">junchao.zhang@gmail.com</a>><br>
<b>Sent:</b> 1 July 2021 16:38:29<br>
<b>To:</b> Jed Brown<br>
<b>Cc:</b> Peder Jørgensgaard Olesen; <a href="mailto:petsc-users@mcs.anl.gov" target="_blank">
petsc-users@mcs.anl.gov</a><br>
<b>Subject:</b> Re: Sv: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process</font>
<div> </div>
</div>
<div>
<div dir="ltr">Peder,
<div> PETSCSF_PATTERN_ALLTOALL only supports MPI_Alltoall (not MPI_Alltoallv) and is used internally by PETSc in only a few places. </div>
<div> I suggest you go with Matt's approach. Once it solves your problem, you can distill an example to demonstrate the communication pattern. Then we can see how to support it efficiently in PETSc. </div>
<div><br>
</div>
<div> Thanks.</div>
<div>
<div>
<div dir="ltr">
<div dir="ltr">--Junchao Zhang</div>
</div>
</div>
<br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Thu, Jul 1, 2021 at 7:42 AM Jed Brown <<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Peder Jørgensgaard Olesen <<a href="mailto:pjool@mek.dtu.dk" target="_blank">pjool@mek.dtu.dk</a>> writes:<br>
<br>
> Each process is assigned an indexed subset of the tasks (the tasks are of constant size), and, for each task index, the relevant data is scattered as a SEQVEC to the process (this is done for all processes in each step, using an adaptation of the code in Matt's link). This way each process receives only the data it needs to complete the task. While I'm currently working with very moderate-size data sets, I'll eventually need to handle something rather more massive, so I want to economize memory where possible and give each process only the data it needs.<br>
<br>
From the sounds of it, this pattern ultimately boils down to MPI_Gather being called P times, where P is the size of the communicator. This will work okay when P is small, but it's much less efficient than calling MPI_Alltoall (or MPI_Alltoallv), which you can do by creating one PetscSF that ships the needed data to each task, using PETSCSF_PATTERN_ALLTOALL. You can see an example here:<br>
<br>
<a href="https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/tests/ex3.c#L93-151" rel="noreferrer" target="_blank">https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/tests/ex3.c#L93-151</a><br>
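A rough sketch of the SF setup Jed describes, modeled on that example, might look like the following. This is an untested outline under assumptions: the function name `AllToAllSketch` is illustrative, and the exact signatures (notably the layout argument and the op parameter of PetscSFBcastBegin) may differ between PETSc versions.

```c
#include <petscsf.h>

/* Sketch (illustrative, version-dependent): set up one SF with the
   alltoall pattern so every rank exchanges one unit with every rank. */
PetscErrorCode AllToAllSketch(MPI_Comm comm)
{
  PetscSF        sf;
  PetscLayout    map;
  PetscMPIInt    size, rank;
  PetscInt      *rootdata, *leafdata, i;
  PetscErrorCode ierr;

  ierr = MPI_Comm_size(comm, &size); CHKERRQ(ierr);
  ierr = MPI_Comm_rank(comm, &rank); CHKERRQ(ierr);
  ierr = PetscSFCreate(comm, &sf); CHKERRQ(ierr);
  /* one root entry per destination rank on each process */
  ierr = PetscLayoutCreateFromSizes(comm, size, PETSC_DECIDE, 1, &map); CHKERRQ(ierr);
  ierr = PetscSFSetGraphWithPattern(sf, map, PETSCSF_PATTERN_ALLTOALL); CHKERRQ(ierr);
  ierr = PetscSFSetUp(sf); CHKERRQ(ierr);
  ierr = PetscMalloc2(size, &rootdata, size, &leafdata); CHKERRQ(ierr);
  for (i = 0; i < size; i++) rootdata[i] = rank; /* payload to each rank */
  ierr = PetscSFBcastBegin(sf, MPIU_INT, rootdata, leafdata, MPI_REPLACE); CHKERRQ(ierr);
  ierr = PetscSFBcastEnd(sf, MPIU_INT, rootdata, leafdata, MPI_REPLACE); CHKERRQ(ierr);
  /* leafdata now holds one value from each rank, as with MPI_Alltoall */
  ierr = PetscFree2(rootdata, leafdata); CHKERRQ(ierr);
  ierr = PetscLayoutDestroy(&map); CHKERRQ(ierr);
  ierr = PetscSFDestroy(&sf); CHKERRQ(ierr);
  return 0;
}
```

The point of the single SF is that all P exchanges happen in one collective rather than P separate gathers.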
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</body>
</html>