<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On 5 March 2018 at 13:56, Åsmund Ervik <span dir="ltr"><<a href="mailto:Asmund.Ervik@sintef.no" target="_blank">Asmund.Ervik@sintef.no</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">As Jed suggests, computing the (re)partitioning is straightforward in my 1D case. We're not planning to move this to multiple dimensions (we have another type of solver for that).<br>
<br>
So if it's possible to expose the repartitioning code for DAs, I'd be very happy to go this route. Is it a lot of work to do this?<br></blockquote><div><br></div><div>Exposing the code for repartitioning<span class="sewwtk22ud4vx25"></span><span class="sewwurmyiilwgnl"></span> the DA does not make much sense to me given the simplicity of the DMDA and given fact there other more general methods available in petsc.</div><div><br></div><div>If you want to write the repartition code yourself for a DMDA, it would be straight forward to follow the pattern in Telescope and simplify it for the 1D case.</div><div>Start with this file:</div><div> <a href="http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/telescope/telescope_dmda.c.html">http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/telescope/telescope_dmda.c.html</a><span class="sewwtk22ud4vx25"></span><span class="sewwurmyiilwgnl"></span><br></div><div>and look at </div><div> PCTelescopeSetUp_dmda()<span class="sewwtk22ud4vx25"></span><span class="sewwurmyiilwgnl"></span></div><div><br></div><div>There are three parts to this function:</div><div><br></div><div>[1] Copying the definition of the parent DMDA for the new repartitioned DMDA (PCTelescopeSetUp_dmda_repart()<span class="sewwtk22ud4vx25"></span><span class="sewwurmyiilwgnl"></span>)</div><div><br></div><div>[2] Defining the coordinates for the repartitioned DMDA (PCTelescopeSetUp_dmda_repart_coors()<span class="sewwtk22ud4vx25"></span><span class="sewwurmyiilwgnl"></span>)</div><div><br></div><div>[3] Defining the permutation which maps vectors from the original DMDA to the repartitioned DMDA (PCTelescopeSetUp_dmda_permutation_2d()<span class="sewwtk22ud4vx25"></span><span class="sewwurmyiilwgnl"></span>)</div><div><br></div><div>* Assuming you have the new layout defined (via Metis or MPI_Scan()), PCTelescopeSetUp_dmda_repart() would look basically the same expect you'd change the call<br></div><div> DMDASetOwnershipRanges(ctx->dmrepart,NULL,NULL,NULL);<span class="sewwtk22ud4vx25"></span><span class="sewwurmyiilwgnl"></span><br></div><div>to use an array defining the new layout of the repartitioned DMDA</div><div><br></div><div>* PCTelescopeSetUp_dmda_repart_coors2d() is straight forward to modify for the 1D case by taking out the loop over j</div><div><br></div><div>* PCTelescopeSetUp_dmda_permutation_2d() is also straight forward to modify for the 1D by taking out the loop over j and not using dimensionally index variables (like startI[1]<span class="sewwtk22ud4vx25"></span><span class="sewwurmyiilwgnl"></span>) associated the j direction. The following functions support 1D and can be re-used:</div><div> _DMDADetermineRankFromGlobalIJK()<span class="sewwtk22ud4vx25"></span><span class="sewwurmyiilwgnl"></span><br></div><div> _DMDADetermineGlobalS0()<span class="sewwtk22ud4vx25"></span><span class="sewwurmyiilwgnl"></span><br></div><div><br></div><div>Obviously you also wouldn't need guards like</div><div> if (isActiveRank(sred->psubcomm)) {<span class="sewwtk22ud4vx25"></span><span class="sewwurmyiilwgnl"></span><br></div><div><br></div><div><br></div><div>Thanks,</div><div> Dave</div><div><br></div><div><br></div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
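For illustration, here is a rough, untested sketch of what creating the repartitioned DMDA could look like in the 1D case once the new layout is known. The function name and arguments are placeholders (not Telescope code); lx has one entry per rank of comm and must sum to the global size M.

  /* Sketch only: create a 1D DMDA whose ownership ranges are prescribed by lx[].
     M, dof and sw would be copied from the parent DMDA, as in
     PCTelescopeSetUp_dmda_repart(); the boundary type should be copied too
     (DM_BOUNDARY_NONE is just for the sketch). */
  #include <petscdmda.h>

  PetscErrorCode CreateRepartitionedDMDA1d(MPI_Comm comm,PetscInt M,PetscInt dof,PetscInt sw,
                                           const PetscInt lx[],DM *dmrepart)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = DMDACreate1d(comm,DM_BOUNDARY_NONE,M,dof,sw,NULL,dmrepart);CHKERRQ(ierr);
    /* The analogue of the DMDASetOwnershipRanges() call mentioned above:
       pass the new layout instead of NULL */
    ierr = DMDASetOwnershipRanges(*dmrepart,lx,NULL,NULL);CHKERRQ(ierr);
    ierr = DMSetUp(*dmrepart);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

In 1D you could equivalently pass lx directly as the sixth argument of DMDACreate1d() instead of calling DMDASetOwnershipRanges() separately.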
>
> I have another question on a similar-but-different problem for 3D, but I'll write a separate mail on it.
>
> Best regards,
> Åsmund
>
> > -----Original Message-----
> > From: Jed Brown [mailto:jed@jedbrown.org]
> > Sent: Monday, March 5, 2018 2:32 PM
> > To: Dave May <dave.mayhem23@gmail.com>; Åsmund Ervik <Asmund.Ervik@sintef.no>
> > Cc: petsc-users@mcs.anl.gov
> > Subject: Re: [petsc-users] Load balancing / redistributing a 1D DM
> >
> > Dave May <dave.mayhem23@gmail.com> writes:
> >
> > > For a 1D problem such as yours, I would use your favourite graph
> > > partitioner (Metis, Parmetis, Scotch) together with your cell based
> > > weighting and repartition the data yourself.
> >
> > That's overkill in 1D. You can MPI_Allreduce(SUM) and MPI_Scan(SUM) the
> > weights, then find the transition indices in each subdomain. It'll be cheaper,
> > more intuitive/deterministic, and avoid the extra library dependency. Of
> > course if you think you may want to move to multiple dimensions, it would
> > make sense to consider DMPlex or DMForest.
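For completeness, here is a rough, untested sketch of the MPI_Allreduce()/MPI_Scan() approach Jed describes above, turning per-cell weights (assumed positive) into the lx array used in the sketch further up. All names here are placeholders; each rank passes the weights of the cells it currently owns, in global order.

  /* Sketch only: compute a weight-balanced 1D layout. On return lx[r] is the
     number of cells rank r should own in the repartitioned layout (one entry
     per rank of comm, summing to the global number of cells). */
  #include <petscsys.h>

  PetscErrorCode ComputeBalancedOwnership1d(MPI_Comm comm,PetscInt nlocal,const PetscReal w[],PetscInt lx[])
  {
    PetscErrorCode ierr;
    PetscMPIInt    size;
    PetscReal      mysum = 0.0,offset,total,mid;
    PetscInt       i,r,*mycounts;

    PetscFunctionBeginUser;
    ierr = MPI_Comm_size(comm,&size);CHKERRQ(ierr);
    for (i=0; i<nlocal; i++) mysum += w[i];

    /* Weight owned by all lower ranks (exclusive prefix sum) */
    ierr = MPI_Scan(&mysum,&offset,1,MPIU_REAL,MPI_SUM,comm);CHKERRQ(ierr);
    offset -= mysum;
    /* Total weight across all ranks */
    ierr = MPI_Allreduce(&mysum,&total,1,MPIU_REAL,MPI_SUM,comm);CHKERRQ(ierr);

    ierr = PetscCalloc1(size,&mycounts);CHKERRQ(ierr);
    for (i=0; i<nlocal; i++) {
      /* Map the midpoint of this cell's weight interval to a target rank */
      mid = offset + 0.5*w[i];
      r   = (PetscInt)(size*mid/total);
      if (r > size-1) r = size-1;
      mycounts[r]++;
      offset += w[i];
    }
    /* Combine the local counts so every rank knows the complete new layout */
    ierr = MPI_Allreduce(mycounts,lx,size,MPIU_INT,MPI_SUM,comm);CHKERRQ(ierr);
    ierr = PetscFree(mycounts);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

Since cells are assigned to ranks in increasing order of global cumulative weight, the result is a set of contiguous ownership ranges, and the transition indices fall out of the MPI_Scan() offsets. You may want a fallback if some rank would end up with zero cells, since I don't think the DMDA is happy with empty ranks.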