2017-03-17 22:52 GMT+03:00 Barry Smith <bsmith@mcs.anl.gov>:
>   Stefano,
>
>     Thanks, this is very helpful.
>
> ---------------------
> Why not? Here is my naive implementation with AlltoAll, which performs better in my case:
>
> PetscErrorCode PetscGatherMessageLengths(MPI_Comm comm,PetscMPIInt nsends,PetscMPIInt nrecvs,const PetscMPIInt ilengths[],PetscMPIInt **onodes,PetscMPIInt **olengths)
> {
>   PetscErrorCode ierr;
>   PetscMPIInt    size,i,j;
>   PetscMPIInt    *all_lengths;
>
>   PetscFunctionBegin;
>   ierr = MPI_Comm_size(comm,&size);CHKERRQ(ierr);
>   /* every rank tells every other rank how many items it will send (zero means no message) */
>   ierr = PetscMalloc(size*sizeof(PetscMPIInt),&all_lengths);CHKERRQ(ierr);
>   ierr = MPI_Alltoall((void*)ilengths,1,MPI_INT,all_lengths,1,MPI_INT,comm);CHKERRQ(ierr);
>   /* compress the full table into the nrecvs actual senders and their message lengths */
>   ierr = PetscMalloc(nrecvs*sizeof(PetscMPIInt),olengths);CHKERRQ(ierr);
>   ierr = PetscMalloc(nrecvs*sizeof(PetscMPIInt),onodes);CHKERRQ(ierr);
>   for (i=0,j=0; i<size; i++) {
>     if (all_lengths[i]) {
>       (*olengths)[j] = all_lengths[i];
>       (*onodes)[j]   = i;
>       j++;
>     }
>   }
>   if (j != nrecvs) SETERRQ2(comm,PETSC_ERR_PLIB,"Unexpected number of senders %d != %d",j,nrecvs);
>   ierr = PetscFree(all_lengths);CHKERRQ(ierr);
>   PetscFunctionReturn(0);
> }
> -----------------------
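Just for reference, a minimal sketch of one way the nrecvs argument of the routine above can be obtained beforehand (it reuses comm, size, ilengths, and ierr from that routine, and assumes an MPI-2.2 implementation for MPI_Reduce_scatter_block); in PETSc this count comes from PetscGatherNumberOfMessages, so the snippet is only an illustration of the counting step, not the library code:

  PetscMPIInt *iflags,nrecvs,r;

  ierr = PetscMalloc(size*sizeof(PetscMPIInt),&iflags);CHKERRQ(ierr);
  /* mark with 1 every rank this process will send a message to */
  for (r=0; r<size; r++) iflags[r] = ilengths[r] ? 1 : 0;
  /* each rank receives the sum of the flags addressed to it, i.e. its number of senders */
  ierr = MPI_Reduce_scatter_block(iflags,&nrecvs,1,MPI_INT,MPI_SUM,comm);CHKERRQ(ierr);
  ierr = PetscFree(iflags);CHKERRQ(ierr);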
>
> However, I think this is only half the answer. If I look at VecScatterCreate_PtoS(), for example, it has
>
>   ierr = PetscGatherNumberOfMessages(comm,NULL,nprocs,&nrecvs);CHKERRQ(ierr);
>   ierr = PetscGatherMessageLengths(comm,nsends,nrecvs,nprocs,&onodes1,&olengths1);CHKERRQ(ierr);
>   ierr = PetscSortMPIIntWithArray(nrecvs,onodes1,olengths1);CHKERRQ(ierr);
>   recvtotal = 0; for (i=0; i<nrecvs; i++) recvtotal += olengths1[i];
>
>   /* post receives: */
>   ierr = PetscMalloc3(recvtotal,&rvalues,nrecvs,&source,nrecvs,&recv_waits);CHKERRQ(ierr);
>   count = 0;
>   for (i=0; i<nrecvs; i++) {
>     ierr = MPI_Irecv((rvalues+count),olengths1[i],MPIU_INT,onodes1[i],tag,comm,recv_waits+i);CHKERRQ(ierr);
>     count += olengths1[i];
>   }
>
>   /* do sends:
>      1) starts[i] gives the starting index in svalues for stuff going to
>      the ith processor
>   */
>   nxr = 0;
>   for (i=0; i<nx; i++) {
>     if (owner[i] != rank) nxr++;
>   }
>   ierr = PetscMalloc3(nxr,&svalues,nsends,&send_waits,size+1,&starts);CHKERRQ(ierr);
>
>   starts[0] = 0;
>   for (i=1; i<size; i++) starts[i] = starts[i-1] + nprocs[i-1];
>   for (i=0; i<nx; i++) {
>     if (owner[i] != rank) svalues[starts[owner[i]]++] = bs*inidx[i];
>   }
>   starts[0] = 0;
>   for (i=1; i<size+1; i++) starts[i] = starts[i-1] + nprocs[i-1];
>   count = 0;
>   for (i=0; i<size; i++) {
>     if (nprocs[i]) {
>       ierr = MPI_Isend(svalues+starts[i],nprocs[i],MPIU_INT,i,tag,comm,send_waits+count++);CHKERRQ(ierr);
>     }
>   }
>
> So I need to (1) use your all-to-all PetscGatherMessageLengths(), but also (2) replace the sends and receives above with MPI_Alltoallv().
>
> Is that correct? Did you also fix (2), or did fixing (1) help so much that you didn't need to fix (2)?

At that time I just fixed (1), not (2). My specific problem was not with timings per se, but with MPI (Intel MPI, if I remember correctly) crashing during the rendezvous with thousands of processes.
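For reference, a minimal sketch of what (2) could look like if the MPI_Irecv/MPI_Isend exchange in the excerpt above were replaced by a single MPI_Alltoallv. It reuses the names from the excerpt (svalues, rvalues, nprocs, starts, onodes1, olengths1, size, nrecvs, comm) and assumes nprocs[rank] == 0, i.e. that purely local entries are handled separately. This is only an illustration of the idea, not existing PETSc code:

  PetscMPIInt *sendcounts,*sdispls,*recvcounts,*rdispls,r;

  ierr = PetscMalloc4(size,&sendcounts,size,&sdispls,size,&recvcounts,size,&rdispls);CHKERRQ(ierr);
  /* per-rank send counts/displacements, taken from the nprocs[]/starts[] layout already built for svalues */
  for (r=0; r<size; r++) {
    sendcounts[r] = nprocs[r];
    sdispls[r]    = (PetscMPIInt)starts[r];
    recvcounts[r] = 0;
  }
  /* expand the compressed (onodes1,olengths1) pairs into per-rank receive counts */
  for (r=0; r<nrecvs; r++) recvcounts[onodes1[r]] = olengths1[r];
  rdispls[0] = 0;
  for (r=1; r<size; r++) rdispls[r] = rdispls[r-1] + recvcounts[r-1];
  /* one collective exchange instead of nrecvs MPI_Irecv() plus nsends MPI_Isend() */
  ierr = MPI_Alltoallv(svalues,sendcounts,sdispls,MPIU_INT,rvalues,recvcounts,rdispls,MPIU_INT,comm);CHKERRQ(ierr);
  ierr = PetscFree4(sendcounts,sdispls,recvcounts,rdispls);CHKERRQ(ierr);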
> Don't go to sleep yet, I may have more questions :-)
>
>   Barry
>
> > On Mar 17, 2017, at 2:32 PM, Stefano Zampini <stefano.zampini@gmail.com> wrote:
> >
> > Pierre,
> >
> > I remember I had a similar problem some years ago when working with matrices with "process-dense" rows (i.e., when the off-diagonal part is shared by many processes). I fixed the issue by changing the implementation of PetscGatherMessageLengths from rendezvous to all-to-all.
> >
> > Barry, if you have access to petsc-maint, the title of the thread is "Problem with PetscGatherMessageLengths".
> >
> > Hope this helps,
> > Stefano
> >
> > 2017-03-17 22:21 GMT+03:00 Barry Smith <bsmith@mcs.anl.gov>:
> >
> > > On Mar 17, 2017, at 4:04 AM, Pierre Jolivet <Pierre.Jolivet@enseeiht.fr> wrote:
> > >
> > > On Thu, 16 Mar 2017 15:37:17 -0500, Barry Smith wrote:
> > >>> On Mar 16, 2017, at 10:57 AM, Pierre Jolivet <pierre.jolivet@enseeiht.fr> wrote:
> > >>>
> > >>> Thanks, Barry.
> > >>> I actually tried the application myself with my optimized build + your option. I'm attaching two logs for a strong scaling analysis; if someone could spend a minute or two looking at the numbers, I'd be really grateful:
> > >>> 1) MatAssembly still takes a rather long time IMHO. This is actually the bottleneck of my application. The problem, especially on 1600 cores, is that I don't know whether the huge time (almost a 5x slow-down w.r.t. the run on 320 cores) is due to MatMPIAIJSetPreallocationCSR (which I assumed beforehand was a no-op, but which is clearly not the case looking at the run on 320 cores) or to the option -pc_bjacobi_blocks 320, which also does one MatAssembly.
> > >>
> > >> There is one additional synchronization point in the
> > >> MatAssemblyEnd that has not/cannot be removed. This is the
> > >> construction of the VecScatter; I think that likely explains the huge
> > >> amount of time there.
> >
> > This concerns me:
> >
> > MatAssemblyEnd 2 1.0 7.5767e+01 1.0 0.00e+00 0.0 5.1e+06 9.4e+03 1.6e+01 64 0100 8 14 64 0100 8 14 0
> >
> > I am thinking this is all the communication needed to set up the scatter. Do you have access to any performance profilers, like Intel speedshop, to see what is going on during all this time?
> >
> >
> > -vecscatter_alltoall uses all-to-all communication in the scatters, but it does not use all-to-all in setting up the scatter (that is, determining exactly what needs to be scattered each time). I think this is the problem. We need to add more scatter set-up code to optimize this case.
> >
> >
> > >>
> > >>> 2) The other bottleneck is MatMult, which itself calls VecScatter. Since the structure of the matrix is rather dense, I'm guessing the communication pattern should be similar to an all-to-all. After having a look at the thread "VecScatter scaling problem on KNL", would you also suggest that I use -vecscatter_alltoall, or do you think this would not be appropriate for the MatMult?
> > >>
> > >> Please run with
> > >>
> > >>   -vecscatter_view ::ascii_info
> > >>
> > >> This will give information about the number of messages and their sizes
> > >> needed in the VecScatter, to help decide what to do next.
> > >
> > > Here are two more logs. One with -vecscatter_view ::ascii_info, which I don't really know how to analyze (I've spotted, though, that there are a couple of negative integers for the data counters; maybe you are using long instead of long long?), the other with -vecscatter_alltoall. The latter option gives a 2x speed-up for the MatMult, and for the PCApply too (which is weird to me because there should be no global communication with bjacobi, and the diagonal blocks are only of size "5 processes", so the speed-up seems rather huge for just doing VecScatter to gather and scatter the RHS/solution for all 320 MUMPS instances).
> >
> > OK, this is good; it confirms that the large amount of communication needed in the scatters was a major problem and that using the all-to-all helps. This is about all you can do about the scatter time.
> >
> >
> > Barry
> >
> > >
> > > Thanks for your help,
> > > Pierre
> > >
> > >> Barry
> > >>
> > >>
> > >>>
> > >>> Thank you very much,
> > >>> Pierre
> > >>>
> > >>> On Mon, 6 Mar 2017 09:34:53 -0600, Barry Smith wrote:
> > >>>> I don't think the lack of --with-debugging=no is important here,
> > >>>> though he/she should use --with-debugging=no for production runs.
> > >>>>
> > >>>> I think the reason for the "funny" numbers is that
> > >>>> MatAssemblyBegin and End in this case have explicit synchronization
> > >>>> points, so some processes are waiting for other processes to get to the
> > >>>> synchronization point; thus it looks like some processes are spending a
> > >>>> lot of time in the assembly routines when they are not really; they
> > >>>> are just waiting.
> > >>>>
> > >>>> You can remove the synchronization point by calling
> > >>>>
> > >>>>   MatSetOption(mat, MAT_NO_OFF_PROC_ENTRIES, PETSC_TRUE);
> > >>>>
> > >>>> before calling MatMPIAIJSetPreallocationCSR().
> > >>>>
> > >>>> Barry
> > >>>>
> > >>>>> On Mar 6, 2017, at 8:59 AM, Pierre Jolivet <Pierre.Jolivet@enseeiht.fr> wrote:
> > >>>>>
> > >>>>> Hello,
> > >>>>> I have an application with a matrix with lots of nonzero entries (that are perfectly load balanced between processes and rows).
> > >>>>> An end user is currently using a PETSc library compiled with the following flags (among others):
> > >>>>> --CFLAGS=-O2 --COPTFLAGS=-O3 --CXXFLAGS="-O2 -std=c++11" --CXXOPTFLAGS=-O3 --FFLAGS=-O2 --FOPTFLAGS=-O3
> > >>>>> Notice the lack of --with-debugging=no.
> > >>>>> The matrix is assembled using MatMPIAIJSetPreallocationCSR, and we end up with something like this in the -log_view:
> > >>>>> MatAssemblyBegin 2 1.0 1.2520e+002602.1 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 2 0 0 0 0 2 0
> > >>>>> MatAssemblyEnd 2 1.0 4.5104e+01 1.0 0.00e+00 0.0 8.2e+05 3.2e+04 4.6e+01 40 0 14 4 9 40 0 14 4 9 0
> > >>>>>
> > >>>>> For reference, here is what the matrix looks like (keep in mind it is well balanced):
> > >>>>> Mat Object: 640 MPI processes
> > >>>>>   type: mpiaij
> > >>>>>   rows=10682560, cols=10682560
> > >>>>>   total: nonzeros=51691212800, allocated nonzeros=51691212800
> > >>>>>   total number of mallocs used during MatSetValues calls =0
> > >>>>>     not using I-node (on process 0) routines
> > >>>>>
> > >>>>> Are MatAssemblyBegin/MatAssemblyEnd highly sensitive to the --with-debugging option on x86 even though the corresponding code is compiled with -O2, i.e., should I tell the user to have their PETSc library recompiled, or would you recommend another routine for assembling such a matrix?
> > >>>>>
> > >>>>> Thanks,
> > >>>>> Pierre
> > >>> <AD-3D-320_7531028.o><AD-3D-1600_7513074.o>
> > > <AD-3D-1600_7533982_info.o><AD-3D-1600_7533637_alltoall.o>
> >
> >
> > --
> > Stefano
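Finally, a minimal sketch of the assembly sequence with the MAT_NO_OFF_PROC_ENTRIES option Barry suggested earlier in the thread; the surrounding creation calls and the names comm, nlocal, ia, ja, and va are illustrative assumptions, not taken from Pierre's application:

  Mat            A;
  PetscErrorCode ierr;

  ierr = MatCreate(comm,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,nlocal,nlocal,PETSC_DETERMINE,PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);
  /* this process will only generate entries for rows it owns,
     so the reduction in MatAssemblyBegin can be skipped */
  ierr = MatSetOption(A,MAT_NO_OFF_PROC_ENTRIES,PETSC_TRUE);CHKERRQ(ierr);
  /* ia/ja/va: local CSR arrays (row pointers, global column indices, values) */
  ierr = MatMPIAIJSetPreallocationCSR(A,ia,ja,va);CHKERRQ(ierr);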

--
Stefano