On Fri, Mar 4, 2011 at 3:14 PM, M. Scot Breitenfeld <brtnfld@uiuc.edu> wrote:
<div class="im">On 03/03/2011 12:18 PM, Matthew Knepley wrote:<br>
> On Wed, Mar 2, 2011 at 4:52 PM, M. Scot Breitenfeld <<a href="mailto:brtnfld@uiuc.edu">brtnfld@uiuc.edu</a><br>
</div><div class="im">> <mailto:<a href="mailto:brtnfld@uiuc.edu">brtnfld@uiuc.edu</a>>> wrote:<br>
><br>
</div><div class="im">> I don't number my global degree's of freedom from low-high<br>
> continuously<br>
> per processor as PETSc uses for ordering, but I use the natural<br>
> ordering<br>
> of the application, I then use AOcreateBasic to obtain the mapping<br>
> between the PETSc and my ordering.<br>
> >
> > I would suggest using the LocalToGlobalMapping functions, which are
> > scalable. AO is designed for complete global permutations.
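
(For concreteness, a minimal sketch of the LocalToGlobalMapping route being
suggested here, written in Fortran; nlocal, ltog, l2g, lrows, ke, and fe are
placeholder names, and the ISLocalToGlobalMappingCreate argument list follows
recent PETSc manual pages, so older releases take fewer arguments.)

    ! ltog(1:nlocal) holds, for each local dof on this process (owned
    ! first, then ghost), its global PETSc row number, 0-based.
    CALL ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, nlocal, ltog, &
         PETSC_COPY_VALUES, l2g, ierr)
    CALL MatSetLocalToGlobalMapping(A, l2g, l2g, ierr)
    CALL VecSetLocalToGlobalMapping(b, l2g, ierr)

    ! Assembly then works entirely with local indices; PETSc translates
    ! them to global rows internally, so the application never has to
    ! compute a globally renumbered index during assembly.
    CALL MatSetValuesLocal(A, 3, lrows, 3, lrows, ke, ADD_VALUES, ierr)
    CALL VecSetValuesLocal(b, 3, lrows, fe, ADD_VALUES, ierr)

    CALL ISLocalToGlobalMappingDestroy(l2g, ierr)
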
> I don't understand how I can avoid using AO if my global dof per
> processor are not arranged in the PETSc global ordering (contiguous row
> ordering, i.e. proc 1: 0..n, proc 2: n+1..m, proc 3: m+1..p, etc.). In
> the LocalToGlobalMapping routines, doesn't the "GlobalMapping" part mean
> the PETSc ordering and not my application's ordering?
>
> I thought I understood the difference between AO and
> LocalToGlobalMapping, but now I'm confused. I tried to use the
> LocalToGlobalMapping routines and the solution values are correct, but
> the ordering corresponds to the global node ordering, not to how I
> partitioned the mesh. In other words, the values are returned in the
> same ordering as for a serial run, which makes sense, since this is how
> PETSc orders the rows. If I had used the PETSc ordering then this would
> be fine.
>
> Is the moral of the story that, if I want scalability, I need to
> rearrange my global dof into the PETSc ordering so that I can use
> LocalToGlobalMapping?

I am having a really hard time understanding what you want. If you want the
natural ordering, or any other crazy ordering, on input/output, go ahead and
use AO there, because the non-scalability is amortized over the run. The
PETSc ordering should be used for all globally assembled structures in the
solve because it's efficient, and there is no reason for the user to care
about these structures. For integration/assembly, use local orderings, since
that is all you need for a PDE. If you have an exotic equation that really
does need global information, I would like to hear about it, but it would
most likely be non-scalable on its own.

   Matt

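(As a rough illustration of the division of labor described above, with ao,
nout, and out_rows as placeholder names rather than code from this thread:
the AO is applied once on the way in and once on the way out, so its
non-scalable cost is paid only at input/output and never inside the
assembly or solve loops.)

    CALL AOApplicationToPetsc(ao, nout, out_rows, ierr)   ! natural -> PETSc, at input
    ! ... assemble with local indices, solve with KSP; the AO is not used here ...
    CALL AOPetscToApplication(ao, nout, out_rows, ierr)   ! PETSc -> natural, at output
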
> >
> > Thanks,
> >
> > Matt
> >
> > CALL VecGetOwnershipRange(b, low, high, ierr)
> >
> > icnt = 0
> >
> > DO mi = 1, mctr   ! these are the nodes local to processor
> >    mi_global = myglobal(mi)
> >
> >    irowx = 3*mi_global-2
> >    irowy = 3*mi_global-1
> >    irowz = 3*mi_global
> >
> >    mappings(icnt+1:icnt+3) = (/                &
> >         nrow_global(row_from_dof(1,mi))-1,     &
> >         nrow_global(row_from_dof(2,mi))-1,     &
> >         nrow_global(row_from_dof(3,mi))-1      &
> >         /)
> >
> >    petscOrdering(icnt+1:icnt+3) = (/ low+icnt, low+icnt+1, low+icnt+2 /)
> >
> >    icnt = icnt + 3
> > END DO
> >
> > CALL AOCreateBasic(PETSC_COMM_WORLD, icnt, mappings, petscOrdering, &
> >      toao, ierr)
> >
> > DO mi = mctr+1, myn   ! these are the ghost nodes not on this processor
> >
> >    mi_global = myglobal(mi)
> >
> >    mappings(icnt+1:icnt+3) = (/                &
> >         nrow_global(row_from_dof(1,mi))-1,     &
> >         nrow_global(row_from_dof(2,mi))-1,     &
> >         nrow_global(row_from_dof(3,mi))-1      &
> >         /)
> >
> >    icnt = icnt + 3
> > END DO
> > CALL AOApplicationToPetsc(toao, 3*myn, mappings, ierr)
> >
> > CALL AODestroy(toao, ierr)
> >
> > I then use this mapping to insert the values into the rows that PETSc
> > expects.
> >
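(A sketch of that last step, with rowidx, ke, and fe as placeholder names:
after AOApplicationToPetsc the entries of mappings are PETSc row numbers and
can be passed straight to the ordinary global setters.)

    rowidx(1:3) = mappings(3*(mi-1)+1 : 3*mi)   ! PETSc rows for node mi
    CALL MatSetValues(A, 3, rowidx, 3, rowidx, ke, ADD_VALUES, ierr)
    CALL VecSetValues(b, 3, rowidx, fe, ADD_VALUES, ierr)
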
> > On 03/02/2011 04:29 PM, Matthew Knepley wrote:
> > > On Wed, Mar 2, 2011 at 4:25 PM, M. Scot Breitenfeld
> > > <brtnfld@uiuc.edu> wrote:
> > >
> > > Hi,
> > >
> > > First, thanks for the suggestion on using MPISBAIJ for my A matrix;
> > > it seems to have cut down on my memory and assembly time. For a 1.5
> > > million dof problem:
> > >
> > > # procs:            2      4      8     16
> > > -------------------------------------------------
> > > Assembly (sec):   245    124     63     86
> > > Solver (sec):     924    578    326    680
> > > Memory (GB):      2.5    1.4   .877   .565
> > >
> > > The problem I have is the amount of time being spent in
> > > AOCreateBasic; it takes longer than the assembly:
> > >
> > > # procs:                 2      4      8     16
> > > -------------------------------------------------
> > > AOCreateBasic (sec):    .6    347    170    197
> > >
> > > Is there something I can change, or something I can look for, that
> > > might be causing this increase in time as I go from 2 to 4 processors
> > > (at least it scales from 4 to 8 processors)? I read in the archive
> > > that AOCreateBasic is not meant to be scalable, so maybe there is
> > > nothing I can do.
> > >
> > > Yes, this is non-scalable. What are you using it for?
> > >
> > > Matt
> > >
> > > Thanks,
> > > Scot
> > >

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener