On Thu, Jul 7, 2011 at 4:55 PM, Marco Cisternino <marco.cisternino@polito.it> wrote:

> Hi,
> I would like to understand better how VecScatterBegin and VecScatterEnd work.
> I solve an interface elliptic problem on a Cartesian grid, introducing extra unknowns (the intersections between the interface and the grid axes).
> Therefore, my linear system is augmented with extra conditions on these new unknowns.
> I use a DA to manage the grid, but I have to use MatCreateMPIAIJ to build the matrix because of the nature of the problem.
> MatCreateMPIAIJ(proc->cart_comm,proc->intersections[proc->rank],proc->intersections[proc->rank],g_rows,g_cols,21,PETSC_NULL,21,PETSC_NULL,&fsolv->AA);
> VecCreateMPI(proc->cart_comm,proc->intersections[proc->rank],PETSC_DECIDE,&fsolv->extended_P);
> VecCreateMPI(proc->cart_comm,proc->intersections[proc->rank],PETSC_DECIDE,&fsolv->extended_RHS);
>
> where
> proc->intersections[proc->rank] is the number of unknowns each processor owns in its sub-domain (grid points + intersections),
> g_rows = g_cols is the total number of unknowns over the entire computational domain (grid points + intersections),
> cart_comm is a Cartesian communicator.
>
> The unknowns are arranged so that every processor owns the rows of the matrix and of extended_P (the solution) corresponding to the actual unknowns in its sub-domain.
> I solve the system and then I call VecScatterBegin and VecScatterEnd:
>
> ierr=VecScatterCreate(fsolv->extended_P,scatter->from,vec->GP,scatter->to,&scatter->scatt);
> ierr=VecScatterBegin(scatter->scatt,fsolv->extended_P,vec->GP,INSERT_VALUES,SCATTER_FORWARD);
> ierr=VecScatterEnd(scatter->scatt,fsolv->extended_P,vec->GP,INSERT_VALUES,SCATTER_FORWARD);
>
> in order to get into GP (created with DACreateGlobalVector(grid->da,&vec->GP);) only the solution at the grid points.
> It works, I mean I get the right solution in GP, but the scattering doesn't scale at all!
> Doing this, I would expect no communication during the scattering, but -log_summary shows me a number of MPI messages that grows with the number of processors.
> Every portion of GP contains only the grid points of a processor, while every portion of extended_P contains the same grid points plus the intersections in the corresponding sub-domain. Why is there any need for communication with such a scattering?

There is just a mismatch somewhere in your indices. It should be easy to check the locally owned indices in GP using

  http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/Vec/VecGetOwnershipRange.html

and compare to what you have.
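As a rough sketch of that check (untested; the helper name is made up, and vec->GP and scatter->to are taken from your snippets), something like this would flag any target index that is not in the local ownership range of GP:

  #include <petscvec.h>   /* also pulls in the IS interface */

  /* Hypothetical helper: report every index in "to" that this process does not own in GP. */
  static PetscErrorCode CheckScatterTargets(Vec GP, IS to)
  {
    PetscInt        rstart, rend, n, i;
    const PetscInt *idx;
    PetscErrorCode  ierr;

    PetscFunctionBegin;
    ierr = VecGetOwnershipRange(GP, &rstart, &rend);CHKERRQ(ierr); /* locally owned rows [rstart,rend) */
    ierr = ISGetLocalSize(to, &n);CHKERRQ(ierr);
    ierr = ISGetIndices(to, &idx);CHKERRQ(ierr);
    for (i = 0; i < n; i++) {
      if (idx[i] < rstart || idx[i] >= rend) {
        ierr = PetscPrintf(PETSC_COMM_SELF, "off-process target index %D, local range [%D,%D)\n", idx[i], rstart, rend);CHKERRQ(ierr);
      }
    }
    ierr = ISRestoreIndices(to, &idx);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

If you call it as CheckScatterTargets(vec->GP, scatter->to) just before VecScatterCreate() and nothing is printed on any process (and the analogous check of scatter->from against extended_P is also clean), the scatter should not need to send any messages.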

   Matt

> I don't know if I was clear enough. Please ask me whatever you need in order to understand my problem.
> Thanks a lot.
>
> Best regards,
>
> Marco
>
> --
> Marco Cisternino
> PhD Student
> Politecnico di Torino
> Mobile: +393281189696
> Email: marco.cisternino@polito.it

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener