[petsc-users] scattering and communications

Matthew Knepley knepley at gmail.com
Thu Jul 7 16:32:28 CDT 2011


On Thu, Jul 7, 2011 at 4:55 PM, Marco Cisternino <marco.cisternino at polito.it> wrote:

> Hi,
> I would like to understand better how VecScatterBegin and VecScatterEnd
> work.
> I solve an interface elliptic problem on a Cartesian grid introducing extra
> unknowns (intersections between the interface and the grid axes).
> Therefore, my linear system is augmented with extra conditions on these new
> unknowns.
> I use a DA to manage the grid, but I have to use MatCreateMPIAIJ to build
> the matrix because of the nature of the problem.
>  MatCreateMPIAIJ(proc->cart_comm,proc->intersections[proc->rank],proc->intersections[proc->rank],g_rows,g_cols,21,PETSC_NULL,21,PETSC_NULL,&fsolv->AA);
>  VecCreateMPI(proc->cart_comm,proc->intersections[proc->rank],PETSC_DECIDE,&fsolv->extended_P);
>  VecCreateMPI(proc->cart_comm,proc->intersections[proc->rank],PETSC_DECIDE,&fsolv->extended_RHS);
>
> where
> proc->intersections[proc->rank] is the total number of unknowns for each
> processor in its sub-domain (grid points + intersections).
> g_rows=g_cols is the total number of unknowns in the entire computational
> domain (grid points + intersections).
> cart_comm is a Cartesian communicator.
>
> The arrangement of the unknowns is such that every processor has the rows
> of the matrix and of extended_P (the solution) relative to the actual
> unknowns in its sub-domain.
> I solve the system and then I call VecScatterBegin and VecScatterEnd:
>
>  ierr=VecScatterCreate(fsolv->extended_P,scatter->from,vec->GP,scatter->to,&scatter->scatt);
>  ierr=VecScatterBegin(scatter->scatt,fsolv->extended_P,vec->GP,INSERT_VALUES,SCATTER_FORWARD);
>  ierr=VecScatterEnd(scatter->scatt,fsolv->extended_P,vec->GP,INSERT_VALUES,SCATTER_FORWARD);
>
> in order to get in GP (made using DACreateGlobalVector(grid->da,&vec->GP);)
> only the solution on the grid points.
> It works, I mean I can get the right solution in GP, but the scattering
> doesn't scale at all!
> I would expect no communications during the scattering, given this
> arrangement, but -log_summary shows me a number of MPI messages growing
> with the number of processors.
> Every portion of GP contains only the grid points of a processor, while
> every portion of extended_P contains the same grid points plus the
> intersections in its sub-domain. What is the need for communications
> when doing such a scattering?
>
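For reference, the intent described above is that scatter->from and scatter->to pair entries
owned by the same process, so that no entry has to move between ranks. A minimal sketch of
that pairing is below; the arrays from_idx/to_idx and the count n_local_grid are illustrative
names, not taken from the original code, and ISCreateGeneral is shown with the signature of
the PETSc version current at the time of this thread (later versions add a PetscCopyMode
argument):

  /* Sketch: for each locally owned grid-point unknown, from_idx[i] is its
     global index in extended_P and to_idx[i] its global index in GP.
     If both indices are owned by the calling process, the resulting
     scatter needs no MPI communication. */
  IS         is_from, is_to;
  VecScatter scatt;

  ISCreateGeneral(proc->cart_comm, n_local_grid, from_idx, &is_from);
  ISCreateGeneral(proc->cart_comm, n_local_grid, to_idx, &is_to);
  VecScatterCreate(fsolv->extended_P, is_from, vec->GP, is_to, &scatt);
  VecScatterBegin(scatt, fsolv->extended_P, vec->GP, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(scatt, fsolv->extended_P, vec->GP, INSERT_VALUES, SCATTER_FORWARD);

If some to_idx[i] belongs to a different rank in GP than the rank that owns from_idx[i] in
extended_P, PETSc has to ship that entry in a message, which is what -log_summary reports.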

There is just a mismatch somewhere in your indices. It should be easy to
check the locally owned indices in
GP using


http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/Vec/VecGetOwnershipRange.html

and compare to what you have.
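A minimal sketch of such a check, assuming scatter->from and scatter->to are the index sets
of global indices passed to VecScatterCreate (the loops and printouts are only illustrative):

  PetscInt        lo_gp, hi_gp, lo_ext, hi_ext, n, i;
  const PetscInt *idx;

  /* Locally owned global index ranges of the two vectors */
  VecGetOwnershipRange(vec->GP, &lo_gp, &hi_gp);
  VecGetOwnershipRange(fsolv->extended_P, &lo_ext, &hi_ext);

  /* Every target index should fall in the local range of GP ... */
  ISGetLocalSize(scatter->to, &n);
  ISGetIndices(scatter->to, &idx);
  for (i = 0; i < n; i++) {
    if (idx[i] < lo_gp || idx[i] >= hi_gp)
      PetscPrintf(PETSC_COMM_SELF, "to index %D is not locally owned in GP\n", idx[i]);
  }
  ISRestoreIndices(scatter->to, &idx);

  /* ... and every source index in the local range of extended_P */
  ISGetLocalSize(scatter->from, &n);
  ISGetIndices(scatter->from, &idx);
  for (i = 0; i < n; i++) {
    if (idx[i] < lo_ext || idx[i] >= hi_ext)
      PetscPrintf(PETSC_COMM_SELF, "from index %D is not locally owned in extended_P\n", idx[i]);
  }
  ISRestoreIndices(scatter->from, &idx);

Any index reported here is an entry that forces off-process communication in the scatter.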

   Matt


> I don't know if I was clear enough. Please, ask me what you need to
> understand my problem.
> Thanks a lot.
>
> Best regards,
>
>    Marco
>
> --
> Marco Cisternino
> PhD Student
> Politecnico di Torino
> Mobile:+393281189696
> Email: marco.cisternino at polito.it
>
>


-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener

