[petsc-users] scattering and communications

Marco Cisternino marco.cisternino at polito.it
Thu Jul 7 11:55:49 CDT 2011


Hi,
I would like to understand better how VecScatterBegin and VecScatterEnd 
work.
I solve an elliptic interface problem on a Cartesian grid, introducing 
extra unknowns at the intersections between the interface and the grid axes.
Therefore, my linear system is augmented with extra conditions on these 
new unknowns.
I use a DA to manage the grid, but because of the nature of the problem 
I have to build the matrix with MatCreateMPIAIJ:

MatCreateMPIAIJ(proc->cart_comm,proc->intersections[proc->rank],proc->intersections[proc->rank],g_rows,g_cols,21,PETSC_NULL,21,PETSC_NULL,&fsolv->AA);
VecCreateMPI(proc->cart_comm,proc->intersections[proc->rank],PETSC_DECIDE,&fsolv->extended_P);
VecCreateMPI(proc->cart_comm,proc->intersections[proc->rank],PETSC_DECIDE,&fsolv->extended_RHS);

where
- proc->intersections[proc->rank] is the number of unknowns each processor owns in its sub-domain (grid points + intersections),
- g_rows = g_cols is the total number of unknowns in the entire computational domain (grid points + intersections),
- cart_comm is a Cartesian communicator.

The unknowns are arranged so that every processor owns the rows of the 
matrix and of extended_P (the solution) corresponding to the unknowns in 
its own sub-domain.
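
A minimal sketch of how these layouts can be double-checked (purely a diagnostic, using the variables above and petsc-3.1-style calls):

PetscErrorCode ierr;
PetscInt       vlo, vhi, mlo, mhi;

ierr = VecGetOwnershipRange(fsolv->extended_P,&vlo,&vhi);CHKERRQ(ierr);
ierr = MatGetOwnershipRange(fsolv->AA,&mlo,&mhi);CHKERRQ(ierr);
ierr = PetscSynchronizedPrintf(proc->cart_comm,"[%d] matrix rows %D..%D, extended_P entries %D..%D\n",proc->rank,mlo,mhi,vlo,vhi);CHKERRQ(ierr);
ierr = PetscSynchronizedFlush(proc->cart_comm);CHKERRQ(ierr);

Each processor should report the same contiguous range for its matrix rows and its extended_P entries.
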
I solve the system and then I call VecScatterBegin and VecScatterEnd:

ierr=VecScatterCreate(fsolv->extended_P,scatter->from,vec->GP,scatter->to,&scatter->scatt);
ierr=VecScatterBegin(scatter->scatt,fsolv->extended_P,vec->GP,INSERT_VALUES,SCATTER_FORWARD);
ierr=VecScatterEnd(scatter->scatt,fsolv->extended_P,vec->GP,INSERT_VALUES,SCATTER_FORWARD);

in order to get into GP (created with 
DACreateGlobalVector(grid->da,&vec->GP);) only the solution at the grid 
points.
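
For completeness, here is a minimal sketch of one way the index sets scatter->from and scatter->to could be built so that each processor only moves entries it already owns. The array grid_in_extended[] is hypothetical (it would hold, for each locally owned grid point, its global index in the extended_P ordering), and I assume GP and extended_P list the local grid points in the same order:

PetscErrorCode ierr;
PetscInt       i, elo, ehi, glo, ghi, ngrid;
PetscInt       *idx_from, *idx_to;

ierr  = VecGetOwnershipRange(fsolv->extended_P,&elo,&ehi);CHKERRQ(ierr);
ierr  = VecGetOwnershipRange(vec->GP,&glo,&ghi);CHKERRQ(ierr);
ngrid = ghi - glo;                     /* number of locally owned grid points in GP */
ierr  = PetscMalloc(ngrid*sizeof(PetscInt),&idx_from);CHKERRQ(ierr);
ierr  = PetscMalloc(ngrid*sizeof(PetscInt),&idx_to);CHKERRQ(ierr);
for (i=0; i<ngrid; i++) {
  idx_from[i] = grid_in_extended[i];   /* hypothetical: global index in extended_P of the i-th local grid point */
  idx_to[i]   = glo + i;               /* assumes the same local ordering of the grid points in GP */
}
/* petsc-3.1 signature; newer versions add a PetscCopyMode argument */
ierr = ISCreateGeneral(proc->cart_comm,ngrid,idx_from,&scatter->from);CHKERRQ(ierr);
ierr = ISCreateGeneral(proc->cart_comm,ngrid,idx_to,&scatter->to);CHKERRQ(ierr);
ierr = PetscFree(idx_from);CHKERRQ(ierr);
ierr = PetscFree(idx_to);CHKERRQ(ierr);

If every idx_from[i] falls in the local range [elo,ehi) and every idx_to[i] in [glo,ghi), I would expect the resulting scatter to be purely local.
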
It works, i.e. I get the right solution in GP, but the scattering 
doesn't scale at all!
Given this setup I would expect no communication during the scatter, but 
-log_summary shows the number of MPI messages growing with the number of 
processors.
Every portion of GP contains only the grid points of one processor, while 
every portion of extended_P contains the same grid points plus the 
intersections in the corresponding sub-domain. Why does such a scatter 
need any communication at all?
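
One way to see where the messages come from (just a diagnostic) is to view the two index sets and the scatter context itself:

ierr = ISView(scatter->from,PETSC_VIEWER_STDOUT_(proc->cart_comm));CHKERRQ(ierr);
ierr = ISView(scatter->to,PETSC_VIEWER_STDOUT_(proc->cart_comm));CHKERRQ(ierr);
ierr = VecScatterView(scatter->scatt,PETSC_VIEWER_STDOUT_(proc->cart_comm));CHKERRQ(ierr);
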
I don't know if I was clear enough. Please ask me for whatever you need 
to understand my problem.
Thanks a lot.

Best regards,

     Marco

-- 
Marco Cisternino
PhD Student
Politecnico di Torino
Mobile: +393281189696
Email: marco.cisternino at polito.it


