[petsc-users] -log_summary for MatMult

Alexander Grayver agrayver at gfz-potsdam.de
Fri Jun 15 08:18:27 CDT 2012


On 15.06.2012 14:46, Matthew Knepley wrote:
> On Fri, Jun 15, 2012 at 8:31 PM, Alexander Grayver
> <agrayver at gfz-potsdam.de> wrote:
>
>     Matt,
>
>     According to that code:
>
>     PetscErrorCode MatMult_MPIDense(Mat mat,Vec xx,Vec yy)
>     {
>       Mat_MPIDense *mdn = (Mat_MPIDense*)mat->data;
>       /* ... */
>       VecScatterBegin(mdn->Mvctx,xx,mdn->lvec,INSERT_VALUES,SCATTER_FORWARD);
>       VecScatterEnd(mdn->Mvctx,xx,mdn->lvec,INSERT_VALUES,SCATTER_FORWARD);
>       MatMult_SeqDense(mdn->A,mdn->lvec,yy);
>       return(0);
>     }
>
>
>     Each process has its own local copy of the vector?
>
>
> I am not sure what your point is. VecScatter is just an interface that 
> has many implementations.
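
Understood. To see for myself which implementation gets picked here, I put
together a small standalone sketch (my own, not code from the thread;
I'm assuming VecScatterCreateToAll sets up essentially the same
gather-to-all pattern that MatMult_MPIDense builds internally) and let
PETSc report the chosen scatter implementation:

  #include <petscvec.h>

  int main(int argc, char **argv)
  {
    Vec            x, xall;
    VecScatter     ctx;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    /* a parallel vector of global size 100, laid out by PETSc */
    ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 100, &x);CHKERRQ(ierr);
    /* gather-to-all scatter: every process ends up with the whole vector,
       which is the pattern MatMult_MPIDense needs for its input vector */
    ierr = VecScatterCreateToAll(x, &ctx, &xall);CHKERRQ(ierr);
    /* prints which scatter implementation PETSc actually chose */
    ierr = VecScatterView(ctx, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
    ierr = VecScatterDestroy(&ctx);CHKERRQ(ierr);
    ierr = VecDestroy(&xall);CHKERRQ(ierr);
    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }

Running with -log_summary should also show the message counts and lengths
for this scatter in the VecScatterBegin/VecScatterEnd rows.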

I'm trying to estimate the amount of data that needs to be communicated
across all processes during this operation.
In the debugger I see that the VecScatter from the code above reduces to
an MPI_Allgatherv and results in (assuming the vector is distributed
uniformly)

bytes_send_received = num_of_proc * ((num_of_proc - 1) * vec_size_local) 
* 2 * sizeof(PetscScalar)
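
To make the accounting explicit, here is a minimal sketch of that estimate
(my own illustration, with made-up example sizes), counting every byte once
on the sending side and once on the receiving side:

  #include <stddef.h>
  #include <stdio.h>

  /* total bytes sent + received by the allgather-style scatter, assuming a
     uniform distribution: each of the num_of_proc ranks ships its local
     chunk to the other num_of_proc - 1 ranks, and the factor 2 counts each
     byte on both the sending and the receiving side */
  static size_t bytes_send_received(size_t num_of_proc,
                                    size_t vec_size_local,
                                    size_t scalar_size)
  {
    return num_of_proc * (num_of_proc - 1) * vec_size_local * 2 * scalar_size;
  }

  int main(void)
  {
    /* example values: 16 ranks, 10^6 local entries, 8-byte PetscScalar */
    printf("%zu bytes\n", bytes_send_received(16, 1000000, 8));
    return 0;
  }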

Does that look reasonable?
Thanks.

-- 
Regards,
Alexander
