[petsc-users] Time cost by Vec Assembly

Fande Kong fdkong.jd at gmail.com
Mon Jan 22 11:44:35 CST 2018

On Fri, Oct 7, 2016 at 10:30 PM, Jed Brown <jed at jedbrown.org> wrote:

> Barry Smith <bsmith at mcs.anl.gov> writes:
> >     There is still something wonky here, whether it is the MPI
> implementation or how PETSc handles the assembly. Without any values that
> need to be communicated, it is unacceptable that these calls take so long.
> If we understood __exactly__ why the performance suddenly drops so
> dramatically we could perhaps fix it. I do not understand why.
> I guess it's worth timing.  If they don't have MPI_Reduce_scatter_block
> then it falls back to a big MPI_Allreduce.  After that, it's all
> point-to-point messaging that shouldn't suck and there actually
> shouldn't be anything to send or receive anyway.  The BTS implementation
> should be much smarter and literally reduces to a barrier in this case.

Hi Jed,

How do we use the BTS implementation for Vec? For Mat, we may just use
