Reuse matrix and vector

Jed Brown jed at
Tue Nov 10 04:51:05 CST 2009

jarunan at wrote:
> Total number of cells is 744872, divided into 40 blocks. On one
> processor, MatCreateMPIAIJWithArrays() takes 0.097 sec, but 280 sec with
> 4 processors. Usually, this routine has no problem with a small test
> case; it works the same on one processor as on several.

This sounds like incorrect preallocation.  Is your PETSc built with
debugging?  The debug build does some extra integrity checks that don't
add significantly to the run time (although other debug checks do), and
it would be useful to know that they pass.  In particular, it checks
that your rows are sorted.  If they are not sorted, then PETSc's
preallocation will be wrong.  (I don't actually think this requirement
enables a significantly faster implementation, so I'm tempted to change
it to work correctly with unsorted rows.)
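To illustrate what "sorted rows" means for the CSR-style (rowind, columnind) input: the column indices within each row must appear in ascending order. Here is a minimal standalone sketch (plain Python, not PETSc; the array names mirror the quoted Fortran call but the function itself is hypothetical) that checks this property before handing the arrays to PETSc:

```python
def rows_sorted(rowind, columnind):
    """Check that, for each CSR row, the column indices are strictly ascending.

    rowind:    row pointer array, length nrows + 1 (0-based)
    columnind: concatenated column indices for all rows
    """
    for r in range(len(rowind) - 1):
        row = columnind[rowind[r]:rowind[r + 1]]
        if any(row[k] >= row[k + 1] for k in range(len(row) - 1)):
            return False  # unsorted (or duplicate) column index in row r
    return True

# 2x3 example: row 0 has columns (0, 2), row 1 has column (1,)
rowind = [0, 2, 3]
columnind = [0, 2, 1]
print(rows_sorted(rowind, columnind))   # True
print(rows_sorted([0, 2], [2, 0]))      # False: row 0 is not ascending
```

Running a check like this on each process before the matrix call would confirm or rule out the unsorted-rows explanation.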

You can also run with -info | grep malloc; there should be no mallocs
reported.

> in the first iteration.
>     Mat Ap
>     call MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD, istorf_no_ovcell,         &
>       istorf_no_ovcell, PETSC_DETERMINE, PETSC_DETERMINE, rowind, columnind,   &
>       A, Ap, ierr)
>     call MatAssemblyBegin(Ap,MAT_FINAL_ASSEMBLY,ierr)
>     call MatAssemblyEnd(Ap,MAT_FINAL_ASSEMBLY,ierr)

This assembly is superfluous (but harmless).

> Does the communication of MatCreateMPIAIJWithArrays() in parallel
> computation cost a lot? What could be the cause that
> MatCreateMPIAIJWithArrays() is so slow in the first iteration?

There is no significant communication; it has to be the preallocation.
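To see why bad preallocation alone can explain a 0.097 sec vs 280 sec gap: every time a row overflows its reserved space, the matrix must allocate a larger buffer and copy the existing entries over. The sketch below (plain Python, not PETSc; it models the worst case of growing by one slot per overflow, whereas the real growth policy differs) counts the element copies caused by insertion:

```python
def insert_copies(nnz, prealloc):
    """Count element copies triggered while inserting nnz entries into a
    buffer that starts with `prealloc` reserved slots and grows by one
    slot on every overflow (worst-case model of mallocs during assembly)."""
    capacity, size, copies = prealloc, 0, 0
    for _ in range(nnz):
        if size == capacity:   # out of space: "malloc" + copy old data
            capacity += 1
            copies += size
        size += 1
    return copies

print(insert_copies(1000, 1000))  # exact preallocation: 0 copies
print(insert_copies(1000, 0))     # no preallocation: ~nnz^2/2 copies
```

With exact preallocation the cost is linear in the number of nonzeros; with none, it degrades toward quadratic, which is why -info should show zero mallocs.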



More information about the petsc-users mailing list