[petsc-users] Preallocation (dnz, onz arrays) in sparse parallel matrix
Thibaut Appel
t.appel17 at imperial.ac.uk
Fri Oct 6 09:08:35 CDT 2017
Dear PETSc users,
I am trying to assemble a sparse matrix in parallel, with efficiency
and scalability as my main objectives.
Specifically, I am using MatMPIAIJSetPreallocation with the diagonal
(dnz) and off-diagonal (onz) _arrays_ (the number of nonzero elements
in each row) to allocate exactly the memory needed.
Prior to inserting the elements into the matrix, I do a preliminary
loop to determine the arrays dnz and onz, each processor counting over
its own set of rows. Ideally, this loop would look like
for irow = istart, iend-1 ----> count dnz(irow) and onz(irow)
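In C, that counting pass might look like the rough sketch below
(cstart/cend stand for the local column range of the diagonal block,
which for a square matrix with the default layout equals the row range,
and row_ncols()/row_col() are hypothetical placeholders for my stencil):

/* Count, for each locally owned row, the nonzeros falling in the
   diagonal block (columns in [cstart,cend)) and those falling in the
   off-diagonal block. row_ncols()/row_col() stand in for my stencil. */
PetscInt *dnz, *onz;
PetscErrorCode ierr;
ierr = PetscCalloc2(iend-istart, &dnz, iend-istart, &onz);CHKERRQ(ierr);
for (PetscInt irow = istart; irow < iend; irow++) {
  for (PetscInt k = 0; k < row_ncols(irow); k++) {
    PetscInt j = row_col(irow, k);       /* global column index */
    if (j >= cstart && j < cend) dnz[irow-istart]++;
    else                         onz[irow-istart]++;
  }
}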
But it seems that you cannot call
MatGetOwnershipRange(Mat,istart,iend,ierr) before
MatMPIAIJSetPreallocation to get istart and iend. Why is that?
What is the optimal approach for counting the non-zero elements on
each processor? I saw two conversations in which Barry Smith suggested
using MatPreallocateInitialize/Finalize or PetscSplitOwnership, which
means you have to determine the rows owned by each processor yourself.
Is that not contrary to the "PETSc spirit"?
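For concreteness, my current understanding of the PetscSplitOwnership
route is the sketch below (the function name and the variable nloc are
mine, not from those threads; N is the global number of rows):

#include <petscmat.h>

/* Sketch: let PETSc pick the local row count before the Mat exists,
   derive [istart,iend) with a prefix sum, count, then preallocate. */
PetscErrorCode CreatePreallocated(MPI_Comm comm, PetscInt N, Mat *A)
{
  PetscInt       nloc = PETSC_DECIDE, istart, iend, *dnz, *onz;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscSplitOwnership(comm, &nloc, &N);CHKERRQ(ierr); /* fills nloc */
  ierr = MPI_Scan(&nloc, &iend, 1, MPIU_INT, MPI_SUM, comm);CHKERRQ(ierr);
  istart = iend - nloc;                  /* my first global row */

  ierr = PetscCalloc2(nloc, &dnz, nloc, &onz);CHKERRQ(ierr);
  /* ... fill dnz/onz over rows [istart,iend) as in the loop above ... */

  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, nloc, nloc, N, N);CHKERRQ(ierr);
  ierr = MatSetType(*A, MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(*A, 0, dnz, 0, onz);CHKERRQ(ierr);
  ierr = PetscFree2(dnz, onz);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

If that is indeed the intended pattern, it works, but it duplicates the
ownership computation that MatGetOwnershipRange would otherwise give me.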
Thanks for your help, and have a nice weekend.
Thibaut