<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Dear PETSc users,</p>
<p>I am trying to assemble a sparse matrix in parallel; my main
objectives are efficiency and scalability.</p>
<p>More precisely, I am using MatMPIAIJSetPreallocation with the
diagonal (dnz) and off-diagonal (onz) <u>arrays</u> (the number of
non-zero elements in each row) to allocate the memory needed.</p>
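<p>For concreteness, here is a minimal sketch (in C) of the setup I
have in mind; the local size m, global size N and the arrays
d_nnz/o_nnz stand in for my own data and are only placeholders:</p>
<pre>
#include &lt;petscmat.h&gt;

/* Sketch only: preallocate an MPIAIJ matrix from per-row counts.
   m = local number of rows, N = global size, d_nnz[i]/o_nnz[i] = number
   of non-zeros of local row i falling in the diagonal / off-diagonal
   block. The scalar arguments (0) are ignored when the arrays are given. */
PetscErrorCode PreallocateSketch(MPI_Comm comm,PetscInt m,PetscInt N,
                                 const PetscInt d_nnz[],const PetscInt o_nnz[],
                                 Mat *A)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreate(comm,A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A,m,m,N,N);CHKERRQ(ierr);
  ierr = MatSetType(*A,MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(*A,0,d_nnz,0,o_nnz);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
</pre>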
<p>Prior to inserting the elements in the matrix, I am doing a
preliminary loop to determine those arrays dnz and onz, with each
processor handling the set of rows it owns. Ideally, this loop would
look like</p>
<p> for irow = istart, iend-1, irow++ ----> count dnz(irow) and
onz(irow)<br>
</p>
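<p>Written out (in C for concreteness), this counting would look
roughly like the sketch below; the CSR-style arrays rowptr/colind
describing which global columns each of my rows touches are just an
assumption to make the example self-contained:</p>
<pre>
#include &lt;petscsys.h&gt;

/* Sketch of the counting loop: for each locally owned row, a column in
   [cstart, cend) counts towards the diagonal block, anything else towards
   the off-diagonal block. rowptr/colind are placeholders for the
   application's own connectivity data. */
static void CountNonzeros(PetscInt istart,PetscInt iend,
                          PetscInt cstart,PetscInt cend,
                          const PetscInt rowptr[],const PetscInt colind[],
                          PetscInt d_nnz[],PetscInt o_nnz[])
{
  for (PetscInt irow = istart; irow &lt; iend; irow++) {
    const PetscInt i = irow - istart;   /* local row index */
    d_nnz[i] = 0;
    o_nnz[i] = 0;
    for (PetscInt k = rowptr[i]; k &lt; rowptr[i+1]; k++) {
      if (colind[k] &gt;= cstart &amp;&amp; colind[k] &lt; cend) d_nnz[i]++;
      else o_nnz[i]++;
    }
  }
}
</pre>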
<p>But it seems that you cannot call
MatGetOwnershipRange(Mat,istart,iend,ierr) before
MatMPIAIJSetPreallocation to get istart and iend. Why is that?<br>
</p>
<p>What is the optimal approach to counting the non-zero elements
on each processor? I saw two conversations where Barry Smith
suggested using MatPreallocateInitialize/Finalize or
PetscSplitOwnership, which seems to mean you have to determine the
rows owned by each processor yourself. Is that not contrary to the
"PETSc spirit"?<br>
</p>
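<p>For reference, this is how I understand the PetscSplitOwnership
suggestion (my own reading, sketched in C): compute the local row
count PETSc would choose and derive the ownership range with a
prefix sum, before the matrix is even created:</p>
<pre>
#include &lt;petscsys.h&gt;

/* Sketch (my interpretation of the suggestion): n is returned as the local
   number of rows for this process, and an inclusive MPI_Scan gives the end
   of the owned range, from which the start follows. */
PetscErrorCode OwnershipSketch(MPI_Comm comm,PetscInt N,
                               PetscInt *istart,PetscInt *iend)
{
  PetscErrorCode ierr;
  PetscInt       n = PETSC_DECIDE;

  PetscFunctionBeginUser;
  ierr = PetscSplitOwnership(comm,&amp;n,&amp;N);CHKERRQ(ierr);  /* n = local rows */
  ierr = MPI_Scan(&amp;n,iend,1,MPIU_INT,MPI_SUM,comm);CHKERRQ(ierr);
  *istart = *iend - n;                                    /* first owned row */
  PetscFunctionReturn(0);
}
</pre>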
<p>Thanks for your help and have a nice weekend</p>
<p>Thibaut<br>
</p>
</body>
</html>