On Sat, Aug 13, 2011 at 2:57 PM, Josh Hykes <jmhykes@ncsu.edu> wrote:
> Hello,
>
> I'm just starting to experiment with PETSc (v3.1), and I like the Python
> bindings provided by petsc4py (v1.1.2). So far things seem fairly
> straightforward, but I'm stumped on a small issue.
>
> While creating a parallel AIJ matrix, I'd like to preallocate it using the
> arrays d_nnz and o_nnz. As I understand it, these arrays correspond to the
> processor's local rows.
>
> Currently I specify the global matrix size and let PETSc decide on the
> decomposition of the rows. I'd like to ask PETSc what rows each processor
> has with the getOwnershipRange() function, and then do the preallocation.
> However, according to the error message
>
> > [1] MatAnyAIJSetPreallocation() line 393 in petsc4py-1.1.2/src/include/custom.h
> > [1] Operation done in wrong order
> > [1] matrix is already preallocated
>
> I'm not allowed to do it in this order.
>
> Thus, my question is: is it possible to let PETSc figure out the row
> decomposition while still using d_nnz and o_nnz for the preallocation? I
> figure that I could resolve the problem by doing my own decomposition, but
> it would be nice if I could leave those details up to PETSc.
You are correct. We require that preallocation be done at the same time as the
decomposition; there are tricky dependencies in matrix creation. However, an
easy workaround is to create a Vec at the same time with the same global size,
since it is guaranteed to have the same layout. I will look into simplifying
this if it is possible.
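An untested sketch of what I mean (it assumes petsc4py's Vec.createMPI() and
reuses the sizes and nnz arrays from your example below):

# -----------------------------------------------
from petsc4py import PETSc as petsc

M, N = 4, 6
global_d_nnz = [2, 1, 1, 2]
global_o_nnz = [1, 3, 2, 1]

# A Vec with global size M should get the same row layout that PETSc
# would choose for an M x N matrix on the same communicator.
v = petsc.Vec().createMPI(M, comm=petsc.COMM_WORLD)
i_start, i_end = v.getOwnershipRange()

A = petsc.Mat()
A.create(petsc.COMM_WORLD)
A.setSizes([M, N])
A.setType('aij')

# Preallocate before any operation that forces the matrix to be set up.
A.setPreallocationNNZ([global_d_nnz[i_start:i_end],
                       global_o_nnz[i_start:i_end]])
# -----------------------------------------------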
  Thanks,

     Matt
> I'm including an example using petsc4py of what I'd like to do, run with
> 2 MPI processes.
>
> I apologize if this is a dumb question. Thank you for your help.
>
> -Josh
>
> # -----------------------------------------------
> from petsc4py import PETSc as petsc
>
> M, N = 4, 6
>
> global_d_nnz = [2, 1, 1, 2]
> global_o_nnz = [1, 3, 2, 1]
>
> A = petsc.Mat()
> A.create(petsc.COMM_WORLD)
> A.setSizes([M, N])
> A.setType('aij')
>
> i_start, i_end = A.getOwnershipRange()
>
> A.setPreallocationNNZ([global_d_nnz[i_start:i_end],
>                        global_o_nnz[i_start:i_end]])  # error occurs here
-- 
What most experimenters take for granted before they begin their experiments is
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener