On Tue, Mar 31, 2009 at 3:58 PM, Nguyen, Hung V ERDC-ITL-MS <Hung.V.Nguyen@usace.army.mil> wrote:
> All,
>
> I have a test case in which each processor reads its own part of a
> matrix, in CSR format, dumped out by a CFD application.
> Note: the matrix was partitioned by ParMetis.
>
> The code below shows how the data are inserted into the PETSc matrix
> (gmap is the global map). The solution from PETSc is very close to the
> CFD solution, so I think it is correct.
>
> My question is whether the parallel partitioning of the matrix is
> determined by PETSc at runtime, or whether it is the same as the
> ParMetis partitioning.
>
> Thank you,
>
> -hung
> ---
> /* create a matrix object */
> MatCreateMPIAIJ(PETSC_COMM_WORLD, my_own, my_own, M, M, mnnz,
>                 PETSC_NULL, mnnz, PETSC_NULL, &A);

You have determined the partitioning right here: the local sizes
(my_own, my_own) fix which contiguous block of global rows each process
owns.

  Matt
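A minimal sketch (not part of the original exchange; it assumes the A
and my_own from the quoted code) of how the resulting distribution can
be inspected:

  PetscMPIInt rank;
  PetscInt    rstart, rend;

  /* PETSc assigns each rank the contiguous block of global rows implied
     by the local sizes passed to MatCreateMPIAIJ. */
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
  MatGetOwnershipRange(A, &rstart, &rend);  /* this rank owns rows [rstart, rend) */
  PetscPrintf(PETSC_COMM_SELF, "[%d] rows %d..%d (my_own = %d)\n",
              rank, (int)rstart, (int)rend - 1, my_own);

If gmap does not number each rank's rows contiguously, then some of the
rows inserted in the loop below are not locally owned, and the
distribution PETSc actually uses is not the ParMetis one.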

> for (i = 0; i < my_own; i++) {          /* loop over locally owned rows */
>   int row = gmap[i];                    /* global row index */
>   for (j = ia[i]; j < ia[i+1]; j++) {
>     int col = ja[j];                    /* local column index */
>     jj = gmap[col];                     /* global column index */
>     MatSetValues(A, 1, &row, 1, &jj, &val[j], INSERT_VALUES);
>   }
> }
> /* free the temporary CSR arrays */
> free(val); free(ja); free(ia);
>
> /* assemble the matrix */
> MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
> MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
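As an aside, each row could also be inserted with a single
MatSetValues() call instead of one call per nonzero. A rough sketch
(not from the original code; cols is a hypothetical scratch array of
length at least the longest row):

  for (i = 0; i < my_own; i++) {
    PetscInt row = gmap[i], ncols = ia[i+1] - ia[i];
    for (j = 0; j < ncols; j++)
      cols[j] = gmap[ja[ia[i] + j]];      /* global column indices of row i */
    MatSetValues(A, 1, &row, ncols, cols, &val[ia[i]], INSERT_VALUES);
  }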

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener