On Mon, Jul 30, 2012 at 4:54 PM, Ronald M. Caplan <caplanr@predsci.com> wrote:

> Yes, that is correct. That is the updated code with each node storing its
> own values. See my previous email to Matt for the old version, which
> segfaults with more than one processor and npts = 25.
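For anyone following along, here is a minimal sketch in C of the "each node stores its own values" pattern described above. It is not Ron's actual code: the 3-point stencil values and the matrix setup are made up for illustration, and error checking (CHKERRQ) is omitted for brevity. The point is only that each rank queries MatGetOwnershipRange() and calls MatSetValues() solely for rows it owns:

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A;
  PetscInt    N = 46575;               /* global size taken from the test output below */
  PetscInt    rstart, rend, i, ncols;
  PetscInt    cols[3];
  PetscScalar vals[3];

  PetscInitialize(&argc, &argv, NULL, NULL);

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);
  MatSetFromOptions(A);
  MatSetUp(A);

  /* Each rank asks PETSc which rows it owns and touches only those,
     so no rank writes into another rank's portion of the matrix.    */
  MatGetOwnershipRange(A, &rstart, &rend);
  for (i = rstart; i < rend; i++) {
    ncols = 0;
    if (i > 0)     { cols[ncols] = i - 1; vals[ncols++] = -1.0; }
    cols[ncols] = i; vals[ncols++] = 2.0;
    if (i < N - 1) { cols[ncols] = i + 1; vals[ncols++] = -1.0; }
    MatSetValues(A, 1, &i, ncols, cols, vals, INSERT_VALUES);
  }

  /* Any off-process entries would be communicated during assembly. */
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatDestroy(&A);
  PetscFinalize();
  return 0;
}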
$ mpiexec.hydra -n 2 ./petsctest
 N: 46575
 cores: 2
 MPI TEST: My rank is: 0
 MPI TEST: My rank is: 1
 Rank 0 has range 0 and 23288
 Rank 1 has range 23288 and 46575
 Number of non-zero entries in matrix: 690339
 Done setting matrix values...
 between assembly
 between assembly
 PETSc y=Ax time: 199.342865 nsec/mp.
 PETSc y=Ax flops: 0.415489674 GFLOPS.
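For reference, one hypothetical way the y=Ax numbers above could be produced (the actual petsctest source is not in this thread; the "nsec/mp" and GFLOPS definitions here, N mesh points and 2 flops per nonzero, are guesses on my part):

#include <petscmat.h>

/* Time repeated y = A x products and report per-mesh-point time and a
   flop rate.  Hypothetical harness, error checking omitted.           */
static PetscErrorCode TimeMatMult(Mat A, PetscInt nloops)
{
  Vec      x, y;
  PetscInt N, k;
  MatInfo  info;
  double   t0, t1;

  MatGetSize(A, &N, NULL);
  MatGetInfo(A, MAT_GLOBAL_SUM, &info);   /* info.nz_used = global nonzero count */

  VecCreateMPI(PetscObjectComm((PetscObject)A), PETSC_DECIDE, N, &x);
  VecDuplicate(x, &y);
  VecSet(x, 1.0);

  t0 = MPI_Wtime();
  for (k = 0; k < nloops; k++) MatMult(A, x, y);   /* y = A x */
  t1 = MPI_Wtime();

  PetscPrintf(PETSC_COMM_WORLD, "PETSc y=Ax time: %g nsec/mp.\n",
              1e9 * (t1 - t0) / nloops / N);
  PetscPrintf(PETSC_COMM_WORLD, "PETSc y=Ax flops: %g GFLOPS.\n",
              2.0 * info.nz_used * nloops / (t1 - t0) / 1e9);

  VecDestroy(&x);
  VecDestroy(&y);
  return 0;
}

MatGetInfo() with MAT_GLOBAL_SUM returns the global number of stored nonzeros, which is presumably where a count like the 690339 above would come from; whether the reported GFLOPS uses that same convention is not clear from the output alone.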