[petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X
Jed Brown
jedbrown at mcs.anl.gov
Mon Jul 30 18:57:40 CDT 2012
On Mon, Jul 30, 2012 at 4:54 PM, Ronald M. Caplan <caplanr at predsci.com> wrote:
> Yes, that is correct. That is the updated code, with each node storing its
> own values. See my previous email to Matt for the old version, which
> segfaults with more than one processor and npts = 25.
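
For context, the owner-computes assembly pattern being described looks roughly like the C sketch below: each rank calls MatSetValues only for rows inside its ownership range, and every rank then makes the collective MatAssemblyBegin/End calls. This is only a minimal illustration, not the actual petsctest code; the global size is taken from the run output, and the preallocation numbers and matrix entries are placeholders.

/* Minimal sketch of per-rank matrix assembly in PETSc (not the actual
 * petsctest source): each rank inserts only the rows it owns, then all
 * ranks collectively call MatAssemblyBegin/End exactly once. */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscInt       N = 46575;   /* global size taken from the run output */
  PetscInt       rstart, rend, i;
  PetscScalar    v = 1.0;     /* placeholder entry */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  /* Preallocation counts here are guesses, not the real stencil. */
  ierr = MatSeqAIJSetPreallocation(A, 15, NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A, 15, NULL, 15, NULL);CHKERRQ(ierr);

  /* Each rank sets values only for its locally owned rows. */
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    ierr = MatSetValues(A, 1, &i, 1, &i, &v, INSERT_VALUES);CHKERRQ(ierr);
  }

  /* Collective calls: every rank must make them, even if it set no values. */
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}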
$ mpiexec.hydra -n 2 ./petsctest
N: 46575
cores: 2
MPI TEST: My rank is: 0
MPI TEST: My rank is: 1
Rank 0 has range 0 and 23288
Rank 1 has range 23288 and 46575
Number of non-zero entries in matrix: 690339
Done setting matrix values...
between assembly
between assembly
PETSc y=Ax time: 199.342865 nsec/mp.
PETSc y=Ax flops: 0.415489674 GFLOPS.
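
The y=Ax figures above come from timing repeated matrix-vector products. A minimal sketch of how such a measurement can be made, continuing from the assembled matrix A in the sketch above (the repeat count, vector names, and use of MPI_Wtime are assumptions, not the original benchmark):

/* Rough timing loop for y = A*x; illustrative only, not the original code. */
Vec      x, y;
double   t0, t1;
PetscInt k, nreps = 100;   /* arbitrary repeat count */

ierr = MatCreateVecs(A, &x, &y);CHKERRQ(ierr);  /* vectors matching A's layout */
ierr = VecSet(x, 1.0);CHKERRQ(ierr);

t0 = MPI_Wtime();
for (k = 0; k < nreps; k++) {
  ierr = MatMult(A, x, y);CHKERRQ(ierr);        /* y = A*x */
}
t1 = MPI_Wtime();
ierr = PetscPrintf(PETSC_COMM_WORLD, "avg y=Ax time: %g s\n",
                   (t1 - t0) / nreps);CHKERRQ(ierr);

ierr = VecDestroy(&x);CHKERRQ(ierr);
ierr = VecDestroy(&y);CHKERRQ(ierr);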