[petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X

Ronald M. Caplan caplanr at predsci.com
Mon Jul 30 19:10:19 CDT 2012


Hmmm, maybe it's because I am on Mac OS X?

 - Ron

On Mon, Jul 30, 2012 at 4:57 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

> On Mon, Jul 30, 2012 at 4:54 PM, Ronald M. Caplan <caplanr at predsci.com> wrote:
>
>> Yes, that is correct. That is the updated code, with each node storing
>> its own values. See my previous email to Matt for the old version, which
>> segfaults with more than one processor and npts = 25.
>
>
> $ mpiexec.hydra -n 2 ./petsctest
>  N:        46575
>  cores:            2
>  MPI TEST:  My rank is:           0
>  MPI TEST:  My rank is:           1
>  Rank            0  has range            0  and        23288
>  Rank            1  has range        23288  and        46575
>  Number of non-zero entries in matrix:      690339
>  Done setting matrix values...
>  between assembly
>  between assembly
>  PETSc y=Ax time:      199.342865     nsec/mp.
>  PETSc y=Ax flops:    0.415489674     GFLOPS.
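
For reference, here is a minimal sketch in C (not the poster's actual test
code) of the assembly pattern being discussed: each rank inserts values only
for the rows reported by MatGetOwnershipRange(), and every rank then calls
MatAssemblyBegin()/MatAssemblyEnd() collectively. The global size, the
preallocation counts, and the diagonal-only fill are placeholders for
illustration.

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, y;
  PetscInt       N = 46575;              /* global size, as in the run above */
  PetscInt       rstart, rend, i;
  PetscScalar    one = 1.0;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  /* Preallocation: 16 nonzeros per row is a placeholder guess; too little
     preallocation makes assembly very slow, though it should not segfault. */
  ierr = MatMPIAIJSetPreallocation(A, 16, NULL, 16, NULL);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(A, 16, NULL);CHKERRQ(ierr);

  /* Each rank inserts only the rows it owns ("each node storing its own
     values" in the exchange above). */
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    ierr = MatSetValue(A, i, i, one, INSERT_VALUES);CHKERRQ(ierr);
  }

  /* Both assembly calls are collective: every rank must reach them. */
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* y = A x, the operation being timed in the output above. */
  ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
  ierr = VecSetSizes(x, PETSC_DECIDE, N);CHKERRQ(ierr);
  ierr = VecSetFromOptions(x);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &y);CHKERRQ(ierr);
  ierr = VecSet(x, one);CHKERRQ(ierr);
  ierr = MatMult(A, x, y);CHKERRQ(ierr);

  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}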
>
