[petsc-users] signal received error; MatNullSpaceTest; Stokes flow solver with pc fieldsplit and schur complement

Jed Brown jedbrown at mcs.anl.gov
Thu Oct 17 09:35:26 CDT 2013


Bishesh Khanal <bisheshkh at gmail.com> writes:

> Yes, I tried it just a while ago and I think that is what happened. (Just
> to confirm, I have put the error message for this case at the very end of
> this reply.*)

Yes, that is the "friendly" way to run out of memory.

> I'm sorry, but I'm a complete beginner with MPI and clusters; what does
> one MPI rank per node mean, and what should I do to achieve that? My guess
> is that I set one core per node and use multiple nodes in my job script
> file? Or do I need to do something in the petsc code?

No, it is in the way you launch the job.  It's usually called "processes
per node", often with a key like "ppn", "nppn", or "mppnppn".  See the
documentation for the machine or ask your facility how to do it.
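For illustration, a minimal sketch of what such a request might look like under two common schedulers (the directive names vary by scheduler and site, and the application name and option file below are placeholders, not from this thread; check your facility's documentation for the exact syntax):

```shell
#!/bin/sh
# Slurm: ask for 4 nodes with a single MPI rank (task) per node,
# so each rank gets the whole node's memory to itself.
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=1

# PBS/Torque equivalent: 4 nodes, 1 process per node.
#PBS -l nodes=4:ppn=1

# Launch with one rank per node (names below are hypothetical).
mpiexec -n 4 ./my_petsc_app -options_file petsc.opts
```

Requesting one rank per node this way trades parallelism for memory: fewer processes share each node's RAM, which is often enough to get past an out-of-memory failure like the one above.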