[petsc-users] import mesh.
Matthew Knepley
knepley at gmail.com
Wed Aug 21 13:58:31 CDT 2013
On Wed, Aug 21, 2013 at 9:19 AM, <obonnefon at paca.inra.fr> wrote:
>
> Hello,
>
>
> I need your help again.
>
> In ex12.c, I have replaced the line:
> ierr = DMPlexCreateBoxMesh(comm, dim, interpolate, dm);CHKERRQ(ierr);
> by
>
You probably want:
  if (!rank) {
>   ierr = DMPlexCreateFromCellList(comm, dim, obNbCells, obNbVertex, 3, 0, obCells, 2, obVertex, dm);CHKERRQ(ierr);
>   for (i = 0; i < obNbBound; i++) {
>     ierr = DMPlexSetLabelValue(*dm, "marker", obBoundary[i]+obNbCells, 1);CHKERRQ(ierr);
>   }
  } else {
    ierr = DMCreate(comm, dm);CHKERRQ(ierr);
    ierr = DMSetType(*dm, DMPLEX);CHKERRQ(ierr);
    ierr = DMPlexSetDimension(*dm, dim);CHKERRQ(ierr);
  }
so that you have an empty mesh on proc 1. It should not crash, but I think
it's getting confused because you have a weirdly specified mesh on all procs.
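For reference, here is a minimal sketch of a mesh-creation routine along these
lines. The array parameters stand in for the obCells/obVertex/obBoundary data
above, the 3-corner/2D constants follow that call, and the DMPlexDistribute
call with the "chaco" partitioner assumes the petsc-3.4 interface; treat it as
an illustration, not the exact ex12 code:

  #include <petscdmplex.h>

  /* Sketch: build the serial mesh only on rank 0, create an empty DMPLEX on
     every other rank, then distribute across the communicator. */
  static PetscErrorCode CreateMeshFromLists(MPI_Comm comm, PetscInt dim,
                                            PetscInt nCells, PetscInt nVertices,
                                            const int cells[], const double coords[],
                                            PetscInt nBound, const int boundary[],
                                            DM *dm)
  {
    PetscMPIInt    rank;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
    if (!rank) {
      PetscInt i;
      /* 3 corners per cell (triangles), no interpolation, 2D coordinates */
      ierr = DMPlexCreateFromCellList(comm, dim, nCells, nVertices, 3, PETSC_FALSE, cells, 2, coords, dm);CHKERRQ(ierr);
      for (i = 0; i < nBound; i++) {
        ierr = DMPlexSetLabelValue(*dm, "marker", boundary[i]+nCells, 1);CHKERRQ(ierr);
      }
    } else {
      /* Empty mesh of the right dimension on all other ranks */
      ierr = DMCreate(comm, dm);CHKERRQ(ierr);
      ierr = DMSetType(*dm, DMPLEX);CHKERRQ(ierr);
      ierr = DMPlexSetDimension(*dm, dim);CHKERRQ(ierr);
    }
    /* Distribute the mesh ("chaco" partitioner, petsc-3.4 calling sequence) */
    {
      DM dmDist = NULL;
      ierr = DMPlexDistribute(*dm, "chaco", 0, &dmDist);CHKERRQ(ierr);
      if (dmDist) {
        ierr = DMDestroy(dm);CHKERRQ(ierr);
        *dm  = dmDist;
      }
    }
    PetscFunctionReturn(0);
  }

The point is that only rank 0 passes real cell and vertex lists; every other
rank contributes an empty DMPLEX of the same dimension, and DMPlexDistribute
then spreads the mesh across the communicator.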
Matt
>
> The result works with one process (mpirun -np 1 ./ex12), but not with two
> (mpirun -np 2 ./ex12); it crashes during partitioning (I'm using chaco):
>
> Do you have any clue about this?
>
> Thanks.
>
> Olivier B
>
> Following, the error message:
>
> $ mpirun -np 2 ./ex12
> [1]PETSC ERROR:
> -----------------------------------------------------------------------
> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [1]PETSC ERROR: or see
> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS
> X to find memory corruption errors
> [1]PETSC ERROR: likely location of problem given in stack below
> [1]PETSC ERROR: --------------------- Stack Frames
> ------------------------------------
> [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not
> available,
> [1]PETSC ERROR: INSTEAD the line number of the start of the function
> [1]PETSC ERROR: is given.
> [1]PETSC ERROR: [1] PetscSFCreateEmbeddedSF line 863
> /home/olivierb/SOFT/petsc-3.4.2/src/vec/is/sf/interface/sf.c
> [1]PETSC ERROR: [1] PetscSFDistributeSection line 1755
> /home/olivierb/SOFT/petsc-3.4.2/src/vec/is/utils/vsectionis.c
> [1]PETSC ERROR: [1] DMPlexDistribute line 2771
> /home/olivierb/SOFT/petsc-3.4.2/src/dm/impls/plex/plex.c
> [1]PETSC ERROR: [1] CreateMesh line 371
> "unknowndirectory/"/home/olivierb/TP_PETSC/ex12.c
> [1]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> [1]PETSC ERROR: Signal received!
> [1]PETSC ERROR:
> -----------------------------------------------------------------------
> [1]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013
> [1]PETSC ERROR: See docs/changes/index.html for recent updates.
> [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> [1]PETSC ERROR: See docs/index.html for manual pages.
> [1]PETSC ERROR:
> ------------------------------------------------------------------------
> [1]PETSC ERROR: ./ex12 on a arch-linux2-c-debug named pcbiom38 by olivierb
> Wed Aug 21 16:02:08 2013
> [1]PETSC ERROR: Libraries linked from /home/olivierb/BUILD/DEBUG/PETSC/lib
> [1]PETSC ERROR: Configure run at Tue Aug 20 11:32:52 2013
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with
> errorcode 59.
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on exactly
> when Open MPI kills them.
> --------------------------------------------------------------------------
> [1]PETSC ERROR: Configure options --with-debugging=1 --download-fiat
> --download-scientificpython --download-generator --download-triangle
> --download-ctetgen --download-chaco
> --prefix=/home/olivierb/BUILD/DEBUG/PETSC
> [1]PETSC ERROR:
> ------------------------------------------------------------------------
> [1]PETSC ERROR: User provided function() line 0 in unknown directory
> unknown file
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 1 with PID 4546 on node pcbiom38
> exiting without calling "finalize". This may have caused other processes in
> the application to be terminated by signals sent by mpirun (as reported
> here).
> --------------------------------------------------------------------------
>
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener