How to loop Petsc in Fortran?
Matthew Knepley
knepley at gmail.com
Mon Apr 24 12:01:36 CDT 2006
On 4/24/06, Letian Wang <letian.wang at ghiocel-tech.com> wrote:
>
> Dear All:
>
> Question 1):
>
> For an optimization task, I need to loop PETSc (I'm using PETSc 2.3.0),
> but I had problems reinitializing PETSc after finalizing it. Here is a
> simple Fortran program to explain my problem:
>
It is not possible to call MPI_Init() after MPI_Finalize(), so you should
call PetscInitialize()/PetscFinalize() only once, with your optimization
loop between them.
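Roughly, the structure I have in mind looks like the sketch below. This is
not your code: the assembly of A and b for each task is elided, ntasks is
just a placeholder, and the include paths and calling sequences may need
small adjustments for your PETSc version. The important points are that
PetscInitialize()/PetscFinalize() bracket the whole loop, and that every
object created inside an iteration is destroyed before the next one, so
memory does not grow from task to task.

      program loopsolve
      implicit none
#include "include/finclude/petsc.h"
#include "include/finclude/petscvec.h"
#include "include/finclude/petscmat.h"
#include "include/finclude/petscksp.h"

      PetscErrorCode ierr
      PetscInt       itask, ntasks
      Mat            A
      Vec            x, b
      KSP            ksp

!     Initialize PETSc (and MPI) exactly once
      call PetscInitialize(PETSC_NULL_CHARACTER, ierr)

      ntasks = 3
      do itask = 1, ntasks
!        ... assemble A, b, and x for this task (elided) ...

!        create the solver, solve, and destroy it again
         call KSPCreate(PETSC_COMM_WORLD, ksp, ierr)
         call KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN, ierr)
         call KSPSetFromOptions(ksp, ierr)
         call KSPSolve(ksp, b, x, ierr)

!        destroy everything created in this iteration so memory
!        does not accumulate across tasks
         call KSPDestroy(ksp, ierr)
         call MatDestroy(A, ierr)
         call VecDestroy(b, ierr)
         call VecDestroy(x, ierr)
      end do

!     Finalize PETSc (and MPI) exactly once
      call PetscFinalize(ierr)
      end

With KSPSetFromOptions() in there, you can still select the CR solver and
the Prometheus preconditioner on the command line as you do now.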
> Question 2):
>
> Following up on my previous question, I also tried initializing and
> finalizing PETSc only once and performing the do-loop between
> PetscInitialize and PetscFinalize. I used the KSP CR solver with the
> Prometheus preconditioner to solve large linear systems. After several
> loops, the program was interrupted by a segmentation violation error. I
> suppose there is a memory leak somewhere. The error message is shown
> below. Any suggestions for this? Thanks
>
This is a memory corruption problem. Use the debugger (-start_in_debugger)
to get a stack trace, so that at least we know where the SEGV is occurring.
Then we can try to fix it.
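For example, with whatever MPI launcher you normally use, something like

    mpirun -np 2 ../feap -start_in_debugger

should attach a debugger to each process (this assumes the compute nodes can
open xterms back to your display). When the SEGV happens, typing 'where' in
the debugger on the faulting rank prints the stack trace. If that is not
convenient on your cluster, -on_error_attach_debugger is an alternative.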
Thanks,
Matt
> *********Doing job -- nosort0001
>
> Task No. 1 Total CPU= 52.3
>
> ---------------------------------------------------
>
> *********Doing job -- nosort0002
>
> Task No. 2 Total CPU= 52.1
>
> ---------------------------------------------------
>
> *********Doing job -- nosort0003
>
> --------------------------------------------------------------------------
>
> Petsc Release Version 2.3.0, Patch 44, April, 26, 2005
>
> See docs/changes/index.html for recent updates.
>
> See docs/faq.html for hints about trouble shooting.
>
> See docs/index.html for manual pages.
>
> -----------------------------------------------------------------------
>
> ../feap on a linux-gnu named GPTnode3.cl.ghiocel-tech.com by ltwang Mon
> Apr 24 15:25:04 2006
>
> Libraries linked from /home/ltwang/Library/petsc-2.3.0/lib/linux-gnu
>
> Configure run at Tue Mar 14 11:19:49 2006
>
> Configure options --with-mpi-dir=/usr --with-debugging=0
> --download-spooles=1 --download-f-blas-lapack=1 --download-parmetis=1
> --download-prometheus=1 --with-shared=0
>
> -----------------------------------------------------------------------
>
> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
>
> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
>
> [1]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and
> run
>
> [1]PETSC ERROR: to get more information on the crash.
>
> [1]PETSC ERROR: User provided function() line 0 in unknown directory
> unknown file
>
> [1]PETSC ERROR: Signal received!
>
> [1]PETSC ERROR: !
>
> [cli_1]: aborting job:
>
> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1
>
> [cli_0]: aborting job:
>
> Fatal error in MPI_Allgather: Other MPI error, error stack:
>
> MPI_Allgather(949)........................: MPI_Allgather(sbuf=0xbffeea14,
> scount=1, MPI_INT, rbuf=0x8bf0a0c, rcount=1, MPI_INT, comm=0x84000000)
> failed
>
> MPIR_Allgather(180).......................:
>
> MPIC_Sendrecv(161)........................:
>
> MPIC_Wait(321)............................:
>
> MPIDI_CH3_Progress_wait(199)..............: an error occurred while
> handling an event returned by MPIDU_Sock_Wait()
>
> MPIDI_CH3I_Progress_handle_sock_event(422):
>
> MPIDU_Socki_handle_read(649)..............: connection failure
> (set=0,sock=2,errno=104:(strerror() not found))
>
> rank 1 in job 477 GPTMaster_53830 caused collective abort of all ranks
>
> exit status of rank 1: return code 59
>
>
> Letian
>
--
"Failure has a thousand explanations. Success doesn't need one" -- Sir Alec
Guiness