[petsc-users] DMUMPS_LOAD_RECV_MSGS
Smith, Barry F.
bsmith at mcs.anl.gov
Thu Feb 13 09:09:34 CST 2020
Given the 2040, either your code or MUMPS is running out of MPI communicators. Do you create your own communicators in your code, and are you freeing them when you no longer need them?
If it is not your code, then it is MUMPS that is running out, and you should contact the MUMPS developers directly.
      RECURSIVE SUBROUTINE DMUMPS_LOAD_RECV_MSGS(COMM)
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INCLUDE 'mumps_tags.h'
      INTEGER IERR, MSGTAG, MSGLEN, MSGSOU,COMM
      INTEGER :: STATUS(MPI_STATUS_SIZE)
      LOGICAL FLAG
 10   CONTINUE
      CALL MPI_IPROBE( MPI_ANY_SOURCE, MPI_ANY_TAG, COMM,
     &                 FLAG, STATUS, IERR )
      IF (FLAG) THEN
         KEEP_LOAD(65)=KEEP_LOAD(65)+1
         KEEP_LOAD(267)=KEEP_LOAD(267)-1
         MSGTAG = STATUS( MPI_TAG )
         MSGSOU = STATUS( MPI_SOURCE )
         IF ( MSGTAG .NE. UPDATE_LOAD) THEN
            write(*,*) "Internal error 1 in DMUMPS_LOAD_RECV_MSGS",
     &           MSGTAG
            CALL MUMPS_ABORT()
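
(For reference, the leak pattern being asked about typically looks like the following. This is a minimal sketch in C, not code from this thread; it assumes an MPICH-like MPI implementation, which by default allows on the order of 2048 communicator context ids per process, consistent with a failure after roughly 2040 loop iterations.)

    #include <mpi.h>

    int main(int argc, char **argv)
    {
      MPI_Init(&argc, &argv);
      for (int i = 0; i < 5000; i++) {
        MPI_Comm work;
        /* Each dup consumes one communicator context id. */
        MPI_Comm_dup(MPI_COMM_WORLD, &work);
        /* ... use 'work' for this iteration's solve ... */
        /* Without this free, the MPI library eventually runs out of
           context ids and aborts. */
        MPI_Comm_free(&work);
      }
      MPI_Finalize();
      return 0;
    }

The fix is simply to pair every MPI_Comm_dup (or any other communicator-creating call) with an MPI_Comm_free once the communicator is no longer needed.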
> On Feb 13, 2020, at 7:53 AM, Perceval Desforges <perceval.desforges at polytechnique.edu> wrote:
>
> Hello all,
>
> I have been running into a strange issue with PETSc, and more specifically I believe MUMPS is the problem.
>
> In my program, I run a loop: at the beginning of each iteration I create an EPS object, compute the eigenvalues in a certain interval using the spectrum slicing method, store them, and then destroy the EPS object. For some reason, whatever the problem size, if the loop has too many iterations (over 2040, I believe), the program crashes with this error:
>
> Internal error 1 in DMUMPS_LOAD_RECV_MSGS 0
>
> application called MPI_Abort(MPI_COMM_WORLD, -99) - process 0
>
> I am running the program with MPI on 20 processes.
>
> I don't really understand what this message means; does anybody know?
>
> Best regards,
>
> Perceval
>
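
(For context, the loop described above would look roughly like the sketch below. This is an illustration only, not Perceval's code: the matrix A, the interval [0.0, 1.0], the solver choices, and the function name run_loop are placeholder assumptions, following a standard SLEPc spectrum-slicing setup with MUMPS as the Cholesky factorization package.)

    #include <slepceps.h>

    /* Hypothetical loop illustrating the reported usage: create an EPS,
       run spectrum slicing on an interval, store the results, destroy
       the EPS, and repeat many times. */
    PetscErrorCode run_loop(Mat A, PetscInt niter)
    {
      PetscErrorCode ierr;
      PetscInt       i;

      PetscFunctionBeginUser;
      for (i = 0; i < niter; i++) {
        EPS eps;
        ST  st;
        KSP ksp;
        PC  pc;

        ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
        ierr = EPSSetOperators(eps, A, NULL);CHKERRQ(ierr);
        ierr = EPSSetProblemType(eps, EPS_HEP);CHKERRQ(ierr);
        /* Spectrum slicing: compute all eigenvalues in [0.0, 1.0]. */
        ierr = EPSSetWhichEigenpairs(eps, EPS_ALL);CHKERRQ(ierr);
        ierr = EPSSetInterval(eps, 0.0, 1.0);CHKERRQ(ierr);
        ierr = EPSGetST(eps, &st);CHKERRQ(ierr);
        ierr = STSetType(st, STSINVERT);CHKERRQ(ierr);
        ierr = STGetKSP(st, &ksp);CHKERRQ(ierr);
        ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);
        ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
        ierr = PCSetType(pc, PCCHOLESKY);CHKERRQ(ierr);
        ierr = PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);CHKERRQ(ierr);

        ierr = EPSSolve(eps);CHKERRQ(ierr);
        /* ... store the eigenvalues ... */

        /* EPSDestroy should release the MUMPS factorization and any
           communicators PETSc duplicated for it. */
        ierr = EPSDestroy(&eps);CHKERRQ(ierr);
      }
      PetscFunctionReturn(0);
    }

If every EPS is destroyed as above, the per-iteration resources should be released; a crash after roughly 2040 iterations is what one would expect if a communicator created somewhere in this chain is never freed, which is the possibility Barry raises.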