[mpich-discuss] exit code -1073741819

trymelz trymelz trymelz at yahoo.com
Thu Mar 20 14:30:26 CDT 2008


Hi,
 
Sure. After changing the calling convention to the default in the Intel compiler, I successfully linked with fmpich2.lib. But again, it runs fine when the string length is passed at the end of the argument list, and it crashes when the string length is passed immediately after the string pointer.
 
Probably, additional MPICH2 Fortran libraries need to be provided.

Linfa

----- Original Message ----
From: Jayesh Krishna <jayesh at mcs.anl.gov>
To: trymelz trymelz <trymelz at yahoo.com>
Cc: mpich-discuss at mcs.anl.gov
Sent: Thursday, March 20, 2008 2:08:46 PM
Subject: RE: [mpich-discuss] exit code -1073741819


Hi,
 Can you also try compiling your program with fmpich2.lib?
 
Regards,
Jayesh



From: trymelz trymelz [mailto:trymelz at yahoo.com] 
Sent: Thursday, March 20, 2008 2:06 PM
To: Jayesh Krishna; mpich-discuss at mcs.anl.gov
Subject: Re: [mpich-discuss] exit code -1073741819


Hi 
 
I have no problem compiling & running the sample fpi.f using fmpich2s.lib. Actually, I have identified the culprit: MPI_SEND/MPI_RECV work fine with integers but have problems with strings.
 
The main reason, I believe, is that different Fortran compilers treat string arguments differently: the string length is added as an invisible argument either immediately after the string pointer or at the end of the argument list. So, in my understanding, MPI_SEND/MPI_RECV will have 28 bytes of arguments for an integer buffer but 32 bytes for a string buffer, because of the invisible string-length argument.
 
It will cause a crash if the position of the invisible string-length argument differs between the .obj file and fmpich2s.lib. The following program demonstrates this:
 
      program main
      implicit none
#include <mpif.h>
      integer status(MPI_STATUS_SIZE)
      character*15 pcomand
      integer MYID, mpimodbg, ierr

      ierr = MPI_SUCCESS
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, MYID, ierr)
      if (MYID .eq. 1) then
         mpimodbg = 99
         pcomand = 'hello world'
         call MPI_SEND(mpimodbg, 1, MPI_INTEGER, 0,
     *                 11, MPI_COMM_WORLD, ierr)
         write(*,*) 'node', MYID, pcomand, mpimodbg
         call MPI_SEND(pcomand, 15, MPI_CHARACTER, 0,
     *                 11, MPI_COMM_WORLD, ierr)
         write(*,*) 'node', MYID, pcomand, mpimodbg
      elseif (MYID .eq. 0) then
         mpimodbg = 0
         pcomand = 'hello'
         call MPI_RECV(mpimodbg, 1, MPI_INTEGER, 1,
     *                 11, MPI_COMM_WORLD, status, ierr)
         write(*,*) 'node', MYID, pcomand, mpimodbg
         call MPI_RECV(pcomand, 15, MPI_CHARACTER, 1,
     *                 11, MPI_COMM_WORLD, status, ierr)
         write(*,*) 'node', MYID, pcomand, mpimodbg
      endif
      call MPI_FINALIZE(ierr)
      end
 
When compiled with the string length passed as an invisible argument immediately after the string pointer (the Intel compiler default):
 
$ "/cygdrive/c/Program Files/MPICH2/bin/mpiexec.exe" -n 3 ./Debug/fortestargs.exe
 node           0 hello                    99
 node           1 hello world              99
forrtl: severe (157): Program Exception - access violation
Image              PC        Routine            Line        Source
fmpich2s.dll       1000507F  Unknown               Unknown  Unknown
fortestargs.exe    004A783D  Unknown               Unknown  Unknown
fortestargs.exe    0044AA53  Unknown               Unknown  Unknown
fortestargs.exe    0044A81D  Unknown               Unknown  Unknown
kernel32.dll       7C816FD7  Unknown               Unknown  Unknown
forrtl: severe (157): Program Exception - access violation
Image              PC        Routine            Line        Source
fmpich2s.dll       1000492B  Unknown               Unknown  Unknown
fortestargs.exe    004A783D  Unknown               Unknown  Unknown
fortestargs.exe    0044AA53  Unknown               Unknown  Unknown
fortestargs.exe    0044A81D  Unknown               Unknown  Unknown
kernel32.dll       7C816FD7  Unknown               Unknown  Unknown
job aborted:
rank: node: exit code[: error message]
0: abyss: 157: process 0 exited without calling finalize
1: abyss: 157: process 1 exited without calling finalize
2: abyss: 123
 
When compiled with the string length passed as an invisible argument at the end of the argument list:
 
$ "/cygdrive/c/Program Files/MPICH2/bin/mpiexec.exe" -n 3 ./Debug/fortestargs.exe

 node           1 hello world              99
 node           1 hello world              99
 node           0 hello                    99
 node           0 hello world              99
 
 
Thanks, all, for your help.
 
Linfa


----- Original Message ----
From: Jayesh Krishna <jayesh at mcs.anl.gov>
To: trymelz trymelz <trymelz at yahoo.com>
Cc: mpich-discuss at mcs.anl.gov
Sent: Thursday, March 20, 2008 9:15:21 AM
Subject: RE: [mpich-discuss] exit code -1073741819


Hi,
    Can you compile & run the sample fpi.f (provided in MPICH2\examples in your installation dir) using fmpich2s.lib?
 
Regards,
Jayesh




From: owner-mpich-discuss at mcs.anl.gov [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of trymelz trymelz
Sent: Wednesday, March 19, 2008 6:30 PM
To: mpich-discuss at mcs.anl.gov
Subject: Re: [mpich-discuss] exit code -1073741819


Hi
 
The MPI program is too big to be sent. Here is what I got from the debugger:
 
Before calling PMPI_ISEND:
Symbol       Value        Type
buf(1)       3            INTEGER(4)
cnt          1            INTEGER(4)
datatype     1275069467   INTEGER(4)
dest         0            INTEGER(4)
tag          24           INTEGER(4)
comm         1140850688   INTEGER(4)
reqs(1)      1243468      INTEGER(4)
ierr         0            INTEGER(4)
 
call PMPI_ISEND(buf,cnt,datatype,dest,tag,comm,reqs(1),ierr)

Unhandled exception at 0x003b41c9 in EXEC.exe:0xC0000005:
Access violation writing location 0x00000000
 
After the break at the crash, all the arguments become "undefined address".
 
When I check the library, I got
$ nm fmpich2s.lib | grep MPI_SEND@
00000000 T _MPI_SEND@28
00000000 I __imp__MPI_SEND@28
00000000 T _MPI_SEND@32
00000000 I __imp__MPI_SEND@32
00000000 T _PMPI_SEND@28
00000000 I __imp__PMPI_SEND@28
00000000 T _PMPI_SEND@32
00000000 I __imp__PMPI_SEND@32

I am wondering why there are two definitions, _MPI_SEND@28 and _MPI_SEND@32, and I could not find the corresponding source code for _MPI_SEND@32. Any comments? Thanks.

Linfa


 
----- Original Message ----
From: Jayesh Krishna <jayesh at mcs.anl.gov>
To: trymelz trymelz <trymelz at yahoo.com>
Cc: mpich-discuss at mcs.anl.gov
Sent: Wednesday, March 19, 2008 2:32:59 PM
Subject: RE: [mpich-discuss] exit code -1073741819


Hi,
 Can you send us your MPI program?
 You can also try to debug your program by setting the error handler to MPI_ERRORS_RETURN (via MPI_Comm_set_errhandler()) and using MPI_Error_string() to get a description of the error code.
 
Regards,
Jayesh




From: owner-mpich-discuss at mcs.anl.gov [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of trymelz trymelz
Sent: Wednesday, March 19, 2008 2:11 PM
To: mpich-discuss at mcs.anl.gov
Subject: [mpich-discuss] exit code -1073741819


Hi all,
 
I got a strange error when running my parallel program.
 
job aborted:
rank: node: exit code[: error message]
0: abyss: -1073741819: process 0 exited without calling finalize
1: abyss: -1073741819: process 1 exited without calling finalize
2: abyss: -1073741819: process 2 exited without calling finalize
 
My program works as expected under Linux, but when porting it to Windows I got the above error. It is compiled with VC8 and Intel Fortran 10.0.
I didn't build my own MPICH2; I linked my program against fmpich2s.lib and mpi.lib, which were installed from mpich2-1.0.6p1-win32-ia32.msi.
 
This error happened inside the call to MPI_SEND. Does anyone have an idea about this error?
 
 
Linfa




