[mpich-discuss] Code crashes when compiling with GFORTRAN but not when compiling with PGF90

Michael Ahlmann mahlmann at ucdavis.edu
Mon Dec 1 17:20:23 CST 2008


Rajeev,

    Thanks for the reply.  I already tried that, but to no avail.  
After further digging I have traced the root cause of the problem to a 
variable declaration.  If I declare a particular variable as global I 
get the previous error, but if I don't, the code runs just fine (recall 
this happens only with GFORTRAN, not PGF90...).  So I am thinking it may 
be an inconsistency with regard to precision or something of that sort, 
but I haven't been able to pin it down yet.
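For reference, the error complains that the third argument of MPI_Waitall 
(array_of_statuses) is a null pointer, which is why I suspect the declaration.  
A minimal sketch of how that call should look from Fortran (variable names 
here are illustrative, not from my actual code):

```fortran
! Illustrative sketch only -- names are made up, not from my code.
! The status array passed to MPI_Waitall must be an integer array
! dimensioned (MPI_STATUS_SIZE, count).  A mismatched global/common
! declaration can end up passing a garbage or null pointer under one
! compiler but not another.
      include 'mpif.h'

      integer :: ierr
      integer :: requests(8)
      integer :: statuses(MPI_STATUS_SIZE, 8)

      call MPI_Waitall(8, requests, statuses, ierr)

! If the statuses are not needed, MPI_STATUSES_IGNORE (MPI-2) avoids
! the status array entirely:
      call MPI_Waitall(8, requests, MPI_STATUSES_IGNORE, ierr)
```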

-Michael

Rajeev Thakur wrote:
> It's strange. We test with gfortran regularly. Just make sure again that you
> have compiled and built correctly with gfortran, i.e., that both F77 and F90
> are set to gfortran before running configure. Then run make clean and make.
>
> Rajeev
>
>   
>> -----Original Message-----
>> From: mpich-discuss-bounces at mcs.anl.gov 
>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of 
>> Michael Ahlmann
>> Sent: Monday, December 01, 2008 1:11 PM
>> To: mpich-discuss at mcs.anl.gov
>> Subject: [mpich-discuss] Code crashes when compiling with 
>> GFORTRAN but not when compiling with PGF90
>>
>> I recently built a new cluster and loaded it with MPICH2 (1.0.8) and 
>> both the PGF90 and GFORTRAN compilers.  When I compile MPICH2 and my 
>> code with PGF90, everything runs without a hitch; however, if I 
>> compile both with GFORTRAN, the code crashes after 1 iteration unless 
>> I run it on 2 or fewer processors.  The strange part is that I have 
>> successfully compiled and run the code on multiple processors using 
>> GFORTRAN and MPICH2 (1.0.6) on another cluster, so I am somewhat 
>> baffled.  The error message is the following:
>>
>> Fatal error in MPI_Waitall: Invalid argument, error stack:
>> MPI_Waitall(257): MPI_Waitall(count=8, req_array=0xa0476f8, 
>> status_array=(nil)) failed
>> MPI_Waitall(106): Null pointer in parameter array_of_statuses[cli_0]: 
>> aborting job:
>> Fatal error in MPI_Waitall: Invalid argument, error stack:
>> MPI_Waitall(257): MPI_Waitall(count=8, req_array=0xa0476f8, 
>> status_array=(nil)) failed
>> MPI_Waitall(106): Null pointer in parameter array_of_statuses
>> rank 0 in job 21  cluster.engr.ucdavis.edu_45821   caused collective 
>> abort of all ranks
>>  exit status of rank 0: return code 1
>>
>> Has anybody seen a similar problem?  Thanks for the help,
>>
>> -Michael
>>
>>     
>
>
>   
