[MPICH] error 66, MPI 1.04p1
Rajeev Thakur
thakur at mcs.anl.gov
Mon Aug 28 16:46:51 CDT 2006
You are issuing too many Isends without ever calling MPI_Wait to free the requests. Each Isend returns a request that must eventually be completed; you need to save that request and add an MPI_Wait on it.
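For example (just a sketch of the inner loop, reusing the names from your program), the C++ binding's Isend returns an MPI::Request that you can keep and complete once the matching receive is done:

    MPI::Request req =
        MPI::COMM_WORLD.Isend( buf, DATA_SIZE, MPI::INT, j, 999 ) ;
    MPI::COMM_WORLD.Recv( inbuf[ j ], DATA_SIZE, MPI::INT, j, 999 ) ;
    req.Wait() ;   // completes the send and releases the request

Without the Wait, every iteration leaves another active request behind until the implementation runs out of internal resources.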
Rajeev
> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of chong tan
> Sent: Monday, August 28, 2006 3:10 PM
> To: mpich-discuss at mcs.anl.gov
> Subject: [MPICH] error 66, MPI 1.04p1
>
> Run this little test and get error 66.
>
> #define MPICH_IGNORE_CXX_SEEK
> #include "stdio.h"
> #include "mpi.h"
>
>
>
> #define COUNT 200000
> #define DATA_SIZE 2048
>
> main( int *argc, char ***argv )
> {
>     int rank, nproc ;
>     int buf[ DATA_SIZE ] ;
>     int **inbuf ;
>     int i, j ;
>     char *processor_name ;
>
>     processor_name = new char [ MPI_MAX_PROCESSOR_NAME ] ;
>     MPI_Init( argc, argv ) ;
>     rank = MPI::COMM_WORLD.Get_rank( ) ;
>     nproc = MPI::COMM_WORLD.Get_size( ) ;
>
>     inbuf = (int **)new int* [ nproc ] ;
>     for( i = 0 ; i < nproc ; i++ ) {
>         inbuf[ i ] = new int[ DATA_SIZE ] ;
>     }
>     if( rank == 0 ) {
>         for( i = 0 ; i < COUNT ; i++ ) {
>             for( j = 1 ; j < nproc ; j++ ) {
>                 MPI::COMM_WORLD.Isend( buf, DATA_SIZE, MPI::INT, j, 999 ) ;
>                 MPI::COMM_WORLD.Recv( inbuf[ j ], DATA_SIZE, MPI::INT, j, 999 ) ;
>             }
>         }
>         printf( "MASTER END..\n" ) ;
>     } else {
>         for( i = 0 ; i < COUNT ; i++ ) {
>             for( j = 1 ; j < nproc ; j++ ) {
>                 MPI::COMM_WORLD.Isend( buf, DATA_SIZE, MPI::INT, j, 999 ) ;
>                 MPI::COMM_WORLD.Recv( inbuf[ j ], DATA_SIZE, MPI::INT, j, 999 ) ;
>             }
>         }
>         printf( "SLAVE %d END..\n", rank ) ;
>     }
>     MPI::Finalize() ;
> }
>
> mpicxx main.cc
> mpiexec -n 2 a.out
> INTERNAL ERROR: Invalid error class (66) encountered while returning from
> MPI_Recv. Please file a bug report. No error stack is available.
> [cli_0]: aborting job:
> Fatal error in MPI_Recv: Error message texts are not available