[MPICH] error 66, MPI 1.04p1

Rajeev Thakur thakur at mcs.anl.gov
Mon Aug 28 22:07:51 CDT 2006


In C++, the request is the value returned by the method. See
http://www.mpi-forum.org/docs/mpi-20-html/node291.htm
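
Applied to the inner loop of your test program, that means capturing the
value returned by Isend and completing it after the Recv. A minimal
sketch, using the names from your posted code:

    MPI::Request req = MPI::COMM_WORLD.Isend( buf, DATA_SIZE, MPI::INT, j, 999 ) ;
    MPI::COMM_WORLD.Recv( inbuf[ j ], DATA_SIZE, MPI::INT, j, 999 ) ;
    // Completing the request also frees it; without this, every
    // iteration leaves one more pending send request behind.
    req.Wait( ) ;

(If you would rather batch the sends, see the Waitall sketch at the
bottom of this message.)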

Rajeev
 

> -----Original Message-----
> From: chong tan [mailto:chong_guan_tan at yahoo.com] 
> Sent: Monday, August 28, 2006 6:57 PM
> To: Rajeev Thakur; mpich-discuss at mcs.anl.gov
> Subject: RE: [MPICH] error 66, MPI 1.04p1
> 
> thanks for pointing that out.  However, in C++, it is
> possible to call Isend without the MPI_Request
> parameter, in which case there is nothing to wait for.
> If Isend should always be waited for, then the
> mpicxx header file should always require the Request
> parameter.
> 
> The application I am working on performs inter-process
> data exchange millions of times.  At any given time,
> each process knows what to send to and receive from
> whom.  I am then doing Isend, then Recv.  Once a
> process receives the data it needs, it is free to move
> on.  It would be nice if Isend need not be waited for
> if there is no Request parameter.
> 
> tan
> 
> --- Rajeev Thakur <thakur at mcs.anl.gov> wrote:
> 
> > An Isend is required to be completed by a Wait. The Wait frees the
> > request object returned by the Isend.
> > 
> > Rajeev 
> > 
> > > -----Original Message-----
> > > From: chong tan [mailto:chong_guan_tan at yahoo.com] 
> > > Sent: Monday, August 28, 2006 5:40 PM
> > > To: Rajeev Thakur; mpich-discuss at mcs.anl.gov
> > > Subject: RE: [MPICH] error 66, MPI 1.04p1
> > > 
> > > 
> > > Isn't Recv supposed to force a sync?
> > > tan
> > > 
> > > 
> > > --- Rajeev Thakur <thakur at mcs.anl.gov> wrote:
> > > 
> > > > You are doing too many Isends without calling
> > > > MPI_Wait to free the requests.
> > > > You need to add MPI_Wait.
> > > > 
> > > > Rajeev 
> > > > 
> > > > > -----Original Message-----
> > > > > From: owner-mpich-discuss at mcs.anl.gov 
> > > > > [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of chong tan
> > > > > Sent: Monday, August 28, 2006 3:10 PM
> > > > > To: mpich-discuss at mcs.anl.gov
> > > > > Subject: [MPICH] error 66, MPI 1.04p1
> > > > > 
> > > > > run this little test and get error 66.
> > > > > 
> > > > > #define MPICH_IGNORE_CXX_SEEK
> > > > > #include "stdio.h"
> > > > > #include "mpi.h"
> > > > > 
> > > > > 
> > > > > 
> > > > > #define COUNT           200000
> > > > > #define DATA_SIZE       2048
> > > > > 
> > > > > int main( int argc, char **argv )
> > > > > {
> > > > >    int  rank, nproc ;
> > > > >    int  buf[ DATA_SIZE ] ;
> > > > >    int  **inbuf ;
> > > > >    int  i, j ;
> > > > >    char *processor_name ;
> > > > > 
> > > > >    processor_name = new char [ MPI_MAX_PROCESSOR_NAME ] ;
> > > > >    MPI_Init( &argc, &argv ) ;
> > > > >    rank = MPI::COMM_WORLD.Get_rank( );
> > > > >    nproc = MPI::COMM_WORLD.Get_size( );
> > > > > 
> > > > > 
> > > > >    inbuf = (int **)new int* [ nproc ] ;
> > > > >    for( i = 0 ; i < nproc ; i++ ) {
> > > > >         inbuf[ i ] = new int[ DATA_SIZE ] ;
> > > > >    }
> > > > >    if( rank == 0 ) {
> > > > >       for( i = 0 ; i < COUNT ; i++ ) {
> > > > >         for( j = 1 ; j < nproc ; j++ ) {
> > > > >             MPI::COMM_WORLD.Isend( buf, DATA_SIZE, MPI::INT, j, 999 ) ;
> > > > >             MPI::COMM_WORLD.Recv( inbuf[ j ], DATA_SIZE, MPI::INT, j, 999 ) ;
> > > > >         }
> > > > >       }
> > > > >       printf( "MASTER END..\n" ) ;
> > > > >    } else {
> > > > >       for( i = 0 ; i < COUNT ; i++ ) {
> > > > >         for( j = 1 ; j < nproc ; j++ ) {
> > > > >             MPI::COMM_WORLD.Isend( buf, DATA_SIZE, MPI::INT, j, 999 ) ;
> > > > >             MPI::COMM_WORLD.Recv( inbuf[ j ], DATA_SIZE, MPI::INT, j, 999 ) ;
> > > > >         }
> > > > >       }
> > > > >       printf( "SLAVE %d END..\n", rank ) ;
> > > > >    }
> > > > >    MPI::Finalize() ;
> > > > > }
> > > > > 
> > > > > mpicxx main.cc
> > > > > mpiexec -n 2 a.out
> > > > > INTERNAL ERROR: Invalid error class (66) encountered while returning
> > > > > from MPI_Recv.  Please file a bug report.  No error stack is available.
> > > > > [cli_0]: aborting job:
> > > > > Fatal error in MPI_Recv: Error message texts are not available
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > 
> > > > 
> > > 
> > > 
> > > 
> > > 
> > 
> > 
> 
> 
> 
> 
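
For completeness: if the application wants several Isends in flight
before completing them, the C++ bindings also provide
MPI::Request::Waitall, which completes (and frees) a whole batch of
requests in one call. A sketch under the same assumptions as the
posted loop; the reqs array and the MAX_PROCS bound are illustrative,
not part of the original code:

    MPI::Request reqs[ MAX_PROCS ] ;    // MAX_PROCS: hypothetical bound, >= nproc - 1
    for( j = 1 ; j < nproc ; j++ ) {
        reqs[ j - 1 ] = MPI::COMM_WORLD.Isend( buf, DATA_SIZE, MPI::INT, j, 999 ) ;
    }
    for( j = 1 ; j < nproc ; j++ ) {
        MPI::COMM_WORLD.Recv( inbuf[ j ], DATA_SIZE, MPI::INT, j, 999 ) ;
    }
    // Complete all outstanding sends; Waitall frees each request.
    MPI::Request::Waitall( nproc - 1, reqs ) ;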



