Make sure there is no mpif.h file in any of the application
directories.
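
For example, something like this (the path is illustrative) will turn
up any stray copies in the source tree:

    find /path/to/ccsm3 \( -name mpif.h -o -name mpi.h \) -print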

Rajeev

________________________________________
From: mpich-discuss-bounces@mcs.anl.gov
[mailto:mpich-discuss-bounces@mcs.anl.gov] On Behalf Of LSB
Sent: Wednesday, December 23, 2009 2:39 AM
To: mpich-discuss@mcs.anl.gov
Subject: Re: [mpich-discuss] how to deal with these errors?

Hi Correa,

Thank you for your reply. Nice to meet you here again, haha.
I have reorganized my questions and posted them on the CGD forum.

I have edited the MPI include file and library directories after you
answered me through the CCSM mailing list. However, the "Invalid
communicator" error is still there. I am sure my CCSM3 Makefiles now
point to the MPICH2 include files, and the library directories have
also been changed to those of MPICH2. Is there any other reason that
could lead to this problem?
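
(To verify the paths I used a check along these lines, with $CCSMROOT
standing in for my actual source tree:

    grep -rn -e '-I/' -e '-L/' $CCSMROOT

which lists every hard-coded include and library flag in the build
files.)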
that I can "run" CCSM3 by delete any one of the five components. I infact
meant that in this situation I can see these process names by the command
"top". But if I run all the five, I can not see these
process names with "top", and the error message as
mentioned before will appear.<BR> <BR>The resolution I use is
T31_gx3v5. I asked for two nodes(each node with 16G memory and 16 cpus). But
I am not sure whether these resources are enough for CCSM3 running
or not.<BR> <BR>Thanks for your help.<BR> <BR>Liu.

> Date: Tue, 22 Dec 2009 16:30:40 -0500
> From: gus@ldeo.columbia.edu
> To: mpich-discuss@mcs.anl.gov
> Subject: Re: [mpich-discuss] how to deal with these errors?
>
> Hi Liu
>
> As I mentioned, probably to you, in the CCSM3 forum:
>
> **
>
> Regarding 1),
> the "Invalid communicator" error is often produced by the use
> of wrong mpi.h or mpif.h include files, i.e.,
> include files from another MPI that may be on your system.
>
> If you search this mailing list's archives, or the OpenMPI mailing
> list archives, you will find other postings reporting this error.
>
> For instance, on one of our computers here, the MPICH-1 mpi.h has
> this:
>
> #define MPI_COMM_WORLD 91
>
> whereas the MPICH2 mpi.h has something else:
>
> #define MPI_COMM_WORLD ((MPI_Comm)0x44000000)
>
> As you can see, even MPI_COMM_WORLD is different in MPICH-1 and
> MPICH2.
> You cannot patch this by hand.
> You must use the correct mpi.h/mpif.h, associated with your
> mpicc and mpif90.
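>
> A quick way to check which MPI family a given mpi.h belongs to (the
> install path here is just an example):
>
>     grep 'define MPI_COMM_WORLD' /opt/mpich2/include/mpi.h
>
> If it does not print the MPICH2 value above, that header is from
> another MPI.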
>
> You may want to compile everything again fresh.
> Object files and modules that were built with the wrong mpi.h
> will only cause you headaches, and the "Invalid communicator"
> error will never go away.
> Get rid of them before you restart.
> Do make clean/cleanall, or make cleandist.
> Even better: simply start from a fresh tarball.
>
> To compile, you should preferably use the MPICH2
> compiler wrappers mpif90 and mpicc.
>
> Wherever the CCSM3 Makefiles point to MPI include files,
> make sure the directories are those of MPICH2, not any other MPI.
>
> Likewise for the MPI library directories:
> they must be those associated with MPICH2.
>
> To save you headaches, you can use full path names to
> the MPICH2 mpicc and mpif90.
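>
> The wrappers themselves can report what they use; with MPICH2,
> something like this (path again just an example):
>
>     /opt/mpich2/bin/mpicc -show
>     /opt/mpich2/bin/mpif90 -show
>
> prints the full underlying compiler command, including the -I and -L
> directories the wrapper passes.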
>
> You may need to compile the ESMF library separately,
> as their makefiles seem to be hardwired not to use the MPI compiler
> wrappers.
>
> **
>
> As for 2), CCSM3 is an MPMD program with 5 executables.
> It cannot work correctly if you delete one of them.
> You actually eliminated the flux coupler, which coordinates
> the work of all four other components.
> The other components only talk to the coupler.
> Therefore, what probably happens
> is that the other four executables are waiting
> forever for the flux coupler to answer.
>
> **
>
> As for 3), besides requiring a substantial number of CPUs,
> CCSM3 also needs a significant amount of memory.
> On how many nodes, and with how much memory on each,
> are you trying to run the job?
> Which resolution (T42, T31, T85)?
>
> In any case, increasing the number of processors
> will not solve the MPI error of 1),
> which requires using the correct mpi.h.
>
> **
>
> Only question 1) is a general MPI/MPICH question.
> Questions 2) and 3) are specific CCSM3 issues.
> It may be more productive to discuss them in the CCSM3 forum.
> In any case, let's hope you can get additional help here also.
>
> **
>
> I hope this helps.
> Gus Correa
> ---------------------------------------------------------------------
> Gustavo Correa
> Lamont-Doherty Earth Observatory - Columbia University
> Palisades, NY, 10964-8000 - USA
> ---------------------------------------------------------------------
>
> LSB wrote:
> > Hi everyone,
> >
> > I want to run the Community Climate System Model on our machine under
> > MPICH2. I compiled it successfully. However, I got some MPI error
> > messages while running it.
> >
> > 1) In the run script, I asked for 32 CPUs (using the PBS batch
> > system). After starting up the mpd daemons, I wrote
> > "/mnt/storage-space/disk1/mpich/bin/mpiexec -l -n 2 $EXEROOT/all/cpl :
> > -n 2 $EXEROOT/all/csim : -n 8 $EXEROOT/all/clm : -n 4
> > $EXEROOT/all/pop : -n 16 $EXEROOT/all/cam".
> > The process ends quite quickly after I qsub it, with error messages
> > like:
> > rank 5 in job 1 compute-0-10.local_46741 caused collective abort of
> > all ranks
> > exit status of rank 5: return code 1
> > AND
> > 14: Fatal error in MPI_Cart_shift: Invalid communicator, error stack:
> > 14: MPI_Cart_shift(172): MPI_Cart_shift(MPI_COMM_NULL, direction=1,
> > displ=1, source=0x2582aa0, dest=0x2582aa4) failed
> > 14: MPI_Cart_shift(80).: Null communicator
> > 15: Fatal error in MPI_Cart_shift: Invalid communicator, error stack:
> > 15: MPI_Cart_shift(172): MPI_Cart_shift(MPI_COMM_NULL, direction=1,
> > displ=1, source=0x2582aa0, dest=0x2582aa4) failed
> > 5: Assertion failed in file helper_fns.c at line 337: 0
> > 15: MPI_Cart_shift(80).: Null communicator
> > 5: memcpy argument memory ranges overlap, dst_=0xf2c37f4 src_=0xf2c37f4
> > len_=4
> > 9: Assertion failed in file helper_fns.c at line 337: 0
> > 5:
> > 9: memcpy argument memory ranges overlap, dst_=0x1880ce64
> > src_=0x1880ce64 len_=4
> > 5: internal ABORT - process 5
> > 9:
> > 9: internal ABORT - process 9
> > 4: Assertion failed in file helper_fns.c at line 337: 0
> > 4: memcpy argument memory ranges overlap, dst_=0x1c9615d0
> > src_=0x1c9615d0 len_=4
> > 4:
> > 4: internal ABORT - process 4
> >
> > 2) What quite puzzled me is that if I delete any one of the five
> > components (cpl, csim, clm, pop, cam), the model can run
> > successfully. For example, if I delete "cpl" and write
> > "/mnt/storage-space/disk1/mpich/bin/mpiexec -l -n 2 $EXEROOT/all/csim :
> > -n 8 $EXEROOT/all/clm : -n 4 $EXEROOT/all/pop : -n 16 $EXEROOT/all/cam",
> > it will be OK.
> > But if I run all five at the same time, the error messages mentioned
> > above appear.
> >
> > 3) I guessed that asking for a few more CPUs might make things
> > better, so I gave it a try: I asked for 34 CPUs but still used
> > 2+2+8+4+16=32 of them. The MPI error messages are still there.
> >
> > How should I solve this problem?
> > Can anyone give me some suggestions?
> >
> > Thanks in advance!
> >
> > L. S
> >
> _______________________________________________
> mpich-discuss mailing list
> mpich-discuss@mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss