Dear Anthony,

Hi,

Many thanks for informing me. I'm using OpenMPI, but I could not understand this part:

"so you don't need to set it before using it."

Is there any line that I should remove?

Best Regards
Sincerely
Amin


On Wed, Sep 14, 2011 at 12:43 AM, Anthony Chan <chan@mcs.anl.gov> wrote:
> [hx001:18110] *** An error occurred in MPI_Wait
> [hx001:18110] *** on communicator MPI_COMM_WORLD
> [hx001:18110] *** MPI_ERR_TRUNCATE: message truncated
> [hx001:18110] *** MPI_ERRORS_ARE_FATAL (goodbye)
> [hx001:18111] *** An error occurred in MPI_Wait
> [hx001:18111] *** on communicator MPI_COMM_WORLD
> [hx001:18111] *** MPI_ERR_TRUNCATE: message truncated
> [hx001:18111] *** MPI_ERRORS_ARE_FATAL (goodbye)
> mpirun noticed that job rank 0 with PID 18109 on node hx001 exited on signal 15 (Terminated).
> 1 additional process aborted (not shown)

Are you using OpenMPI or MPICH2?
MPI_Request is set by MPI, so you don't need to set it before using it.
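For instance, here is a minimal sketch (the buffer name rbuf, the count of 100,
and the source rank 0 are just placeholders): you only declare the request
variable; MPI_IRECV assigns it and MPI_WAIT consumes it.

      include 'mpif.h'
      integer req, ierr
      integer istatus(MPI_STATUS_SIZE)
      real rbuf(100)
c     req is not initialized by the user; MPI_IRECV fills in the handle
      call MPI_IRECV(rbuf,100,MPI_REAL,0,1,MPI_COMM_WORLD,req,ierr)
c     ... other work can go here ...
c     MPI_WAIT completes the receive and frees the request handle
      call MPI_WAIT(req,istatus,ierr)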

A.Chan

----- Original Message -----
> Hi
>
> I'm running a program including the following subroutine:
> c---------------------SUBROUTINE Particle_Passing Begin Here --------------------------
> SUBROUTINE Particle_Passing(ions,lecs,xi,yi,zi,xe,ye,ze
> & ,ui,vi,wi,ue,ve,we,mh,Max_p,buf_size,CRye,CRze,CRue
> & ,CP_send,CP_recv,CRxi,CRyi,CRzi,CRui,CRvi,CRwi,CRxe
> & ,CRve,CRwe,CP_Rx,CP_Ry,CP_Rz,CP_Ru,CP_Rv,CP_Rw,CP_Sx
> & ,CLxi,CLyi,CLzi,CLui,CLvi,CLwi,CLxe,CLye,CLze,CLue
> & ,CLve,CLwe,CP_Sy,CP_Sz,CP_Su,CP_Sv,CP_Sw,LABEL_p
> & ,ionsR,lecsR,ionsL,lecsL,mpass,kstrt,Nproc,ierr)
>
> Integer ions,lecs,mh,mpass
> REAL xe,ye,ze,ue,ve,we
> REAL xi,yi,zi,ui,vi,wi
> DIMENSION xe(mh),ye(mh),ze(mh),ue(mh),ve(mh),we(mh)
> DIMENSION xi(mh),yi(mh),zi(mh),ui(mh),vi(mh),wi(mh)
>
> Integer MAX_p
> C Real MAX_p
>
>
>
> integer::ionsR,lecsR,ionsL,lecsL,buf_size
> integer::LABEL_p,CP_send,CP_recv
> REAL,DIMENSION(mpass)::CRxi,CRyi,CRzi,CRui,CRvi,CRwi
> REAL,DIMENSION(mpass)::CRxe,CRye,CRze,CRue,CRve,CRwe
> REAL,DIMENSION(mpass)::CLxi,CLyi,CLzi,CLui,CLvi,CLwi
> REAL,DIMENSION(mpass)::CLxe,CLye,CLze,CLue,CLve,CLwe
> REAL,DIMENSION(buf_size)::CP_Sx,CP_Sy,CP_Sz
> REAL,DIMENSION(buf_size)::CP_Su,CP_Sv,CP_Sw
> REAL,DIMENSION(buf_size)::CP_Rx,CP_Ry,CP_Rz
> REAL,DIMENSION(buf_size)::CP_Ru,CP_Rv,CP_Rw
>
> REAL,DIMENSION(6,buf_size)::CP_S6,CP_R6
>
> integer kstrt, Nproc
> integer ierr
>
> c common block for parallel processing
> integer nprocc, lgrp, lstat, mreal, mint, mcplx, mdouble, lworld
> c lstat = length of status array
> parameter(lstat=10)
> c lgrp = current communicator
> c mreal = default datatype for reals
> common /PPARMS/ nprocc, lgrp, mreal, mint, mcplx, mdouble, lworld
> c local data
> integer j, ks, moff, kl, kr
> integer istatus, msid,msid1,msid2,msid3,msid4
> integer msid5,msid6,msid7
> dimension istatus(lstat)
>
> integer nypmx
> C integer status1(10)
> integer, DIMENSION(1):: mypm, iwork1
> C dimension mypm(1),iwork1(1)
>
> mypm(1)=ionsR
>
> call PPIMAX(mypm,iwork1,1)
> nypmx=mypm(1)
>
> Max_p=nypmx/buf_size+1
> C Max_p=MAXVAL(ionsR)/buf_size+1 !change
> LABEL_p=0
> do 200 n_p=1,Max_p
> CP_send=0
> do i=1,buf_size
> n=LABEL_p+i
> CP_Sx(i)=CRXi(n)
> CP_Sy(i)=CRyi(n)
> CP_Sz(i)=CRzi(n)
>
> CP_Su(i)=CRui(n)
> CP_Sv(i)=CRvi(n)
> CP_Sw(i)=CRwi(n)
> end do
> CP_send=buf_size
> if(n.gt.ionsR)then
> CP_send=ionsR - LABEL_p
> end if
> LABEL_p=n
>
> do i=1,buf_size
> CP_S6(1,i)=CP_Sx(i)
> CP_S6(2,i)=CP_Sy(i)
> CP_S6(3,i)=CP_Sz(i)
>
> CP_S6(4,i)=CP_Su(i)
> CP_S6(5,i)=CP_Sv(i)
> CP_S6(6,i)=CP_Sw(i)
> end do
>
> C CP_Rx=CSHIFT(CP_Sx,-1,2) !change
> C CP_Rx=CSHIFT(CP_Sx,-1,2) !change
> c CP_Ry=CSHIFT(CP_Sy,-1,2) !change
> C CP_Rz=CSHIFT(CP_Sz,-1,2) !change
> c CP_Ru=CSHIFT(CP_Su,-1,2) !change
> C CP_Rv=CSHIFT(CP_Sv,-1,2) !change
> C CP_Rw=CSHIFT(CP_Sw,-1,2) !change
>
> C CP_recv=CSHIFT(CP_send,-1,1) !change
> C CP_recv=buf_size
>
> ks = kstrt - 1
> moff = FZ_strd*mz+2 !my*mz + 2
> c copy to guard cells
> kr = ks + 1
> if (kr.ge.Nproc) kr = kr - Nproc
> kl = ks - 1
> if (kl.lt.0) kl = kl + Nproc
>
>
>
>
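> c exchange the number of particles to pass: receive the incoming count
> c from the left neighbour (kl) into CP_recv and send this rank's count
> c CP_send to the right neighbour (kr); the packed 6-component buffers
> c CP_S6/CP_R6 are then exchanged in the same direction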
> call MPI_IRECV(CP_recv,1,mint,kl,1,lgrp,msid
> & ,ierr)
> call MPI_SEND(CP_send,1,mint,kr,1,lgrp,ierr)
> call MPI_WAIT(msid,istatus,ierr)
> print*,kstrt,lecsR,kr,CP_recv,CP_send,11111113
> call MPI_IRECV(CP_R6,CP_recv,mreal,kl,2,lgrp,msid1
> & ,ierr)
> call MPI_SEND(CP_S6,CP_recv,mreal,kr,2,lgrp,ierr)
> call MPI_WAIT(msid1,istatus,ierr)
>
> if (CP_recv.gt.0) then
> do n=1,CP_recv
> ions=ions+1
> xi(ions)=CP_R6(1,n)
> yi(ions)=CP_R6(2,n)
> zi(ions)=CP_R6(3,n)
>
> ui(ions)=CP_R6(4,n)
> vi(ions)=CP_R6(5,n)
> wi(ions)=CP_R6(6,n)
> end do
> End if
> 110 continue
> C.......................
> 200 continue
>
> C------------------------------
>
> mypm(1)=lecsR
> call PPIMAX(mypm,iwork1,1)
> nypmx=mypm(1)
>
>
> Max_p=nypmx/buf_size+1
> C Max_p=MAXVAL(lecsR)/buf_size+1 !change
>
> LABEL_p=0
> do 400 n_p=1,Max_p
> CP_send=0
> do i=1,buf_size
> n=LABEL_p+i
> CP_Sx(i)=CRXe(n)
> CP_Sy(i)=CRye(n)
> CP_Sz(i)=CRze(n)
>
> CP_Su(i)=CRue(n)
> CP_Sv(i)=CRve(n)
> CP_Sw(i)=CRwe(n)
> end do
> CP_send=buf_size
> if(n.gt.lecsR)then
> CP_send=lecsR - LABEL_p
> end if
>
> LABEL_p=n
>
> C CP_Rx=CSHIFT(CP_Sx,-1,2) !change
> C CP_Rx=CSHIFT(CP_Sx,-1,2) !change
> C CP_Ry=CSHIFT(CP_Sy,-1,2) !change
> C CP_Rz=CSHIFT(CP_Sz,-1,2) !change
> C CP_Ru=CSHIFT(CP_Su,-1,2) !change
> C CP_Rv=CSHIFT(CP_Sv,-1,2) !change
> C CP_Rw=CSHIFT(CP_Sw,-1,2) !change
>
> C CP_recv=CSHIFT(CP_send,-1,1) !change
>
> do i=1,buf_size
> CP_S6(1,i)=CP_Sx(i)
> CP_S6(2,i)=CP_Sy(i)
> CP_S6(3,i)=CP_Sz(i)
>
> CP_S6(4,i)=CP_Su(i)
> CP_S6(5,i)=CP_Sv(i)
> CP_S6(6,i)=CP_Sw(i)
> end do
>
>
> call MPI_IRECV(CP_recv,1,mint,kl,3,lgrp,msid2
> & ,ierr)
> call MPI_SEND(CP_send,1,mint,kr,3,lgrp,ierr)
> call MPI_WAIT(msid2,istatus,ierr)
>
> call MPI_IRECV(CP_R6,CP_recv,mreal,kl,4,lgrp,msid3
> & ,ierr)
> call MPI_SEND(CP_S6,CP_recv,mreal,kr,4,lgrp,ierr)
> call MPI_WAIT(msid3,istatus,ierr)
>
> if (CP_recv.gt.0) then
> do n=1,CP_recv
> lecs=lecs+1
> xe(lecs)=CP_R6(1,n)
> ye(lecs)=CP_R6(2,n)
> ze(lecs)=CP_R6(3,n)
>
> ue(lecs)=CP_R6(4,n)
> ve(lecs)=CP_R6(5,n)
> we(lecs)=CP_R6(6,n)
> end do
> End if
> 310 continue
> C.......................
> 400 continue
> C------------------
> mypm(1)=ionsL
> call PPIMAX(mypm,iwork1,1)
> nypmx=mypm(1)
> Max_p=nypmx/buf_size+1
> C Max_p=MAXVAL(ionsL)/buf_size+1 !change
> LABEL_p=0
> do 600 n_p=1,Max_p
> CP_send=0
> do i=1,buf_size
> n=LABEL_p+i
> CP_Sx(i)=CLXi(n)
> CP_Sy(i)=CLyi(n)
> CP_Sz(i)=CLzi(n)
>
> CP_Su(i)=CLui(n)
> CP_Sv(i)=CLvi(n)
> CP_Sw(i)=CLwi(n)
> end do
> CP_send=buf_size
> if(n.gt.ionsL)then
> CP_send=ionsL - LABEL_p
> end if
> LABEL_p=n
>
> do i=1,buf_size
> CP_S6(1,i)=CP_Sx(i)
> CP_S6(2,i)=CP_Sy(i)
> CP_S6(3,i)=CP_Sz(i)
>
> CP_S6(4,i)=CP_Su(i)
> CP_S6(5,i)=CP_Sv(i)
> CP_S6(6,i)=CP_Sw(i)
> end do
>
>
> C CP_Rx=CSHIFT(CP_Sx,+1,2) !change
> C CP_Rx=CSHIFT(CP_Sx,+1,2) !change
> C CP_Ry=CSHIFT(CP_Sy,+1,2) !change
> C CP_Rz=CSHIFT(CP_Sz,+1,2) !change
> C CP_Ru=CSHIFT(CP_Su,+1,2) !change
> C CP_Rv=CSHIFT(CP_Sv,+1,2) !change
> C CP_Rw=CSHIFT(CP_Sw,+1,2) !change
>
> C CP_recv=CSHIFT(CP_send,+1,1) !change
>
> call MPI_IRECV(CP_recv,1,mint,kr,5,lgrp,msid4
> & ,ierr)
> call MPI_SEND(CP_send,1,mint,kl,5,lgrp,ierr)
> call MPI_WAIT(msid4,istatus,ierr)
>
> call MPI_IRECV(CP_R6,CP_recv,mreal,kr,6,lgrp,msid5
> & ,ierr)
> call MPI_SEND(CP_S6,CP_recv,mreal,kl,6,lgrp,ierr)
> call MPI_WAIT(msid5,istatus,ierr)
>
> if (CP_recv.gt.0) then
> do n=1,CP_recv
> ions=ions+1
> xi(ions)=CP_R6(1,n)
> yi(ions)=CP_R6(2,n)
> zi(ions)=CP_R6(3,n)
>
> ui(ions)=CP_R6(4,n)
> vi(ions)=CP_R6(5,n)
> wi(ions)=CP_R6(6,n)
> end do
> End if
> 510 continue
> C.......................
> 600 continue
> C--------------------
> mypm(1)=lecsL
> call PPIMAX(mypm,iwork1,1)
> nypmx=mypm(1)
> Max_p=nypmx/buf_size+1
> C Max_p=MAXVAL(lecsL)/buf_size+1 !change
> LABEL_p=0
> do 800 n_p=1,Max_p
> CP_send=0
> do i=1,buf_size
> n=LABEL_p+i
> CP_Sx(i)=CLXe(n)
> CP_Sy(i)=CLye(n)
> CP_Sz(i)=CLze(n)
>
> CP_Su(i)=CLue(n)
> CP_Sv(i)=CLve(n)
> CP_Sw(i)=CLwe(n)
> end do
> CP_send=buf_size
> if(n.gt.lecsL)then
> CP_send=lecsL - LABEL_p
> end if
> LABEL_p=n
>
> C CP_Rx=CSHIFT(CP_Sx,+1,2) !change
> C CP_Rx=CSHIFT(CP_Sx,+1,2) !change
> C CP_Ry=CSHIFT(CP_Sy,+1,2) !change
> C CP_Rz=CSHIFT(CP_Sz,+1,2) !change
> C CP_Ru=CSHIFT(CP_Su,+1,2) !change
> C CP_Rv=CSHIFT(CP_Sv,+1,2) !change
> C CP_Rw=CSHIFT(CP_Sw,+1,2) !change
>
> C CP_recv=CSHIFT(CP_send,-1,1) !change
>
> do i=1,buf_size
> CP_S6(1,i)=CP_Sx(i)
> CP_S6(2,i)=CP_Sy(i)
> CP_S6(3,i)=CP_Sz(i)
>
> CP_S6(4,i)=CP_Su(i)
> CP_S6(5,i)=CP_Sv(i)
> CP_S6(6,i)=CP_Sw(i)
> end do
>
> call MPI_IRECV(CP_recv,1,mint,kr,7,lgrp,msid6
> & ,ierr)
> call MPI_SEND(CP_send,1,mint,kl,7,lgrp,ierr)
> call MPI_WAIT(msid6,istatus,ierr)
>
> call MPI_IRECV(CP_R6,CP_recv,mreal,kr,8,lgrp,msid7
> & ,ierr)
> call MPI_SEND(CP_S6,CP_recv,mreal,kl,8,lgrp,ierr)
> call MPI_WAIT(msid7,istatus,ierr)
>
> C print*,mreal,mint,lgrp,CP_send,msid,1111114
> C print*,kstrt,kl,kr,CP_recv,CP_send,11111113
>
> if (CP_recv.gt.0) then
> do n=1,CP_recv
> lecs=lecs+1
> xe(lecs)=CP_R6(1,n)
> ye(lecs)=CP_R6(2,n)
> ze(lecs)=CP_R6(3,n)
>
> ue(lecs)=CP_R6(4,n)
> ve(lecs)=CP_R6(5,n)
> we(lecs)=CP_R6(6,n)
> end do
> End if
> 710 continue
> C.......................
> 800 continue
> End Subroutine
> c---------------------SUBROUTINE Particle_Passing End Here --------------------------
> And I'm receiving the following error:
>
> [hx001:18110] *** An error occurred in MPI_Wait
> [hx001:18110] *** on communicator MPI_COMM_WORLD
> [hx001:18110] *** MPI_ERR_TRUNCATE: message truncated
> [hx001:18110] *** MPI_ERRORS_ARE_FATAL (goodbye)
> [hx001:18111] *** An error occurred in MPI_Wait
> [hx001:18111] *** on communicator MPI_COMM_WORLD
> [hx001:18111] *** MPI_ERR_TRUNCATE: message truncated
> [hx001:18111] *** MPI_ERRORS_ARE_FATAL (goodbye)
> mpirun noticed that job rank 0 with PID 18109 on node hx001 exited on signal 15 (Terminated).
> 1 additional process aborted (not shown)
>
> Is it possible that the error is also caused by the following routine:
> call PPIMAX(mypm,iwork1,1)
>
> subroutine PPIMAX(if,ig,nxp)
> c this subroutine finds parallel maximum for each element of a vector
> c that is, if(j,k) = maximum as a function of k of if(j,k)
> c at the end, all processors contain the same maximum.
> c if = input and output integer data
> c ig = scratch integer array
> c nxp = number of data values in vector
> implicit none
> integer if, ig
> integer nxp
> dimension if(nxp), ig(nxp)
> c common block for parallel processing
> integer nproc, lgrp, lstat, mreal, mint, mcplx, mdouble, lworld
> integer msum, mmax
> parameter(lstat=10)
> c lgrp = current communicator
> c mint = default datatype for integers
> common /PPARMS/ nproc, lgrp, mreal, mint, mcplx, mdouble, lworld
> c mmax = MPI_MAX
> common /PPARMSX/ msum, mmax
> c local data
> integer j, ierr
> c find maximum
> call MPI_ALLREDUCE(if,ig,nxp,mint,mmax,lgrp,ierr)
> c copy output from scratch array
> do 10 j = 1, nxp
> if(j) = ig(j)
> 10 continue
> return
> end
> c-----------------------------------------------------------------------
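>
> (For context: PPIMAX is just a thin wrapper around MPI_ALLREDUCE with the
> MPI_MAX operation. Assuming mint is MPI_INTEGER and mmax is MPI_MAX, as the
> comments above indicate, the call I make on ionsR is roughly equivalent to
> the following sketch; the names myval and gmax are placeholders:)
>
>       include 'mpif.h'
>       integer myval(1), gmax(1), ierr
>       myval(1) = ionsR
> c after this call every rank holds the global maximum of ionsR in gmax(1)
>       call MPI_ALLREDUCE(myval,gmax,1,MPI_INTEGER,MPI_MAX,
>      & MPI_COMM_WORLD,ierr)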
>
> BTW, should I give the msids (msid, msid1, msid2, ...msid7) a default value before using them?
>
> Best Regards
> Sincerely
> Amin
>
_______________________________________________
mpich-discuss mailing list  mpich-discuss@mcs.anl.gov
To manage subscription options or unsubscribe:
https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss