Subject: RE: [mpich-discuss] Problem sometimes when running on winxp on >=2 processes and MPE_IBCAST
Hi,

Please find my observations below.

1) As Anthony pointed out, you don't need to call MPI_Barrier() in a loop over all processes (see the usage of MPI collectives, and the sketch below).
2) When the program runs on more than 4 processes, some array accesses go out of bounds. Try recompiling your program with run-time checking for "Array and String bounds" --> if you are using VS, see "Configuration Properties" --> Fortran --> Run-time for the setting.
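
For example, the broadcast loop at the end of your program can be replaced by a single collective call. A rough, untested sketch, reusing your jjsta/jjlen arrays and adding a hypothetical displacement array jjdisp:

   integer, allocatable :: jjdisp(:)
   allocate (jjdisp(0:nprocs-1))
   do k = 0, nprocs - 1
      jjdisp(k) = (ie-is+1) * (jjsta(k) - js)   ! element offset of rank k's slab
   end do
   call MPI_Allgatherv(MPI_IN_PLACE, 0, MPI_INTEGER, u_tmp, jjlen, jjdisp, &
                       MPI_INTEGER, MPI_COMM_WORLD, ierr)

MPI_IN_PLACE works here because each rank has already filled its own slab of u_tmp. If you build outside the IDE, the equivalent bounds-checking flags should be /check:bounds for Intel Fortran on Windows or -fbounds-check for gfortran.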

Regards,
Jayesh

-----Original Message-----
From: owner-mpich-discuss@mcs.anl.gov [mailto:owner-mpich-discuss@mcs.anl.gov] On Behalf Of Anthony Chan
Sent: Wednesday, May 07, 2008 11:13 AM
To: mpich-discuss@mcs.anl.gov
Subject: Re: [mpich-discuss] Problem sometimes when running on winxp on >=2 processes and MPE_IBCAST

This may not be related to the error that you saw, but you shouldn't call MPI_Barrier and MPI_Bcast in a do loop over processes.

A.Chan
----- "Ben Tay" <zonexo@gmail.com> wrote:

> Hi Rajeev,
>
> I've attached the code. Thank you very much.
>
> Regards.
>
> Rajeev Thakur wrote:
> > Can you send us the code?
> >
> > MPE_IBCAST is not a part of the MPI standard. There is no equivalent for it in MPICH2. You could spawn a thread that calls MPI_Bcast though (after following all the caveats of MPI and threads as defined in the standard).
> >
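> > A rough, untested sketch of the first such caveat (the helper thread itself would come from whatever threading library you use):
> >
> >    integer required, provided, ierr
> >    required = MPI_THREAD_MULTIPLE
> >    call MPI_Init_thread(required, provided, ierr)
> >    ! only if provided == MPI_THREAD_MULTIPLE is it safe for a helper
> >    ! thread to call MPI_Bcast while other threads also make MPI calls
> >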
> > Rajeev
> >
> >> -----Original Message-----
> >> From: owner-mpich-discuss@mcs.anl.gov [mailto:owner-mpich-discuss@mcs.anl.gov] On Behalf Of Ben Tay
> >> Sent: Wednesday, May 07, 2008 10:25 AM
> >> To: mpich-discuss@mcs.anl.gov
> >> Subject: [mpich-discuss] Problem sometimes when running on winxp on >=2 processes and MPE_IBCAST
> >>
> >> Hi,
> >>
> >> I tried to run an MPI code which is copied from an example in the RS/6000 book. It is supposed to broadcast and synchronize all values. When I ran it on my school's Linux servers, there was no problem. However, if I run it on my own WinXP machine on >=2 processes, sometimes it works and other times I get this error:
> >>
> >> [01:3216].....ERROR:result command received but the wait_list is empty.
> >> [01:3216]...ERROR:unable to handle the command: "cmd=result src=1 dest=1 tag=7 cmd_tag=3 cmd_orig=dbget ctx_key=1 value="port=1518 description=gotchama-16e5ed ifname=192.168.1.105 " result=DBS_SUCCESS "
> >> [01:3216].ERROR:error closing the unknown context socket: generic socket failure, error stack:
> >> MPIDU_Sock_wait(2603): The I/O operation has been aborted because of either a thread exit or an application request. (errno 995)
> >> [01:3216]..ERROR:sock_op_close returned while unknown context is in state: SMPD_IDLE
> >>
> >> Or:
> >>
> >> [01:3308].....ERROR:result command received but the wait_list is empty.
> >> [01:3308]...ERROR:unable to handle the command: "cmd=result src=1 dest=1 tag=15 cmd_tag=5 cmd_orig=barrier ctx_key=0 result=DBS_SUCCESS "
> >> [01:3308]..ERROR:sock_op_close returned while unknown context is in state: SMPD_IDLE
> >>
> >> There is no problem if I run on 1 process. If it's >=4 processes, the error happens all the time. Moreover, it's a rather simple code, so there shouldn't be anything wrong with it. Why is this so?
> >>
> >> Btw, the RS/6000 book also mentions a routine called MPE_IBCAST, which is a non-blocking version of MPI_BCAST. Is there a similar routine in MPICH2?
> >>
> >> Thank you very much.
> >>
> >> Regards.
>
> program mpi_test2
>
> ! test to show updating in an i,j double loop (partially contiguous data) for specific requested data only,
> ! ie update u(2:6,2:6) values instead of all u values; also for structured data
> ! FVM use
>
> implicit none
>
> include "mpif.h"
>
> integer, parameter :: size_x=8, size_y=8
> integer :: i,j,k,ierr,rank,nprocs,u(size_x,size_y)
> integer :: jsta,jend,jsta2,jend1,inext,iprev,isend1,irecv1,isend2
> integer :: irecv2,is,ie,js,je
> integer, allocatable :: jjsta(:), jjlen(:), jjreq(:), u_tmp(:,:)
> integer istatus(MPI_STATUS_SIZE)
>
> call MPI_Init(ierr)
> call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr)
> call MPI_Comm_size(MPI_COMM_WORLD,nprocs,ierr)
>
> allocate (jjsta(0:nprocs-1), jjlen(0:nprocs-1), jjreq(0:nprocs-1))
>
> is=3; ie=6; js=3; je=6
>
> allocate (u_tmp(is:ie,js:je))
>
> do k = 0, nprocs - 1
>    call para_range(js, je, nprocs, k, jsta, jend)
>    jjsta(k) = jsta
>    jjlen(k) = (ie-is+1) * (jend - jsta + 1)
> end do
>
> call para_range(js, je, nprocs, rank, jsta, jend)
>
> do j=jsta,jend
>    do i=is,ie
>       u(i,j)=(j-1)*size_x+i
>    end do
> end do
>
> do j=jsta,jend
>    do i=is,ie
>       u_tmp(i,j)=u(i,j)
>    end do
> end do
>
> do k=0,nprocs-1
>    call MPI_Barrier(MPI_COMM_WORLD,ierr)
>    if (k==rank) then
>       print *, rank
>       write (*,'(8i5)') u
>    end if
> end do
>
> do k = 0, nprocs - 1
>    call MPI_BCAST(u_tmp(is,jjsta(k)), jjlen(k), MPI_INTEGER, k, MPI_COMM_WORLD, ierr)
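>    ! NOTE: as observed above, when nprocs > 4 some ranks own no columns,
>    ! so jjsta(k) exceeds je and u_tmp(is,jjsta(k)) indexes out of bounds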
> end do
>
> deallocate (jjsta, jjlen, jjreq)
>
> u(is:ie,js:je)=u_tmp(is:ie,js:je)
>
> do k=0,nprocs-1
>    call MPI_Barrier(MPI_COMM_WORLD,ierr)
>    if (k==rank) then
>       print *, rank
>       write (*,'(8i5)') u
>    end if
> end do
>
> call MPI_Finalize(ierr)
>
> contains
>
> subroutine para_range(n1, n2, nprocs, irank, ista, iend)
> ! block distribution
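> ! e.g. n1=3, n2=6, nprocs=3: iwork1=1, iwork2=1, so rank 0 gets 3:4,
> ! rank 1 gets 5:5 and rank 2 gets 6:6; with nprocs > 4 the extra ranks
> ! get an empty range (ista > iend)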
>
> integer n1     ! the lowest value of the iteration variable (IN)
> integer n2     ! the highest value of the iteration variable (IN)
> integer nprocs ! the number of processes (IN)
> integer irank  ! the rank for which you want to know the range of iterations (IN)
> integer ista   ! the lowest value of the iteration variable that process irank executes (OUT)
> integer iend   ! the highest value of the iteration variable that process irank executes (OUT)
> integer iwork1, iwork2
>
> iwork1 = (n2 - n1 + 1) / nprocs
> iwork2 = mod(n2 - n1 + 1, nprocs)
> ista = irank * iwork1 + n1 + min(irank, iwork2)
> iend = ista + iwork1 - 1
> if (iwork2 > irank) iend = iend + 1
>
> end subroutine para_range
>
> end program mpi_test2