Thanks, Rajeev; I'm still just learning MPI, so the MPI_IN_PLACE error was caused by my own naivete.

The noncompliant code was:

   CALL MPI_GATHER(lc_convex(mygid+1), 1, MPI_INTEGER, lc_convex(mygid+1), &
                   1, MPI_INTEGER, 0, myComm, ierr)

which I replaced with

   if (mygid == 0) then
      CALL MPI_GATHER(MPI_IN_PLACE, 1, MPI_INTEGER, lc_convex(mygid+1), &
                      1, MPI_INTEGER, 0, myComm, ierr)
   else
      CALL MPI_GATHER(lc_convex(mygid+1), 1, MPI_INTEGER, MPI_IN_PLACE, &
                      1, MPI_INTEGER, 0, myComm, ierr)
   end if

since this is the only place where gathering occurs. I assume this is the only way to fix the noncompliant code, but it is certainly not as "pretty". There were a few other areas requiring related fixes, but all is working now.
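(For anyone finding this in the archives: MPI_IN_PLACE isn't actually the only compliant fix. Gathering into a scratch buffer and copying back on the root also avoids the aliasing, and it drops the rank-0 branch; the trade-off is one extra copy on the root. Below is a minimal standalone sketch; lc_convex, mygid, and myComm match the names above, but tmp and the demo values are made up for illustration. Note also, per Rajeev's point below, that with MPI_IN_PLACE the root's own contribution must already sit at its slot in the receive buffer.)

   program gather_scratch_demo
      ! Sketch of the scratch-buffer alternative to MPI_IN_PLACE.
      ! Only lc_convex/mygid/myComm mirror the real code; the rest
      ! (tmp, the demo values) is illustrative.
      use mpi
      implicit none
      integer :: ierr, mygid, nprocs, myComm
      integer, allocatable :: lc_convex(:), tmp(:)

      call MPI_INIT(ierr)
      myComm = MPI_COMM_WORLD
      call MPI_COMM_RANK(myComm, mygid, ierr)
      call MPI_COMM_SIZE(myComm, nprocs, ierr)
      allocate(lc_convex(nprocs), tmp(nprocs))
      lc_convex(mygid+1) = 10*mygid + 1     ! each rank's local contribution

      ! sendbuf and recvbuf are distinct arrays, so nothing is aliased,
      ! and every rank makes the identical call.
      call MPI_GATHER(lc_convex(mygid+1), 1, MPI_INTEGER, tmp, &
                      1, MPI_INTEGER, 0, myComm, ierr)
      if (mygid == 0) lc_convex = tmp       ! copy back on the root only

      if (mygid == 0) print *, 'gathered:', lc_convex
      call MPI_FINALIZE(ierr)
   end program gather_scratch_demo

Built with mpif90 and run as mpiexec -n 4, it should print 1 11 21 31.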
Thanks for all your help!!
-joe

----- Original Message -----
From: "Rajeev Thakur" <thakur@mcs.anl.gov>
To: mpich-discuss@mcs.anl.gov
Sent: Monday, May 9, 2011 1:52:40 PM
Subject: Re: [mpich-discuss] MPICH2 internal errors on Win 7 x64

It probably means that the data that the root sends to itself in MPI_Gather may not already be in the right location in recvbuf.

Rajeev


On May 9, 2011, at 12:41 PM, Joe Vallino wrote:

> Rajeev, et al.
>
> The use of MPI_IN_PLACE did allow MPICH2 to run w/o errors (thanks). Interestingly, the test problem now generates a different (and incorrect) answer from what it should. Intel MPI also produces the same incorrect answer when using MPI_IN_PLACE, but produces the correct result when violating the MPI 2.2 standard regarding identical sbuf and rbuf.
>
> Do any ideas pop to mind for this situation? Anything else magical about MPI_IN_PLACE?
>
> cheers
> -joe
>
> From: "Rajeev Thakur" <thakur@mcs.anl.gov>
> To: mpich-discuss@mcs.anl.gov
> Sent: Monday, May 9, 2011 10:58:33 AM
> Subject: Re: [mpich-discuss] MPICH2 internal errors on Win 7 x64
>
> The error check was added in a recent version of MPICH2.
>
> Rajeev
>
> On May 9, 2011, at 9:50 AM, Joe Vallino wrote:
>
> > Thanks Rajeev. I'll take a look at that, but I wonder why the code runs fine on Intel MPI, which is based on MPICH2.
> >
> > cheers,
> > -joe
> >
> > From: "Rajeev Thakur" <thakur@mcs.anl.gov>
> > To: mpich-discuss@mcs.anl.gov
> > Sent: Monday, May 9, 2011 9:55:11 AM
> > Subject: Re: [mpich-discuss] MPICH2 internal errors on Win 7 x64
> >
> > The code is passing the same buffer as sendbuf and recvbuf to MPI_Gather, which is not allowed. You need to use MPI_IN_PLACE as described in the MPI standard (see MPI 2.2 for easy reference).
> >
> > Rajeev
> >
> > On May 8, 2011, at 6:46 PM, Joe Vallino wrote:
> >
> > > Hi,
> > >
> > > I've installed MPICH2 (1.3.2p1, Windows EM64T binaries) on a Windows 7 x64 machine (2 sockets, 4 cores each). MPICH2 works fine for simple tests, but when I attempt a more complex use of MPI, I get various internal MPI errors, such as:
> > >
> > > Fatal error in PMPI_Gather: Invalid buffer pointer, error stack:
> > > PMPI_Gather(863): MPI_Gather(sbuf=0000000000BC8040, scount=1, MPI_INTEGER, rbuf=0000000000BC8040, rcount=1, MPI_INTEGER, root=0, comm=0x84000004) failed
> > > PMPI_Gather(806): Buffers must not be aliased
> > >
> > > job aborted:
> > > rank: node: exit code[: error message]
> > > 0: ECO37: 1: process 0 exited without calling finalize
> > > 1: ECO37: 123
> > >
> > > The errors occur regardless of whether I use the x32 or x64 builds.
> > >
> > > The code I'm trying to run is pVTDIRECT (see TOMS package 897 on netlib.org), and the above errors are produced by running the simple test routine that comes with the package. Since the package can be easily compiled and run, this should allow others to confirm the problem, if anyone is feeling so motivated :)
> > >
> > > As an attempt to confirm that the problem is with the MPICH2 build, I installed a commercial MPI build (csWMPI II), which works fine with the TOMS package, so this would indicate the problem is with MPICH2.
> > >
> > > Since the TOMS package uses Fortran 95, and I'm using the latest Intel ifort compiler with VS2008, I tried to build MPICH2 from the 1.3.2p1 source, but after banging my head on that for a day w/o success, I decided to see if anyone has suggestions here (or if anyone can confirm the problem with the TOMS package under the Windows MPICH2 release).
> > >
> > > - Can anyone point me to a Win x64 build that used newer versions of Intel Fortran (v11 or 12) and/or more recent releases of the Windows SDK, which seem to be the main wild cards in the build process?
> > >
> > > - I will continue to try to build MPICH2 for Windows, but I suspect I will not succeed given my *cough* skills.
> > >
> > > Thanks!
> > > -joe
_______________________________________________
mpich-discuss mailing list
mpich-discuss@mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss