Hi,<br><br>I think I have a well-defined problem, and I wonder if someone can give me some advice.<br><br>I am running an MPI code (MPICH2) on a Linux cluster.<br><br>When I run the code with no I/O inside the main runtime loop, it works perfectly fine on any number of processors and nodes. When I start dumping data during the loop,<br>
the code gets stuck forever after the second or third file.<br><br>Everything works perfectly fine on a single processor, though.<br><br>Now, to make things clearer: when I use ROMIO (MPI-IO), the code gets stuck while opening the file (MPI_FILE_OPEN).<br>
<br>However, I also tried a regular open(....) on the root processor, reducing the data to the root processor, and writing it there. That version got stuck at the MPI_REDUCE call.<br><br>If I eliminate all inter-processor communication and just do the dump I/O (either way), the code runs fine, but it slows down significantly while dumping the data.<br>
<br>I tried this on three different clusters (all Linux) and got the same problem.<br><br>I don't want to say what I think the problem is, since I don't really know, but if the experts here can help, I will appreciate it.<br><br> Thanks<br>
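<br>For concreteness, here is a minimal sketch (in C) of roughly what my dump step looks like under the two strategies above. The array size, loop count, and file names are just placeholders, not my actual code:<br>

```c
/* Minimal sketch of the two dump strategies; the array length,
   step count, and file names are hypothetical placeholders. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1024;                       /* local array length */
    double *local  = malloc(n * sizeof(double));
    double *global = (rank == 0) ? malloc(n * sizeof(double)) : NULL;
    for (int i = 0; i < n; i++) local[i] = rank + i;

    for (int step = 0; step < 3; step++) {
        /* Strategy 1: reduce to root, then root writes with plain stdio.
           MPI_Reduce is collective, so EVERY rank must reach this call
           with matching arguments -- if any rank skips it (e.g. inside
           an "if (rank == 0)" branch), the others hang here. */
        MPI_Reduce(local, global, n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) {
            char name[64];
            snprintf(name, sizeof name, "dump_%03d.dat", step);
            FILE *f = fopen(name, "wb");
            fwrite(global, sizeof(double), n, f);
            fclose(f);
        }

        /* Strategy 2: ROMIO / MPI-IO. MPI_File_open is also collective
           over the communicator; all ranks must call it with the same
           file name, or the open blocks. */
        MPI_File fh;
        char name2[64];
        snprintf(name2, sizeof name2, "dump_mpiio_%03d.dat", step);
        MPI_File_open(MPI_COMM_WORLD, name2,
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_File_write_at(fh, (MPI_Offset)rank * n * sizeof(double),
                          local, n, MPI_DOUBLE, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
    }

    free(local);
    if (rank == 0) free(global);
    MPI_Finalize();
    return 0;
}
```

Both the MPI_Reduce and the MPI_File_open in the sketch are collective operations, which is where my real code hangs.<br>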
<br><div class="gmail_quote">On Sun, Mar 14, 2010 at 12:50 PM, hossam <span dir="ltr"><<a href="mailto:hossam.elasrag@gmail.com">hossam.elasrag@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Hi,<br><br>This is my first post to the mpi-forum. I have a runtime
problem related to MPI using MPICH2. I wonder if this is the correct
forum to post my problem.<br>Please let me know, so I can proceed
with my problem details.<br>
<br>Thanks<br clear="all">
</blockquote></div>