Hi Rob, Geoff, all,

I got it to work. The problem was file permissions: I always have to change the file owner to nobody:nogroup, because the other PC seems to allow writes only under that owner.
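I suspect this is NFS squashing on the export: with all_squash (or root_squash for root-initiated writes) the server maps every incoming request to the anonymous user, so only files owned by nobody:nogroup end up writable from the slave. Something like the following /etc/exports line on the master would explain it; I am guessing the options and hostname here, I have not verified them:

    # /etc/exports on the master (options and hostname are guesses)
    # all_squash maps all client uids/gids to the anonymous user
    # (nobody:nogroup, uid/gid 65534 on Debian-like systems)
    /usr/share/MPI-1  slave(rw,sync,all_squash,anonuid=65534,anongid=65534)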
Now that I change the owner it works, and I no longer get "permission denied" I/O errors logged.

One more question: can the info argument, the fourth argument of MPI_File_open, be used to pass ROMIO hints?
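From the ROMIO documentation it looks like yes: you build an MPI_Info object out of key/value string pairs and pass it in as that fourth argument. A minimal sketch (cb_buffer_size and romio_cb_write are standard ROMIO hints, but the values below are unverified guesses, not numbers tuned for this setup):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Info info;

        MPI_Init(&argc, &argv);

        /* Hints are key/value strings collected in an MPI_Info object. */
        MPI_Info_create(&info);
        MPI_Info_set(info, "cb_buffer_size", "4194304"); /* collective buffer size */
        MPI_Info_set(info, "romio_cb_write", "enable");  /* force collective writes */

        /* The info object is the fourth argument of MPI_File_open. */
        MPI_File_open(MPI_COMM_WORLD, "testfile",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

        MPI_Info_free(&info);
        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

MPI_File_get_info can then be used to inspect which hints the implementation actually accepted.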
I have been getting low bandwidth (about 15 MB/s when the two processes span both machines) compared to the local-only case (about 350 MB/s). Is that normal? Could SSH being the rpc caller be the reason? The measurements were taken with 2 processes, started with mpirun -np 2 iotest.
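For reference, the test is essentially this kind of timing loop (a simplified sketch, not the actual iotest source; chunk size, iteration count, and file name are made up):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK (1 << 20)  /* 1 MiB per write; made-up value */
    #define NITER 64         /* 64 writes -> 64 MiB per process; made-up value */

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Status status;
        int rank, size, i;
        char *buf;
        double t0, t1, mb_per_proc;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        buf = malloc(CHUNK);
        memset(buf, 0, CHUNK);

        MPI_File_open(MPI_COMM_WORLD, "testfile",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < NITER; i++) {
            /* each rank writes its own contiguous region of the file */
            MPI_Offset off = ((MPI_Offset)rank * NITER + i) * CHUNK;
            MPI_File_write_at(fh, off, buf, CHUNK, MPI_BYTE, &status);
        }
        MPI_Barrier(MPI_COMM_WORLD);
        t1 = MPI_Wtime();

        MPI_File_close(&fh);

        mb_per_proc = (double)NITER * CHUNK / (1024.0 * 1024.0);
        if (rank == 0)
            printf("%d procs x %.0f MB in %.2f s -> %.1f MB/s aggregate\n",
                   size, mb_per_proc, t1 - t0,
                   size * mb_per_proc / (t1 - t0));

        free(buf);
        MPI_Finalize();
        return 0;
    }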
The network here is 100 Mbit/s Ethernet (which works out to only about 12.5 MB/s raw, so perhaps the wire itself is the limit?), with the local disk at 7200 rpm and the remote one at 5400 rpm, both on IDE interfaces.

Thanks for your attention, I appreciate it a lot; the emails on this list are helping me a great deal.
Luiz Mendes

2007/1/17, Robert Latham <robl@mcs.anl.gov>:
> On Tue, Jan 16, 2007 at 08:54:00AM -0400, Luiz Mendes wrote:
> > I have two PCs here. One of them is the master node and the other one
> > is a slave.
> >
> > I installed mpich 1.2.7p1 and configured the two machines in the
> > machines.LINUX file.
> >
> > Well, when I test I/O operations with 2 processes, the slave process
> > doesn't write. Why?
> >
> > I have installed mpich 1.2.7p1 on the 2 PCs in the same folder,
> > /usr/share/MPI-1, and then I export /usr/share/MPI-1 from the master
> > node to the slave node, mounting it at /usr/share/MPI-1 on the slave PC.
> >
> > What should I do to make it work?
>
> Nothing sounds wrong from your description. If your test program is
> small enough, you should post it and maybe we can either reproduce the
> bug on our end, or point out something you may have overlooked in your
> MPI code.
>
> > And do you know anything about using pvfs2 + mpich 1.2.7p1?
>
> The PVFS support in MPICH-1.2.7p1 is fair, but not as good as what you
> can find in MPICH2-1.0.5. The version in MPICH2 contains many bug
> fixes and performance improvements.
>
> ==rob
>
> --
> Rob Latham
> Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
> Argonne National Lab, IL USA                 B29D F333 664A 4280 315B