[mpich-discuss] Fwd: LUSTRE. ADIOI_Set_lock:: Function not implemented
William Gropp
wgropp at illinois.edu
Thu Aug 9 08:19:21 CDT 2012
This was sent to the wrong list.
Bill
William Gropp
Director, Parallel Computing Institute
Deputy Director for Research
Institute for Advanced Computing Applications and Technologies
Paul and Cynthia Saylor Professor of Computer Science
University of Illinois Urbana-Champaign
Begin forwarded message:
> From: David Ayuso <davidayuso at live.com>
> Subject: LUSTRE. ADIOI_Set_lock:: Function not implemented
> Date: August 9, 2012 6:09:18 AM MDT
> To: "fpmpi at lists.mcs.anl.gov" <fpmpi at lists.mcs.anl.gov>
>
> We have a problem running an MPI program on a Lustre machine (the same program works perfectly on an IBM machine). When we call:
>
> CALL MPI_FILE_WRITE_AT_ALL (IHT, MOFF, BT, LDI*LDJ, &
> & MPI_DOUBLE_PRECISION, MPI_STATUS_IGNORE, IERR)
>
> The program stops with the following error message:
>
> File locking failed in ADIOI_Set_lock(fd 13,cmd F_SETLKW/7,type F_WRLCK/1,whence 0) with return value FFFFFFFF and errno 26.
> - If the file system is NFS, you need to use NFS version 3, ensure that the lockd daemon is running on all the machines, and mount the directory with the 'noac' option (no attribute caching).
> - If the file system is LUSTRE, ensure that the directory is mounted with the 'flock' option.
> ADIOI_Set_lock:: Function not implemented
> ADIOI_Set_lock:offset 0, length 76236552
>
> We have tried to solve it by setting ROMIO hints with MPI_INFO_SET, to disable the optimizations that need file locking:
>
> call mpi_info_create(info, ierr)
> !call mpi_info_set(info, "romio_ds_write", "disable", ierr)
> !call mpi_info_set(info, "romio_cb_write", "enable", ierr)
> call mpi_info_set(info,"romio_lustre_ds_in_coll","disable",ierr)
>
> Nothing works! The program keeps stopping with the very same error message! :-(
>
> Help please!!!
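One common reason hints appear to have no effect is that the info object is never attached to the file: MPI_INFO_SET only fills in the handle, and the hints reach ROMIO only when that handle is passed to MPI_FILE_OPEN. A minimal hedged sketch, reusing IHT and info from the question; "datafile" and MPI_COMM_WORLD are placeholders, not taken from the original program:

```fortran
! Hedged sketch: the hints in 'info' take effect only if the handle is
! passed to MPI_FILE_OPEN (passing MPI_INFO_NULL discards them).
call mpi_info_create(info, ierr)
call mpi_info_set(info, "romio_ds_write", "disable", ierr)
call mpi_file_open(MPI_COMM_WORLD, "datafile", &
     & MPI_MODE_CREATE + MPI_MODE_WRONLY, info, IHT, ierr)
```

Disabling data sieving for writes ("romio_ds_write" = "disable") reduces ROMIO's need for byte-range locks, but on a Lustre mount without 'flock' the locking calls themselves still fail, so the mount option is the first thing to verify.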
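The first thing the error message asks for can be checked from the shell: whether the Lustre mount carries the 'flock' option. A minimal sketch; the mount line below is illustrative sample data, not taken from the poster's system, and the remount command in the comment is the usual remedy when the option is missing:

```shell
# Hedged sketch: test whether a Lustre mount line lists the 'flock'
# option. In practice the line would come from `mount -t lustre` or
# /proc/mounts; this sample line is a placeholder.
line="10.0.0.1@tcp:/fs /lustre lustre rw,flock,user_xattr 0 0"
if echo "$line" | grep -qw flock; then
  echo "flock enabled"
else
  # Typical fix (needs root): mount -o remount,flock /lustre
  echo "flock missing"
fi
```

Without 'flock' (or the weaker, single-node-only 'localflock') in the mount options, fcntl locks on Lustre fail, which is consistent with the ADIOI_Set_lock failure above.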