[petsc-users] MPI_FILE_OPEN in PETSc crashes

Barry Smith bsmith at mcs.anl.gov
Fri Sep 12 16:15:44 CDT 2014


  Maybe put the code that does the MPI IO in a separate file that gets called and does not need to also have all the PETSc includes

  Barry
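
A minimal sketch of that separation (file and routine names are illustrative, not from the thread): the MPI-IO code lives in its own source file that says "use mpi" and never sees the PETSc headers, so the two sets of MPI symbol definitions never meet in one compilation unit.

```fortran
! mpi_io_helper.F90 -- compiled on its own, with NO PETSc includes
subroutine write_buf_mpiio(comm, fname, buf, n, ierr)
  use mpi            ! safe here: this file has no PETSc headers
  implicit none
  integer, intent(in)          :: comm, n
  character(len=*), intent(in) :: fname
  integer, intent(in)          :: buf(n)
  integer, intent(out)         :: ierr
  integer :: fh, myrank
  integer(kind=MPI_OFFSET_KIND) :: disp

  call MPI_COMM_RANK(comm, myrank, ierr)
  call MPI_FILE_OPEN(comm, fname, &
                     MPI_MODE_CREATE + MPI_MODE_WRONLY, &
                     MPI_INFO_NULL, fh, ierr)
  ! same 4-byte-integer assumption as the quoted program below
  disp = int(myrank, MPI_OFFSET_KIND) * n * 4
  call MPI_FILE_WRITE_AT(fh, disp, buf, n, MPI_INTEGER, &
                         MPI_STATUS_IGNORE, ierr)
  call MPI_FILE_CLOSE(fh, ierr)
end subroutine write_buf_mpiio
```

The main program keeps its PETSc includes and simply calls write_buf_mpiio(PETSC_COMM_WORLD, 'testfile.txt', buf, BUFSIZE, ierr); the interface passes only default integers and a character string, so no MPI-module or PETSc symbols leak across the file boundary.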

On Sep 12, 2014, at 4:09 PM, Danyang Su <danyang.su at gmail.com> wrote:

> On 12/09/2014 12:10 PM, Barry Smith wrote:
>> On Sep 12, 2014, at 1:34 PM, Danyang Su <danyang.su at gmail.com> wrote:
>> 
>>> Hi There,
>>> 
>>> I have some parallel MPI output code that works fine without PETSc but crashes when compiled with PETSc. To make the problem easy to reproduce, I tested the following example, which has the same problem. This example is modified from http://www.mcs.anl.gov/research/projects/mpi/usingmpi2/examples/starting/io3f_f90.htm. It works without PETSc, but if I comment out "use mpi" and add the PETSc includes, it crashes at MPI_FILE_OPEN with an access violation.
>>   You should not comment out use mpi; you need that to use the MPI calls you are making!
>> 
>>   You absolutely should have an implicit none at the beginning of your program so the Fortran compiler reports undeclared variables
> Done
>> 
>>> Shall I rewrite all the MPI Parallel output with PetscBinaryOpen or PetscViewerBinaryOpen relative functions?
>>    No.  You should be able to use the MPI IO with PETSc code.
>> 
>>    First figure out how to get the example working with the use mpi  PLUS petsc include files.
> The problem is that there are many name conflicts if both "use mpi" and the PETSc includes are used. I will consider rewriting these routines.
>> 
>>   Let us know if you have any problems ASAP
>> 
>>    Barry
>> 
>>> Considering the parallel I/O efficiency, which is more preferable?
>>> 
>>> Thanks and regards,
>>> 
>>> Danyang
>>> 
>>> PROGRAM main
>>>     ! Fortran 90 users can (and should) use
>>>     !use mpi
>>>     ! instead of include 'mpif.h' if their MPI implementation provides a
>>>     ! mpi module.
>>>     !include 'mpif.h'
>>> 
>>>     !For PETSc, use the following "include"s
>>> #include <finclude/petscsys.h>
>>> #include <finclude/petscviewer.h>
>>> #include <finclude/petscviewer.h90>
>>>     integer ierr, i, myrank, BUFSIZE, thefile
>>>     parameter (BUFSIZE=10)
>>>     integer buf(BUFSIZE)
>>>     integer(kind=MPI_OFFSET_KIND) disp
>>>       call MPI_INIT(ierr)
>>>     call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
>>>       do i = 1, BUFSIZE
>>>         buf(i) = myrank * BUFSIZE + i
>>>     enddo
>>> 
>>>     write(*,'(a,1x,i6,1x,a,1x,10(i6,1x))') "myrank", myrank, "buf",buf
>>> 
>>>     call MPI_FILE_OPEN(MPI_COMM_WORLD, 'testfile.txt', &
>>>                        MPI_MODE_CREATE + MPI_MODE_WRONLY, &
>>>                        MPI_INFO_NULL, thefile, ierr)
>>>     ! assume 4-byte integers
>>>     disp = myrank * BUFSIZE * 4
>>> 
>>>     !Use the following two functions
>>>     !call MPI_FILE_SET_VIEW(thefile, disp, MPI_INTEGER, &
>>>     !                       MPI_INTEGER, 'native', &
>>>     !                       MPI_INFO_NULL, ierr)
>>>     !call MPI_FILE_WRITE(thefile, buf, BUFSIZE, MPI_INTEGER, &
>>>     !                    MPI_STATUS_IGNORE, ierr)
>>> 
>>>     !Or use the following one function
>>>     call MPI_FILE_WRITE_AT(thefile, disp, buf, BUFSIZE, MPI_INTEGER, &
>>>                         MPI_STATUS_IGNORE, ierr)
>>> 
>>>     call MPI_FILE_CLOSE(thefile, ierr)
>>>     call MPI_FINALIZE(ierr)
>>>  END PROGRAM main
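
One small robustness note on the quoted program: the "assume 4-byte integers" comment before disp = myrank * BUFSIZE * 4 can be dropped by asking MPI for the size at run time. A sketch, reusing the program's variable names:

```fortran
integer :: intsize
! query the size of MPI_INTEGER instead of hard-coding 4
call MPI_TYPE_SIZE(MPI_INTEGER, intsize, ierr)
! widen to MPI_OFFSET_KIND before multiplying so the offset
! arithmetic cannot overflow a default integer on large files
disp = int(myrank, MPI_OFFSET_KIND) * BUFSIZE * intsize
```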
