[petsc-users] How to extract a slice at a given coordinate and view it ?

Timothée Nicolas timothee.nicolas at gmail.com
Sat Aug 29 23:12:26 CDT 2015


Hi,

I finally found a fairly simple solution using MPI_FILE_WRITE_AT, even
though there may be a more efficient one. Since it is only used to write a
single file with reduced dimensions, it should not be a bottleneck. In the
end I use neither a PETSc viewer nor a secondary 2D DMDA. The routine
looks like this, if anyone is interested:

subroutine WriteVectorSelectK0(da,X,k0,filename,flg_open,ierr)
  implicit none
  DM  :: da
  Vec :: X
  PetscErrorCode :: ierr
  PetscViewer    :: viewer
  PetscBool      :: flg_open
  PetscScalar, pointer :: gX(:,:,:,:)
  PetscScalar    :: LocalArray(user%dof,user%mr,user%mz)
  character(*)   :: filename
  integer        :: thefile   ! MPI file handle (a plain INTEGER in the Fortran MPI bindings)
  integer(kind=MPI_OFFSET_KIND) :: offset
  PetscInt       :: i,j
  ! The slice to select in the Z direction
  PetscInt       :: k0

  ! access the array, indexed with global indices
  call DMDAVecGetArrayF90(da,X,gX,ierr)

  ! open a file
  call MPI_FILE_OPEN(MPI_COMM_WORLD, filename, &
       MPI_MODE_WRONLY + MPI_MODE_CREATE,        &
       MPI_INFO_NULL, thefile, ierr)

  ! only the processes whose local block contains the slice k=k0 write to the file
  if (k0.ge.user%phis .and. k0.le.user%phie) then
     do j=user%zs,user%ze
        ! offset, in bytes, of the first local point of row j in the file
        ! (8 bytes per PetscScalar)
        offset = user%dof*((j-1)*user%mr+(user%rs-1))*8
        ! Indexing of gX: subtract 1 because of the C-like (zero-based, global)
        ! indexing resulting from DMDAVecGetArrayF90.
        ! Each call writes the whole local radial row r=user%rs..user%re,
        ! which is contiguous both in memory and in the file.
        call MPI_FILE_WRITE_AT(thefile, offset,                       &
             &                 gX(:,user%rs-1:user%re-1,j-1,k0-1),    &
             &                 user%dof*(user%re-user%rs+1),          &
             &                 MPI_DOUBLE_PRECISION,                  &
             &                 MPI_STATUS_IGNORE, ierr)
     end do
  end if
  end if

  call MPI_FILE_CLOSE(thefile, ierr)

  ! restore the array
  call DMDAVecRestoreArrayF90(da,X,gX,ierr)

end subroutine WriteVectorSelectK0
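
A call would then look like the following (the slice index, the file name
and the value passed for flg_open are of course just placeholders):

  ! write the k0 = 1 (phi = 0) slice of X to a raw binary file
  call WriteVectorSelectK0(da, X, 1, 'slice_phi0.bin', PETSC_FALSE, ierr)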


Unfortunately, I did not find a way to avoid the loop on j, which is in
principle rather inefficient. The reason is that when you reach the end of
the local block in the first direction (user%re in my example), the next
local point (j incremented by 1) is not contiguous with it in the written
file, so you have to compute a new offset and issue another call to
MPI_FILE_WRITE_AT.
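
For what it is worth, the j loop can in principle be avoided with a file
view: each process owning the slice describes its local (dof, r, z) patch
of the global (dof, mr, mz) slice with MPI_TYPE_CREATE_SUBARRAY, sets it as
the file view, and writes its whole block in one collective call. Below is
an untested sketch of this variant, reusing the same user% bookkeeping as
above; it opens the file on a sub-communicator containing only the
processes that own the k=k0 slice, so the collectives do not involve the
other processes.

subroutine WriteVectorSelectK0_oneshot(da,X,k0,filename,ierr)
  implicit none
  DM  :: da
  Vec :: X
  PetscInt       :: k0
  character(*)   :: filename
  PetscErrorCode :: ierr

  PetscScalar, pointer     :: gX(:,:,:,:)
  PetscScalar, allocatable :: buf(:,:,:)
  integer :: fh, filetype, slicecomm, rank, color, mpierr
  integer :: gsizes(3), lsizes(3), starts(3)
  integer(kind=MPI_OFFSET_KIND) :: disp
  PetscInt :: i, j

  ierr = 0
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, mpierr)

  ! sub-communicator of the processes whose local block contains k=k0
  if (k0.ge.user%phis .and. k0.le.user%phie) then
     color = 1
  else
     color = MPI_UNDEFINED
  end if
  call MPI_COMM_SPLIT(MPI_COMM_WORLD, color, rank, slicecomm, mpierr)
  if (slicecomm.eq.MPI_COMM_NULL) return   ! this process does not own the slice

  call DMDAVecGetArrayF90(da,X,gX,ierr)

  ! copy the local part of the k=k0 plane into a contiguous buffer
  allocate(buf(user%dof, user%rs:user%re, user%zs:user%ze))
  do j = user%zs, user%ze
     do i = user%rs, user%re
        buf(:,i,j) = gX(:,i-1,j-1,k0-1)    ! gX uses zero-based (C-like) indexing
     end do
  end do
  call DMDAVecRestoreArrayF90(da,X,gX,ierr)

  ! local patch of the global (dof, mr, mz) slice; Fortran order, 0-based starts
  gsizes(1) = user%dof ;  gsizes(2) = user%mr           ;  gsizes(3) = user%mz
  lsizes(1) = user%dof ;  lsizes(2) = user%re-user%rs+1 ;  lsizes(3) = user%ze-user%zs+1
  starts(1) = 0        ;  starts(2) = user%rs-1         ;  starts(3) = user%zs-1
  call MPI_TYPE_CREATE_SUBARRAY(3, gsizes, lsizes, starts, MPI_ORDER_FORTRAN, &
       &                        MPI_DOUBLE_PRECISION, filetype, mpierr)
  call MPI_TYPE_COMMIT(filetype, mpierr)

  call MPI_FILE_OPEN(slicecomm, filename, MPI_MODE_WRONLY+MPI_MODE_CREATE, &
       &             MPI_INFO_NULL, fh, mpierr)
  disp = 0
  call MPI_FILE_SET_VIEW(fh, disp, MPI_DOUBLE_PRECISION, filetype, 'native', &
       &                 MPI_INFO_NULL, mpierr)
  ! one collective write of the whole local block
  call MPI_FILE_WRITE_ALL(fh, buf, size(buf), MPI_DOUBLE_PRECISION, &
       &                  MPI_STATUS_IGNORE, mpierr)
  call MPI_FILE_CLOSE(fh, mpierr)

  call MPI_TYPE_FREE(filetype, mpierr)
  call MPI_COMM_FREE(slicecomm, mpierr)
  deallocate(buf)

end subroutine WriteVectorSelectK0_oneshot

The file produced has the same layout as with the routine above (dof
varying fastest, then r, then z), so the same reader works.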

Two things one should be careful about:

1. MPI-IO writes raw bytes: it does not add the 4 or 8 bytes that Fortran
unformatted I/O often puts at the beginning of the file (the PETSc viewer,
for instance, adds an 8-byte header which includes an integer giving the
size of the data written in the file). When you read the file outside of
Fortran (e.g. with Python), you should keep this difference in mind.

2. On my machine, the PETSc viewer writes the data in big endian, whereas
the routine above gives me little endian (this may be machine dependent).
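
Regarding point 2: if your MPI implementation supports the 'external32'
data representation (it is defined by the MPI standard but not universally
implemented), you can request it in MPI_FILE_SET_VIEW to get portable
big-endian IEEE output, matching the byte order of the PETSc viewer. A
sketch, to be placed just after the MPI_FILE_OPEN in the routine above;
note that once the etype is MPI_DOUBLE_PRECISION, the offsets passed to
MPI_FILE_WRITE_AT are counted in doubles, so the factor of 8 in the offset
computation above would have to be dropped.

  integer(kind=MPI_OFFSET_KIND) :: disp

  disp = 0
  ! 'external32' is the portable, big-endian IEEE representation defined by
  ! the MPI standard; whether it is available depends on the MPI library.
  call MPI_FILE_SET_VIEW(thefile, disp, MPI_DOUBLE_PRECISION,   &
       &                 MPI_DOUBLE_PRECISION, 'external32',    &
       &                 MPI_INFO_NULL, ierr)
  ! from here on, offsets in MPI_FILE_WRITE_AT are in units of
  ! MPI_DOUBLE_PRECISION, not bytes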

Best

Timothée





2015-08-29 17:46 GMT+09:00 <timothee.nicolas at gmail.com>:

> I see. I will have a look. Regarding memory there would be no problem to
> put everything on process 0 I believe. If I can't figure out how to process
> with your routine, I will go back to the initial try with mpi_allgather.
>
> Thx
>
> Timothee
>
> Sent from my iPhone
>
> > On 2015/08/29, at 12:40, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> >
> >  I wrote a routine DMDAGetRay() that pulls a 1 dimensional slice out of
> a 2d DMDA and puts it on process 0. It uses AOApplicationToPetsc() so is
> not truly scalable but perhaps you could take a look at that. Since you
> say  "(I have lots of points in phi, so this can happen easily)" it may be
> ok for you to just stick the 2d slice on process 0 and then save it?
> >
> >  Barry
> >
> > Without using AOApplicationToPetsc() or something similar yes it is in
> general a nightmare.
> >
> >
> >> On Aug 28, 2015, at 10:15 PM, Timothée Nicolas <
> timothee.nicolas at gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> I have been thinking for several hours about this problem and can't
> find an efficient solution, however I imagine this must be possible somehow
> with Petsc.
> >>
> >> My problem is the following :
> >>
> >> I work in 3D (R,Z,Phi) which makes my data quite heavy and I don't want
> to save all the data of all my fields, even just once in a while. Instead,
> I would like to save in a binary file a slice at a given angle, say phi=0.
> >>
> >> As I did not find if it's natively possible in Petsc, I considered
> creating a second 2D DMDA, on which I can create 2D vectors and view them
> with the binary viewer. So far so good. However, upon creating the 2D DMDA,
> naturally the distribution of processors does not correspond to the
> distribution of the 3D DMDA. So I was considering creating global arrays,
> filling them with the data of the 3D array in phi=0, then doing an
> MPI_allgather to give the information to all the processors, to be able to
> read the array and fill the 2D Petsc Vector with it. So the code would be
> something along the lines of :
> >>
> >>  PetscScalar, pointer :: gX2D(:,:,:)
> >>  PetscScalar, pointer :: gX(:,:,:,:)
> >>  ! LocalArray is locally filled
> >>  ! It is transmitted to GlobalArray via MPI_Allgather
> >>  real(8)          :: LocalArray(user%dof,user%mr,user%mz)
> >>  real(8)          :: GlobalArray(user%dof,user%mr,user%mz)
> >>
> >>  call DMDAVecGetArrayF90(da_phi0,X2D,gX2D,ierr)
> >>  call DMDAVecGetArrayF90(da,X,gX,ierr)
> >>
> >>  do k = user%phis,user%phie
> >>     do j = user%zs,user%ze
> >>        do i = user%rs,user%re
> >>           do l=1,user%dof
> >>              if (k.eq.phi_print) then
> >>                 ! Numbering obtained with DMDAGetArrayF90 differs from usual
> >>                 LocalArray(l,i,j) = gX(l-1,i-1,j-1,k-1)
> >>              end if
> >>           end do
> >>        end do
> >>     end do
> >>  end do
> >>
> >>  nvals = user%dof*user%rm*user%zm
> >>
> >>  call MPI_AllGather(LocalArray(1,user%rs,user%zs),        &
> >>       &             nvals,MPI_REAL,                        &
> >>       &             GlobalArray,       &
> >>       &             nvals,MPI_REAL,MPI_COMM_WORLD,ierr)
> >>
> >>  do j = zs2D,ze2D
> >>     do i = rs2D,re2D
> >>        do l=1,user%dof
> >>           gX2D(l-1,i-1,j-1) = GlobalArray(l,i,j)
> >>        end do
> >>     end do
> >>  end do
> >>
> >> call DMDAVecRestoreArrayF90(da_phi0,X2D,gX2D,ierr)
> >>  call DMDAVecRestoreArrayF90(da,X,gX,ierr)
> >>
> >> The problem is that MPI_allgather is not at all that simple. Exchanging
> array information is much more complicated that I had anticipated ! See
> this long post on stackoverflow :
> >>
> >>
> http://stackoverflow.com/questions/17508647/sending-2d-arrays-in-fortran-with-mpi-gather
> >>
> >> I could probably get it to work eventually, but it's pretty
> complicated, and I was wondering if there was not a simpler alternative I
> could not see. Besides, I am concerned about what could happen if the
> number of processors is so large that the 2D Vector gets less than 2 points
> per processor (I have lots of points in phi, so this can happen easily).
> Then Petsc would complain.
> >>
> >> Does anybody have ideas ?
> >>
> >> Best
> >>
> >> Timothée
> >
>