[petsc-users] parallel HDF5 output of DMDA data with dof>1

Matteo Semplice matteo.semplice at uninsubria.it
Thu Jul 15 10:44:06 CDT 2021


Hi.

When I write (HDF5 viewer) a vector associated to a DMDA with 1 dof, the 
output is independent of the number of cpus used.

However, for a DMDA with dof=2, the output seems to be correct when I 
run on 1 or 2 cpus, but is scrambled when I run with 4 cpus. Judging 
from the ranges of the data, each field gets written to the correct 
part, and its the data witin the field that is scrambled. Here's my MWE:

#include <petscversion.h>
#include <petscdmda.h>
#include <petscviewer.h>
#include <petscsys.h>
#include <petscviewerhdf5.h>

static char help[] = "Writes a dof=2 DMDA to HDF5.\n"; /* needed by the PetscInitialize call below */

int main(int argc, char **argv) {

   PetscErrorCode ierr;
   ierr = PetscInitialize(&argc,&argv,(char*)0,help); CHKERRQ(ierr);
   PetscInt Nx=11;
   PetscInt Ny=11;
   PetscScalar dx = 1.0 / (Nx-1);
   PetscScalar dy = 1.0 / (Ny-1);
   DM dmda;
   ierr = DMDACreate2d(PETSC_COMM_WORLD,
                       DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
                       DMDA_STENCIL_STAR,
                       Nx,Ny, //global dim
                       PETSC_DECIDE,PETSC_DECIDE, //n proc on each dim
                       2,1, //dof, stencil width
                       NULL, NULL, //n nodes per direction on each cpu
                       &dmda);      CHKERRQ(ierr);
   ierr = DMSetFromOptions(dmda); CHKERRQ(ierr);
   ierr = DMSetUp(dmda); CHKERRQ(ierr);
   ierr = DMDASetUniformCoordinates(dmda, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0); CHKERRQ(ierr);
   ierr = DMDASetFieldName(dmda,0,"s"); CHKERRQ(ierr);
   ierr = DMDASetFieldName(dmda,1,"c"); CHKERRQ(ierr);
   DMDALocalInfo daInfo;
   ierr = DMDAGetLocalInfo(dmda,&daInfo); CHKERRQ(ierr);
   IS *is;
   DM *daField;
   ierr = DMCreateFieldDecomposition(dmda,NULL, NULL, &is, &daField); CHKERRQ(ierr);
   Vec U0;
   ierr = DMCreateGlobalVector(dmda,&U0); CHKERRQ(ierr);

   //Initial data
   typedef struct{ PetscScalar s,c;} data_type;
   data_type **u;
   ierr = DMDAVecGetArray(dmda,U0,&u); CHKERRQ(ierr);
   for (PetscInt j=daInfo.ys; j<daInfo.ys+daInfo.ym; j++){
     PetscScalar y = j*dy;
     for (PetscInt i=daInfo.xs; i<daInfo.xs+daInfo.xm; i++){
       PetscScalar x = i*dx;
       u[j][i].s = x+2.*y;
       u[j][i].c = 10. + 2.*x*x+y*y;
     }
   }
   ierr = DMDAVecRestoreArray(dmda,U0,&u); CHKERRQ(ierr);

   PetscViewer viewer;
   ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"solutionSC.hdf5",FILE_MODE_WRITE,&viewer); CHKERRQ(ierr);
   Vec uField;
   ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr);
   PetscObjectSetName((PetscObject) uField, "S");
   ierr = VecView(uField,viewer); CHKERRQ(ierr);
   ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr);
   ierr = VecGetSubVector(U0,is[1],&uField); CHKERRQ(ierr);
   PetscObjectSetName((PetscObject) uField, "C");
   ierr = VecView(uField,viewer); CHKERRQ(ierr);
   ierr = VecRestoreSubVector(U0,is[1],&uField); CHKERRQ(ierr);
   ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);

   ierr = PetscFinalize();
   return ierr;
}
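
For reference, here is a sketch of an alternative way of writing the two fields that I could swap in for the VecGetSubVector/VecView part above. It goes through the single-dof DMs returned by DMCreateFieldDecomposition, so that the vector passed to VecView still has a DMDA attached (just a sketch, assuming daField[0] and daField[1] share the parallel layout of the parent dmda):

   Vec uFieldCopy;
   ierr = DMGetGlobalVector(daField[0],&uFieldCopy); CHKERRQ(ierr);
   /* copy component 0 ("s") of U0 into the single-dof vector */
   ierr = VecStrideGather(U0,0,uFieldCopy,INSERT_VALUES); CHKERRQ(ierr);
   ierr = PetscObjectSetName((PetscObject) uFieldCopy, "S"); CHKERRQ(ierr);
   ierr = VecView(uFieldCopy,viewer); CHKERRQ(ierr);
   ierr = DMRestoreGlobalVector(daField[0],&uFieldCopy); CHKERRQ(ierr);
   /* ... and the same with daField[1], component 1 and the name "C" */

I don't know whether this is the intended way to do it, so comments on that are welcome too.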

and my xdmf file

<?xml version="1.0" ?>
<Xdmf xmlns:xi="http://www.w3.org/2001/XInclude" Version="2.0">
   <Domain>
     <Grid GridType="Collection" CollectionType="Temporal">
       <Time TimeType="List">
         <DataItem Dimensions="1">1.0</DataItem>
       </Time>
       <Grid GridType="Uniform" Name="domain">
         <Topology TopologyType="2DCoRectMesh" Dimensions="11 11"/>
         <Geometry GeometryType="ORIGIN_DXDY">
           <DataItem Format="XML" NumberType="Float" Dimensions="2">0.0 
0.0</DataItem>
           <DataItem Format="XML" NumberType="Float" Dimensions="2">0.1 
0.1</DataItem>
         </Geometry>
         <Attribute Name="S" Center="Node" AttributeType="Scalar">
           <DataItem Format="HDF" Precision="8" Dimensions="11 
11">solutionSC.hdf5:/S</DataItem>
         </Attribute>
         <Attribute Name="C" Center="Node" AttributeType="Scalar">
           <DataItem Format="HDF" Precision="8" Dimensions="11 
11">solutionSC.hdf5:/C</DataItem>
         </Attribute>
       </Grid>
     </Grid>
   </Domain>
</Xdmf>

Steps to reproduce: run the code and open the xdmf with ParaView. If the code 
was run with 1, 2 or 3 cpus, the data are correct (except that the xy plane 
has become the yz plane), but with 4 cpus the data are scrambled.

Does anyone have any insight?

(I am using Petsc Release Version 3.14.2, but I can compile a newer one 
if you think it's important.)

Best

     Matteo


