[petsc-users] parallel HDF5 output of DMDA data with dof>1

Matteo Semplice matteo.semplice at uninsubria.it
Wed Jul 21 07:04:05 CDT 2021


Hi all.

I have asked Thibault (author of this report on HDF5,
https://lists.mcs.anl.gov/pipermail/petsc-users/2021-July/044045.html,
sent some days before mine) to run my MWE and it does not work for him either.

Further, I have tried on another machine of mine, configured with 
--download-hdf5 and --download-mpich, and it is still not working.

A detailed report follows at the end of this message.

I am wondering whether something is wrong or incompatible in the HDF5 version 
of VecView, at least when the Vec is associated with a DMDA. Of course 
it might just be that I didn't manage to write a correct XDMF file, but I 
can't spot the mistake...

I am of course available to run tests in order to find/fix this problem.
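
In case it is useful for testing, here is a minimal sketch of the cross-check 
I have in mind (the function name dumpWholeVector and the dataset name "U" are 
just placeholders): write the whole dof=2 DMDA vector with a single VecView, 
bypassing VecGetSubVector, to see whether the scrambling comes from the 
subvector path or from the HDF5/MPI-IO layer underneath.

#include <petscdmda.h>
#include <petscviewerhdf5.h>

/* Cross-check: view the full dof=2 DMDA vector in one go, without
   VecGetSubVector. If this dataset is also scrambled on 4 ranks, the
   problem is below PETSc's subvector path; if it is fine, the issue is
   in how the extracted sub-Vecs are viewed. */
PetscErrorCode dumpWholeVector(Vec U0, const char *fname)
{
  PetscViewer    viewer;
  PetscErrorCode ierr;
  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, fname, FILE_MODE_WRITE, &viewer); CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)U0, "U"); CHKERRQ(ierr);
  /* For a Vec obtained from the DMDA this should write a single dataset;
     with dof>1 I expect the components to be the fastest-varying
     dimension, but that is to be confirmed, e.g. with h5dump -H. */
  ierr = VecView(U0, viewer); CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);
  return 0;
}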

Best

     Matteo

On 16/07/21 12:27, Matteo Semplice wrote:
>
> Il 15/07/21 17:44, Matteo Semplice ha scritto:
>> Hi.
>>
>> When I write (HDF5 viewer) a vector associated to a DMDA with 1 dof, 
>> the output is independent of the number of cpus used.
>>
>> However, for a DMDA with dof=2, the output seems to be correct when I 
>> run on 1 or 2 CPUs, but is scrambled when I run with 4 CPUs. Judging 
>> from the ranges of the data, each field gets written to the correct 
>> part, and it's the data within each field that is scrambled. Here's my MWE:
>>
>> #include <petscversion.h>
>> #include <petscdmda.h>
>> #include <petscviewer.h>
>> #include <petscsys.h>
>> #include <petscviewerhdf5.h>
>>
>> static char help[] = "Test HDF5 output of a DMDA vector with dof=2.";
>>
>> int main(int argc, char **argv) {
>>
>>   PetscErrorCode ierr;
>>   ierr = PetscInitialize(&argc,&argv,(char*)0,help); CHKERRQ(ierr);
>>   PetscInt Nx=11;
>>   PetscInt Ny=11;
>>   PetscScalar dx = 1.0 / (Nx-1);
>>   PetscScalar dy = 1.0 / (Ny-1);
>>   DM dmda;
>>   ierr = DMDACreate2d(PETSC_COMM_WORLD,
>>                       DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
>>                       DMDA_STENCIL_STAR,
>>                       Nx,Ny, //global dim
>>                       PETSC_DECIDE,PETSC_DECIDE, //n proc on each dim
>>                       2,1, //dof, stencil width
>>                       NULL, NULL, //n nodes per direction on each cpu
>>                       &dmda);      CHKERRQ(ierr);
>>   ierr = DMSetFromOptions(dmda); CHKERRQ(ierr);
>>   ierr = DMSetUp(dmda); CHKERRQ(ierr);
>>   ierr = DMDASetUniformCoordinates(dmda, 0.0, 1.0, 0.0, 1.0, 0.0, 
>> 1.0); CHKERRQ(ierr);
>>   ierr = DMDASetFieldName(dmda,0,"s"); CHKERRQ(ierr);
>>   ierr = DMDASetFieldName(dmda,1,"c"); CHKERRQ(ierr);
>>   DMDALocalInfo daInfo;
>>   ierr = DMDAGetLocalInfo(dmda,&daInfo); CHKERRQ(ierr);
>>   IS *is;
>>   DM *daField;
>>   ierr = DMCreateFieldDecomposition(dmda,NULL, NULL, &is, &daField); 
>> CHKERRQ(ierr);
>>   Vec U0;
>>   ierr = DMCreateGlobalVector(dmda,&U0); CHKERRQ(ierr);
>>
>>   //Initial data
>>   typedef struct{ PetscScalar s,c;} data_type;
>>   data_type **u;
>>   ierr = DMDAVecGetArray(dmda,U0,&u); CHKERRQ(ierr);
>>   for (PetscInt j=daInfo.ys; j<daInfo.ys+daInfo.ym; j++){
>>     PetscScalar y = j*dy;
>>     for (PetscInt i=daInfo.xs; i<daInfo.xs+daInfo.xm; i++){
>>       PetscScalar x = i*dx;
>>       u[j][i].s = x+2.*y;
>>       u[j][i].c = 10. + 2.*x*x+y*y;
>>     }
>>   }
>>   ierr = DMDAVecRestoreArray(dmda,U0,&u); CHKERRQ(ierr);
>>
>>   PetscViewer viewer;
>>   ierr = 
>> PetscViewerHDF5Open(PETSC_COMM_WORLD,"solutionSC.hdf5",FILE_MODE_WRITE,&viewer); 
>> CHKERRQ(ierr);
>>   Vec uField;
>>   ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr);
>>   PetscObjectSetName((PetscObject) uField, "S");
>>   ierr = VecView(uField,viewer); CHKERRQ(ierr);
>>   ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr);
>>   ierr = VecGetSubVector(U0,is[1],&uField); CHKERRQ(ierr);
>>   PetscObjectSetName((PetscObject) uField, "C");
>>   ierr = VecView(uField,viewer); CHKERRQ(ierr);
>>   ierr = VecRestoreSubVector(U0,is[1],&uField); CHKERRQ(ierr);
>>   ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);
>>
>>   ierr = PetscFinalize();
>>   return ierr;
>> }
>>
>> and my xdmf file
>>
>> <?xml version="1.0" ?>
>> <Xdmf xmlns:xi="http://www.w3.org/2001/XInclude" Version="2.0">
>>   <Domain>
>>     <Grid GridType="Collection" CollectionType="Temporal">
>>       <Time TimeType="List">
>>         <DataItem Dimensions="1">1.0</DataItem>
>>       </Time>
>>       <Grid GridType="Uniform" Name="domain">
>>         <Topology TopologyType="2DCoRectMesh" Dimensions="11 11"/>
>>         <Geometry GeometryType="ORIGIN_DXDY">
>>           <DataItem Format="XML" NumberType="Float" 
>> Dimensions="2">0.0 0.0</DataItem>
>>           <DataItem Format="XML" NumberType="Float" 
>> Dimensions="2">0.1 0.1</DataItem>
>>         </Geometry>
>>         <Attribute Name="S" Center="Node" AttributeType="Scalar">
>>           <DataItem Format="HDF" Precision="8" Dimensions="11 
>> 11">solutionSC.hdf5:/S</DataItem>
>>         </Attribute>
>>         <Attribute Name="C" Center="Node" AttributeType="Scalar">
>>           <DataItem Format="HDF" Precision="8" Dimensions="11 
>> 11">solutionSC.hdf5:/C</DataItem>
>>         </Attribute>
>>       </Grid>
>>     </Grid>
>>   </Domain>
>> </Xdmf>
>>
>> Steps to reproduce: run the code and open the xdmf with ParaView. If the 
>> code was run with 1, 2 or 3 CPUs, the data are correct (except that the 
>> plane xy has become the plane yz), but with 4 CPUs the data are 
>> scrambled.
>>
>> Does anyone have any insight?
>>
>> (I am using Petsc Release Version 3.14.2, but I can compile a newer 
>> one if you think it's important.)
>
> Hi,
>
>     I have a small update on this issue.
>
> First, it is still here with version 3.15.2.
>
> Secondly, I have run the code under valgrind and
>
> - for 1 or 2 processes, I get no errors
>
> - for 4 processes, 3 of the 4 ranks trigger the following:
>
> ==25921== Conditional jump or move depends on uninitialised value(s)
> ==25921==    at 0xB3D6259: ??? (in 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so)
> ==25921==    by 0xB3D85C8: mca_fcoll_two_phase_file_write_all (in 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so)
> ==25921==    by 0xAAEB29B: mca_common_ompio_file_write_at_all (in 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/libmca_common_ompio.so.41.9.0)
> ==25921==    by 0xB316605: mca_io_ompio_file_write_at_all (in 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_io_ompio.so)
> ==25921==    by 0x73C7FE7: PMPI_File_write_at_all (in 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.10.3)
> ==25921==    by 0x69E8700: H5FD__mpio_write (H5FDmpio.c:1466)
> ==25921==    by 0x670D6EB: H5FD_write (H5FDint.c:248)
> ==25921==    by 0x66DA0D3: H5F__accum_write (H5Faccum.c:826)
> ==25921==    by 0x684F091: H5PB_write (H5PB.c:1031)
> ==25921==    by 0x66E8055: H5F_shared_block_write (H5Fio.c:205)
> ==25921==    by 0x6674538: H5D__chunk_collective_fill (H5Dchunk.c:5064)
> ==25921==    by 0x6674538: H5D__chunk_allocate (H5Dchunk.c:4736)
> ==25921==    by 0x668C839: H5D__init_storage (H5Dint.c:2473)
> ==25921==  Uninitialised value was created by a heap allocation
> ==25921==    at 0x483577F: malloc (vg_replace_malloc.c:299)
> ==25921==    by 0xB3D6155: ??? (in 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so)
> ==25921==    by 0xB3D85C8: mca_fcoll_two_phase_file_write_all (in 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so)
> ==25921==    by 0xAAEB29B: mca_common_ompio_file_write_at_all (in 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/libmca_common_ompio.so.41.9.0)
> ==25921==    by 0xB316605: mca_io_ompio_file_write_at_all (in 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_io_ompio.so)
> ==25921==    by 0x73C7FE7: PMPI_File_write_at_all (in 
> /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.10.3)
> ==25921==    by 0x69E8700: H5FD__mpio_write (H5FDmpio.c:1466)
> ==25921==    by 0x670D6EB: H5FD_write (H5FDint.c:248)
> ==25921==    by 0x66DA0D3: H5F__accum_write (H5Faccum.c:826)
> ==25921==    by 0x684F091: H5PB_write (H5PB.c:1031)
> ==25921==    by 0x66E8055: H5F_shared_block_write (H5Fio.c:205)
> ==25921==    by 0x6674538: H5D__chunk_collective_fill (H5Dchunk.c:5064)
> ==25921==    by 0x6674538: H5D__chunk_allocate (H5Dchunk.c:4736)
>
> Does anyone have any hint on what might be causing this?
>
> Is this the "buggy MPI-IO" that Matt was mentioning in 
> https://lists.mcs.anl.gov/pipermail/petsc-users/2021-July/044138.html ?
>
> I am using the release branch (commit c548142fde) and I have 
> configured with --download-hdf5; configure finds the installed openmpi 
> 3.1.3 from Debian buster. The relevant lines from configure.log are
>
> MPI:
>   Version:  3
>   Mpiexec: mpiexec --oversubscribe
>   OMPI_VERSION: 3.1.3
> hdf5:
>   Version:  1.12.0
>   Includes: -I/home/matteo/software/petsc/opt/include
>   Library:  -Wl,-rpath,/home/matteo/software/petsc/opt/lib 
> -L/home/matteo/software/petsc/opt/lib -lhdf5hl_fortran -lhdf5_fortran 
> -lhdf5_hl -lhdf5

Update 1: on a different machine, I have compiled PETSc (release branch) 
with --download-hdf5 and --download-mpich and I have tried 3D HDF5 
output at the end of my simulation. All is fine for 1 or 2 CPUs, but the 
output is wrong for more CPUs: the smooth solution gives rise to an 
output that renders like little bricks, as if the data were written 
with the three nested loops in the wrong order.
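
To tell a layout mismatch apart from genuinely scrambled data, one thing I can 
do is print the extents of the dataset that PETSc actually wrote and compare 
them with the Dimensions declared in the xdmf. A minimal sketch with the plain 
HDF5 C API (the file and dataset names are taken from the 2D MWE above, so 
they are only placeholders for the 3D run):

#include <stdio.h>
#include <hdf5.h>

/* Print the rank and dimensions of one dataset written by the MWE
   (dataset "/S" as in the 2D example; adjust the names for the 3D case).
   Comparing these against the Dimensions in the xdmf should show whether
   the "bricks" come from a dimension-ordering mismatch or from data that
   really was written in the wrong places. */
int main(void)
{
  hid_t   file, dset, space;
  hsize_t dims[4];
  int     ndims, i;

  file  = H5Fopen("solutionSC.hdf5", H5F_ACC_RDONLY, H5P_DEFAULT);
  dset  = H5Dopen2(file, "/S", H5P_DEFAULT);
  space = H5Dget_space(dset);
  ndims = H5Sget_simple_extent_dims(space, dims, NULL);
  printf("rank %d, dims:", ndims);
  for (i = 0; i < ndims; i++) printf(" %llu", (unsigned long long)dims[i]);
  printf("\n");
  H5Sclose(space); H5Dclose(dset); H5Fclose(file);
  return 0;
}

(If the h5dump built by --download-hdf5 is in the path, h5dump -H on the file 
should give the same information.)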

Update 2: Thibault was kind enough to compile and run my MWE on his 
setup and he gets a crash in VecView with the HDF5 viewer. 
Here is the report that he sent me.

On 21/07/21 10:59, Thibault Bridel-Bertomeu wrote:
Hi Matteo,

I ran your test, and actually it does not give me garbage for a number 
of processes greater than 1, it straight-up crashes ...
Here is the error log for 2 processes :

Compiled with Petsc Development GIT revision: v3.14.4-671-g707297fd510  GIT Date: 2021-02-24 22:50:05 +0000

[1]PETSC ERROR: 
------------------------------------------------------------------------

[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
probably memory access out of range

[1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger

[1]PETSC ERROR: or see 
https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind

[1]PETSC ERROR: or try http://valgrind.org 
on GNU/linux and Apple Mac OS X to find memory corruption errors

[1]PETSC ERROR: likely location of problem given in stack below

[1]PETSC ERROR: ---------------------Stack Frames 
------------------------------------

[1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,

[1]PETSC ERROR: INSTEAD the line number of the start of the function

[1]PETSC ERROR: is given.

[1]PETSC ERROR: [1] H5Dcreate2 line 716 
/ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c

[1]PETSC ERROR: [1] VecView_MPI_HDF5 line 622 
/ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c

[1]PETSC ERROR: [1] VecView_MPI line 815 
/ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c

[1]PETSC ERROR: [1] VecView line 580 
/ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/interface/vector.c

[1]PETSC ERROR: --------------------- Error Message 
--------------------------------------------------------------

[1]PETSC ERROR: Signal received

[1]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html 
for trouble shooting.

[1]PETSC ERROR: Petsc Development GIT revision: v3.14.4-671-g707297fd510  GIT Date: 2021-02-24 22:50:05 +0000

[1]PETSC ERROR: 
/ccc/work/cont001/ocre/bridelbert/MWE_HDF5_Output/testHDF5 on a named 
r1login by bridelbert Wed Jul 21 10:57:11 2021

[1]PETSC ERROR: Configure options --with-clean=1 
--prefix=/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti 
--with-make-np=8 --with-windows-graphics=0 --with-debugging=1 
--download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 
--PETSC_ARCH=INTI_UNS3D 
--with-fc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpifort 
--with-cc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc 
--with-cxx=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicxx 
--with-openmp=0 
--download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p1.tar.gz 
--download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz 
--download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz 
--download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz 
--with-cmake-dir=/ccc/products/cmake-3.13.3/system/default 
--download-hdf5=/ccc/work/cont001/ocre/bridelbert/hdf5-1.12.0.tar.bz2 
--download-zlib=/ccc/work/cont001/ocre/bridelbert/zlib-1.2.11.tar.gz

[1]PETSC ERROR: #1 User provided function() line 0 in unknown file

[1]PETSC ERROR: Checking the memory for corruption.

--------------------------------------------------------------------------

MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD

with errorcode 50176059.


NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.

You may or may not see output from other processes, depending on

exactly when Open MPI kills them.

--------------------------------------------------------------------------

[0]PETSC ERROR: 
------------------------------------------------------------------------

[0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the 
batch system) has told this process to end

[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger

[0]PETSC ERROR: or see 
https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind

[0]PETSC ERROR: or try http://valgrind.org 
on GNU/linux and Apple Mac OS X to find memory corruption errors

[0]PETSC ERROR: likely location of problem given in stack below

[0]PETSC ERROR: ---------------------Stack Frames 
------------------------------------

[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,

[0]PETSC ERROR: INSTEAD the line number of the start of the function

[0]PETSC ERROR: is given.

[0]PETSC ERROR: [0] H5Dcreate2 line 716 
/ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c

[0]PETSC ERROR: [0] VecView_MPI_HDF5 line 622 
/ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c

[0]PETSC ERROR: [0] VecView_MPI line 815 
/ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c

[0]PETSC ERROR: [0] VecView line 580 
/ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/interface/vector.c

[0]PETSC ERROR: --------------------- Error Message 
--------------------------------------------------------------

[0]PETSC ERROR: Signal received

[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html 
for trouble shooting.

[0]PETSC ERROR: Petsc Development GIT revision: v3.14.4-671-g707297fd510  GIT Date: 2021-02-24 22:50:05 +0000

[0]PETSC ERROR: 
/ccc/work/cont001/ocre/bridelbert/MWE_HDF5_Output/testHDF5 on a named 
r1login by bridelbert Wed Jul 21 10:57:11 2021

[0]PETSC ERROR: Configure options --with-clean=1 
--prefix=/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti 
--with-make-np=8 --with-windows-graphics=0 --with-debugging=1 
--download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 
--PETSC_ARCH=INTI_UNS3D 
--with-fc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpifort 
--with-cc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc 
--with-cxx=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicxx 
--with-openmp=0 
--download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p1.tar.gz 
--download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz 
--download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz 
--download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz 
--with-cmake-dir=/ccc/products/cmake-3.13.3/system/default 
--download-hdf5=/ccc/work/cont001/ocre/bridelbert/hdf5-1.12.0.tar.bz2 
--download-zlib=/ccc/work/cont001/ocre/bridelbert/zlib-1.2.11.tar.gz

[0]PETSC ERROR: #1 User provided function() line 0 in unknown file

[r1login:24498] 1 more process has sent help message help-mpi-api.txt / 
mpi-abort

[r1login:24498] Set MCA parameter "orte_base_help_aggregate" to 0 to see 
all help / error messages


I am starting to wonder if the PETSc configure script installs HDF5 with 
MPI correctly at all ...
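
One quick thing I can check is whether the HDF5 that PETSc installed was 
actually built with MPI support. A minimal sketch (it relies on the 
H5_HAVE_PARALLEL macro from the installed headers, so it has to be compiled 
with the mpicc and include/library paths listed in the configuration below):

#include <stdio.h>
#include <hdf5.h>

/* Report the HDF5 version that was picked up at compile time and whether
   it was configured with MPI (H5_HAVE_PARALLEL comes from H5pubconf.h,
   which hdf5.h includes). */
int main(void)
{
  unsigned maj, min, rel;
  H5get_libversion(&maj, &min, &rel);
  printf("HDF5 %u.%u.%u, parallel: %s\n", maj, min, rel,
#ifdef H5_HAVE_PARALLEL
         "yes"
#else
         "no"
#endif
         );
  return 0;
}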

Here is my conf :

Compilers:
  C Compiler:       /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3
    Version: gcc (GCC) 8.3.0
  C++ Compiler:     /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicxx  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -fPIC
    Version: g++ (GCC) 8.3.0
  Fortran Compiler: /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpifort  -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g
    Version: GNU Fortran (GCC) 8.3.0
Linkers:
  Shared linker:  /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc  -shared  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3
  Dynamic linker: /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc  -shared  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3
  Libraries linked against: -lquadmath -lstdc++ -ldl
BlasLapack:
  Library:  -Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lflapack -lfblas
  uses 4 byte integers
MPI:
  Version:  3
  Mpiexec: /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpiexec
  OMPI_VERSION: 2.0.4
fblaslapack:
zlib:
  Version:  1.2.11
  Includes: -I/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/include
  Library:  -Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lz
hdf5:
  Version:  1.12.0
  Includes: -I/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/include
  Library:  -Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5
cmake:
  Version:  3.13.3
  /ccc/products/cmake-3.13.3/system/default/bin/cmake
metis:
  Version:  5.1.0
  Includes: -I/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/include
  Library:  -Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lmetis
parmetis:
  Version:  4.0.3
  Includes: -I/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/include
  Library:  -Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lparmetis
regex:
sowing:
  Version:  1.1.26
  /ccc/work/cont001/ocre/bridelbert/04-PETSC/INTI_UNS3D/bin/bfort
Language used to compile PETSc: C


Please don't hesitate to ask if you need something else from me !

Cheers,
Thibault