<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<p>Hi all.</p>
<p>I have asked Thibault (author of this report on HDF5, <a href="https://lists.mcs.anl.gov/pipermail/petsc-users/2021-July/044045.html" rel="noreferrer" target="_blank">https://lists.mcs.anl.gov/pipermail/petsc-users/2021-July/044045.html</a>, sent some days before mine) to run my MWE, and it does not work for him either.</p>
<p>Further, I have tried on another machine of mine, configured with
--download-hdf5 --download-mpich, and it still does not work.</p>
<p>A detailed report follows at the end of this message.</p>
<p>I am wondering if something is wrong/incompatible with the HDF5
version of VecView, at least when the Vec is associated with a
DMDA. Of course it might just be that I didn't manage to write a
correct xdmf, but I can't spot the mistake...<br>
</p>
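<p>To separate a writer bug from a ParaView/xdmf mistake, here is a minimal
read-back check (a sketch; the file names solutionSC_np1.hdf5 and
solutionSC_np4.hdf5 are hypothetical &mdash; the MWE below always writes
solutionSC.hdf5, so the outputs of the two runs have to be renamed first).
It loads the "S" dataset from both files and prints the maximum entry-wise
difference, which should be zero if the two writes are equivalent:</p>
<pre>
/* checkS.c - sketch: compare the "S" dataset written by two runs of the MWE.
   The file names are hypothetical; rename the outputs of a 1-rank and a
   4-rank run accordingly. Run this on a single rank. */
#include <petscvec.h>
#include <petscviewerhdf5.h>

int main(int argc, char **argv) {
  PetscErrorCode ierr;
  Vec a, b;
  PetscViewer v;
  PetscReal diff;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* load /S from the serial run */
  ierr = VecCreate(PETSC_COMM_WORLD, &a); CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)a, "S"); CHKERRQ(ierr);
  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "solutionSC_np1.hdf5", FILE_MODE_READ, &v); CHKERRQ(ierr);
  ierr = VecLoad(a, v); CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&v); CHKERRQ(ierr);

  /* load /S from the 4-rank run */
  ierr = VecCreate(PETSC_COMM_WORLD, &b); CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)b, "S"); CHKERRQ(ierr);
  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "solutionSC_np4.hdf5", FILE_MODE_READ, &v); CHKERRQ(ierr);
  ierr = VecLoad(b, v); CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&v); CHKERRQ(ierr);

  /* max |a-b| should be 0 if the two writes agree entry by entry */
  ierr = VecAXPY(a, -1.0, b); CHKERRQ(ierr);
  ierr = VecNorm(a, NORM_INFINITY, &diff); CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "max |S_np1 - S_np4| = %g\n", (double)diff); CHKERRQ(ierr);

  ierr = VecDestroy(&a); CHKERRQ(ierr);
  ierr = VecDestroy(&b); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
</pre>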
<p>I am of course available to run tests in order to find/fix this
problem.</p>
<p>Best</p>
<p> Matteo<br>
</p>
<div class="moz-cite-prefix">On 16/07/21 12:27, Matteo Semplice
wrote:<br>
</div>
<blockquote type="cite" cite="mid:6c443aac-e9c6-0704-beb5-05afa8c38798@uninsubria.it">
<br>
On 15/07/21 17:44, Matteo Semplice wrote:
<br>
<blockquote type="cite">Hi.
<br>
<br>
When I write (HDF5 viewer) a vector associated to a DMDA with 1
dof, the output is independent of the number of cpus used.
<br>
<br>
However, for a DMDA with dof=2, the output seems to be correct
when I run on 1 or 2 CPUs, but is scrambled when I run with 4
CPUs. Judging from the ranges of the data, each field gets
written to the correct part, and it's the data within each field
that is scrambled. Here's my MWE:
<br>
<pre>
#include <petscversion.h>
#include <petscdmda.h>
#include <petscviewer.h>
#include <petscsys.h>
#include <petscviewerhdf5.h>

static char help[] = "Writes the two fields of a DMDA vector to HDF5.\n";

int main(int argc, char **argv) {

  PetscErrorCode ierr;
  ierr = PetscInitialize(&argc,&argv,(char*)0,help); CHKERRQ(ierr);
  PetscInt Nx=11;
  PetscInt Ny=11;
  PetscScalar dx = 1.0 / (Nx-1);
  PetscScalar dy = 1.0 / (Ny-1);
  DM dmda;
  ierr = DMDACreate2d(PETSC_COMM_WORLD,
                      DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR,
                      Nx,Ny,                     //global dim
                      PETSC_DECIDE,PETSC_DECIDE, //n proc on each dim
                      2,1,                       //dof, stencil width
                      NULL, NULL,                //n nodes per direction on each cpu
                      &dmda); CHKERRQ(ierr);
  ierr = DMSetFromOptions(dmda); CHKERRQ(ierr);
  ierr = DMSetUp(dmda); CHKERRQ(ierr);
  ierr = DMDASetUniformCoordinates(dmda, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0); CHKERRQ(ierr);
  ierr = DMDASetFieldName(dmda,0,"s"); CHKERRQ(ierr);
  ierr = DMDASetFieldName(dmda,1,"c"); CHKERRQ(ierr);
  DMDALocalInfo daInfo;
  ierr = DMDAGetLocalInfo(dmda,&daInfo); CHKERRQ(ierr);
  IS *is;
  DM *daField;
  ierr = DMCreateFieldDecomposition(dmda,NULL, NULL, &is, &daField); CHKERRQ(ierr);
  Vec U0;
  ierr = DMCreateGlobalVector(dmda,&U0); CHKERRQ(ierr);

  //Initial data
  typedef struct{ PetscScalar s,c;} data_type;
  data_type **u;
  ierr = DMDAVecGetArray(dmda,U0,&u); CHKERRQ(ierr);
  for (PetscInt j=daInfo.ys; j<daInfo.ys+daInfo.ym; j++){
    PetscScalar y = j*dy;
    for (PetscInt i=daInfo.xs; i<daInfo.xs+daInfo.xm; i++){
      PetscScalar x = i*dx;
      u[j][i].s = x+2.*y;
      u[j][i].c = 10. + 2.*x*x+y*y;
    }
  }
  ierr = DMDAVecRestoreArray(dmda,U0,&u); CHKERRQ(ierr);

  PetscViewer viewer;
  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"solutionSC.hdf5",FILE_MODE_WRITE,&viewer); CHKERRQ(ierr);
  Vec uField;
  ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject) uField, "S"); CHKERRQ(ierr);
  ierr = VecView(uField,viewer); CHKERRQ(ierr);
  ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr);
  ierr = VecGetSubVector(U0,is[1],&uField); CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject) uField, "C"); CHKERRQ(ierr);
  ierr = VecView(uField,viewer); CHKERRQ(ierr);
  ierr = VecRestoreSubVector(U0,is[1],&uField); CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);

  for (PetscInt f=0; f<2; f++){
    ierr = ISDestroy(&is[f]); CHKERRQ(ierr);
    ierr = DMDestroy(&daField[f]); CHKERRQ(ierr);
  }
  ierr = PetscFree(is); CHKERRQ(ierr);
  ierr = PetscFree(daField); CHKERRQ(ierr);
  ierr = VecDestroy(&U0); CHKERRQ(ierr);
  ierr = DMDestroy(&dmda); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
</pre>
<br>
and here is my xdmf file:
<br>
<pre>
<?xml version="1.0" ?>
<Xdmf xmlns:xi="http://www.w3.org/2001/XInclude" Version="2.0">
  <Domain>
    <Grid GridType="Collection" CollectionType="Temporal">
      <Time TimeType="List">
        <DataItem Dimensions="1">1.0</DataItem>
      </Time>
      <Grid GridType="Uniform" Name="domain">
        <Topology TopologyType="2DCoRectMesh" Dimensions="11 11"/>
        <Geometry GeometryType="ORIGIN_DXDY">
          <DataItem Format="XML" NumberType="Float" Dimensions="2">0.0 0.0</DataItem>
          <DataItem Format="XML" NumberType="Float" Dimensions="2">0.1 0.1</DataItem>
        </Geometry>
        <Attribute Name="S" Center="Node" AttributeType="Scalar">
          <DataItem Format="HDF" Precision="8" Dimensions="11 11">solutionSC.hdf5:/S</DataItem>
        </Attribute>
        <Attribute Name="C" Center="Node" AttributeType="Scalar">
          <DataItem Format="HDF" Precision="8" Dimensions="11 11">solutionSC.hdf5:/C</DataItem>
        </Attribute>
      </Grid>
    </Grid>
  </Domain>
</Xdmf>
</pre>
<br>
Steps to reproduce: run the code and open the xdmf with ParaView. If
the code was run with 1, 2 or 3 CPUs, the data are correct (except
that the xy plane has become the yz plane), but with 4 CPUs the data
are scrambled.
<br>
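As a cross-check (a sketch on my side, not yet verified): replacing the
two VecGetSubVector/VecView blocks in the MWE above with a single view of
the whole vector,
<br>
<pre>
/* hedged variant: write the full 2-dof vector as one dataset; for a DMDA
   vector I would expect the HDF5 viewer to emit a single (Ny x Nx x dof)
   dataset (an assumption on my part about the layout) */
ierr = PetscObjectSetName((PetscObject) U0, "U"); CHKERRQ(ierr);
ierr = VecView(U0, viewer); CHKERRQ(ierr);
</pre>
should tell whether the scrambling comes from the VecGetSubVector path or
from the HDF5 writer itself; the xdmf would then have to extract the single
components from /U instead of reading /S and /C.
<br>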
<br>
Does anyone have any insight?
<br>
<br>
(I am using PETSc release version 3.14.2, but I can compile a
newer one if you think it's important.)
<br>
</blockquote>
<br>
Hi,
<br>
<br>
I have a small update on this issue.
<br>
<br>
First, it is still here with version 3.15.2.
<br>
<br>
Secondly, I have run the code under valgrind and
<br>
<br>
- for 1 or 2 processes, I get no errors
<br>
<br>
- for 4 processes, 3 out of 4 trigger the following:
<br>
<pre>
==25921== Conditional jump or move depends on uninitialised value(s)
==25921==    at 0xB3D6259: ??? (in /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so)
==25921==    by 0xB3D85C8: mca_fcoll_two_phase_file_write_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so)
==25921==    by 0xAAEB29B: mca_common_ompio_file_write_at_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/libmca_common_ompio.so.41.9.0)
==25921==    by 0xB316605: mca_io_ompio_file_write_at_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_io_ompio.so)
==25921==    by 0x73C7FE7: PMPI_File_write_at_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.10.3)
==25921==    by 0x69E8700: H5FD__mpio_write (H5FDmpio.c:1466)
==25921==    by 0x670D6EB: H5FD_write (H5FDint.c:248)
==25921==    by 0x66DA0D3: H5F__accum_write (H5Faccum.c:826)
==25921==    by 0x684F091: H5PB_write (H5PB.c:1031)
==25921==    by 0x66E8055: H5F_shared_block_write (H5Fio.c:205)
==25921==    by 0x6674538: H5D__chunk_collective_fill (H5Dchunk.c:5064)
==25921==    by 0x6674538: H5D__chunk_allocate (H5Dchunk.c:4736)
==25921==    by 0x668C839: H5D__init_storage (H5Dint.c:2473)
==25921==  Uninitialised value was created by a heap allocation
==25921==    at 0x483577F: malloc (vg_replace_malloc.c:299)
==25921==    by 0xB3D6155: ??? (in /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so)
==25921==    by 0xB3D85C8: mca_fcoll_two_phase_file_write_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so)
==25921==    by 0xAAEB29B: mca_common_ompio_file_write_at_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/libmca_common_ompio.so.41.9.0)
==25921==    by 0xB316605: mca_io_ompio_file_write_at_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_io_ompio.so)
==25921==    by 0x73C7FE7: PMPI_File_write_at_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.10.3)
==25921==    by 0x69E8700: H5FD__mpio_write (H5FDmpio.c:1466)
==25921==    by 0x670D6EB: H5FD_write (H5FDint.c:248)
==25921==    by 0x66DA0D3: H5F__accum_write (H5Faccum.c:826)
==25921==    by 0x684F091: H5PB_write (H5PB.c:1031)
==25921==    by 0x66E8055: H5F_shared_block_write (H5Fio.c:205)
==25921==    by 0x6674538: H5D__chunk_collective_fill (H5Dchunk.c:5064)
==25921==    by 0x6674538: H5D__chunk_allocate (H5Dchunk.c:4736)
</pre>
<br>
Does anyone have any hint on what might be causing this?
<br>
<br>
Is this the "buggy MPI-IO" that Matt was mentioning in
<a class="moz-txt-link-freetext" href="https://lists.mcs.anl.gov/pipermail/petsc-users/2021-July/044138.html">https://lists.mcs.anl.gov/pipermail/petsc-users/2021-July/044138.html</a>?<br>
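<br>
If so, a possible quick test (an assumption on my part, not verified):
force Open MPI to use its ROMIO component instead of OMPIO for MPI-IO,
e.g. by running the MWE as "mpiexec --mca io romio314 -n 4 ./mwe" (where
./mwe stands for whatever the MWE was compiled to), and check whether the
scrambling and the valgrind hits inside mca_fcoll_two_phase disappear.
<br>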
<br>
I am using the release branch (commit c548142fde) and I have
configured with --download-hdf5; configure finds the installed
OpenMPI 3.1.3 from Debian buster. The relevant lines from
configure.log are:
<br>
<pre>
MPI:
  Version:  3
  Mpiexec: mpiexec --oversubscribe
  OMPI_VERSION: 3.1.3
hdf5:
  Version:  1.12.0
  Includes: -I/home/matteo/software/petsc/opt/include
  Library:  -Wl,-rpath,/home/matteo/software/petsc/opt/lib -L/home/matteo/software/petsc/opt/lib -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5
</pre>
<br>
</blockquote>
<p>Update 1: on a different machine, I have compiled PETSc (release
branch) with --download-hdf5 and --download-mpich and have tried
3D HDF5 output at the end of my simulation. All is fine on 1 or 2
CPUs, but the output is funny for more: the smooth solution renders
as little bricks, as if the data were written with the three nested
loops in the wrong order.</p>
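<p>To pin down whether it really is an ordering problem, the dataset
extents can be inspected directly (a sketch using plain HDF5 calls; the
file and dataset names are those of my 2D MWE and would need adjusting
for the 3D run):</p>
<pre>
/* print the extents of /S in solutionSC.hdf5; for the 2D MWE I expect
   (11, 11), and for a 3D run I would expect (Nz, Ny, Nx) with x fastest
   (an assumption on my part about the layout) */
#include <hdf5.h>
#include <stdio.h>

int main(void) {
  hid_t   file  = H5Fopen("solutionSC.hdf5", H5F_ACC_RDONLY, H5P_DEFAULT);
  hid_t   dset  = H5Dopen(file, "/S", H5P_DEFAULT);
  hid_t   space = H5Dget_space(dset);
  hsize_t dims[4];
  int     ndims = H5Sget_simple_extent_dims(space, dims, NULL);
  printf("ndims = %d:", ndims);
  for (int i = 0; i < ndims; i++) printf(" %llu", (unsigned long long)dims[i]);
  printf("\n");
  H5Sclose(space);
  H5Dclose(dset);
  H5Fclose(file);
  return 0;
}
</pre>
<p>Of course matching extents would not catch a permutation within an
axis, but mismatched extents would immediately confirm a transposed
write.</p>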
<p>Update 2: Thibault was kind enough to compile and run my MWE on
his setup, and he gets a crash related to VecView with the HDF5
viewer. Here's the report that he sent me.<br>
</p>
<div>On 21/07/21 10:59, Thibault Bridel-Bertomeu wrote:<br>
</div>
Hi Matteo,
<div><br>
</div>
<div>I ran your test, and actually it does not give me garbage for a
number of processes greater than 1; it straight-up crashes...</div>
<div>Here is the error log for 2 processes:</div>
<div><br>
</div>
<div>
<pre>
Compiled with Petsc Development GIT revision: v3.14.4-671-g707297fd510  GIT Date: 2021-02-24 22:50:05 +0000
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[1]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[1]PETSC ERROR: likely location of problem given in stack below
[1]PETSC ERROR: ---------------------  Stack Frames ------------------------------------
[1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[1]PETSC ERROR:       INSTEAD the line number of the start of the function
[1]PETSC ERROR:       is given.
[1]PETSC ERROR: [1] H5Dcreate2 line 716 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c
[1]PETSC ERROR: [1] VecView_MPI_HDF5 line 622 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c
[1]PETSC ERROR: [1] VecView_MPI line 815 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c
[1]PETSC ERROR: [1] VecView line 580 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/interface/vector.c
[1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[1]PETSC ERROR: Signal received
[1]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[1]PETSC ERROR: Petsc Development GIT revision: v3.14.4-671-g707297fd510  GIT Date: 2021-02-24 22:50:05 +0000
[1]PETSC ERROR: /ccc/work/cont001/ocre/bridelbert/MWE_HDF5_Output/testHDF5 on a  named r1login by bridelbert Wed Jul 21 10:57:11 2021
[1]PETSC ERROR: Configure options --with-clean=1 --prefix=/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti --with-make-np=8 --with-windows-graphics=0 --with-debugging=1 --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 --PETSC_ARCH=INTI_UNS3D --with-fc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpifort --with-cc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc --with-cxx=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicxx --with-openmp=0 --download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p1.tar.gz --download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz --download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz --download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz --with-cmake-dir=/ccc/products/cmake-3.13.3/system/default --download-hdf5=/ccc/work/cont001/ocre/bridelbert/hdf5-1.12.0.tar.bz2 --download-zlib=/ccc/work/cont001/ocre/bridelbert/zlib-1.2.11.tar.gz
[1]PETSC ERROR: #1 User provided function() line 0 in  unknown file
[1]PETSC ERROR: Checking the memory for corruption.
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 50176059.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: likely location of problem given in stack below
[0]PETSC ERROR: ---------------------  Stack Frames ------------------------------------
[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[0]PETSC ERROR:       INSTEAD the line number of the start of the function
[0]PETSC ERROR:       is given.
[0]PETSC ERROR: [0] H5Dcreate2 line 716 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c
[0]PETSC ERROR: [0] VecView_MPI_HDF5 line 622 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c
[0]PETSC ERROR: [0] VecView_MPI line 815 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c
[0]PETSC ERROR: [0] VecView line 580 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/interface/vector.c
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Signal received
[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.14.4-671-g707297fd510  GIT Date: 2021-02-24 22:50:05 +0000
[0]PETSC ERROR: /ccc/work/cont001/ocre/bridelbert/MWE_HDF5_Output/testHDF5 on a  named r1login by bridelbert Wed Jul 21 10:57:11 2021
[0]PETSC ERROR: Configure options --with-clean=1 --prefix=/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti --with-make-np=8 --with-windows-graphics=0 --with-debugging=1 --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 --PETSC_ARCH=INTI_UNS3D --with-fc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpifort --with-cc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc --with-cxx=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicxx --with-openmp=0 --download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p1.tar.gz --download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz --download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz --download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz --with-cmake-dir=/ccc/products/cmake-3.13.3/system/default --download-hdf5=/ccc/work/cont001/ocre/bridelbert/hdf5-1.12.0.tar.bz2 --download-zlib=/ccc/work/cont001/ocre/bridelbert/zlib-1.2.11.tar.gz
[0]PETSC ERROR: #1 User provided function() line 0 in  unknown file
[r1login:24498] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[r1login:24498] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
</pre>
</div>
<br>
<div>I am starting to wonder if the PETSc configure script installs
HDF5 with MPI correctly at all ...</div>
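<div>A quick way to test that (a sketch; H5_HAVE_PARALLEL is the macro
that HDF5 defines in H5pubconf.h when it was configured with MPI) would
be to compile the following against the hdf5.h that PETSc installed:</div>
<pre>
/* parallel-HDF5 sanity check: reports whether the HDF5 headers found by
   the compiler were configured with MPI support */
#include <hdf5.h>
#include <stdio.h>

int main(void) {
#ifdef H5_HAVE_PARALLEL
  printf("HDF5 was built WITH MPI (parallel) support\n");
#else
  printf("HDF5 was built WITHOUT MPI support\n");
#endif
  return 0;
}
</pre>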
<div><br>
</div>
<div>
<div>Here is my conf:</div>
<div><br>
</div>
<pre>
Compilers:
  C Compiler:        /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3
    Version: gcc (GCC) 8.3.0
  C++ Compiler:      /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicxx  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g  -fPIC
    Version: g++ (GCC) 8.3.0
  Fortran Compiler:  /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpifort  -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g
    Version: GNU Fortran (GCC) 8.3.0
Linkers:
  Shared linker:   /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc  -shared  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3
  Dynamic linker:  /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc  -shared  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3
  Libraries linked against:  -lquadmath -lstdc++ -ldl
BlasLapack:
  Library:  -Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lflapack -lfblas
  uses 4 byte integers
MPI:
  Version:  3
  Mpiexec: /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpiexec
  OMPI_VERSION: 2.0.4
fblaslapack:
zlib:
  Version:  1.2.11
  Includes: -I/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/include
  Library:  -Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lz
hdf5:
  Version:  1.12.0
  Includes: -I/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/include
  Library:  -Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5
cmake:
  Version:  3.13.3
  /ccc/products/cmake-3.13.3/system/default/bin/cmake
metis:
  Version:  5.1.0
  Includes: -I/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/include
  Library:  -Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lmetis
parmetis:
  Version:  4.0.3
  Includes: -I/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/include
  Library:  -Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lparmetis
regex:
sowing:
  Version:  1.1.26
  /ccc/work/cont001/ocre/bridelbert/04-PETSC/INTI_UNS3D/bin/bfort
Language used to compile PETSc: C
</pre>
<br>
<div>Please don't hesitate to ask if you need something else from
me!</div>
<div><br>
</div>
<div>Cheers, </div>
<div>Thibault</div>
</div>
</body>
</html>