From Hannes_Brandt at gmx.de Wed Dec 1 07:33:30 2021 From: Hannes_Brandt at gmx.de (Hannes Brandt) Date: Wed, 1 Dec 2021 14:33:30 +0100 Subject: [petsc-users] Communication in parallel MatMatMult Message-ID: <4ac57f40-b09c-627f-3f65-ff706e3d502a@gmx.de> Hello, I am interested in the communication scheme Petsc uses for the multiplication of dense, parallel distributed matrices in MatMatMult. Is it based on collective communication or on single calls to MPI_Send/Recv, and is it done in a blocking or a non-blocking way? How do you make sure that the processes do not receive/buffer too much data at the same time? Best Regards, Hannes Brandt From bsmith at petsc.dev Wed Dec 1 08:32:13 2021 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 1 Dec 2021 09:32:13 -0500 Subject: [petsc-users] Communication in parallel MatMatMult In-Reply-To: <4ac57f40-b09c-627f-3f65-ff706e3d502a@gmx.de> References: <4ac57f40-b09c-627f-3f65-ff706e3d502a@gmx.de> Message-ID: <31A10569-CC27-4EE7-81BA-BD28332C6479@petsc.dev> PETSc uses Elemental to perform such operations. PetscErrorCode MatMatMultNumeric_Elemental(Mat A,Mat B,Mat C) { Mat_Elemental *a = (Mat_Elemental*)A->data; Mat_Elemental *b = (Mat_Elemental*)B->data; Mat_Elemental *c = (Mat_Elemental*)C->data; PetscElemScalar one = 1,zero = 0; PetscFunctionBegin; { /* Scoping so that constructor is called before pointer is returned */ El::Gemm(El::NORMAL,El::NORMAL,one,*a->emat,*b->emat,zero,*c->emat); } C->assembled = PETSC_TRUE; PetscFunctionReturn(0); } You can consult Elemental's documentation and papers for how it manages the communication. Barry > On Dec 1, 2021, at 8:33 AM, Hannes Brandt wrote: > > Hello, > > > I am interested in the communication scheme Petsc uses for the multiplication of dense, parallel distributed matrices in MatMatMult. Is it based on collective communication or on single calls to MPI_Send/Recv, and is it done in a blocking or a non-blocking way? How do you make sure that the processes do not receive/buffer too much data at the same time? > > > Best Regards, > > Hannes Brandt > > From knepley at gmail.com Wed Dec 1 08:45:56 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 1 Dec 2021 09:45:56 -0500 Subject: [petsc-users] Communication in parallel MatMatMult In-Reply-To: <31A10569-CC27-4EE7-81BA-BD28332C6479@petsc.dev> References: <4ac57f40-b09c-627f-3f65-ff706e3d502a@gmx.de> <31A10569-CC27-4EE7-81BA-BD28332C6479@petsc.dev> Message-ID: On Wed, Dec 1, 2021 at 9:32 AM Barry Smith wrote: > > PETSc uses Elemental to perform such operations. > > PetscErrorCode MatMatMultNumeric_Elemental(Mat A,Mat B,Mat C) > { > Mat_Elemental *a = (Mat_Elemental*)A->data; > Mat_Elemental *b = (Mat_Elemental*)B->data; > Mat_Elemental *c = (Mat_Elemental*)C->data; > PetscElemScalar one = 1,zero = 0; > > PetscFunctionBegin; > { /* Scoping so that constructor is called before pointer is returned */ > El::Gemm(El::NORMAL,El::NORMAL,one,*a->emat,*b->emat,zero,*c->emat); > } > C->assembled = PETSC_TRUE; > PetscFunctionReturn(0); > } > > > You can consult Elemental's documentation and papers for how it manages > the communication. > Elemental uses all collective communication operations as a fundamental aspect of the design. Thanks, Matt > Barry > > > > On Dec 1, 2021, at 8:33 AM, Hannes Brandt wrote: > > > > Hello, > > > > > > I am interested in the communication scheme Petsc uses for the > multiplication of dense, parallel distributed matrices in MatMatMult. 
Is it > based on collective communication or on single calls to MPI_Send/Recv, and > is it done in a blocking or a non-blocking way? How do you make sure that > the processes do not receive/buffer too much data at the same time? > > > > > > Best Regards, > > > > Hannes Brandt > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bourdin at mcmaster.ca Wed Dec 1 12:54:55 2021 From: bourdin at mcmaster.ca (Blaise Bourdin) Date: Wed, 1 Dec 2021 18:54:55 +0000 Subject: [petsc-users] Output data using ExodusIIViewer In-Reply-To: <7d14bab6-7df9-4804-9bb5-c810c64a7a86@edison.tech> References: <48B7C2BC-B133-4ADB-A269-56B666A52C81@mcmaster.ca> <7d14bab6-7df9-4804-9bb5-c810c64a7a86@edison.tech> Message-ID: An HTML attachment was scrubbed... URL: From bourdin at mcmaster.ca Wed Dec 1 13:48:21 2021 From: bourdin at mcmaster.ca (Blaise Bourdin) Date: Wed, 1 Dec 2021 19:48:21 +0000 Subject: [petsc-users] Output data using ExodusIIViewer In-Reply-To: References: <48B7C2BC-B133-4ADB-A269-56B666A52C81@mcmaster.ca> <7d14bab6-7df9-4804-9bb5-c810c64a7a86@edison.tech> Message-ID: <76929DE4-AFA0-4E60-9D32-E9E7A599D5DE@mcmaster.ca> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: exo2.c Type: application/octet-stream Size: 2899 bytes Desc: exo2.c URL: From knepley at gmail.com Wed Dec 1 13:54:32 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 1 Dec 2021 14:54:32 -0500 Subject: [petsc-users] Output data using ExodusIIViewer In-Reply-To: <76929DE4-AFA0-4E60-9D32-E9E7A599D5DE@mcmaster.ca> References: <48B7C2BC-B133-4ADB-A269-56B666A52C81@mcmaster.ca> <7d14bab6-7df9-4804-9bb5-c810c64a7a86@edison.tech> <76929DE4-AFA0-4E60-9D32-E9E7A599D5DE@mcmaster.ca> Message-ID: Blaise, is that stuff we should be doing in the viewer? Thanks, Matt On Wed, Dec 1, 2021 at 2:48 PM Blaise Bourdin wrote: > David, > > Here is a modified example. > Exodus needs some additional work prior to saving fields. See the attached > modified example. > > Blaise > > > On Dec 1, 2021, at 1:54 PM, Blaise Bourdin wrote: > > OK, let me have a look. > Blaise > > > On Nov 30, 2021, at 7:31 PM, David Andrs wrote: > > I see. I added a "Cell Sets" label like so: > ? > ? DMCreateLabel(dm, "Cell Sets"); > DMLabel cs_label; > DMGetLabel(dm, "Cell Sets", &cs_label); > DMLabelAddStratum(cs_label, 0); > PetscInt idxs[] = { 1 }; > IS is; > ISCreateGeneral(comm, 1, idxs, PETSC_COPY_VALUES, &is); > DMLabelSetStratumIS(cs_label, 0, is); > ISDestroy(&is); > > Note, that I have only a single element (Quad4) in the mesh and I am just > trying to get this working, so I understand what needs to happen for larger > meshes. > ? > ?This got me past the segfault, but now I see: > ? > [?0]PETSC ERROR: Argument out of range > [0]PETSC ERROR: Number of vertices 1 in dimension 2 has no ExodusII type > > So, I assume I need to do something ?more to make it work. > ? > ?David > ?? > On Nov 30 2021, at 11:39 AM, Blaise Bourdin wrote: > > It looks like your DM cannot be saved in exodus format as such. The exodus > format requires that all cells be part of a single block (defined by ?Cell > Set? labels), and that the cell sets consists of sequentially numbered > cells. > Can you see if that is enough? 
If not, I will go through your example > > Blaise > > On Nov 30, 2021, at 9:50 AM, David Andrs wrote: > > Hello! > > I am trying to store data into an ExodusII file using the ExodusIIViewer, > but running into a segfault inside PETSc. Attached is a minimal example > showing the problem. It can very much be that I am missing something > obvious. However, if I change the code to VTKViewer I get the desired > output file. > > Machine: MacBook Pro 2019 > OS version/type: Darwin notak.local 21.1.0 Darwin Kernel Version 21.1.0: > Wed Oct 13 17:33:23 PDT 2021; root:xnu-8019.41.5~1/RELEASE_X86_64 x86_64 > PETSc: Petsc Release Version 3.16.1, Nov 01, 2021 > MPI: MPICH 3.4.2 > Compiler: clang-12 > > Call stack (not sure how relevant that is since it is from opt version): > > * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS > (code=1, address=0xc) > frame #0: 0x0000000102303ba9 > libpetsc.3.16.dylib`DMView_PlexExodusII(dm=, > viewer=) at plexexodusii.c:457:45 [opt] > 454 else if (degree == 2) nodes[cs] = nodesHexP2; > 455 } > 456 /* Compute the number of cells not in the connectivity table */ > -> 457 cellsNotInConnectivity -= nodes[cs][3]*csSize; > 458 > 459 ierr = ISRestoreIndices(stratumIS, &cells);CHKERRQ(ierr); > 460 ierr = ISDestroy(&stratumIS);CHKERRQ(ierr); > > With regards, > > David Andrs > > > > -- > Professor, Department of Mathematics & Statistics > Hamilton Hall room 409A, McMaster University > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > Tel. +1 (905) 525 9140 ext. 27243 > > > -- > Professor, Department of Mathematics & Statistics > Hamilton Hall room 409A, McMaster University > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > Tel. +1 (905) 525 9140 ext. 27243 > > > -- > Professor, Department of Mathematics & Statistics > Hamilton Hall room 409A, McMaster University > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > Tel. +1 (905) 525 9140 ext. 27243 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From karthikeyan.chockalingam at stfc.ac.uk Thu Dec 2 02:33:16 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Thu, 2 Dec 2021 08:33:16 +0000 Subject: [petsc-users] Unstructured mesh Message-ID: Hello, Are there example tutorials on unstructured mesh in ksp? Can some of them run on gpus? Kind regards, Karthik. This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Thu Dec 2 04:57:14 2021 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 2 Dec 2021 05:57:14 -0500 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: Message-ID: On Thu, Dec 2, 2021 at 3:33 AM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > Hello, > > > > Are there example tutorials on unstructured mesh in ksp? Can some of them > run on gpus? > There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The solver can run on the GPU, but the vector/matrix FEM assembly does not. I am working on that now. Thanks, Matt > Kind regards, > > Karthik. > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrsd at gmail.com Thu Dec 2 09:02:22 2021 From: andrsd at gmail.com (David Andrs) Date: Thu, 2 Dec 2021 08:02:22 -0700 Subject: [petsc-users] Output data using ExodusIIViewer In-Reply-To: <76929DE4-AFA0-4E60-9D32-E9E7A599D5DE@mcmaster.ca> References: <48B7C2BC-B133-4ADB-A269-56B666A52C81@mcmaster.ca> <7d14bab6-7df9-4804-9bb5-c810c64a7a86@edison.tech> <76929DE4-AFA0-4E60-9D32-E9E7A599D5DE@mcmaster.ca> Message-ID: Blaise, ? ?thank you very much for you help. This helps a lot. ? ?By the way, I was not able to open the produced file in paraview 5.9.1 (failed with "EX_INQ_TIME failed"). However, ncdump seems correct. At least, I do not immediately see anything wrong there. I will try to look into this problem myself, now that I know it is ok to do direct exodusII calls. ? Thank you again,? ? ?David ? On Dec 1 2021, at 12:48 PM, Blaise Bourdin wrote: > > > David, > > > > Here is a modified example. > > Exodus needs some additional work prior to saving fields. See the attached modified example. > > > > Blaise > > > > > > > > > > > On Dec 1, 2021, at 1:54 PM, Blaise Bourdin wrote: > > > > > > > > > > OK, let me have a look. > > > > Blaise > > > > > > > > > > > > > > > > On Nov 30, 2021, at 7:31 PM, David Andrs wrote: > > > > > > > > > > > > I see. I added a "Cell Sets" label like so: > > > > > > ? > > > > > > ? DMCreateLabel(dm, "Cell Sets"); > > > > > > DMLabel cs_label; > > > > > > DMGetLabel(dm, "Cell Sets", &cs_label); > > > > > > DMLabelAddStratum(cs_label, 0); > > > > > > PetscInt idxs[] = { 1 }; > > > > > > IS is; > > > > > > ISCreateGeneral(comm, 1, idxs, PETSC_COPY_VALUES, &is); > > > > > > DMLabelSetStratumIS(cs_label, 0, is); > > > > > > ISDestroy(&is); > > > > > > > > > Note, that I have only a single element (Quad4) in the mesh and I am just trying to get this working, so I understand what needs to happen for larger meshes. > > > > > > ? 
> > > > > > ?This got me past the segfault, but now I see: > > > > > > ? > > > > > > [?0]PETSC ERROR: Argument out of range > > > > > > [0]PETSC ERROR: Number of vertices 1 in dimension 2 has no ExodusII type > > > > > > > > > So, I assume I need to do something ?more to make it work. > > > > > > ? > > > > > > ?David > > > > > > ?? > > > > > > On Nov 30 2021, at 11:39 AM, Blaise Bourdin wrote: > > > > > > > > > > > It looks like your DM cannot be saved in exodus format as such. The exodus format requires that all cells be part of a single block (defined by ?Cell Set? labels), and that the cell sets consists of sequentially numbered cells. > > > > > > > > Can you see if that is enough? If not, I will go through your example > > > > > > > > > > > > > > > > Blaise > > > > > > > > > > > > > > > > > > > > > > > > > > On Nov 30, 2021, at 9:50 AM, David Andrs wrote: > > > > > > > > > > > > > > > > > > > > > > > > > Hello! > > > > > > > > > > > > > > > > > > > > I am trying to store data into an ExodusII file using the ExodusIIViewer, but running into a segfault inside PETSc. Attached is a minimal example showing the problem. It can very much be that I am missing something obvious. However, if I change the code to VTKViewer I get the desired output file. > > > > > > > > > > > > > > > > > > > > Machine: MacBook Pro 2019 > > > > > > > > > > OS version/type: Darwin notak.local 21.1.0 Darwin Kernel Version 21.1.0: Wed Oct 13 17:33:23 PDT 2021; root:xnu-8019.41.5~1/RELEASE_X86_64 x86_64 > > > > > > > > > > PETSc: Petsc Release Version 3.16.1, Nov 01, 2021 > > > > > > > > > > MPI: MPICH 3.4.2 > > > > > > > > > > Compiler: clang-12 > > > > > > > > > > > > > > > > > > > > Call stack (not sure how relevant that is since it is from opt version): > > > > > > > > > > > > > > > > > > > > * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0xc) > > > > > > > > > > frame #0: 0x0000000102303ba9 libpetsc.3.16.dylib`DMView_PlexExodusII(dm=, viewer=) at plexexodusii.c:457:45 [opt] > > > > > > > > > > 454 else if (degree == 2) nodes[cs] = nodesHexP2; > > > > > > > > > > 455 } > > > > > > > > > > 456 /* Compute the number of cells not in the connectivity table */ > > > > > > > > > > -> 457 cellsNotInConnectivity -= nodes[cs][3]*csSize; > > > > > > > > > > 458 > > > > > > > > > > 459 ierr = ISRestoreIndices(stratumIS, &cells);CHKERRQ(ierr); > > > > > > > > > > 460 ierr = ISDestroy(&stratumIS);CHKERRQ(ierr); > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > With regards, > > > > > > > > > > > > > > > > > > > > > > > > > David Andrs > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > Professor, Department of Mathematics & Statistics > > > > > > > > Hamilton Hall room 409A, McMaster University > > > > > > > > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > > > > > > > > Tel. +1 (905) 525 9140 ext. 27243 > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > Professor, Department of Mathematics & Statistics > > > > Hamilton Hall room 409A, McMaster University > > > > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > > > > Tel. +1 (905) 525 9140 ext. 
27243 > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > Professor, Department of Mathematics & Statistics > > Hamilton Hall room 409A, McMaster University > > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > > Tel. +1 (905) 525 9140 ext. 27243 > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kuang-chung.wang at intel.com Thu Dec 2 14:06:21 2021 From: kuang-chung.wang at intel.com (Wang, Kuang-chung) Date: Thu, 2 Dec 2021 20:06:21 +0000 Subject: [petsc-users] Orthogonality of eigenvectors in SLEPC In-Reply-To: References: Message-ID: Thanks Jose for your prompt reply. I did find my matrix highly non-hermitian. By forcing the solver to be hermtian, the orthogonality was restored. But I do need to root cause why my matrix is non-hermitian in the first place. Along the way, I highly recommend MatIsHermitian() function or combining functions like MatHermitianTranspose () MatAXPY MatNorm to determine the hermiticity to safeguard our program. Best, Kuang -----Original Message----- From: Jose E. Roman Sent: Wednesday, November 24, 2021 6:20 AM To: Wang, Kuang-chung Cc: petsc-users at mcs.anl.gov; Obradovic, Borna ; Cea, Stephen M Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC In Hermitian eigenproblems orthogonality of eigenvectors is guaranteed/enforced. But you are solving the problem as non-Hermitian. If your matrix is Hermitian, make sure you solve it as a HEP, and make sure that your matrix is numerically Hermitian. If your matrix is non-Hermitian, then you cannot expect the eigenvectors to be orthogonal. What you can do in this case is get an orthogonal basis of the computed eigenspace, see https://slepc.upv.es/documentation/current/docs/manualpages/EPS/EPSGetInvariantSubspace.html By the way, version 3.7 is more than 5 years old, it is better if you can upgrade to a more recent version. Jose > El 24 nov 2021, a las 7:15, Wang, Kuang-chung escribi?: > > Dear Jose : > I came across this thread describing issue using krylovschur and finding eigenvectors non-orthogonal. > https://lists.mcs.anl.gov/pipermail/petsc-users/2014-October/023360.ht > ml > > I furthermore have tested by reducing the tolerance as highlighted below from 1e-12 to 1e-16 with no luck. > Could you please suggest options/sources to try out ? > Thanks a lot for sharing your knowledge! > > Sincere, > Kuang-Chung Wang > > ======================================================= > Kuang-Chung Wang > Computational and Modeling Technology > Intel Corporation > Hillsboro OR 97124 > ======================================================= > > Here are more info: > ? slepc/3.7.4 > ? 
output message from by doing EPSView(eps,PETSC_NULL): > EPS Object: 1 MPI processes > type: krylovschur > Krylov-Schur: 50% of basis vectors kept after restart > Krylov-Schur: using the locking variant > problem type: non-hermitian eigenvalue problem > selected portion of the spectrum: closest to target: 20.1161 (in magnitude) > number of eigenvalues (nev): 40 > number of column vectors (ncv): 81 > maximum dimension of projected problem (mpd): 81 > maximum number of iterations: 1000 > tolerance: 1e-12 > convergence test: relative to the eigenvalue BV Object: 1 MPI > processes > type: svec > 82 columns of global length 2988 > vector orthogonalization method: classical Gram-Schmidt > orthogonalization refinement: always > block orthogonalization method: Gram-Schmidt > doing matmult as a single matrix-matrix product DS Object: 1 MPI > processes > type: nhep > ST Object: 1 MPI processes > type: sinvert > shift: 20.1161 > number of matrices: 1 > KSP Object: (st_) 1 MPI processes > type: preonly > maximum iterations=1000, initial guess is zero > tolerances: relative=1.12005e-09, absolute=1e-50, divergence=10000. > left preconditioning > using NONE norm type for convergence test > PC Object: (st_) 1 MPI processes > type: lu > LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: nd > factor fill ratio given 0., needed 0. > Factored matrix follows: > Mat Object: 1 MPI processes > type: seqaij > rows=2988, cols=2988 > package used to perform factorization: mumps > total: nonzeros=614160, allocated nonzeros=614160 > total number of mallocs used during MatSetValues calls =0 > MUMPS run parameters: > SYM (matrix type): 0 > PAR (host participation): 1 > ICNTL(1) (output for error): 6 > ICNTL(2) (output of diagnostic msg): 0 > ICNTL(3) (output for global info): 0 > ICNTL(4) (level of printing): 0 > ICNTL(5) (input mat struct): 0 > ICNTL(6) (matrix prescaling): 7 > ICNTL(7) (sequential matrix ordering):7 > ICNTL(8) (scaling strategy): 77 > ICNTL(10) (max num of refinements): 0 > ICNTL(11) (error analysis): 0 > ICNTL(12) (efficiency control): 1 > ICNTL(13) (efficiency control): 0 > ICNTL(14) (percentage of estimated workspace increase): 20 > ICNTL(18) (input mat struct): 0 > ICNTL(19) (Schur complement info): 0 > ICNTL(20) (rhs sparse pattern): 0 > ICNTL(21) (solution struct): 0 > ICNTL(22) (in-core/out-of-core facility): 0 > ICNTL(23) (max size of memory can be allocated locally):0 > ICNTL(24) (detection of null pivot rows): 0 > ICNTL(25) (computation of a null space basis): 0 > ICNTL(26) (Schur options for rhs or solution): 0 > ICNTL(27) (experimental parameter): -24 > ICNTL(28) (use parallel or sequential ordering): 1 > ICNTL(29) (parallel ordering): 0 > ICNTL(30) (user-specified set of entries in inv(A)): 0 > ICNTL(31) (factors is discarded in the solve phase): 0 > ICNTL(33) (compute determinant): 0 > CNTL(1) (relative pivoting threshold): 0.01 > CNTL(2) (stopping criterion of refinement): 1.49012e-08 > CNTL(3) (absolute pivoting threshold): 0. > CNTL(4) (value of static pivoting): -1. > CNTL(5) (fixation for null pivots): 0. > RINFO(1) (local estimated flops for the elimination after analysis): > [0] 8.15668e+07 > RINFO(2) (local estimated flops for the assembly after factorization): > [0] 892584. 
> RINFO(3) (local estimated flops for the elimination after factorization): > [0] 8.15668e+07 > INFO(15) (estimated size of (in MB) MUMPS internal data for running numerical factorization): > [0] 16 > INFO(16) (size of (in MB) MUMPS internal data used during numerical factorization): > [0] 16 > INFO(23) (num of pivots eliminated on this processor after factorization): > [0] 2988 > RINFOG(1) (global estimated flops for the elimination after analysis): 8.15668e+07 > RINFOG(2) (global estimated flops for the assembly after factorization): 892584. > RINFOG(3) (global estimated flops for the elimination after factorization): 8.15668e+07 > (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): (0.,0.)*(2^0) > INFOG(3) (estimated real workspace for factors on all processors after analysis): 614160 > INFOG(4) (estimated integer workspace for factors on all processors after analysis): 31971 > INFOG(5) (estimated maximum front size in the complete tree): 246 > INFOG(6) (number of nodes in the complete tree): 197 > INFOG(7) (ordering option effectively use after analysis): 2 > INFOG(8) (structural symmetry in percent of the permuted matrix after analysis): 100 > INFOG(9) (total real/complex workspace to store the matrix factors after factorization): 614160 > INFOG(10) (total integer space store the matrix factors after factorization): 31971 > INFOG(11) (order of largest frontal matrix after factorization): 246 > INFOG(12) (number of off-diagonal pivots): 0 > INFOG(13) (number of delayed pivots after factorization): 0 > INFOG(14) (number of memory compress after factorization): 0 > INFOG(15) (number of steps of iterative refinement after solution): 0 > INFOG(16) (estimated size (in MB) of all MUMPS internal data for factorization after analysis: value on the most memory consuming processor): 16 > INFOG(17) (estimated size of all MUMPS internal data for factorization after analysis: sum over all processors): 16 > INFOG(18) (size of all MUMPS internal data allocated during factorization: value on the most memory consuming processor): 16 > INFOG(19) (size of all MUMPS internal data allocated during factorization: sum over all processors): 16 > INFOG(20) (estimated number of entries in the factors): 614160 > INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 14 > INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 14 > INFOG(23) (after analysis: value of ICNTL(6) effectively used): 0 > INFOG(24) (after analysis: value of ICNTL(12) effectively used): 1 > INFOG(25) (after factorization: number of pivots modified by static pivoting): 0 > INFOG(28) (after factorization: number of null pivots encountered): 0 > INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 614160 > INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 13, 13 > INFOG(32) (after analysis: type of analysis done): 1 > INFOG(33) (value used for ICNTL(8)): 7 > INFOG(34) (exponent of the determinant if determinant is requested): 0 > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=2988, cols=2988 > total: nonzeros=151488, allocated nonzeros=151488 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 996 nodes, limit used is 5 From hzhang at mcs.anl.gov Thu Dec 2 16:18:26 2021 From: hzhang at mcs.anl.gov (Zhang, Hong) Date: Thu, 2 Dec 2021 22:18:26 +0000 Subject: [petsc-users] 
Orthogonality of eigenvectors in SLEPC In-Reply-To: References: Message-ID: Kuang, PETSc supports MatIsHermitian() for SeqAIJ, IS and SeqSBAIJ matrix types. What is your matrix type? We should be able to add this support to other mat types. Hong ________________________________ From: petsc-users on behalf of Wang, Kuang-chung Sent: Thursday, December 2, 2021 2:06 PM To: Jose E. Roman Cc: petsc-users at mcs.anl.gov ; Obradovic, Borna ; Cea, Stephen M Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC Thanks Jose for your prompt reply. I did find my matrix highly non-hermitian. By forcing the solver to be hermtian, the orthogonality was restored. But I do need to root cause why my matrix is non-hermitian in the first place. Along the way, I highly recommend MatIsHermitian() function or combining functions like MatHermitianTranspose () MatAXPY MatNorm to determine the hermiticity to safeguard our program. Best, Kuang -----Original Message----- From: Jose E. Roman Sent: Wednesday, November 24, 2021 6:20 AM To: Wang, Kuang-chung Cc: petsc-users at mcs.anl.gov; Obradovic, Borna ; Cea, Stephen M Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC In Hermitian eigenproblems orthogonality of eigenvectors is guaranteed/enforced. But you are solving the problem as non-Hermitian. If your matrix is Hermitian, make sure you solve it as a HEP, and make sure that your matrix is numerically Hermitian. If your matrix is non-Hermitian, then you cannot expect the eigenvectors to be orthogonal. What you can do in this case is get an orthogonal basis of the computed eigenspace, see https://slepc.upv.es/documentation/current/docs/manualpages/EPS/EPSGetInvariantSubspace.html By the way, version 3.7 is more than 5 years old, it is better if you can upgrade to a more recent version. Jose > El 24 nov 2021, a las 7:15, Wang, Kuang-chung escribi?: > > Dear Jose : > I came across this thread describing issue using krylovschur and finding eigenvectors non-orthogonal. > https://lists.mcs.anl.gov/pipermail/petsc-users/2014-October/023360.ht > ml > > I furthermore have tested by reducing the tolerance as highlighted below from 1e-12 to 1e-16 with no luck. > Could you please suggest options/sources to try out ? > Thanks a lot for sharing your knowledge! > > Sincere, > Kuang-Chung Wang > > ======================================================= > Kuang-Chung Wang > Computational and Modeling Technology > Intel Corporation > Hillsboro OR 97124 > ======================================================= > > Here are more info: > ? slepc/3.7.4 > ? 
output message from by doing EPSView(eps,PETSC_NULL): > EPS Object: 1 MPI processes > type: krylovschur > Krylov-Schur: 50% of basis vectors kept after restart > Krylov-Schur: using the locking variant > problem type: non-hermitian eigenvalue problem > selected portion of the spectrum: closest to target: 20.1161 (in magnitude) > number of eigenvalues (nev): 40 > number of column vectors (ncv): 81 > maximum dimension of projected problem (mpd): 81 > maximum number of iterations: 1000 > tolerance: 1e-12 > convergence test: relative to the eigenvalue BV Object: 1 MPI > processes > type: svec > 82 columns of global length 2988 > vector orthogonalization method: classical Gram-Schmidt > orthogonalization refinement: always > block orthogonalization method: Gram-Schmidt > doing matmult as a single matrix-matrix product DS Object: 1 MPI > processes > type: nhep > ST Object: 1 MPI processes > type: sinvert > shift: 20.1161 > number of matrices: 1 > KSP Object: (st_) 1 MPI processes > type: preonly > maximum iterations=1000, initial guess is zero > tolerances: relative=1.12005e-09, absolute=1e-50, divergence=10000. > left preconditioning > using NONE norm type for convergence test > PC Object: (st_) 1 MPI processes > type: lu > LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: nd > factor fill ratio given 0., needed 0. > Factored matrix follows: > Mat Object: 1 MPI processes > type: seqaij > rows=2988, cols=2988 > package used to perform factorization: mumps > total: nonzeros=614160, allocated nonzeros=614160 > total number of mallocs used during MatSetValues calls =0 > MUMPS run parameters: > SYM (matrix type): 0 > PAR (host participation): 1 > ICNTL(1) (output for error): 6 > ICNTL(2) (output of diagnostic msg): 0 > ICNTL(3) (output for global info): 0 > ICNTL(4) (level of printing): 0 > ICNTL(5) (input mat struct): 0 > ICNTL(6) (matrix prescaling): 7 > ICNTL(7) (sequential matrix ordering):7 > ICNTL(8) (scaling strategy): 77 > ICNTL(10) (max num of refinements): 0 > ICNTL(11) (error analysis): 0 > ICNTL(12) (efficiency control): 1 > ICNTL(13) (efficiency control): 0 > ICNTL(14) (percentage of estimated workspace increase): 20 > ICNTL(18) (input mat struct): 0 > ICNTL(19) (Schur complement info): 0 > ICNTL(20) (rhs sparse pattern): 0 > ICNTL(21) (solution struct): 0 > ICNTL(22) (in-core/out-of-core facility): 0 > ICNTL(23) (max size of memory can be allocated locally):0 > ICNTL(24) (detection of null pivot rows): 0 > ICNTL(25) (computation of a null space basis): 0 > ICNTL(26) (Schur options for rhs or solution): 0 > ICNTL(27) (experimental parameter): -24 > ICNTL(28) (use parallel or sequential ordering): 1 > ICNTL(29) (parallel ordering): 0 > ICNTL(30) (user-specified set of entries in inv(A)): 0 > ICNTL(31) (factors is discarded in the solve phase): 0 > ICNTL(33) (compute determinant): 0 > CNTL(1) (relative pivoting threshold): 0.01 > CNTL(2) (stopping criterion of refinement): 1.49012e-08 > CNTL(3) (absolute pivoting threshold): 0. > CNTL(4) (value of static pivoting): -1. > CNTL(5) (fixation for null pivots): 0. > RINFO(1) (local estimated flops for the elimination after analysis): > [0] 8.15668e+07 > RINFO(2) (local estimated flops for the assembly after factorization): > [0] 892584. 
> RINFO(3) (local estimated flops for the elimination after factorization): > [0] 8.15668e+07 > INFO(15) (estimated size of (in MB) MUMPS internal data for running numerical factorization): > [0] 16 > INFO(16) (size of (in MB) MUMPS internal data used during numerical factorization): > [0] 16 > INFO(23) (num of pivots eliminated on this processor after factorization): > [0] 2988 > RINFOG(1) (global estimated flops for the elimination after analysis): 8.15668e+07 > RINFOG(2) (global estimated flops for the assembly after factorization): 892584. > RINFOG(3) (global estimated flops for the elimination after factorization): 8.15668e+07 > (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): (0.,0.)*(2^0) > INFOG(3) (estimated real workspace for factors on all processors after analysis): 614160 > INFOG(4) (estimated integer workspace for factors on all processors after analysis): 31971 > INFOG(5) (estimated maximum front size in the complete tree): 246 > INFOG(6) (number of nodes in the complete tree): 197 > INFOG(7) (ordering option effectively use after analysis): 2 > INFOG(8) (structural symmetry in percent of the permuted matrix after analysis): 100 > INFOG(9) (total real/complex workspace to store the matrix factors after factorization): 614160 > INFOG(10) (total integer space store the matrix factors after factorization): 31971 > INFOG(11) (order of largest frontal matrix after factorization): 246 > INFOG(12) (number of off-diagonal pivots): 0 > INFOG(13) (number of delayed pivots after factorization): 0 > INFOG(14) (number of memory compress after factorization): 0 > INFOG(15) (number of steps of iterative refinement after solution): 0 > INFOG(16) (estimated size (in MB) of all MUMPS internal data for factorization after analysis: value on the most memory consuming processor): 16 > INFOG(17) (estimated size of all MUMPS internal data for factorization after analysis: sum over all processors): 16 > INFOG(18) (size of all MUMPS internal data allocated during factorization: value on the most memory consuming processor): 16 > INFOG(19) (size of all MUMPS internal data allocated during factorization: sum over all processors): 16 > INFOG(20) (estimated number of entries in the factors): 614160 > INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 14 > INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 14 > INFOG(23) (after analysis: value of ICNTL(6) effectively used): 0 > INFOG(24) (after analysis: value of ICNTL(12) effectively used): 1 > INFOG(25) (after factorization: number of pivots modified by static pivoting): 0 > INFOG(28) (after factorization: number of null pivots encountered): 0 > INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 614160 > INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 13, 13 > INFOG(32) (after analysis: type of analysis done): 1 > INFOG(33) (value used for ICNTL(8)): 7 > INFOG(34) (exponent of the determinant if determinant is requested): 0 > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=2988, cols=2988 > total: nonzeros=151488, allocated nonzeros=151488 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 996 nodes, limit used is 5 -------------- next part -------------- An HTML attachment was scrubbed... 
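For matrix types where MatIsHermitian() is not available, the explicit check suggested above can be
written with MatHermitianTranspose(), MatAXPY() and MatNorm(). Below is a minimal sketch only
(assuming a square, assembled matrix A; the helper name CheckHermitian is made up for illustration):

#include <petscmat.h>

/* Report how far A is from being Hermitian by computing ||A^H - A||_F */
PetscErrorCode CheckHermitian(Mat A, PetscReal tol)
{
  Mat            AH;
  PetscReal      nrm;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatHermitianTranspose(A, MAT_INITIAL_MATRIX, &AH);CHKERRQ(ierr);   /* AH = conj(A)^T */
  ierr = MatAXPY(AH, -1.0, A, DIFFERENT_NONZERO_STRUCTURE);CHKERRQ(ierr);   /* AH = A^H - A   */
  ierr = MatNorm(AH, NORM_FROBENIUS, &nrm);CHKERRQ(ierr);
  ierr = PetscPrintf(PetscObjectComm((PetscObject)A), "||A^H - A||_F = %g\n", (double)nrm);CHKERRQ(ierr);
  if (nrm > tol) {
    ierr = PetscPrintf(PetscObjectComm((PetscObject)A), "matrix is not Hermitian to tolerance %g\n", (double)tol);CHKERRQ(ierr);
  }
  ierr = MatDestroy(&AH);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

If the norm is at round-off level, the problem can then be solved as a HEP
(EPSSetProblemType(eps, EPS_HEP)), which enforces orthogonal eigenvectors as noted earlier in the thread.
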
URL: From pierre.seize at onera.fr Fri Dec 3 02:52:54 2021 From: pierre.seize at onera.fr (Pierre Seize) Date: Fri, 3 Dec 2021 09:52:54 +0100 Subject: [petsc-users] How to use F and G for TS Message-ID: Hello, I want to set a TS object for the time integration of my FV CFD solver. The equation is M dQ/dt = f(Q) where M is a diagonal mass matrix filled with the cell volumes from my FV discretisation. I've read the PETSc manual and I found some interesting mails in the petsc-users archive, but I still do not understand something. To me, there is three ways I could set my TS : 1. F(t, x, x') = Mx' - f(x)?? and G(t, x) = 0 (default) 2. F(t, x, x') = Mx'????????? and G(t, x) = f(x) 3. F(t, x, x') = x' (default) and G(t, x) = M^{-1} f(x) >From (https://lists.mcs.anl.gov/pipermail/petsc-dev/2017-October/021545.html), I think that unless I'm using an IMEX method, whatever F and G, it does F <-- F - G internally, but I would like to be sure. Will there be a difference be if I use an explicit method, as Euler or RK ? What about implicit method such as BEuler or Theta methods ? If I use an implicit method (beuler), what happens if I don't give F' and/or G' ? Are their matrix-vector product approximated with finite difference ? What I understand is that for implicit-explicit methods, "G is treated explicitly while F is treated implicitly". In this case, am I right to assume it's useless to give the RHS Jacobian ? Then, when is G' used ? If I do not use an IMEX method, are the 3 formulations equivalent ? Thank you for your help Pierre Seize -------------- next part -------------- An HTML attachment was scrubbed... URL: From karthikeyan.chockalingam at stfc.ac.uk Fri Dec 3 06:57:12 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Fri, 3 Dec 2021 12:57:12 +0000 Subject: [petsc-users] Hypre (GPU) memory consumption Message-ID: <938680ED-65FD-49C1-BBC7-FC84BBECD569@stfc.ac.uk> Hello, I am able to successfully run hypre on gpus but the problem seems to consumption a lot of memory. I ran ksp/ksp/tutorial/ex45 on a grid of 320 x 320 x 320 using 6 gpus by the following options mpirun -n 6 ./ex45 -da_grid_x 320 -da_grid_y 320 -da_grid_z 320 -dm_mat_type hypre -dm_vec_type cuda -ksp_type cg -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor -log_view -malloc_dump -memory_view -malloc_log From the log_view out (also attached) I get the following memory consumption: Summary of Memory Usage in PETSc Maximum (over computational time) process memory: total 9.7412e+09 max 1.6999e+09 min 1.5368e+09 Current process memory: total 8.1640e+09 max 1.4359e+09 min 1.2733e+09 Maximum (over computational time) space PetscMalloc()ed: total 7.7661e+08 max 1.3401e+08 min 1.1148e+08 Current space PetscMalloc()ed: total 1.8356e+06 max 3.0594e+05 min 3.0594e+05 Each gpu is a Nvidia Tesla V100 ? even using 4 gpus the system runs out of cuda memory alloc for the above problem. From the above listed memory output I believe the problem should be able to run on one gpu. Is the memory usage of hypre not listed include above? Best, Karthik. This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. 
UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ksp_cg_pc_hypre_ex45_N320_gpu_6.txt URL: From knepley at gmail.com Fri Dec 3 07:28:06 2021 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 3 Dec 2021 08:28:06 -0500 Subject: [petsc-users] Hypre (GPU) memory consumption In-Reply-To: <938680ED-65FD-49C1-BBC7-FC84BBECD569@stfc.ac.uk> References: <938680ED-65FD-49C1-BBC7-FC84BBECD569@stfc.ac.uk> Message-ID: On Fri, Dec 3, 2021 at 7:57 AM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > Hello, > > > > I am able to successfully run hypre on gpus but the problem seems to > consumption a lot of memory. I ran ksp/ksp/tutorial/ex45 on a grid of 320 > x 320 x 320 using 6 gpus by the following options > > > > mpirun -n 6 ./ex45 -da_grid_x 320 -da_grid_y 320 -da_grid_z 320 > -dm_mat_type hypre -dm_vec_type cuda -ksp_type cg -pc_type hypre > -pc_hypre_type boomeramg -ksp_monitor -log_view -malloc_dump -memory_view > -malloc_log > > > > From the log_view out (also attached) I get the following memory > consumption: > > > > Summary of Memory Usage in PETSc > > Maximum (over computational time) process memory: total > 9.7412e+09 max 1.6999e+09 min 1.5368e+09 > > Current process memory: > total 8.1640e+09 max 1.4359e+09 min 1.2733e+09 > > Maximum (over computational time) space PetscMalloc()ed: total 7.7661e+08 > max 1.3401e+08 min 1.1148e+08 > > Current space PetscMalloc()ed: > total 1.8356e+06 max 3.0594e+05 min > 3.0594e+05 > > > > Each gpu is a Nvidia Tesla V100 ? even using 4 gpus the system runs out of > cuda memory alloc for the above problem. From the above listed memory > output I believe the problem should be able to run on one gpu. Is the > memory usage of hypre not listed include above? > Yes, we have no way of knowing how much memory Hypre is using on the GPU. Thanks, Matt > Best, > > Karthik. > > > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
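Since the PETSc memory summary only counts host-side PetscMalloc() and process memory, one rough
workaround is to ask the CUDA runtime for free device memory before and after the setup/solve and
look at the difference. A minimal sketch, assuming PETSc and the application are built with CUDA
(the helper below is illustrative only and is not part of ex45):

#include <petscsys.h>
#include <cuda_runtime.h>

/* Print free/total device memory; the drop across KSPSetUp()/KSPSolve()
   gives a rough upper bound on what hypre allocates on the GPU. */
static PetscErrorCode ReportDeviceMemory(const char *label)
{
  size_t         freeB, totalB;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  if (cudaMemGetInfo(&freeB, &totalB) != cudaSuccess) SETERRQ(PETSC_COMM_SELF, PETSC_ERR_LIB, "cudaMemGetInfo failed");
  ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD, "%s: device free %.1f MB of %.1f MB\n", label, freeB/1048576.0, totalB/1048576.0);CHKERRQ(ierr);
  ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Calling ReportDeviceMemory("before solve") and ReportDeviceMemory("after solve") around KSPSolve()
reports per-rank numbers; note that ranks sharing the same GPU will see the same device totals.
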
URL: From mfadams at lbl.gov Fri Dec 3 07:28:41 2021 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 3 Dec 2021 08:28:41 -0500 Subject: [petsc-users] Hypre (GPU) memory consumption In-Reply-To: <938680ED-65FD-49C1-BBC7-FC84BBECD569@stfc.ac.uk> References: <938680ED-65FD-49C1-BBC7-FC84BBECD569@stfc.ac.uk> Message-ID: You might try with -pc_type jacobi (and remove mat_type hypre) to get a baseline. Does that work and how much memory usage do you see? >From there you could try -pc_type gamg to test the built-in AMG. However, 32M equations in 3D (Lapalcian?) might be too much. AMG does use a lot of memory. On Fri, Dec 3, 2021 at 7:57 AM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > Hello, > > > > I am able to successfully run hypre on gpus but the problem seems to > consumption a lot of memory. I ran ksp/ksp/tutorial/ex45 on a grid of 320 > x 320 x 320 using 6 gpus by the following options > > > > mpirun -n 6 ./ex45 -da_grid_x 320 -da_grid_y 320 -da_grid_z 320 > -dm_mat_type hypre -dm_vec_type cuda -ksp_type cg -pc_type hypre > -pc_hypre_type boomeramg -ksp_monitor -log_view -malloc_dump -memory_view > -malloc_log > > > > From the log_view out (also attached) I get the following memory > consumption: > > > > Summary of Memory Usage in PETSc > > Maximum (over computational time) process memory: total > 9.7412e+09 max 1.6999e+09 min 1.5368e+09 > > Current process memory: > total 8.1640e+09 max 1.4359e+09 min 1.2733e+09 > > Maximum (over computational time) space PetscMalloc()ed: total 7.7661e+08 > max 1.3401e+08 min 1.1148e+08 > > Current space PetscMalloc()ed: > total 1.8356e+06 max 3.0594e+05 min > 3.0594e+05 > > > > Each gpu is a Nvidia Tesla V100 ? even using 4 gpus the system runs out of > cuda memory alloc for the above problem. From the above listed memory > output I believe the problem should be able to run on one gpu. Is the > memory usage of hypre not listed include above? > > > > Best, > > Karthik. > > > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongzhang at anl.gov Fri Dec 3 11:13:15 2021 From: hongzhang at anl.gov (Zhang, Hong) Date: Fri, 3 Dec 2021 17:13:15 +0000 Subject: [petsc-users] How to use F and G for TS In-Reply-To: References: Message-ID: <09605D51-3A5F-4BAF-84CF-DD31C2BD5B36@anl.gov> On Dec 3, 2021, at 2:52 AM, Pierre Seize > wrote: Hello, I want to set a TS object for the time integration of my FV CFD solver. The equation is M dQ/dt = f(Q) where M is a diagonal mass matrix filled with the cell volumes from my FV discretisation. I've read the PETSc manual and I found some interesting mails in the petsc-users archive, but I still do not understand something. To me, there is three ways I could set my TS : 1. F(t, x, x') = Mx' - f(x) and G(t, x) = 0 (default) 2. F(t, x, x') = Mx' and G(t, x) = f(x) 3. 
F(t, x, x') = x' (default) and G(t, x) = M^{-1} f(x) From (https://lists.mcs.anl.gov/pipermail/petsc-dev/2017-October/021545.html), I think that unless I'm using an IMEX method, whatever F and G, it does F <-- F - G internally, but I would like to be sure. Will there be a difference be if I use an explicit method, as Euler or RK ? What about implicit method such as BEuler or Theta methods ? To use an explicit method, you must use the explicit form (option 3 above). For implicit methods, all the three options will work. If I use an implicit method (beuler), what happens if I don't give F' and/or G' ? Are their matrix-vector product approximated with finite difference ? If the Jacobian is not provided, it will be approximated with finite-difference. If you prefer a matrix-free implementation, you can use -snes_mf. What I understand is that for implicit-explicit methods, "G is treated explicitly while F is treated implicitly". In this case, am I right to assume it's useless to give the RHS Jacobian ? Then, when is G' used ? Right. The RHS Jacobian (G') is not needed for IMEX. G? is used if you switch to an implicit method such as beuler. If I do not use an IMEX method, are the 3 formulations equivalent ? Option 3 allows you to switch between explicit methods and implicit methods at runtime. Of course, it requires inverting the mass matrix, which is fine in your case but may be difficult for other applications (where option 1 can be used). Option 2 is mostly useful for IMEX. Hong (Mr.) Thank you for your help Pierre Seize -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Dec 3 13:32:26 2021 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 3 Dec 2021 14:32:26 -0500 Subject: [petsc-users] dmplex extruded layers In-Reply-To: References: Message-ID: On Wed, Nov 24, 2021 at 5:45 PM Bhargav Subramanya < bhargav.subramanya at kaust.edu.sa> wrote: > On Wed, Nov 24, 2021 at 8:59 PM Matthew Knepley wrote: > >> On Wed, Nov 24, 2021 at 12:27 PM Bhargav Subramanya < >> bhargav.subramanya at kaust.edu.sa> wrote: >> >>> Dear Matt and Mark, >>> >>> Thank you very much for your reply. Your inputs are very useful to me. >>> >>> On Mon, Nov 22, 2021 at 9:38 PM Matthew Knepley >>> wrote: >>> >>>> On Mon, Nov 22, 2021 at 12:10 PM Bhargav Subramanya < >>>> bhargav.subramanya at kaust.edu.sa> wrote: >>>> >>>>> Dear All, >>>>> >>>>> I have a prismatic mesh generated by extruding a base icosahedron >>>>> spherical surface mesh. The mesh is distributed on 2 processes after >>>>> extrusion. The dof for my problem are defined only on the simplices of the >>>>> spherical layers. >>>>> >>>>> 1. For a given spherical layer, I want to get points, which are >>>>> basically simplices, lying on that layer only. DMPlexGetHeightStratum >>>>> returns the points on all the spherical Layers. I can probably use >>>>> DMPlexFindVertices (based on the radius of the spherical layer) and >>>>> DMPlexGetSimplexOrBoxCells. Could you suggest if there is a better way to >>>>> retrieve the points (which are simplices) on a given spherical layer in the >>>>> extruded mesh? >>>>> >>>> >>>> DMPlexGetHeightStratum() refers to height in the Hasse Diagram, which >>>> is a DAG, not in the mesh. For example, height 0 points are the cells, >>>> height 1 are the faces, etc. >>>> >>>> I believe the default numbering for extrusion (in the main branch), is >>>> that all vertices produced from a given vertex be in order. 
This would mean >>>> that if v were the vertex point number, then >>>> >>>> (v - vStart) % Nl == l >>>> >>>> where Nl is the number of layers and l is the layer of that vertex. It >>>> is also the same for triangles. So if you want to segregate each shell, >>>> after extrusion, make a label that gives triangles this marker, and then >>>> use DMPlexLabelComplete(). Then after refinement you will still have the >>>> shells labeled correctly. >>>> >>>> I would be happy to help you make an example that does this. It seems >>>> cool. >>>> >>> >>> ----- Sure, I am interested to make an example of it. And, I really >>> appreciate your help. >>> >>>> >>>> >>>>> 2. I am able to visualize the entire mesh using dm_view. Is there a >>>>> way to get the mesh file for the local dm from a specific process? >>>>> >>>> >>>> You can use -dm_partition_view which outputs a field with the process >>>> number. Then use Clip inside Paraview and clip to the process number you >>>> want, >>>> or just view that field so each process has a different color. >>>> >>>> >>>>> 3. One of my previous emails might have got lost as I mistakenly >>>>> attached a large mesh file and sent it. So I am repeating the question >>>>> here. DMPlexExtrude gives the following error after distributing the base >>>>> 2D spherical mesh. Both refinement or/and extrusion after dm distribution >>>>> does not work for me. In fact, I tried >>>>> with src/dm/impls/plex/tutorials/ex10.c also. Although the mesh is >>>>> distributed after the extrusion in ex10.c (which is working fine for me), I >>>>> tried to distribute before extrusion, and I get the following error. Could >>>>> you please suggest where I might be making a mistake? >>>>> >>>> >>>> So you want to distribute the mesh before extruding. For that small >>>> example (using the main branch), I run >>>> >>>> mpiexec -n 3 ./meshtest -dm_plex_shape sphere -dm_refine_pre 2 >>>> -dm_distribute -dm_partition_view -dm_view hdf5:mesh.h5 -dm_extrude 3 >>>> >>> >>> ----- I pulled the code from the main branch and configured my petsc >>> again. I am now finally able to reproduce the mesh that you have attached. >>> I am able to do parallel mesh refinement, followed by extrusion, and use >>> the following code to check that. However, I still have one problem. >>> Attached are the extruded mesh files. >>> >>> 1. Mesh doesn't seem to be refined properly near interfaces between the >>> processes. I then used -init_dm_distribute_overlap of 1 and that fixed the >>> problem. I hope this is the way to do that. >>> >> >> Hmm, something is wrong here. We should track this down and fix it. I do >> not see this. Could you tell me how to reproduce the problem? >> > > ----- I mainly wanted to check if the mesh was distributed or not before > the extrusion. I do this by specifying different prefix options as shown in > the code below. I think using different prefix options is probably causing > the problem. 
> > #include > > int main(int argc, char **argv) > { > DM dm; > PetscBool distributed; > PetscErrorCode ierr; > > > ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr; > ierr = DMCreate(PETSC_COMM_WORLD, &dm);CHKERRQ(ierr); > ierr = DMSetType(dm, DMPLEX);CHKERRQ(ierr); > ierr = PetscObjectSetOptionsPrefix((PetscObject) dm, > "init_");CHKERRQ(ierr); > ierr = DMSetFromOptions(dm);CHKERRQ(ierr); > > ierr = DMPlexIsDistributed(dm, &distributed); > ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD, "%d \n", distributed); > CHKERRQ(ierr); > ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD,PETSC_STDOUT); > > ierr = PetscObjectSetOptionsPrefix((PetscObject) dm, NULL);CHKERRQ(ierr); > ierr = DMSetFromOptions(dm);CHKERRQ(ierr); > ierr = DMViewFromOptions(dm, NULL, "-dm_view");CHKERRQ(ierr); > ierr = DMDestroy(&dm);CHKERRQ(ierr); > ierr = PetscFinalize(); > return ierr; > } > > The command line options are: mpiexec -n 3 ./cavity_flow.out > -init_dm_plex_shape sphere -init_dm_plex_sphere_radius 1.0 > -init_dm_refine_pre 2 -init_dm_distribute -dm_refine 2 -dm_extrude 3 > -dm_partition_view -dm_view hdf5:mesh.h5 > Okay, I run exactly this code and command on 'main', and it does work for me: master *:~/Downloads/tmp/hpc1$ /PETSc3/petsc/apple/bin/mpiexec -n 3 ./meshtest -init_dm_plex_shape sphere -init_dm_plex_sphere_radius 1.0 -init_dm_refine_pre 2 -init_dm_distribute -init_dm_view -dm_refine 2 -dm_extrude 3 -dm_partition_view_no -dm_view -malloc_debug 0 DM Object: sphere (init_) 3 MPI processes type: plex sphere in 2 dimensions: 0-cells: 66 66 67 1-cells: 171 173 172 2-cells: 106 108 106 Labels: depth: 3 strata with value/size (0 (66), 1 (171), 2 (106)) celltype: 3 strata with value/size (0 (66), 1 (171), 3 (106)) 1 1 1 DM Object: sphere 3 MPI processes type: plex sphere in 3 dimensions: 0-cells: 3588 3636 3604 1-cells: 13059 (2691) 13271 (2727) 13087 (2703) 2-cells: 14560 (7776) 14820 (7908) 14572 (7788) 3-cells: 5088 5184 5088 Labels: celltype: 6 strata with value/size (5 (7776), 9 (5088), 0 (3588), 1 (10368), 3 (6784), 2 (2691)) depth: 4 strata with value/size (0 (3588), 1 (13059), 2 (14560), 3 (5088)) WARNING! There are options you set that were not used! WARNING! could be spelling mistake, etc! There is one unused database option. It is: Option left: name:-dm_partition_view_no (no value) > 2. The parallel mesh refinement doesn't seem to take care of the spherical >>> geometry (shown in the attached mesh file). Could you please suggest how to >>> fix it? >>> >> >> Okay, this is more subtle. Let me explain how things work inside. >> >> 1) When you refine, it inserts new cells/faces/edges/vertices. When we >> insert a new vertex, we ask what the coordinates should be. If no special >> information is there, we >> just average the surrounding vertices. >> >> 2) Then we allow a second step that processed all coordinates in the mesh >> after refinement >> >> For the sphere, we use 2) to make all points stick to the implicit >> surface (radius R). We cannot do this for the extruded mesh, so we turn it >> off. >> > > ---- thanks for this explanation. > > >> What makes the most sense to me is to do >> > >> 1) Serial mesh refinement to get enough cells to distribute evenly >> >> 2) Mesh distribution >> >> 3) Mesh refinement >> >> 4) Extrusion >> >> This should preserve the geometry and scale well. Does that sound right? >> > > -- Yes, this sounds right, and this must preserve the geometry. 
In fact, I > was under the assumption that I was doing the exact same thing by using > these options: > mpiexec -n 3 ./cavity_flow.out -init_dm_plex_shape sphere > -init_dm_plex_sphere_radius 1.0 -init_dm_refine_pre 2 -init_dm_distribute > -dm_refine 2 -dm_extrude 3 -dm_partition_view -dm_view hdf5:mesh.h5 > Do you think these options are not doing the things in the order of 1,2,3 > and 4 as you specified? If not, could you suggest the options that I need > to use to get in the order of 1,2,3 and 4? > Yes, I am getting the surface of the sphere with this. Thanks, Matt > >> Thanks, >> >> Matt >> >> >>> command line options: >>> -init_dm_plex_shape sphere -init_dm_plex_sphere_radius 1.0 >>> -init_dm_refine_pre 2 -init_dm_distribute -dm_refine 2 -dm_partition_view >>> -dm_view hdf5:mesh.h5 -dm_extrude 3 >>> >>> ierr = DMCreate(PETSC_COMM_WORLD, &dm);CHKERRQ(ierr); >>> ierr = DMSetType(dm, DMPLEX);CHKERRQ(ierr); >>> ierr = PetscObjectSetOptionsPrefix((PetscObject) dm, >>> "init_");CHKERRQ(ierr); >>> ierr = DMSetFromOptions(dm);CHKERRQ(ierr); >>> >>> ierr = DMPlexIsDistributed(dm, &distributed); >>> ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD, "%d \n", distributed); >>> CHKERRQ(ierr); >>> ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD,PETSC_STDOUT); >>> >>> ierr = PetscObjectSetOptionsPrefix((PetscObject) dm, NULL);CHKERRQ(ierr); >>> ierr = DMSetFromOptions(dm);CHKERRQ(ierr); >>> ierr = DMViewFromOptions(dm, NULL, "-dm_view");CHKERRQ(ierr); >>> >>> >>>> and I get the attached picture. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> [0]PETSC ERROR: --------------------- Error Message >>>>> -------------------------------------------------------------- >>>>> [0]PETSC ERROR: Object is in wrong state >>>>> [0]PETSC ERROR: This DMPlex is distributed but its PointSF has no >>>>> graph set >>>>> [0]PETSC ERROR: See https://petsc.org/release/faq/ >>>>> >>>>> for trouble shooting. 
>>>>> [0]PETSC ERROR: Petsc Release Version 3.16.1, unknown >>>>> [0]PETSC ERROR: ./cavity_flow.out on a arch-darwin-c-debug named >>>>> kl-21859 by subrambm Mon Nov 22 19:47:14 2021 >>>>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ >>>>> --with-fc=gfortran --download-mpich --download-fblaslapack >>>>> --download-superlu_dist --download-hypre --download-fiat >>>>> --download-generator --download-triangle --download-tetgen --download-chaco >>>>> --download-make -download-boost --download-cmake --download-ml >>>>> --download-mumps=https://bitbucket.org/petsc/pkg-mumps.git >>>>> >>>>> --download-mumps-commit=v5.4.1-p1 --download-scalapack --download-ptscotch >>>>> --download-hdf5 --force >>>>> [0]PETSC ERROR: #1 DMPlexCheckPointSF() at >>>>> /Users/subrambm/petsc/src/dm/impls/plex/plex.c:8554 >>>>> [0]PETSC ERROR: #2 DMPlexOrientInterface_Internal() at >>>>> /Users/subrambm/petsc/src/dm/impls/plex/plexinterpolate.c:595 >>>>> [0]PETSC ERROR: #3 DMPlexInterpolate() at >>>>> /Users/subrambm/petsc/src/dm/impls/plex/plexinterpolate.c:1357 >>>>> [0]PETSC ERROR: #4 DMPlexExtrude() at >>>>> /Users/subrambm/petsc/src/dm/impls/plex/plexcreate.c:1543 >>>>> [0]PETSC ERROR: #5 CreateMesh() at ex10.c:161 >>>>> [0]PETSC ERROR: #6 main() at ex10.c:180 >>>>> [0]PETSC ERROR: PETSc Option Table entries: >>>>> [0]PETSC ERROR: -dm_plex_extrude_layers 3 >>>>> [0]PETSC ERROR: -dm_view vtk:mesh.vtk >>>>> [0]PETSC ERROR: -init_dm_plex_dim 2 >>>>> [0]PETSC ERROR: -petscpartitioner_type simple >>>>> [0]PETSC ERROR: -srf_dm_distribute >>>>> [0]PETSC ERROR: -srf_dm_refine 0 >>>>> [0]PETSC ERROR: ----------------End of Error Message -------send >>>>> entire error message to petsc-maint at mcs.anl.gov---------- >>>>> >>>>> Thanks, >>>>> Bhargav >>>>> >>>>> >>>>> ------------------------------ >>>>> This message and its contents, including attachments are intended >>>>> solely for the original recipient. If you are not the intended recipient or >>>>> have received this message in error, please notify me immediately and >>>>> delete this message from your computer system. Any unauthorized use or >>>>> distribution is prohibited. Please consider the environment before printing >>>>> this email. >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >>> ------------------------------ >>> This message and its contents, including attachments are intended solely >>> for the original recipient. If you are not the intended recipient or have >>> received this message in error, please notify me immediately and delete >>> this message from your computer system. Any unauthorized use or >>> distribution is prohibited. Please consider the environment before printing >>> this email. >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > > ------------------------------ > This message and its contents, including attachments are intended solely > for the original recipient. If you are not the intended recipient or have > received this message in error, please notify me immediately and delete > this message from your computer system. Any unauthorized use or > distribution is prohibited. 
Please consider the environment before printing > this email. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrsd at gmail.com Fri Dec 3 17:59:15 2021 From: andrsd at gmail.com (David Andrs) Date: Fri, 3 Dec 2021 16:59:15 -0700 Subject: [petsc-users] Output data using ExodusIIViewer In-Reply-To: References: <48B7C2BC-B133-4ADB-A269-56B666A52C81@mcmaster.ca> <7d14bab6-7df9-4804-9bb5-c810c64a7a86@edison.tech> <76929DE4-AFA0-4E60-9D32-E9E7A599D5DE@mcmaster.ca> Message-ID: In case, somebody finds this thread in future, attached file produces an ExodusII file that I was able to open in paraview 5.9 ? ?David On Dec 2 2021, at 8:02 AM, David Andrs wrote: > > Blaise, > > ? > > ?thank you very much for you help. This helps a lot. > > ? > > ?By the way, I was not able to open the produced file in paraview 5.9.1 (failed with "EX_INQ_TIME failed"). However, ncdump seems correct. At least, I do not immediately see anything wrong there. I will try to look into this problem myself, now that I know it is ok to do direct exodusII calls. > > ? > > Thank you again,? > > ? > > ?David > > ? > > On Dec 1 2021, at 12:48 PM, Blaise Bourdin wrote: > > > > > > > David, > > > > > > > > Here is a modified example. > > > > Exodus needs some additional work prior to saving fields. See the attached modified example. > > > > > > > > Blaise > > > > > > > > > > > > > > > > > > > > On Dec 1, 2021, at 1:54 PM, Blaise Bourdin wrote: > > > > > > > > > > > > > > > OK, let me have a look. > > > > > > Blaise > > > > > > > > > > > > > > > > > > > > > > > On Nov 30, 2021, at 7:31 PM, David Andrs wrote: > > > > > > > > > > > > > > > > I see. I added a "Cell Sets" label like so: > > > > > > > > ? > > > > > > > > ? DMCreateLabel(dm, "Cell Sets"); > > > > > > > > DMLabel cs_label; > > > > > > > > DMGetLabel(dm, "Cell Sets", &cs_label); > > > > > > > > DMLabelAddStratum(cs_label, 0); > > > > > > > > PetscInt idxs[] = { 1 }; > > > > > > > > IS is; > > > > > > > > ISCreateGeneral(comm, 1, idxs, PETSC_COPY_VALUES, &is); > > > > > > > > DMLabelSetStratumIS(cs_label, 0, is); > > > > > > > > ISDestroy(&is); > > > > > > > > > > > > Note, that I have only a single element (Quad4) in the mesh and I am just trying to get this working, so I understand what needs to happen for larger meshes. > > > > > > > > ? > > > > > > > > ?This got me past the segfault, but now I see: > > > > > > > > ? > > > > > > > > [?0]PETSC ERROR: Argument out of range > > > > > > > > [0]PETSC ERROR: Number of vertices 1 in dimension 2 has no ExodusII type > > > > > > > > > > > > So, I assume I need to do something ?more to make it work. > > > > > > > > ? > > > > > > > > ?David > > > > > > > > ?? > > > > > > > > On Nov 30 2021, at 11:39 AM, Blaise Bourdin wrote: > > > > > > > > > > > > > > It looks like your DM cannot be saved in exodus format as such. The exodus format requires that all cells be part of a single block (defined by ?Cell Set? labels), and that the cell sets consists of sequentially numbered cells. > > > > > > > > > > Can you see if that is enough? If not, I will go through your example > > > > > > > > > > > > > > > > > > > > Blaise > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Nov 30, 2021, at 9:50 AM, David Andrs wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Hello! 
> > > > > > > > > > > > > > > > > > > > > > > > I am trying to store data into an ExodusII file using the ExodusIIViewer, but running into a segfault inside PETSc. Attached is a minimal example showing the problem. It can very much be that I am missing something obvious. However, if I change the code to VTKViewer I get the desired output file. > > > > > > > > > > > > > > > > > > > > > > > > Machine: MacBook Pro 2019 > > > > > > > > > > > > OS version/type: Darwin notak.local 21.1.0 Darwin Kernel Version 21.1.0: Wed Oct 13 17:33:23 PDT 2021; root:xnu-8019.41.5~1/RELEASE_X86_64 x86_64 > > > > > > > > > > > > PETSc: Petsc Release Version 3.16.1, Nov 01, 2021 > > > > > > > > > > > > MPI: MPICH 3.4.2 > > > > > > > > > > > > Compiler: clang-12 > > > > > > > > > > > > > > > > > > > > > > > > Call stack (not sure how relevant that is since it is from opt version): > > > > > > > > > > > > > > > > > > > > > > > > * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0xc) > > > > > > > > > > > > frame #0: 0x0000000102303ba9 libpetsc.3.16.dylib`DMView_PlexExodusII(dm=, viewer=) at plexexodusii.c:457:45 [opt] > > > > > > > > > > > > 454 else if (degree == 2) nodes[cs] = nodesHexP2; > > > > > > > > > > > > 455 } > > > > > > > > > > > > 456 /* Compute the number of cells not in the connectivity table */ > > > > > > > > > > > > -> 457 cellsNotInConnectivity -= nodes[cs][3]*csSize; > > > > > > > > > > > > 458 > > > > > > > > > > > > 459 ierr = ISRestoreIndices(stratumIS, &cells);CHKERRQ(ierr); > > > > > > > > > > > > 460 ierr = ISDestroy(&stratumIS);CHKERRQ(ierr); > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > With regards, > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > David Andrs > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > > > > Professor, Department of Mathematics & Statistics > > > > > > > > > > Hamilton Hall room 409A, McMaster University > > > > > > > > > > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > > > > > > > > > > Tel. +1 (905) 525 9140 ext. 27243 > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > Professor, Department of Mathematics & Statistics > > > > > > Hamilton Hall room 409A, McMaster University > > > > > > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > > > > > > Tel. +1 (905) 525 9140 ext. 27243 > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > Professor, Department of Mathematics & Statistics > > > > Hamilton Hall room 409A, McMaster University > > > > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > > > > Tel. +1 (905) 525 9140 ext. 27243 > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: exo2.c Type: application/octet-stream Size: 2645 bytes Desc: not available URL: From berend.vanwachem at ovgu.de Mon Dec 6 09:39:19 2021 From: berend.vanwachem at ovgu.de (Berend van Wachem) Date: Mon, 6 Dec 2021 16:39:19 +0100 Subject: [petsc-users] DMView and DMLoad In-Reply-To: References: <56ce2135-9757-4292-e33b-c7eea8fb7b2e@ovgu.de> <056E066F-D596-4254-A44A-60BFFD30FE82@erdw.ethz.ch> <6c4e0656-db99-e9da-000f-ab9f7dd62c07@ovgu.de> Message-ID: Dear Koki, Thanks for your email. In the example of your last email DMPlexCoordinatesLoad() takes sF0 (PetscSF) as a third argument. In our code this modification does not fix the error when loading a periodic dm. Are we doing something wrong? I've included an example code at the bottom of this email, including the error output. Thanks and best regards, Berend /**** Write DM + Vec restart ****/ PetscViewerHDF5Open(PETSC_COMM_WORLD, "result", FILE_MODE_WRITE, &H5Viewer); PetscObjectSetName((PetscObject)dm, "plexA"); PetscViewerPushFormat(H5Viewer, PETSC_VIEWER_HDF5_PETSC); DMPlexTopologyView(dm, H5Viewer); DMPlexLabelsView(dm, H5Viewer); DMPlexCoordinatesView(dm, H5Viewer); PetscViewerPopFormat(H5Viewer); DM sdm; PetscSection s; DMClone(dm, &sdm); PetscObjectSetName((PetscObject)sdm, "dmA"); DMGetGlobalSection(dm, &s); DMSetGlobalSection(sdm, s); DMPlexSectionView(dm, H5Viewer, sdm); Vec vec, vecOld; PetscScalar *array, *arrayOld, *xVecArray, *xVecArrayOld; PetscInt numPoints; DMGetGlobalVector(sdm, &vec); DMGetGlobalVector(sdm, &vecOld); /*** Fill the vectors vec and vecOld ***/ VecGetArray(vec, &array); VecGetArray(vecOld, &arrayOld); VecGetLocalSize(xGlobalVector, &numPoints); VecGetArray(xGlobalVector, &xVecArray); VecGetArray(xOldGlobalVector, &xVecArrayOld); for (i = 0; i < numPoints; i++) /* Loop over all internal mesh points */ { array[i] = xVecArray[i]; arrayOld[i] = xVecArrayOld[i]; } VecRestoreArray(vec, &array); VecRestoreArray(vecOld, &arrayOld); VecRestoreArray(xGlobalVector, &xVecArray); VecRestoreArray(xOldGlobalVector, &xVecArrayOld); PetscObjectSetName((PetscObject)vec, "vecA"); PetscObjectSetName((PetscObject)vecOld, "vecB"); DMPlexGlobalVectorView(dm, H5Viewer, sdm, vec); DMPlexGlobalVectorView(dm, H5Viewer, sdm, vecOld); PetscViewerDestroy(&H5Viewer); /*** end of writing ****/ /*** Load ***/ PetscViewerHDF5Open(PETSC_COMM_WORLD, "result", FILE_MODE_READ, &H5Viewer); DMCreate(PETSC_COMM_WORLD, &dm); DMSetType(dm, DMPLEX); PetscObjectSetName((PetscObject)dm, "plexA"); PetscViewerPushFormat(H5Viewer, PETSC_VIEWER_HDF5_PETSC); DMPlexTopologyLoad(dm, H5Viewer, &sfO); DMPlexLabelsLoad(dm, H5Viewer); DMPlexCoordinatesLoad(dm, H5Viewer, sfO); PetscViewerPopFormat(H5Viewer); DMPlexDistribute(dm, Options->Mesh.overlap, &sfDist, &distributedDM); if (distributedDM) { DMDestroy(&dm); dm = distributedDM; PetscObjectSetName((PetscObject)dm, "plexA"); } PetscSFCompose(sfO, sfDist, &sf); PetscSFDestroy(&sfO); PetscSFDestroy(&sfDist); DMClone(dm, &sdm); PetscObjectSetName((PetscObject)sdm, "dmA"); DMPlexSectionLoad(dm, H5Viewer, sdm, sf, &globalDataSF, &localDataSF); /** Load the Vectors **/ DMGetGlobalVector(sdm, &Restart_xGlobalVector); VecSet(Restart_xGlobalVector,0.0); PetscObjectSetName((PetscObject)Restart_xGlobalVector, "vecA"); DMPlexGlobalVectorLoad(dm, H5Viewer, sdm, globalDataSF,Restart_xGlobalVector); DMGetGlobalVector(sdm, &Restart_xOldGlobalVector); VecSet(Restart_xOldGlobalVector,0.0); PetscObjectSetName((PetscObject)Restart_xOldGlobalVector, "vecB"); DMPlexGlobalVectorLoad(dm, H5Viewer, sdm, globalDataSF, 
Restart_xOldGlobalVector); PetscViewerDestroy(&H5Viewer); /**** The error message when loading is the following ************/ Creating and distributing mesh [0]PETSC ERROR: --------------------- Error Message -------------------------- [0]PETSC ERROR: Invalid argument [0]PETSC ERROR: Number of coordinates loaded 17128 does not match number of vertices 8000 [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.16.1-435-g007f11b901 GIT Date: 2021-12-01 14:31:21 +0000 [0]PETSC ERROR: ./MF3 on a linux-gcc-openmpi-opt named ivt24.ads.uni-magdeburg.de by berend Mon Dec 6 16:11:21 2021 [0]PETSC ERROR: Configure options --with-p4est=yes --with-partemis --with-metis --with-debugging=no --download-metis=yes --download-parmetis=yes --with-errorchecking=no --download-hdf5 --download-zlib --download-p4est [0]PETSC ERROR: #1 DMPlexCoordinatesLoad_HDF5_V0_Private() at /home/berend/src/petsc_main/src/dm/impls/plex/plexhdf5.c:1387 [0]PETSC ERROR: #2 DMPlexCoordinatesLoad_HDF5_Internal() at /home/berend/src/petsc_main/src/dm/impls/plex/plexhdf5.c:1419 [0]PETSC ERROR: #3 DMPlexCoordinatesLoad() at /home/berend/src/petsc_main/src/dm/impls/plex/plex.c:2070 [0]PETSC ERROR: #4 RestartMeshDM() at /home/berend/src/eclipseworkspace/multiflow/src/io/restartmesh.c:81 [0]PETSC ERROR: #5 CreateMeshDM() at /home/berend/src/eclipseworkspace/multiflow/src/mesh/createmesh.c:61 [0]PETSC ERROR: #6 main() at /home/berend/src/eclipseworkspace/multiflow/src/general/main.c:132 [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: --download-hdf5 [0]PETSC ERROR: --download-metis=yes [0]PETSC ERROR: --download-p4est [0]PETSC ERROR: --download-parmetis=yes [0]PETSC ERROR: --download-zlib [0]PETSC ERROR: --with-debugging=no [0]PETSC ERROR: --with-errorchecking=no [0]PETSC ERROR: --with-metis [0]PETSC ERROR: --with-p4est=yes [0]PETSC ERROR: --with-partemis [0]PETSC ERROR: -d results [0]PETSC ERROR: -o run.mf [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 62. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- On 11/19/21 00:26, Sagiyama, Koki wrote: > Hi Berend, > > I was not able to reproduce the issue you are having, but the following > 1D example (and similar 2D examples) worked fine for me using the latest > PETSc. Please note that DMPlexCoordinatesLoad() now takes a PetscSF > object as the third argument, but the default behavior is unchanged. > > /* test_periodic_io.c */ > > #include > #include > #include > > int main(int argc, char **argv) > { > ? DM ? ? ? ? ? ? ? ? dm; > ? Vec ? ? ? ? ? ? ? ?coordinates; > ? PetscViewer ? ? ? ?viewer; > ? PetscViewerFormat ?format = PETSC_VIEWER_HDF5_PETSC; > ? PetscSF ? ? ? ? ? ?sfO; > ? PetscErrorCode ? ? ierr; > > ? ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr; > ? /* Save */ > ? ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "periodic_example.h5", > FILE_MODE_WRITE, &viewer);CHKERRQ(ierr); > ? { > ? ? DM ? ? ? ? ? ? ?pdm; > ? ? PetscInt ? ? ? ?dim = 1; > ? ? const PetscInt ?faces[1] = {4}; > ? ? DMBoundaryType ?periodicity[] = {DM_BOUNDARY_PERIODIC}; > ? ? PetscInt ? ? ? 
?overlap = 1; > > ? ? ierr = DMPlexCreateBoxMesh(PETSC_COMM_WORLD, dim, PETSC_FALSE, > faces, NULL, NULL, periodicity, PETSC_TRUE, &dm);CHKERRQ(ierr); > ? ? ierr = DMPlexDistribute(dm, overlap, NULL, &pdm);CHKERRQ(ierr); > ? ? ierr = DMDestroy(&dm);CHKERRQ(ierr); > ? ? dm = pdm; > ? ? ierr = PetscObjectSetName((PetscObject)dm, "periodicDM");CHKERRQ(ierr); > ? } > ? ierr = DMGetCoordinates(dm, &coordinates);CHKERRQ(ierr); > ? ierr = PetscPrintf(PETSC_COMM_WORLD, "Coordinates before > saving:\n");CHKERRQ(ierr); > ? ierr = VecView(coordinates, NULL);CHKERRQ(ierr); > ? ierr = PetscViewerPushFormat(viewer, format);CHKERRQ(ierr); > ? ierr = DMPlexTopologyView(dm, viewer);CHKERRQ(ierr); > ? ierr = DMPlexCoordinatesView(dm, viewer);CHKERRQ(ierr); > ? ierr = PetscViewerPopFormat(viewer);CHKERRQ(ierr); > ? ierr = DMDestroy(&dm);CHKERRQ(ierr); > ? ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr); > ? /* Load */ > ? ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "periodic_example.h5", > FILE_MODE_READ, &viewer);CHKERRQ(ierr); > ? ierr = DMCreate(PETSC_COMM_WORLD, &dm);CHKERRQ(ierr); > ? ierr = DMSetType(dm, DMPLEX);CHKERRQ(ierr); > ? ierr = PetscObjectSetName((PetscObject)dm, "periodicDM");CHKERRQ(ierr); > ? ierr = PetscViewerPushFormat(viewer, format);CHKERRQ(ierr); > ? ierr = DMPlexTopologyLoad(dm, viewer, &sfO);CHKERRQ(ierr); > ? ierr = DMPlexCoordinatesLoad(dm, viewer, sfO);CHKERRQ(ierr); > ? ierr = PetscViewerPopFormat(viewer);CHKERRQ(ierr); > ? ierr = DMGetCoordinates(dm, &coordinates);CHKERRQ(ierr); > ? ierr = PetscPrintf(PETSC_COMM_WORLD, "Coordinates after > loading:\n");CHKERRQ(ierr); > ? ierr = VecView(coordinates, NULL);CHKERRQ(ierr); > ? ierr = PetscSFDestroy(&sfO);CHKERRQ(ierr); > ? ierr = DMDestroy(&dm);CHKERRQ(ierr); > ? ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr); > ? ierr = PetscFinalize(); > ? return ierr; > } > > mpiexec -n 2 ./test_periodic_io > > Coordinates before saving: > Vec Object: coordinates 2 MPI processes > ? type: mpi > Process [0] > 0. > Process [1] > 0.25 > 0.5 > 0.75 > Coordinates after loading: > Vec Object: vertices 2 MPI processes > ? type: mpi > Process [0] > 0. > 0.25 > 0.5 > 0.75 > Process [1] > > I would also like to note that, with the latest update, we can > optionally load coordinates directly on the distributed dm as (using > your notation): > > ? /* Distribute dm */ > ? ... > ? PetscSFCompose(sfO, sfDist, &sf); > ? DMPlexCoordinatesLoad(dm, viewer, sf); > > To use this feature, we need to pass "-dm_plex_view_hdf5_storage_version > 2.0.0" option when saving topology/coordinates. > > > Thanks, > Koki > ------------------------------------------------------------------------ > *From:* Berend van Wachem > *Sent:* Wednesday, November 17, 2021 3:16 PM > *To:* Hapla Vaclav ; PETSc users list > ; Lawrence Mitchell ; Sagiyama, > Koki > *Subject:* Re: [petsc-users] DMView and DMLoad > > ******************* > This email originates from outside Imperial. Do not click on links and > attachments unless you recognise the sender. > If you trust the sender, add them to your safe senders list > https://spam.ic.ac.uk/SpamConsole/Senders.aspx > to disable email > stamping for this address. > ******************* > Dear Vaclav, Lawrence, Koki, > > Thanks for your help! Following your advice and following your example > (https://petsc.org/main/docs/manual/dmplex/#saving-and-loading-data-with-hdf5 > ) > > we are able to save and load the DM with a wrapped Vector in h5 format > (PETSC_VIEWER_HDF5_PETSC) successfully. > > For saving, we use something similar to: > > ???? 
DMPlexTopologyView(dm, viewer); > ???? DMClone(dm, &sdm); > ???? ... > ???? DMPlexSectionView(dm, viewer, sdm); > ???? DMGetLocalVector(sdm, &vec); > ???? ... > ???? DMPlexLocalVectorView(dm, viewer, sdm, vec); > > and for loading: > > ???? DMCreate(PETSC_COMM_WORLD, &dm); > ???? DMSetType(dm, DMPLEX); > ???????? ... > ?????? PetscViewerPushFormat(viewer, PETSC_VIEWER_HDF5_PETSC); > ???? DMPlexTopologyLoad(dm, viewer, &sfO); > ???? DMPlexLabelsLoad(dm, viewer); > ???? DMPlexCoordinatesLoad(dm, viewer); > ???? PetscViewerPopFormat(viewer); > ???? ... > ???? PetscSFCompose(sfO, sfDist, &sf); > ???? ... > ???? DMClone(dm, &sdm); > ???? DMPlexSectionLoad(dm, viewer, sdm, sf, &globalDataSF, &localDataSF); > ???? DMGetLocalVector(sdm, &vec); > ???? ... > ???? DMPlexLocalVectorLoad(dm, viewer, sdm, localDataSF, vec); > > > This works fine for non-periodic DMs but for periodic cases the line: > > ???? DMPlexCoordinatesLoad(dm, H5Viewer); > > delivers the error message: invalid argument and the number of loaded > coordinates does not match the number of vertices. > > Is this a known shortcoming, or have we forgotten something to load > periodic DMs? > > Best regards, > > Berend. > > > > On 9/22/21 20:59, Hapla Vaclav wrote: >> To avoid confusions here, Berend seems to be specifically demanding XDMF >> (PETSC_VIEWER_HDF5_XDMF). The stuff we are now working on is parallel >> checkpointing in our own HDF5 format?(PETSC_VIEWER_HDF5_PETSC), I will >> make a series of MRs on this topic in the following days. >> >> For XDMF, we are specifically missing the ability to write/load DMLabels >> properly. XDMF uses specific cell-local numbering for faces for >> specification of face sets, and face-local numbering for specification >> of edge sets, which is not great wrt DMPlex design. And ParaView doesn't >> show any of these properly so it's hard to debug. Matt, we should talk >> about this soon. >> >> Berend, for now, could you just load the mesh initially from XDMF and >> then use our PETSC_VIEWER_HDF5_PETSC format for subsequent saving/loading? >> >> Thanks, >> >> Vaclav >> >>> On 17 Sep 2021, at 15:46, Lawrence Mitchell >> >> wrote: >>> >>> Hi Berend, >>> >>>> On 14 Sep 2021, at 12:23, Matthew Knepley >>> >> wrote: >>>> >>>> On Tue, Sep 14, 2021 at 5:15 AM Berend van Wachem >>>> >> wrote: >>>> Dear PETSc-team, >>>> >>>> We are trying to save and load distributed DMPlex and its associated >>>> physical fields (created with DMCreateGlobalVector) ?(Uvelocity, >>>> VVelocity, ?...) in HDF5_XDMF format. To achieve this, we do the >>>> following: >>>> >>>> 1) save in the same xdmf.h5 file: >>>> DMView( DM ????????, H5_XDMF_Viewer ); >>>> VecView( UVelocity, H5_XDMF_Viewer ); >>>> >>>> 2) load the dm: >>>> DMPlexCreateFromfile(PETSC_COMM_WORLD, Filename, PETSC_TRUE, DM); >>>> >>>> 3) load the physical field: >>>> VecLoad( UVelocity, H5_XDMF_Viewer ); >>>> >>>> There are no errors in the execution, but the loaded DM is distributed >>>> differently to the original one, which results in the incorrect >>>> placement of the values of the physical fields (UVelocity etc.) in the >>>> domain. >>>> >>>> This approach is used to restart the simulation with the last saved DM. >>>> Is there something we are missing, or there exists alternative routes to >>>> this goal? Can we somehow get the IS of the redistribution, so we can >>>> re-distribute the vector data as well? >>>> >>>> Many thanks, best regards, >>>> >>>> Hi Berend, >>>> >>>> We are in the midst of rewriting this. 
We want to support saving >>>> multiple meshes, with fields attached to each, >>>> and preserving the discretization (section) information, and allowing >>>> us to load up on a different number of >>>> processes. We plan to be done by October. Vaclav and I are doing this >>>> in collaboration with Koki Sagiyama, >>>> David Ham, and Lawrence Mitchell from the Firedrake team. >>> >>> The core load/save cycle functionality is now in PETSc main. So if >>> you're using main rather than a release, you can get access to it now. >>> This section of the manual shows an example of how to do >>> thingshttps://petsc.org/main/docs/manual/dmplex/#saving-and-loading-data-with-hdf5 >>> > >>> >>> Let us know if things aren't clear! >>> >>> Thanks, >>> >>> Lawrence >> From quentin.chevalier at polytechnique.edu Mon Dec 6 09:22:17 2021 From: quentin.chevalier at polytechnique.edu (Quentin Chevalier) Date: Mon, 6 Dec 2021 16:22:17 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? Message-ID: Hello PETSc users, This email is a duplicata of this gitlab issue , sorry for any inconvenience caused. I want to compute a PETSc vector in real mode, than perform calculations with it in complex mode. I want as much of this process to be parallel as possible. Right now, I compile PETSc in real mode, compute my vector and save it to a file, then switch to complex mode, read it, and move on. This creates unexpected behaviour using MPIIO, so on Lisandro Dalcinl's advice I'm moving to HDF5 format. My code is as follows (taking inspiration from petsc4py doc , a bitbucket example and another one , all top Google results for 'petsc hdf5') : > viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD)q.load(viewer)q.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD) > > This crashes my code. I obtain traceback : > File "/home/shared/code.py", line 121, in Load viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5petsc4py.PETSc.Error: error code 86[0] PetscViewerSetType() at /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442[0] Unknown type. Check for miss-spelling or missing package: https://petsc.org/release/install/install/#external-packages[0] Unknown PetscViewer type given: hdf5 > > I have petsc4py 3.16 from this docker container (list of dependencies include PETSc and petsc4py). I'm pretty sure this is not intended behaviour. Any insight as to how to fix this issue (I tried running ./configure --with-hdf5 to no avail) or more generally to perform this jiggling between real and complex would be much appreciated, Kind regards. Quentin -------------- next part -------------- An HTML attachment was scrubbed... URL: From fischega at westinghouse.com Mon Dec 6 10:19:51 2021 From: fischega at westinghouse.com (Fischer, Greg A.) Date: Mon, 6 Dec 2021 16:19:51 +0000 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility Message-ID: Hello petsc-users, I would like to check convergence against the infinity norm, so I defined my own convergence test routine with KSPSetConvergenceTest. (I understand that it may be computationally expensive.) I would like to do this with the "ibcgs" method. When I use KSPBuildResidual and calculate the NORM_2 against the output vector, I get a value that differs from the 2-norm that gets passed into the convergence test function. However, when I switch to the "gcr" method, the value I calculate matches the function input value. 
Is the KSPBuildResidual function only compatible with a subset of the KSPType methods? If I want to evaluate convergence against the infinity norm, do I need to set KSPSetInitialGuessNonzero and continually re-start the solver with a lower tolerance values until I get a satisfactory value of the infinity norm? Thanks, Greg ________________________________ This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Dec 6 10:58:22 2021 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 6 Dec 2021 11:58:22 -0500 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: References: Message-ID: This algorithm computes the residual norms indirectly (this is part of the "trick" of the algorithm to reduce the number of global reductions needed), thus they may not match the actual residual norm computed by || b - A x ||. How different are your values? Barry > On Dec 6, 2021, at 11:19 AM, Fischer, Greg A. via petsc-users wrote: > > Hello petsc-users, > > I would like to check convergence against the infinity norm, so I defined my own convergence test routine with KSPSetConvergenceTest. (I understand that it may be computationally expensive.) > > I would like to do this with the ?ibcgs? method. When I use KSPBuildResidual and calculate the NORM_2 against the output vector, I get a value that differs from the 2-norm that gets passed into the convergence test function. However, when I switch to the ?gcr? method, the value I calculate matches the function input value. > > Is the KSPBuildResidual function only compatible with a subset of the KSPType methods? If I want to evaluate convergence against the infinity norm, do I need to set KSPSetInitialGuessNonzero and continually re-start the solver with a lower tolerance values until I get a satisfactory value of the infinity norm? > > Thanks, > Greg > > > > This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Dec 6 11:00:58 2021 From: jed at jedbrown.org (Jed Brown) Date: Mon, 06 Dec 2021 10:00:58 -0700 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: References: Message-ID: <874k7ld5x1.fsf@jedbrown.org> "Fischer, Greg A. via petsc-users" writes: > Hello petsc-users, > > I would like to check convergence against the infinity norm, so I defined my own convergence test routine with KSPSetConvergenceTest. (I understand that it may be computationally expensive.) 
> > I would like to do this with the "ibcgs" method. When I use KSPBuildResidual and calculate the NORM_2 against the output vector, I get a value that differs from the 2-norm that gets passed into the convergence test function. However, when I switch to the "gcr" method, the value I calculate matches the function input value. IBCGS uses the preconditioned norm by default while GCR uses the unpreconditioned norm. You can use -ksp_norm_type unpreconditioned or KSPSetNormType() to make IBCGS use unpreconditioned. > Is the KSPBuildResidual function only compatible with a subset of the KSPType methods? If I want to evaluate convergence against the infinity norm, do I need to set KSPSetInitialGuessNonzero and continually re-start the solver with a lower tolerance values until I get a satisfactory value of the infinity norm? > > Thanks, > Greg > > > ________________________________ > > This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). From fischega at westinghouse.com Mon Dec 6 11:08:02 2021 From: fischega at westinghouse.com (Fischer, Greg A.) Date: Mon, 6 Dec 2021 17:08:02 +0000 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: References: Message-ID: Quite different: 2-NORM(residualVec): 9802864883082.29 2-NORM(ksp): 6.802405785457120E-015 Perhaps I?m doing something else wrong? From: Barry Smith Sent: Monday, December 6, 2021 11:58 AM To: Fischer, Greg A. Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] KSPBuildResidual and KSPType compatibility [External Email] This algorithm computes the residual norms indirectly (this is part of the "trick" of the algorithm to reduce the number of global reductions needed), thus they may not match the actual residual norm computed by || b - A x ||. How different are your values? Barry On Dec 6, 2021, at 11:19 AM, Fischer, Greg A. via petsc-users > wrote: Hello petsc-users, I would like to check convergence against the infinity norm, so I defined my own convergence test routine with KSPSetConvergenceTest. (I understand that it may be computationally expensive.) I would like to do this with the ?ibcgs? method. When I use KSPBuildResidual and calculate the NORM_2 against the output vector, I get a value that differs from the 2-norm that gets passed into the convergence test function. However, when I switch to the ?gcr? method, the value I calculate matches the function input value. Is the KSPBuildResidual function only compatible with a subset of the KSPType methods? If I want to evaluate convergence against the infinity norm, do I need to set KSPSetInitialGuessNonzero and continually re-start the solver with a lower tolerance values until I get a satisfactory value of the infinity norm? Thanks, Greg ________________________________ This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. 
The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). ________________________________ This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). -------------- next part -------------- An HTML attachment was scrubbed... URL: From mazumder at purdue.edu Mon Dec 6 11:16:52 2021 From: mazumder at purdue.edu (Sanjoy Kumar Mazumder) Date: Mon, 6 Dec 2021 17:16:52 +0000 Subject: [petsc-users] CV_CONV_FAILURE with TSSUNDIALS in PETSc Message-ID: Hi all, I am trying to solve a set of coupled stiff ODEs in parallel using TSSUNDIALS with SUNDIALS_BDF as 'TSSundialsSetType' in PETSc. I am using a sparse Jacobian matrix of type MATMPIAIJ with no preconditioner. It runs for some time with a very small initial timestep (~10^-18) and then terminates abruptly with the following error: [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. Is there anything I am missing out or not doing properly? Given below is the complete error that is showing up after the termination. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [5]PETSC ERROR: [CVODE ERROR] CVode [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [7]PETSC ERROR: [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. 
[CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [16]PETSC ERROR: [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [19]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [19]PETSC ERROR: Error in external library [19]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [19]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [19]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [19]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [19]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [19]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [19]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [19]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [19]PETSC ERROR: #4 User provided function() line 0 in User file [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Error in external library [0]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [0]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [0]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [0]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [0]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [0]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [0]PETSC ERROR: #4 User provided function() line 0 in User file [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Error in external library [1]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [1]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[1]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [1]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [1]PETSC ERROR: [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [2]PETSC ERROR: Error in external library [2]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [2]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [2]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [2]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [2]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [2]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [2]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [2]PETSC ERROR: #4 User provided function() line 0 in User file [3]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [3]PETSC ERROR: Error in external library [3]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [3]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [3]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [3]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [3]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [3]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [3]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [3]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [3]PETSC ERROR: #4 User provided function() line 0 in User file [4]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [4]PETSC ERROR: Error in external library [4]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [4]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [4]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [4]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [4]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [4]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c --------------------- Error Message -------------------------------------------------------------- [5]PETSC ERROR: Error in external library [5]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [5]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[5]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [5]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [5]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [5]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [5]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [5]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [5]PETSC ERROR: #4 User provided function() line 0 in User file --------------------- Error Message -------------------------------------------------------------- [7]PETSC ERROR: Error in external library [7]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [7]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [7]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [7]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [7]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [7]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [7]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [7]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [7]PETSC ERROR: #4 User provided function() line 0 in User file [8]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [8]PETSC ERROR: Error in external library [8]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [8]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [8]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [8]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [9]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [9]PETSC ERROR: Error in external library [9]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [9]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [9]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [9]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [9]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [9]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [9]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [9]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [10]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [10]PETSC ERROR: Error in external library [10]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [10]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[10]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [10]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [10]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [10]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [10]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [10]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [10]PETSC ERROR: #4 User provided function() line 0 in User file [11]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [11]PETSC ERROR: Error in external library [11]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [11]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [11]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [11]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [11]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [11]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [11]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [11]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [11]PETSC ERROR: #4 User provided function() line 0 in User file [12]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [12]PETSC ERROR: Error in external library [12]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [12]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [12]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [12]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [12]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [12]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [12]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [12]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [12]PETSC ERROR: #4 User provided function() line 0 in User file [14]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [14]PETSC ERROR: Error in external library [14]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [14]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[14]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [14]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [14]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [14]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [15]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [15]PETSC ERROR: Error in external library [15]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [15]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [15]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 --------------------- Error Message -------------------------------------------------------------- [16]PETSC ERROR: Error in external library [16]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [16]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [16]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [16]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [16]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [16]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [16]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [16]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [16]PETSC ERROR: #4 User provided function() line 0 in User file [17]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [17]PETSC ERROR: Error in external library [17]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [17]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [17]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [17]PETSC ERROR: [18]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [18]PETSC ERROR: Error in external library [18]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [18]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[18]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [18]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [18]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [17]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [17]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [17]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [17]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [17]PETSC ERROR: #4 User provided function() line 0 in User file Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [1]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [1]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [1]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [1]PETSC ERROR: #4 User provided function() line 0 in User file [4]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [4]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [4]PETSC ERROR: #4 User provided function() line 0 in User file [8]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [8]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [8]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [8]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [8]PETSC ERROR: #4 User provided function() line 0 in User file [9]PETSC ERROR: #4 User provided function() line 0 in User file [13]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [13]PETSC ERROR: Error in external library [13]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [13]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[13]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [13]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [13]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [13]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [13]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [13]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [13]PETSC ERROR: #4 User provided function() line 0 in User file [14]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [14]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [14]PETSC ERROR: #4 User provided function() line 0 in User file [15]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [15]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [15]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [15]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [15]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [15]PETSC ERROR: #4 User provided function() line 0 in User file Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [18]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [18]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [18]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [18]PETSC ERROR: #4 User provided function() line 0 in User file At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [6]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [6]PETSC ERROR: Error in external library [6]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [6]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [6]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [6]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [6]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [6]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [6]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [6]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [6]PETSC ERROR: #4 User provided function() line 0 in User file -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF with errorcode 76. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. 
-------------------------------------------------------------------------- [bell-a027.rcac.purdue.edu:29752] 15 more processes have sent help message help-mpi-api.txt / mpi-abort [bell-a027.rcac.purdue.edu:29752] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages Thanks With regards, Sanjoy -------------- next part -------------- An HTML attachment was scrubbed... URL: From fischega at westinghouse.com Mon Dec 6 11:23:39 2021 From: fischega at westinghouse.com (Fischer, Greg A.) Date: Mon, 6 Dec 2021 17:23:39 +0000 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: <874k7ld5x1.fsf@jedbrown.org> References: <874k7ld5x1.fsf@jedbrown.org> Message-ID: I tried your suggestion, but the values are still quite different. -----Original Message----- From: Jed Brown Sent: Monday, December 6, 2021 12:01 PM To: Fischer, Greg A. via petsc-users ; petsc-users at mcs.anl.gov Cc: Fischer, Greg A. Subject: Re: [petsc-users] KSPBuildResidual and KSPType compatibility [External Email] "Fischer, Greg A. via petsc-users" writes: > Hello petsc-users, > > I would like to check convergence against the infinity norm, so I defined my own convergence test routine with KSPSetConvergenceTest. (I understand that it may be computationally expensive.) > > I would like to do this with the "ibcgs" method. When I use KSPBuildResidual and calculate the NORM_2 against the output vector, I get a value that differs from the 2-norm that gets passed into the convergence test function. However, when I switch to the "gcr" method, the value I calculate matches the function input value. IBCGS uses the preconditioned norm by default while GCR uses the unpreconditioned norm. You can use -ksp_norm_type unpreconditioned or KSPSetNormType() to make IBCGS use unpreconditioned. > Is the KSPBuildResidual function only compatible with a subset of the KSPType methods? If I want to evaluate convergence against the infinity norm, do I need to set KSPSetInitialGuessNonzero and continually re-start the solver with a lower tolerance values until I get a satisfactory value of the infinity norm? > > Thanks, > Greg > > > ________________________________ > > This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). ________________________________ This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). 
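For reference, a convergence test of the kind described above can be written against KSPBuildResidual() roughly as follows. This is a minimal sketch, not code from this thread: the function name ConvergedInfNorm and the 1.e-8 tolerance are placeholders, and a production test would normally also honor the usual rtol/atol/maxits logic.

#include <petscksp.h>

/* Sketch of an infinity-norm convergence test built on KSPBuildResidual().
   The function name and the hard-coded tolerance are illustrative only. */
static PetscErrorCode ConvergedInfNorm(KSP ksp,PetscInt it,PetscReal rnorm,KSPConvergedReason *reason,void *ctx)
{
  Vec            r;
  PetscReal      rinf;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPBuildResidual(ksp,NULL,NULL,&r);CHKERRQ(ierr); /* assembles the true residual b - Ax */
  ierr = VecNorm(r,NORM_INFINITY,&rinf);CHKERRQ(ierr);     /* rnorm passed in is the 2-norm the KSP tracks; it is ignored here */
  ierr = VecDestroy(&r);CHKERRQ(ierr);                     /* KSPBuildResidual created r, so the caller destroys it */
  *reason = (rinf < 1.e-8) ? KSP_CONVERGED_ATOL : KSP_CONVERGED_ITERATING;
  PetscFunctionReturn(0);
}

The test would then be registered once on the solver with KSPSetConvergenceTest(ksp,ConvergedInfNorm,NULL,NULL).
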
From jed at jedbrown.org Mon Dec 6 11:30:33 2021 From: jed at jedbrown.org (Jed Brown) Date: Mon, 06 Dec 2021 10:30:33 -0700 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: References: <874k7ld5x1.fsf@jedbrown.org> Message-ID: <871r2pd4jq.fsf@jedbrown.org> Please try with -pc_type none. There is always a small difference due to using the recurrence, but it should be small so long as the Krylov basis is close to orthogonal. I'd note that if you're using this expensive convergence test, then IBCGS probably isn't helping over BCGS (the I part is to amortize some vector and reduction costs). It'd be worth comparing when you use normal BCGS. "Fischer, Greg A." writes: > I tried your suggestion, but the values are still quite different. > > -----Original Message----- > From: Jed Brown > Sent: Monday, December 6, 2021 12:01 PM > To: Fischer, Greg A. via petsc-users ; petsc-users at mcs.anl.gov > Cc: Fischer, Greg A. > Subject: Re: [petsc-users] KSPBuildResidual and KSPType compatibility > > [External Email] > > "Fischer, Greg A. via petsc-users" writes: > >> Hello petsc-users, >> >> I would like to check convergence against the infinity norm, so I defined my own convergence test routine with KSPSetConvergenceTest. (I understand that it may be computationally expensive.) >> >> I would like to do this with the "ibcgs" method. When I use KSPBuildResidual and calculate the NORM_2 against the output vector, I get a value that differs from the 2-norm that gets passed into the convergence test function. However, when I switch to the "gcr" method, the value I calculate matches the function input value. > > IBCGS uses the preconditioned norm by default while GCR uses the unpreconditioned norm. You can use -ksp_norm_type unpreconditioned or KSPSetNormType() to make IBCGS use unpreconditioned. > >> Is the KSPBuildResidual function only compatible with a subset of the KSPType methods? If I want to evaluate convergence against the infinity norm, do I need to set KSPSetInitialGuessNonzero and continually re-start the solver with a lower tolerance values until I get a satisfactory value of the infinity norm? >> >> Thanks, >> Greg >> >> >> ________________________________ >> >> This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). > > ________________________________ > > This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). 
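For reference, the comparison suggested here can also be set up in code rather than with command-line options. The following is a minimal sketch (SetupComparisonRun is an illustrative name, and the KSP is assumed to exist already); it is equivalent to running with -ksp_type bcgs -ksp_norm_type unpreconditioned -pc_type none.

#include <petscksp.h>

/* Sketch: plain BCGS, unpreconditioned residual norm, no preconditioner,
   so the norm reported by the recurrence can be compared directly with the
   true residual obtained from KSPBuildResidual(). */
PetscErrorCode SetupComparisonRun(KSP ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPSetType(ksp,KSPBCGS);CHKERRQ(ierr);                       /* plain BCGS instead of IBCGS */
  ierr = KSPSetNormType(ksp,KSP_NORM_UNPRECONDITIONED);CHKERRQ(ierr); /* track ||b - Ax||, not the preconditioned norm */
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCNONE);CHKERRQ(ierr);                          /* isolate the Krylov recurrence from the preconditioner */
  PetscFunctionReturn(0);
}
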
From knepley at gmail.com Mon Dec 6 12:00:49 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Dec 2021 13:00:49 -0500 Subject: [petsc-users] CV_CONV_FAILURE with TSSUNDIALS in PETSc In-Reply-To: References: Message-ID: On Mon, Dec 6, 2021 at 12:17 PM Sanjoy Kumar Mazumder wrote: > Hi all, > > I am trying to solve a set of coupled stiff ODEs in parallel using > TSSUNDIALS with SUNDIALS_BDF as 'TSSundialsSetType' in PETSc. I am using a > sparse Jacobian matrix of type MATMPIAIJ with no preconditioner. It runs > for some time with a very small initial timestep (~10^-18) and then > terminates abruptly with the following error: > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > Is there anything I am missing out or not doing properly? Given below is > the complete error that is showing up after the termination. > It is hard to know for us. BDF is implicit, so CVODE has to solve a (non)linear system, and it looks like this is what fails. I would send it to the CVODE team. Alternatively, you can run with -ts_type bdf and we would have more information to help you with. Thanks, Matt > > ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > [5]PETSC ERROR: > [CVODE ERROR] CVode > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > [7]PETSC ERROR: > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. 
> [The quoted log continues with the same CV_CONV_FAILURE message and identical TSStep_Sundials()/TSStep()/TSSolve() traces from the remaining MPI ranks, ending with MPI_ABORT on errorcode 76; the full output appears in the original post above.]
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > [bell-a027.rcac.purdue.edu:29752] 15 more processes have sent help > message help-mpi-api.txt / mpi-abort > [bell-a027.rcac.purdue.edu:29752] Set MCA parameter > "orte_base_help_aggregate" to 0 to see all help / error messages > > Thanks > > With regards, > Sanjoy > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 6 12:02:20 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Dec 2021 13:02:20 -0500 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: > Hello PETSc users, > > This email is a duplicata of this gitlab issue > , sorry for any > inconvenience caused. > > I want to compute a PETSc vector in real mode, than perform calculations > with it in complex mode. I want as much of this process to be parallel as > possible. Right now, I compile PETSc in real mode, compute my vector and > save it to a file, then switch to complex mode, read it, and move on. > > This creates unexpected behaviour using MPIIO, so on Lisandro Dalcinl's > advice I'm moving to HDF5 format. My code is as follows (taking inspiration > from petsc4py doc > , > a bitbucket example > > and another one > , > all top Google results for 'petsc hdf5') : > >> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD)q.load(viewer)q.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD) >> >> > This crashes my code. I obtain traceback : > >> File "/home/shared/code.py", line 121, in Load viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5petsc4py.PETSc.Error: error code 86[0] PetscViewerSetType() at /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442[0] Unknown type. Check for miss-spelling or missing package: https://petsc.org/release/install/install/#external-packages[0] Unknown PetscViewer type given: hdf5 >> >> This means that PETSc has not been configured with HDF5 (--with-hdf5 or --download-hdf5), so the container should be updated. THanks, Matt > I have petsc4py 3.16 from this docker container > (list of dependencies > > include PETSc and petsc4py). > > I'm pretty sure this is not intended behaviour. Any insight as to how to > fix this issue (I tried running ./configure --with-hdf5 to no avail) or > more generally to perform this jiggling between real and complex would be > much appreciated, > > Kind regards. > > Quentin > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fischega at westinghouse.com Mon Dec 6 12:07:13 2021 From: fischega at westinghouse.com (Fischer, Greg A.) 
Date: Mon, 6 Dec 2021 18:07:13 +0000 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: <871r2pd4jq.fsf@jedbrown.org> References: <874k7ld5x1.fsf@jedbrown.org> <871r2pd4jq.fsf@jedbrown.org> Message-ID: With -pc_type none, my value matches the ksp value, but it no longer converges. -----Original Message----- From: Jed Brown Sent: Monday, December 6, 2021 12:31 PM To: Fischer, Greg A. ; Fischer, Greg A. via petsc-users ; petsc-users at mcs.anl.gov Cc: Fischer, Greg A. Subject: RE: [petsc-users] KSPBuildResidual and KSPType compatibility [External Email] Please try with -pc_type none. There is always a small difference due to using the recurrence, but it should be small so long as the Krylov basis is close to orthogonal. I'd note that if you're using this expensive convergence test, then IBCGS probably isn't helping over BCGS (the I part is to amortize some vector and reduction costs). It'd be worth comparing when you use normal BCGS. "Fischer, Greg A." writes: > I tried your suggestion, but the values are still quite different. > > -----Original Message----- > From: Jed Brown > Sent: Monday, December 6, 2021 12:01 PM > To: Fischer, Greg A. via petsc-users ; petsc-users at mcs.anl.gov > Cc: Fischer, Greg A. > Subject: Re: [petsc-users] KSPBuildResidual and KSPType compatibility > > [External Email] > > "Fischer, Greg A. via petsc-users" writes: > >> Hello petsc-users, >> >> I would like to check convergence against the infinity norm, so I defined my own convergence test routine with KSPSetConvergenceTest. (I understand that it may be computationally expensive.) >> >> I would like to do this with the "ibcgs" method. When I use KSPBuildResidual and calculate the NORM_2 against the output vector, I get a value that differs from the 2-norm that gets passed into the convergence test function. However, when I switch to the "gcr" method, the value I calculate matches the function input value. > > IBCGS uses the preconditioned norm by default while GCR uses the unpreconditioned norm. You can use -ksp_norm_type unpreconditioned or KSPSetNormType() to make IBCGS use unpreconditioned. > >> Is the KSPBuildResidual function only compatible with a subset of the KSPType methods? If I want to evaluate convergence against the infinity norm, do I need to set KSPSetInitialGuessNonzero and continually re-start the solver with a lower tolerance values until I get a satisfactory value of the infinity norm? >> >> Thanks, >> Greg >> >> >> ________________________________ >> >> This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). > > ________________________________ > > This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. 
If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). ________________________________ This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). From quentin.chevalier at polytechnique.edu Mon Dec 6 12:08:08 2021 From: quentin.chevalier at polytechnique.edu (Quentin Chevalier) Date: Mon, 6 Dec 2021 19:08:08 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: Hello Matthew and thanks for your quick response. I'm afraid I did try to snoop around the container and rerun PETSc's configure with the --with-hdf5 option, to absolutely no avail. I didn't see any errors during config or make, but it failed the tests (which aren't included in the minimal container I suppose) Quentin Quentin CHEVALIER ? IA parcours recherche LadHyX - Ecole polytechnique __________ On Mon, 6 Dec 2021 at 19:02, Matthew Knepley wrote: > > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier wrote: >> >> Hello PETSc users, >> >> This email is a duplicata of this gitlab issue, sorry for any inconvenience caused. >> >> I want to compute a PETSc vector in real mode, than perform calculations with it in complex mode. I want as much of this process to be parallel as possible. Right now, I compile PETSc in real mode, compute my vector and save it to a file, then switch to complex mode, read it, and move on. >> >> This creates unexpected behaviour using MPIIO, so on Lisandro Dalcinl's advice I'm moving to HDF5 format. My code is as follows (taking inspiration from petsc4py doc, a bitbucket example and another one, all top Google results for 'petsc hdf5') : >>> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >>> q.load(viewer) >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD) >> >> >> This crashes my code. I obtain traceback : >>> >>> File "/home/shared/code.py", line 121, in Load >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >>> File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 >>> petsc4py.PETSc.Error: error code 86 >>> [0] PetscViewerSetType() at /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >>> [0] Unknown type. Check for miss-spelling or missing package: https://petsc.org/release/install/install/#external-packages >>> [0] Unknown PetscViewer type given: hdf5 > > This means that PETSc has not been configured with HDF5 (--with-hdf5 or --download-hdf5), so the container should be updated. > > THanks, > > Matt > >> >> I have petsc4py 3.16 from this docker container (list of dependencies include PETSc and petsc4py). >> >> I'm pretty sure this is not intended behaviour. Any insight as to how to fix this issue (I tried running ./configure --with-hdf5 to no avail) or more generally to perform this jiggling between real and complex would be much appreciated, >> >> Kind regards. 
>> >> Quentin > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From knepley at gmail.com Mon Dec 6 12:09:23 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Dec 2021 13:09:23 -0500 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: > Hello Matthew and thanks for your quick response. > > I'm afraid I did try to snoop around the container and rerun PETSc's > configure with the --with-hdf5 option, to absolutely no avail. > > I didn't see any errors during config or make, but it failed the tests > (which aren't included in the minimal container I suppose) > Failed which tests? What was the error? Thanks, Matt > Quentin > > > > Quentin CHEVALIER ? IA parcours recherche > > LadHyX - Ecole polytechnique > > __________ > > > > On Mon, 6 Dec 2021 at 19:02, Matthew Knepley wrote: > > > > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> > >> Hello PETSc users, > >> > >> This email is a duplicata of this gitlab issue, sorry for any > inconvenience caused. > >> > >> I want to compute a PETSc vector in real mode, than perform > calculations with it in complex mode. I want as much of this process to be > parallel as possible. Right now, I compile PETSc in real mode, compute my > vector and save it to a file, then switch to complex mode, read it, and > move on. > >> > >> This creates unexpected behaviour using MPIIO, so on Lisandro Dalcinl's > advice I'm moving to HDF5 format. My code is as follows (taking inspiration > from petsc4py doc, a bitbucket example and another one, all top Google > results for 'petsc hdf5') : > >>> > >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) > >>> q.load(viewer) > >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, > mode=PETSc.ScatterMode.FORWARD) > >> > >> > >> This crashes my code. I obtain traceback : > >>> > >>> File "/home/shared/code.py", line 121, in Load > >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) > >>> File "PETSc/Viewer.pyx", line 182, in > petsc4py.PETSc.Viewer.createHDF5 > >>> petsc4py.PETSc.Error: error code 86 > >>> [0] PetscViewerSetType() at > /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 > >>> [0] Unknown type. Check for miss-spelling or missing package: > https://petsc.org/release/install/install/#external-packages > >>> [0] Unknown PetscViewer type given: hdf5 > > > > This means that PETSc has not been configured with HDF5 (--with-hdf5 or > --download-hdf5), so the container should be updated. > > > > THanks, > > > > Matt > > > >> > >> I have petsc4py 3.16 from this docker container (list of dependencies > include PETSc and petsc4py). > >> > >> I'm pretty sure this is not intended behaviour. Any insight as to how > to fix this issue (I tried running ./configure --with-hdf5 to no avail) or > more generally to perform this jiggling between real and complex would be > much appreciated, > >> > >> Kind regards. > >> > >> Quentin > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 6 12:10:13 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Dec 2021 13:10:13 -0500 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: References: <874k7ld5x1.fsf@jedbrown.org> <871r2pd4jq.fsf@jedbrown.org> Message-ID: On Mon, Dec 6, 2021 at 1:07 PM Fischer, Greg A. via petsc-users < petsc-users at mcs.anl.gov> wrote: > With -pc_type none, my value matches the ksp value, but it no longer > converges. > So now it seems like the check with unpreconditioned residuals was not right somehow. THanks, Matt > -----Original Message----- > From: Jed Brown > Sent: Monday, December 6, 2021 12:31 PM > To: Fischer, Greg A. ; Fischer, Greg A. via > petsc-users ; petsc-users at mcs.anl.gov > Cc: Fischer, Greg A. > Subject: RE: [petsc-users] KSPBuildResidual and KSPType compatibility > > [External Email] > > Please try with -pc_type none. > > There is always a small difference due to using the recurrence, but it > should be small so long as the Krylov basis is close to orthogonal. > > I'd note that if you're using this expensive convergence test, then IBCGS > probably isn't helping over BCGS (the I part is to amortize some vector and > reduction costs). It'd be worth comparing when you use normal BCGS. > > "Fischer, Greg A." writes: > > > I tried your suggestion, but the values are still quite different. > > > > -----Original Message----- > > From: Jed Brown > > Sent: Monday, December 6, 2021 12:01 PM > > To: Fischer, Greg A. via petsc-users ; > petsc-users at mcs.anl.gov > > Cc: Fischer, Greg A. > > Subject: Re: [petsc-users] KSPBuildResidual and KSPType compatibility > > > > [External Email] > > > > "Fischer, Greg A. via petsc-users" writes: > > > >> Hello petsc-users, > >> > >> I would like to check convergence against the infinity norm, so I > defined my own convergence test routine with KSPSetConvergenceTest. (I > understand that it may be computationally expensive.) > >> > >> I would like to do this with the "ibcgs" method. When I use > KSPBuildResidual and calculate the NORM_2 against the output vector, I get > a value that differs from the 2-norm that gets passed into the convergence > test function. However, when I switch to the "gcr" method, the value I > calculate matches the function input value. > > > > IBCGS uses the preconditioned norm by default while GCR uses the > unpreconditioned norm. You can use -ksp_norm_type unpreconditioned or > KSPSetNormType() to make IBCGS use unpreconditioned. > > > >> Is the KSPBuildResidual function only compatible with a subset of the > KSPType methods? If I want to evaluate convergence against the infinity > norm, do I need to set KSPSetInitialGuessNonzero and continually re-start > the solver with a lower tolerance values until I get a satisfactory value > of the infinity norm? > >> > >> Thanks, > >> Greg > >> > >> > >> ________________________________ > >> > >> This e-mail may contain proprietary information of the sending > organization. Any unauthorized or improper disclosure, copying, > distribution, or use of the contents of this e-mail and attached > document(s) is prohibited. 
The information contained in this e-mail and > attached document(s) is intended only for the personal and private use of > the recipient(s) named above. If you have received this communication in > error, please notify the sender immediately by email and delete the > original e-mail and attached document(s). > > > > ________________________________ > > > > This e-mail may contain proprietary information of the sending > organization. Any unauthorized or improper disclosure, copying, > distribution, or use of the contents of this e-mail and attached > document(s) is prohibited. The information contained in this e-mail and > attached document(s) is intended only for the personal and private use of > the recipient(s) named above. If you have received this communication in > error, please notify the sender immediately by email and delete the > original e-mail and attached document(s). > > ________________________________ > > This e-mail may contain proprietary information of the sending > organization. Any unauthorized or improper disclosure, copying, > distribution, or use of the contents of this e-mail and attached > document(s) is prohibited. The information contained in this e-mail and > attached document(s) is intended only for the personal and private use of > the recipient(s) named above. If you have received this communication in > error, please notify the sender immediately by email and delete the > original e-mail and attached document(s). > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From quentin.chevalier at polytechnique.edu Mon Dec 6 12:22:01 2021 From: quentin.chevalier at polytechnique.edu (Quentin Chevalier) Date: Mon, 6 Dec 2021 19:22:01 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: It failed all of the tests included in `make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 check`, with the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No such file or directory` I am therefore fairly confident this a "file absence" problem, and not a compilation problem. I repeat that there was no error at compilation stage. The final stage did present `gmake[3]: Nothing to be done for 'libs'.` but that's all. Again, running `./configure --with-hdf5` followed by a `make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` does not change the problem. I get the same error at the same position as before. I will comment I am running on OpenSUSE. Quentin On Mon, 6 Dec 2021 at 19:09, Matthew Knepley wrote: > > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier wrote: >> >> Hello Matthew and thanks for your quick response. >> >> I'm afraid I did try to snoop around the container and rerun PETSc's >> configure with the --with-hdf5 option, to absolutely no avail. >> >> I didn't see any errors during config or make, but it failed the tests >> (which aren't included in the minimal container I suppose) > > > Failed which tests? What was the error? > > Thanks, > > Matt > >> >> Quentin >> >> >> >> Quentin CHEVALIER ? 
IA parcours recherche >> >> LadHyX - Ecole polytechnique >> >> __________ >> >> >> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley wrote: >> > >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier wrote: >> >> >> >> Hello PETSc users, >> >> >> >> This email is a duplicata of this gitlab issue, sorry for any inconvenience caused. >> >> >> >> I want to compute a PETSc vector in real mode, than perform calculations with it in complex mode. I want as much of this process to be parallel as possible. Right now, I compile PETSc in real mode, compute my vector and save it to a file, then switch to complex mode, read it, and move on. >> >> >> >> This creates unexpected behaviour using MPIIO, so on Lisandro Dalcinl's advice I'm moving to HDF5 format. My code is as follows (taking inspiration from petsc4py doc, a bitbucket example and another one, all top Google results for 'petsc hdf5') : >> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >> >>> q.load(viewer) >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD) >> >> >> >> >> >> This crashes my code. I obtain traceback : >> >>> >> >>> File "/home/shared/code.py", line 121, in Load >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >> >>> File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 >> >>> petsc4py.PETSc.Error: error code 86 >> >>> [0] PetscViewerSetType() at /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >> >>> [0] Unknown type. Check for miss-spelling or missing package: https://petsc.org/release/install/install/#external-packages >> >>> [0] Unknown PetscViewer type given: hdf5 >> > >> > This means that PETSc has not been configured with HDF5 (--with-hdf5 or --download-hdf5), so the container should be updated. >> > >> > THanks, >> > >> > Matt >> > >> >> >> >> I have petsc4py 3.16 from this docker container (list of dependencies include PETSc and petsc4py). >> >> >> >> I'm pretty sure this is not intended behaviour. Any insight as to how to fix this issue (I tried running ./configure --with-hdf5 to no avail) or more generally to perform this jiggling between real and complex would be much appreciated, >> >> >> >> Kind regards. >> >> >> >> Quentin >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> > -- Norbert Wiener >> > >> > https://www.cse.buffalo.edu/~knepley/ > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From knepley at gmail.com Mon Dec 6 12:24:39 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Dec 2021 13:24:39 -0500 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: > It failed all of the tests included in `make > PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 check`, with > the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No such file > or directory` > > I am therefore fairly confident this a "file absence" problem, and not > a compilation problem. > > I repeat that there was no error at compilation stage. The final stage > did present `gmake[3]: Nothing to be done for 'libs'.` but that's all. 
> > Again, running `./configure --with-hdf5` followed by a `make > PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` does not > change the problem. I get the same error at the same position as > before. > If you reconfigured and rebuilt, it is impossible to get the same error, so a) You did not reconfigure b) Your new build is somewhere else on the machine Thanks, Matt > I will comment I am running on OpenSUSE. > > Quentin > > On Mon, 6 Dec 2021 at 19:09, Matthew Knepley wrote: > > > > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> > >> Hello Matthew and thanks for your quick response. > >> > >> I'm afraid I did try to snoop around the container and rerun PETSc's > >> configure with the --with-hdf5 option, to absolutely no avail. > >> > >> I didn't see any errors during config or make, but it failed the tests > >> (which aren't included in the minimal container I suppose) > > > > > > Failed which tests? What was the error? > > > > Thanks, > > > > Matt > > > >> > >> Quentin > >> > >> > >> > >> Quentin CHEVALIER ? IA parcours recherche > >> > >> LadHyX - Ecole polytechnique > >> > >> __________ > >> > >> > >> > >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley wrote: > >> > > >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> >> > >> >> Hello PETSc users, > >> >> > >> >> This email is a duplicata of this gitlab issue, sorry for any > inconvenience caused. > >> >> > >> >> I want to compute a PETSc vector in real mode, than perform > calculations with it in complex mode. I want as much of this process to be > parallel as possible. Right now, I compile PETSc in real mode, compute my > vector and save it to a file, then switch to complex mode, read it, and > move on. > >> >> > >> >> This creates unexpected behaviour using MPIIO, so on Lisandro > Dalcinl's advice I'm moving to HDF5 format. My code is as follows (taking > inspiration from petsc4py doc, a bitbucket example and another one, all top > Google results for 'petsc hdf5') : > >> >>> > >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) > >> >>> q.load(viewer) > >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, > mode=PETSc.ScatterMode.FORWARD) > >> >> > >> >> > >> >> This crashes my code. I obtain traceback : > >> >>> > >> >>> File "/home/shared/code.py", line 121, in Load > >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) > >> >>> File "PETSc/Viewer.pyx", line 182, in > petsc4py.PETSc.Viewer.createHDF5 > >> >>> petsc4py.PETSc.Error: error code 86 > >> >>> [0] PetscViewerSetType() at > /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 > >> >>> [0] Unknown type. Check for miss-spelling or missing package: > https://petsc.org/release/install/install/#external-packages > >> >>> [0] Unknown PetscViewer type given: hdf5 > >> > > >> > This means that PETSc has not been configured with HDF5 (--with-hdf5 > or --download-hdf5), so the container should be updated. > >> > > >> > THanks, > >> > > >> > Matt > >> > > >> >> > >> >> I have petsc4py 3.16 from this docker container (list of > dependencies include PETSc and petsc4py). > >> >> > >> >> I'm pretty sure this is not intended behaviour. Any insight as to > how to fix this issue (I tried running ./configure --with-hdf5 to no avail) > or more generally to perform this jiggling between real and complex would > be much appreciated, > >> >> > >> >> Kind regards. 
> >> >> > >> >> Quentin > >> > > >> > > >> > > >> > -- > >> > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > >> > -- Norbert Wiener > >> > > >> > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From quentin.chevalier at polytechnique.edu Mon Dec 6 12:42:15 2021 From: quentin.chevalier at polytechnique.edu (Quentin Chevalier) Date: Mon, 6 Dec 2021 19:42:15 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: The PETSC_DIR exactly corresponds to the previous one, so I guess that rules option b) out, except if a specific option is required to overwrite a previous installation of PETSc. As for a), well I thought reconfiguring pretty direct, you're welcome to give me a hint as to what could be wrong in the following process. Steps to reproduce this behaviour are as follows : * Run this docker container * Do : python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5( 'dummy.h5')" After you get the error Unknown PetscViewer type, feel free to try : * cd /usr/local/petsc/ * ./configure --with-hfd5 * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all Then repeat the MWE and observe absolutely no behavioural change whatsoever. I'm afraid I don't know PETSc well enough to be surprised by that. Quentin [image: cid:image003.jpg at 01D690CB.3B3FDC10] Quentin CHEVALIER ? IA parcours recherche LadHyX - Ecole polytechnique __________ On Mon, 6 Dec 2021 at 19:24, Matthew Knepley wrote: > On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> It failed all of the tests included in `make >> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 check`, with >> the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No such file >> or directory` >> >> I am therefore fairly confident this a "file absence" problem, and not >> a compilation problem. >> >> I repeat that there was no error at compilation stage. The final stage >> did present `gmake[3]: Nothing to be done for 'libs'.` but that's all. >> >> Again, running `./configure --with-hdf5` followed by a `make >> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` does not >> change the problem. I get the same error at the same position as >> before. >> > > If you reconfigured and rebuilt, it is impossible to get the same error, so > > a) You did not reconfigure > > b) Your new build is somewhere else on the machine > > Thanks, > > Matt > > >> I will comment I am running on OpenSUSE. >> >> Quentin >> >> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley wrote: >> > >> > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> Hello Matthew and thanks for your quick response. >> >> >> >> I'm afraid I did try to snoop around the container and rerun PETSc's >> >> configure with the --with-hdf5 option, to absolutely no avail. 
>> >> >> >> I didn't see any errors during config or make, but it failed the tests >> >> (which aren't included in the minimal container I suppose) >> > >> > >> > Failed which tests? What was the error? >> > >> > Thanks, >> > >> > Matt >> > >> >> >> >> Quentin >> >> >> >> >> >> >> >> Quentin CHEVALIER ? IA parcours recherche >> >> >> >> LadHyX - Ecole polytechnique >> >> >> >> __________ >> >> >> >> >> >> >> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley >> wrote: >> >> > >> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> >> >> Hello PETSc users, >> >> >> >> >> >> This email is a duplicata of this gitlab issue, sorry for any >> inconvenience caused. >> >> >> >> >> >> I want to compute a PETSc vector in real mode, than perform >> calculations with it in complex mode. I want as much of this process to be >> parallel as possible. Right now, I compile PETSc in real mode, compute my >> vector and save it to a file, then switch to complex mode, read it, and >> move on. >> >> >> >> >> >> This creates unexpected behaviour using MPIIO, so on Lisandro >> Dalcinl's advice I'm moving to HDF5 format. My code is as follows (taking >> inspiration from petsc4py doc, a bitbucket example and another one, all top >> Google results for 'petsc hdf5') : >> >> >>> >> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >> >> >>> q.load(viewer) >> >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, >> mode=PETSc.ScatterMode.FORWARD) >> >> >> >> >> >> >> >> >> This crashes my code. I obtain traceback : >> >> >>> >> >> >>> File "/home/shared/code.py", line 121, in Load >> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >> >> >>> File "PETSc/Viewer.pyx", line 182, in >> petsc4py.PETSc.Viewer.createHDF5 >> >> >>> petsc4py.PETSc.Error: error code 86 >> >> >>> [0] PetscViewerSetType() at >> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >> >> >>> [0] Unknown type. Check for miss-spelling or missing package: >> https://petsc.org/release/install/install/#external-packages >> >> >>> [0] Unknown PetscViewer type given: hdf5 >> >> > >> >> > This means that PETSc has not been configured with HDF5 (--with-hdf5 >> or --download-hdf5), so the container should be updated. >> >> > >> >> > THanks, >> >> > >> >> > Matt >> >> > >> >> >> >> >> >> I have petsc4py 3.16 from this docker container (list of >> dependencies include PETSc and petsc4py). >> >> >> >> >> >> I'm pretty sure this is not intended behaviour. Any insight as to >> how to fix this issue (I tried running ./configure --with-hdf5 to no avail) >> or more generally to perform this jiggling between real and complex would >> be much appreciated, >> >> >> >> >> >> Kind regards. >> >> >> >> >> >> Quentin >> >> > >> >> > >> >> > >> >> > -- >> >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> >> > -- Norbert Wiener >> >> > >> >> > https://www.cse.buffalo.edu/~knepley/ >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> > >> > https://www.cse.buffalo.edu/~knepley/ >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 2044 bytes Desc: not available URL: From mazumder at purdue.edu Mon Dec 6 13:04:20 2021 From: mazumder at purdue.edu (Sanjoy Kumar Mazumder) Date: Mon, 6 Dec 2021 19:04:20 +0000 Subject: [petsc-users] CV_CONV_FAILURE with TSSUNDIALS in PETSc In-Reply-To: References: Message-ID: Thank you for your suggestion. I will look into the documentation of ts_type bdf and try implementing the same. However, it seems like there is not much example on TSBDF available. If you can kindly share with me an example on the implementation of TSBDF it would be helpful. Thank You Sanjoy Get Outlook for Android ________________________________ From: Matthew Knepley Sent: Monday, December 6, 2021 1:00:49 PM To: Sanjoy Kumar Mazumder Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] CV_CONV_FAILURE with TSSUNDIALS in PETSc On Mon, Dec 6, 2021 at 12:17 PM Sanjoy Kumar Mazumder > wrote: Hi all, I am trying to solve a set of coupled stiff ODEs in parallel using TSSUNDIALS with SUNDIALS_BDF as 'TSSundialsSetType' in PETSc. I am using a sparse Jacobian matrix of type MATMPIAIJ with no preconditioner. It runs for some time with a very small initial timestep (~10^-18) and then terminates abruptly with the following error: [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. Is there anything I am missing out or not doing properly? Given below is the complete error that is showing up after the termination. It is hard to know for us. BDF is implicit, so CVODE has to solve a (non)linear system, and it looks like this is what fails. I would send it to the CVODE team. Alternatively, you can run with -ts_type bdf and we would have more information to help you with. Thanks, Matt ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [5]PETSC ERROR: [CVODE ERROR] CVode [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [7]PETSC ERROR: [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. 
[CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [16]PETSC ERROR: [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [CVODE ERROR] CVode At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [19]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [19]PETSC ERROR: Error in external library [19]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [19]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [19]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [19]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [19]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [19]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [19]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [19]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [19]PETSC ERROR: #4 User provided function() line 0 in User file [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Error in external library [0]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [0]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [0]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [0]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [0]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [0]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [0]PETSC ERROR: #4 User provided function() line 0 in User file [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Error in external library [1]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [1]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [1]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [1]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [1]PETSC ERROR: [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [2]PETSC ERROR: Error in external library [2]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [2]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [2]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [2]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [2]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [2]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [2]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [2]PETSC ERROR: #4 User provided function() line 0 in User file [3]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [3]PETSC ERROR: Error in external library [3]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [3]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [3]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [3]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [3]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [3]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [3]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [3]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [3]PETSC ERROR: #4 User provided function() line 0 in User file [4]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [4]PETSC ERROR: Error in external library [4]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [4]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[4]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [4]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [4]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [4]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c --------------------- Error Message -------------------------------------------------------------- [5]PETSC ERROR: Error in external library [5]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [5]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [5]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [5]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [5]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [5]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [5]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [5]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [5]PETSC ERROR: #4 User provided function() line 0 in User file --------------------- Error Message -------------------------------------------------------------- [7]PETSC ERROR: Error in external library [7]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [7]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [7]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [7]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [7]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [7]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [7]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [7]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [7]PETSC ERROR: #4 User provided function() line 0 in User file [8]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [8]PETSC ERROR: Error in external library [8]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [8]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [8]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [8]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [9]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [9]PETSC ERROR: Error in external library [9]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [9]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[9]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [9]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [9]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [9]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [9]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [9]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [10]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [10]PETSC ERROR: Error in external library [10]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [10]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [10]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [10]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [10]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [10]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [10]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [10]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [10]PETSC ERROR: #4 User provided function() line 0 in User file [11]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [11]PETSC ERROR: Error in external library [11]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [11]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [11]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [11]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [11]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [11]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [11]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [11]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [11]PETSC ERROR: #4 User provided function() line 0 in User file [12]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [12]PETSC ERROR: Error in external library [12]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [12]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[12]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [12]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [12]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [12]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [12]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [12]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [12]PETSC ERROR: #4 User provided function() line 0 in User file [14]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [14]PETSC ERROR: Error in external library [14]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [14]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [14]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [14]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [14]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [14]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [15]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [15]PETSC ERROR: Error in external library [15]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [15]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [15]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 --------------------- Error Message -------------------------------------------------------------- [16]PETSC ERROR: Error in external library [16]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [16]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [16]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [16]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [16]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [16]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [16]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [16]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [16]PETSC ERROR: #4 User provided function() line 0 in User file [17]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [17]PETSC ERROR: Error in external library [17]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [17]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [17]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [17]PETSC ERROR: [18]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [18]PETSC ERROR: Error in external library [18]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [18]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[18]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [18]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [18]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [17]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [17]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [17]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [17]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [17]PETSC ERROR: #4 User provided function() line 0 in User file Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [1]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [1]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [1]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [1]PETSC ERROR: #4 User provided function() line 0 in User file [4]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [4]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [4]PETSC ERROR: #4 User provided function() line 0 in User file [8]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [8]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [8]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [8]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [8]PETSC ERROR: #4 User provided function() line 0 in User file [9]PETSC ERROR: #4 User provided function() line 0 in User file [13]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [13]PETSC ERROR: Error in external library [13]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [13]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[13]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [13]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [13]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [13]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [13]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [13]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [13]PETSC ERROR: #4 User provided function() line 0 in User file [14]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [14]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [14]PETSC ERROR: #4 User provided function() line 0 in User file [15]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [15]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [15]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [15]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [15]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [15]PETSC ERROR: #4 User provided function() line 0 in User file Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [18]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [18]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [18]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [18]PETSC ERROR: #4 User provided function() line 0 in User file At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test failed repeatedly or with |h| = hmin. [6]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [6]PETSC ERROR: Error in external library [6]PETSC ERROR: CVode() fails, CV_CONV_FAILURE [6]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [6]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 [6]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 [6]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack --download-sundials=yes --with-debugging [6]PETSC ERROR: #1 TSStep_Sundials() line 156 in /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c [6]PETSC ERROR: #2 TSStep() line 3759 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [6]PETSC ERROR: #3 TSSolve() line 4156 in /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c [6]PETSC ERROR: #4 User provided function() line 0 in User file -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF with errorcode 76. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. 
-------------------------------------------------------------------------- [bell-a027.rcac.purdue.edu:29752] 15 more processes have sent help message help-mpi-api.txt / mpi-abort [bell-a027.rcac.purdue.edu:29752] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages Thanks With regards, Sanjoy -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at joliv.et Mon Dec 6 13:04:25 2021 From: pierre at joliv.et (Pierre Jolivet) Date: Mon, 6 Dec 2021 20:04:25 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier wrote: > > The PETSC_DIR exactly corresponds to the previous one, so I guess that rules option b) out, except if a specific option is required to overwrite a previous installation of PETSc. As for a), well I thought reconfiguring pretty direct, you're welcome to give me a hint as to what could be wrong in the following process. > > Steps to reproduce this behaviour are as follows : > * Run this docker container > * Do : python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('dummy.h5')" > > After you get the error Unknown PetscViewer type, feel free to try : > > * cd /usr/local/petsc/ > * ./configure --with-hfd5 It?s hdf5, not hfd5. It?s PETSC_ARCH, not PETSC-ARCH. PETSC_ARCH is missing from your configure line. Thanks, Pierre > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all > > Then repeat the MWE and observe absolutely no behavioural change whatsoever. I'm afraid I don't know PETSc well enough to be surprised by that. > > Quentin > > > > > Quentin CHEVALIER ? IA parcours recherche > LadHyX - Ecole polytechnique > > __________ > > > > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley > wrote: > On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier > wrote: > It failed all of the tests included in `make > PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 check`, with > the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No such file > or directory` > > I am therefore fairly confident this a "file absence" problem, and not > a compilation problem. > > I repeat that there was no error at compilation stage. The final stage > did present `gmake[3]: Nothing to be done for 'libs'.` but that's all. > > Again, running `./configure --with-hdf5` followed by a `make > PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` does not > change the problem. I get the same error at the same position as > before. > > If you reconfigured and rebuilt, it is impossible to get the same error, so > > a) You did not reconfigure > > b) Your new build is somewhere else on the machine > > Thanks, > > Matt > > I will comment I am running on OpenSUSE. > > Quentin > > On Mon, 6 Dec 2021 at 19:09, Matthew Knepley > wrote: > > > > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier > wrote: > >> > >> Hello Matthew and thanks for your quick response. > >> > >> I'm afraid I did try to snoop around the container and rerun PETSc's > >> configure with the --with-hdf5 option, to absolutely no avail. > >> > >> I didn't see any errors during config or make, but it failed the tests > >> (which aren't included in the minimal container I suppose) > > > > > > Failed which tests? What was the error? 
> > > > Thanks, > > > > Matt > > > >> > >> Quentin > >> > >> > >> > >> Quentin CHEVALIER ? IA parcours recherche > >> > >> LadHyX - Ecole polytechnique > >> > >> __________ > >> > >> > >> > >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley > wrote: > >> > > >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier > wrote: > >> >> > >> >> Hello PETSc users, > >> >> > >> >> This email is a duplicata of this gitlab issue, sorry for any inconvenience caused. > >> >> > >> >> I want to compute a PETSc vector in real mode, than perform calculations with it in complex mode. I want as much of this process to be parallel as possible. Right now, I compile PETSc in real mode, compute my vector and save it to a file, then switch to complex mode, read it, and move on. > >> >> > >> >> This creates unexpected behaviour using MPIIO, so on Lisandro Dalcinl's advice I'm moving to HDF5 format. My code is as follows (taking inspiration from petsc4py doc, a bitbucket example and another one, all top Google results for 'petsc hdf5') : > >> >>> > >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) > >> >>> q.load(viewer) > >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD) > >> >> > >> >> > >> >> This crashes my code. I obtain traceback : > >> >>> > >> >>> File "/home/shared/code.py", line 121, in Load > >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) > >> >>> File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 > >> >>> petsc4py.PETSc.Error: error code 86 > >> >>> [0] PetscViewerSetType() at /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 > >> >>> [0] Unknown type. Check for miss-spelling or missing package: https://petsc.org/release/install/install/#external-packages > >> >>> [0] Unknown PetscViewer type given: hdf5 > >> > > >> > This means that PETSc has not been configured with HDF5 (--with-hdf5 or --download-hdf5), so the container should be updated. > >> > > >> > THanks, > >> > > >> > Matt > >> > > >> >> > >> >> I have petsc4py 3.16 from this docker container (list of dependencies include PETSc and petsc4py). > >> >> > >> >> I'm pretty sure this is not intended behaviour. Any insight as to how to fix this issue (I tried running ./configure --with-hdf5 to no avail) or more generally to perform this jiggling between real and complex would be much appreciated, > >> >> > >> >> Kind regards. > >> >> > >> >> Quentin > >> > > >> > > >> > > >> > -- > >> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > >> > -- Norbert Wiener > >> > > >> > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... 
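Taken together, Pierre's two corrections amount to re-running the build along these lines (a sketch only: the remaining configure options baked into that docker image are not shown in the thread and would normally be repeated, and --download-hdf5 can replace --with-hdf5 if no system HDF5 is available):

* cd /usr/local/petsc
* ./configure PETSC_ARCH=linux-gnu-real-32 --with-hdf5
* make PETSC_DIR=/usr/local/petsc PETSC_ARCH=linux-gnu-real-32 all
* make PETSC_DIR=/usr/local/petsc PETSC_ARCH=linux-gnu-real-32 check

With PETSC_ARCH spelled consistently, make actually rebuilds the library for that arch instead of reporting that nothing is to be done, and the hdf5 viewer type should then become visible from petsc4py, assuming the container's petsc4py was built against that same PETSC_ARCH.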
URL: From bsmith at petsc.dev Mon Dec 6 13:15:57 2021 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 6 Dec 2021 14:15:57 -0500 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: <874k7ld5x1.fsf@jedbrown.org> References: <874k7ld5x1.fsf@jedbrown.org> Message-ID: <8F6A0614-C133-4E41-9F3D-E64E33603630@petsc.dev> What do you get for -ksp_type ibcgs -ksp_monitor -ksp_monitor_true_residual with and without -ksp_pc_side right ? > On Dec 6, 2021, at 12:00 PM, Jed Brown wrote: > > "Fischer, Greg A. via petsc-users" writes: > >> Hello petsc-users, >> >> I would like to check convergence against the infinity norm, so I defined my own convergence test routine with KSPSetConvergenceTest. (I understand that it may be computationally expensive.) >> >> I would like to do this with the "ibcgs" method. When I use KSPBuildResidual and calculate the NORM_2 against the output vector, I get a value that differs from the 2-norm that gets passed into the convergence test function. However, when I switch to the "gcr" method, the value I calculate matches the function input value. > > IBCGS uses the preconditioned norm by default while GCR uses the unpreconditioned norm. You can use -ksp_norm_type unpreconditioned or KSPSetNormType() to make IBCGS use unpreconditioned. > >> Is the KSPBuildResidual function only compatible with a subset of the KSPType methods? If I want to evaluate convergence against the infinity norm, do I need to set KSPSetInitialGuessNonzero and continually re-start the solver with a lower tolerance values until I get a satisfactory value of the infinity norm? >> >> Thanks, >> Greg >> >> >> ________________________________ >> >> This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 6 13:30:13 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Dec 2021 14:30:13 -0500 Subject: [petsc-users] CV_CONV_FAILURE with TSSUNDIALS in PETSc In-Reply-To: References: Message-ID: On Mon, Dec 6, 2021 at 2:04 PM Sanjoy Kumar Mazumder wrote: > Thank you for your suggestion. I will look into the documentation of > ts_type bdf and try implementing the same. However, it seems like there is > not much example on TSBDF available. If you can kindly share with me an > example on the implementation of TSBDF it would be helpful. > If your problem is defined using IFunction, then it works like any other implicit solver. You can look at any example that uses -ts_type bdf, such as TS ex19. Thanks, Matt > Thank You > > Sanjoy > > Get Outlook for Android > ------------------------------ > *From:* Matthew Knepley > *Sent:* Monday, December 6, 2021 1:00:49 PM > *To:* Sanjoy Kumar Mazumder > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] CV_CONV_FAILURE with TSSUNDIALS in PETSc > > On Mon, Dec 6, 2021 at 12:17 PM Sanjoy Kumar Mazumder > wrote: > > Hi all, > > I am trying to solve a set of coupled stiff ODEs in parallel using > TSSUNDIALS with SUNDIALS_BDF as 'TSSundialsSetType' in PETSc. 
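In terms of the plain TS interface that the -ts_type bdf suggestion points to, a BDF setup has roughly the shape below. This is a generic sketch rather than code from this thread: FormIFunction() and FormIJacobian() are placeholder names for the user's residual F(t,u,u_t) and its Jacobian, J stands for the existing MPIAIJ matrix, and the time-step and final-time values are arbitrary.

#include <petscts.h>

/* Placeholders for the user's routines: F(t,u,u_t) and dF/du + shift*dF/du_t */
extern PetscErrorCode FormIFunction(TS,PetscReal,Vec,Vec,Vec,void*);
extern PetscErrorCode FormIJacobian(TS,PetscReal,Vec,Vec,PetscReal,Mat,Mat,void*);

PetscErrorCode SolveWithBDF(Vec u, Mat J, void *ctx)
{
  TS             ts;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = TSCreate(PETSC_COMM_WORLD,&ts);CHKERRQ(ierr);
  ierr = TSSetProblemType(ts,TS_NONLINEAR);CHKERRQ(ierr);
  ierr = TSSetType(ts,TSBDF);CHKERRQ(ierr);                  /* same as -ts_type bdf */
  ierr = TSSetIFunction(ts,NULL,FormIFunction,ctx);CHKERRQ(ierr);
  ierr = TSSetIJacobian(ts,J,J,FormIJacobian,ctx);CHKERRQ(ierr);
  ierr = TSSetTimeStep(ts,1.e-12);CHKERRQ(ierr);             /* initial step; the adaptor changes it */
  ierr = TSSetMaxTime(ts,1.0);CHKERRQ(ierr);                 /* arbitrary final time */
  ierr = TSSetExactFinalTime(ts,TS_EXACTFINALTIME_MATCHSTEP);CHKERRQ(ierr);
  ierr = TSSetFromOptions(ts);CHKERRQ(ierr);                 /* -ts_monitor, -ts_adapt_monitor, ... */
  ierr = TSSolve(ts,u);CHKERRQ(ierr);
  ierr = TSDestroy(&ts);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Running with -ts_monitor and -snes_converged_reason then shows where the underlying nonlinear solves struggle; unlike CVODE, the SNES and KSP used by the native BDF integrator can be inspected and configured directly from the options database.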
I am using a > sparse Jacobian matrix of type MATMPIAIJ with no preconditioner. It runs > for some time with a very small initial timestep (~10^-18) and then > terminates abruptly with the following error: > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > Is there anything I am missing out or not doing properly? Given below is > the complete error that is showing up after the termination. > > > It is hard to know for us. BDF is implicit, so CVODE has to solve a > (non)linear system, and it looks like this is what fails. I would send it > to the CVODE team. > Alternatively, you can run with -ts_type bdf and we would have more > information to help you with. > > Thanks, > > Matt > > > > ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > [5]PETSC ERROR: > [CVODE ERROR] CVode > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > [7]PETSC ERROR: > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. 
> > [16]PETSC ERROR: > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > > [CVODE ERROR] CVode > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > [19]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [19]PETSC ERROR: Error in external library > [19]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [19]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [19]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [19]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [19]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [19]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [19]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [19]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [19]PETSC ERROR: #4 User provided function() line 0 in User file > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Error in external library > [0]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [0]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [0]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [0]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [0]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [0]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [0]PETSC ERROR: #4 User provided function() line 0 in User file > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [1]PETSC ERROR: Error in external library > [1]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [1]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [1]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [1]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [1]PETSC ERROR: [2]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [2]PETSC ERROR: Error in external library > [2]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [2]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [2]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [2]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [2]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [2]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [2]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [2]PETSC ERROR: #4 User provided function() line 0 in User file > [3]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [3]PETSC ERROR: Error in external library > [3]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [3]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [3]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [3]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [3]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [3]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [3]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [3]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [3]PETSC ERROR: #4 User provided function() line 0 in User file > [4]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [4]PETSC ERROR: Error in external library > [4]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [4]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [4]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [4]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [4]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [4]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > --------------------- Error Message > -------------------------------------------------------------- > [5]PETSC ERROR: Error in external library > [5]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [5]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [5]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [5]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [5]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [5]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [5]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [5]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [5]PETSC ERROR: #4 User provided function() line 0 in User file > --------------------- Error Message > -------------------------------------------------------------- > [7]PETSC ERROR: Error in external library > [7]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [7]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [7]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [7]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [7]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [7]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [7]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [7]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [7]PETSC ERROR: #4 User provided function() line 0 in User file > [8]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [8]PETSC ERROR: Error in external library > [8]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [8]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [8]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [8]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [9]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [9]PETSC ERROR: Error in external library > [9]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [9]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [9]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [9]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [9]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [9]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [9]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [9]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [10]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [10]PETSC ERROR: Error in external library > [10]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [10]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [10]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [10]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [10]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [10]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [10]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [10]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [10]PETSC ERROR: #4 User provided function() line 0 in User file > [11]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [11]PETSC ERROR: Error in external library > [11]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [11]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [11]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [11]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [11]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [11]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [11]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [11]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [11]PETSC ERROR: #4 User provided function() line 0 in User file > [12]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [12]PETSC ERROR: Error in external library > [12]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [12]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [12]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [12]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [12]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [12]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [12]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [12]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [12]PETSC ERROR: #4 User provided function() line 0 in User file > [14]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [14]PETSC ERROR: Error in external library > [14]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [14]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [14]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [14]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [14]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [14]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [15]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [15]PETSC ERROR: Error in external library > [15]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [15]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [15]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > --------------------- Error Message > -------------------------------------------------------------- > [16]PETSC ERROR: Error in external library > [16]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [16]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [16]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [16]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [16]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [16]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [16]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [16]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [16]PETSC ERROR: #4 User provided function() line 0 in User file > [17]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [17]PETSC ERROR: Error in external library > [17]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [17]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [17]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [17]PETSC ERROR: [18]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [18]PETSC ERROR: Error in external library > [18]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [18]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [18]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [18]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [18]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [17]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [17]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [17]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [17]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [17]PETSC ERROR: #4 User provided function() line 0 in User file > Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 > --download-fblaslapack --download-sundials=yes --with-debugging > [1]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [1]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [1]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [1]PETSC ERROR: #4 User provided function() line 0 in User file > [4]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [4]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [4]PETSC ERROR: #4 User provided function() line 0 in User file > [8]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [8]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [8]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [8]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [8]PETSC ERROR: #4 User provided function() line 0 in User file > [9]PETSC ERROR: #4 User provided function() line 0 in User file > [13]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [13]PETSC ERROR: Error in external library > [13]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [13]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [13]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [13]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [13]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [13]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [13]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [13]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [13]PETSC ERROR: #4 User provided function() line 0 in User file > [14]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [14]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [14]PETSC ERROR: #4 User provided function() line 0 in User file > [15]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [15]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [15]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [15]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [15]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [15]PETSC ERROR: #4 User provided function() line 0 in User file > Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 > --download-fblaslapack --download-sundials=yes --with-debugging > [18]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [18]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [18]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [18]PETSC ERROR: #4 User provided function() line 0 in User file > At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test > failed repeatedly or with |h| = hmin. > > [6]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [6]PETSC ERROR: Error in external library > [6]PETSC ERROR: CVode() fails, CV_CONV_FAILURE > [6]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [6]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 > [6]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named > bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 > [6]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx > --with-fc=mpif90 --download-fblaslapack --download-sundials=yes > --with-debugging > [6]PETSC ERROR: #1 TSStep_Sundials() line 156 in > /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c > [6]PETSC ERROR: #2 TSStep() line 3759 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [6]PETSC ERROR: #3 TSSolve() line 4156 in > /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c > [6]PETSC ERROR: #4 User provided function() line 0 in User file > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF > with errorcode 76. 
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > [bell-a027.rcac.purdue.edu:29752] 15 more processes have sent help > message help-mpi-api.txt / mpi-abort > [bell-a027.rcac.purdue.edu:29752] Set MCA parameter > "orte_base_help_aggregate" to 0 to see all help / error messages > > Thanks > > With regards, > Sanjoy > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From quentin.chevalier at polytechnique.edu Mon Dec 6 14:27:11 2021 From: quentin.chevalier at polytechnique.edu (Quentin Chevalier) Date: Mon, 6 Dec 2021 21:27:11 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: Fine. MWE is unchanged : * Run this docker container * Do : python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('dummy.h5')" Updated attempt at a fix : * cd /usr/local/petsc/ * ./configure PETSC_ARCH= linux-gnu-real-32 PETSC_DIR=/usr/local/petsc --with-hdf5 --force * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= linux-gnu-real-32 all Still no joy. The same error remains. Quentin On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet wrote: > > > > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: > > The PETSC_DIR exactly corresponds to the previous one, so I guess that rules option b) out, except if a specific option is required to overwrite a previous installation of PETSc. As for a), well I thought reconfiguring pretty direct, you're welcome to give me a hint as to what could be wrong in the following process. > > Steps to reproduce this behaviour are as follows : > * Run this docker container > * Do : python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('dummy.h5')" > > After you get the error Unknown PetscViewer type, feel free to try : > > * cd /usr/local/petsc/ > * ./configure --with-hfd5 > > > It?s hdf5, not hfd5. > It?s PETSC_ARCH, not PETSC-ARCH. > PETSC_ARCH is missing from your configure line. > > Thanks, > Pierre > > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all > > Then repeat the MWE and observe absolutely no behavioural change whatsoever. I'm afraid I don't know PETSc well enough to be surprised by that. > > Quentin > > > > Quentin CHEVALIER ? IA parcours recherche > > LadHyX - Ecole polytechnique > > __________ > > > > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley wrote: >> >> On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: >>> >>> It failed all of the tests included in `make >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 check`, with >>> the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No such file >>> or directory` >>> >>> I am therefore fairly confident this a "file absence" problem, and not >>> a compilation problem. >>> >>> I repeat that there was no error at compilation stage. 
The final stage >>> did present `gmake[3]: Nothing to be done for 'libs'.` but that's all. >>> >>> Again, running `./configure --with-hdf5` followed by a `make >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` does not >>> change the problem. I get the same error at the same position as >>> before. >> >> >> If you reconfigured and rebuilt, it is impossible to get the same error, so >> >> a) You did not reconfigure >> >> b) Your new build is somewhere else on the machine >> >> Thanks, >> >> Matt >> >>> >>> I will comment I am running on OpenSUSE. >>> >>> Quentin >>> >>> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley wrote: >>> > >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: >>> >> >>> >> Hello Matthew and thanks for your quick response. >>> >> >>> >> I'm afraid I did try to snoop around the container and rerun PETSc's >>> >> configure with the --with-hdf5 option, to absolutely no avail. >>> >> >>> >> I didn't see any errors during config or make, but it failed the tests >>> >> (which aren't included in the minimal container I suppose) >>> > >>> > >>> > Failed which tests? What was the error? >>> > >>> > Thanks, >>> > >>> > Matt >>> > >>> >> >>> >> Quentin >>> >> >>> >> >>> >> >>> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >>> >> LadHyX - Ecole polytechnique >>> >> >>> >> __________ >>> >> >>> >> >>> >> >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley wrote: >>> >> > >>> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: >>> >> >> >>> >> >> Hello PETSc users, >>> >> >> >>> >> >> This email is a duplicata of this gitlab issue, sorry for any inconvenience caused. >>> >> >> >>> >> >> I want to compute a PETSc vector in real mode, than perform calculations with it in complex mode. I want as much of this process to be parallel as possible. Right now, I compile PETSc in real mode, compute my vector and save it to a file, then switch to complex mode, read it, and move on. >>> >> >> >>> >> >> This creates unexpected behaviour using MPIIO, so on Lisandro Dalcinl's advice I'm moving to HDF5 format. My code is as follows (taking inspiration from petsc4py doc, a bitbucket example and another one, all top Google results for 'petsc hdf5') : >>> >> >>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >>> >> >>> q.load(viewer) >>> >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD) >>> >> >> >>> >> >> >>> >> >> This crashes my code. I obtain traceback : >>> >> >>> >>> >> >>> File "/home/shared/code.py", line 121, in Load >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >>> >> >>> File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 >>> >> >>> petsc4py.PETSc.Error: error code 86 >>> >> >>> [0] PetscViewerSetType() at /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >>> >> >>> [0] Unknown type. Check for miss-spelling or missing package: https://petsc.org/release/install/install/#external-packages >>> >> >>> [0] Unknown PetscViewer type given: hdf5 >>> >> > >>> >> > This means that PETSc has not been configured with HDF5 (--with-hdf5 or --download-hdf5), so the container should be updated. >>> >> > >>> >> > THanks, >>> >> > >>> >> > Matt >>> >> > >>> >> >> >>> >> >> I have petsc4py 3.16 from this docker container (list of dependencies include PETSc and petsc4py). >>> >> >> >>> >> >> I'm pretty sure this is not intended behaviour. 
Any insight as to how to fix this issue (I tried running ./configure --with-hdf5 to no avail) or more generally to perform this jiggling between real and complex would be much appreciated, >>> >> >> >>> >> >> Kind regards. >>> >> >> >>> >> >> Quentin >>> >> > >>> >> > >>> >> > >>> >> > -- >>> >> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> >> > -- Norbert Wiener >>> >> > >>> >> > https://www.cse.buffalo.edu/~knepley/ >>> > >>> > >>> > >>> > -- >>> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> > -- Norbert Wiener >>> > >>> > https://www.cse.buffalo.edu/~knepley/ >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 6 14:39:21 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Dec 2021 15:39:21 -0500 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: On Mon, Dec 6, 2021 at 3:27 PM Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: > Fine. MWE is unchanged : > * Run this docker container > * Do : python3 -c "from petsc4py import PETSc; > PETSc.Viewer().createHDF5('dummy.h5')" > > Updated attempt at a fix : > * cd /usr/local/petsc/ > * ./configure PETSC_ARCH= linux-gnu-real-32 PETSC_DIR=/usr/local/petsc > --with-hdf5 --force > Did it find HDF5? If not, it will shut it off. You need to send us $PETSC_DIR/configure.log so we can see what happened in the configure run. Thanks, Matt > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= linux-gnu-real-32 all > > Still no joy. The same error remains. > > Quentin > > > > On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet wrote: > > > > > > > > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > > > > The PETSC_DIR exactly corresponds to the previous one, so I guess that > rules option b) out, except if a specific option is required to overwrite a > previous installation of PETSc. As for a), well I thought reconfiguring > pretty direct, you're welcome to give me a hint as to what could be wrong > in the following process. > > > > Steps to reproduce this behaviour are as follows : > > * Run this docker container > > * Do : python3 -c "from petsc4py import PETSc; > PETSc.Viewer().createHDF5('dummy.h5')" > > > > After you get the error Unknown PetscViewer type, feel free to try : > > > > * cd /usr/local/petsc/ > > * ./configure --with-hfd5 > > > > > > It?s hdf5, not hfd5. > > It?s PETSC_ARCH, not PETSC-ARCH. > > PETSC_ARCH is missing from your configure line. > > > > Thanks, > > Pierre > > > > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all > > > > Then repeat the MWE and observe absolutely no behavioural change > whatsoever. I'm afraid I don't know PETSc well enough to be surprised by > that. > > > > Quentin > > > > > > > > Quentin CHEVALIER ? 
IA parcours recherche > > > > LadHyX - Ecole polytechnique > > > > __________ > > > > > > > > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley wrote: > >> > >> On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >>> > >>> It failed all of the tests included in `make > >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 check`, with > >>> the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No such file > >>> or directory` > >>> > >>> I am therefore fairly confident this a "file absence" problem, and not > >>> a compilation problem. > >>> > >>> I repeat that there was no error at compilation stage. The final stage > >>> did present `gmake[3]: Nothing to be done for 'libs'.` but that's all. > >>> > >>> Again, running `./configure --with-hdf5` followed by a `make > >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` does not > >>> change the problem. I get the same error at the same position as > >>> before. > >> > >> > >> If you reconfigured and rebuilt, it is impossible to get the same > error, so > >> > >> a) You did not reconfigure > >> > >> b) Your new build is somewhere else on the machine > >> > >> Thanks, > >> > >> Matt > >> > >>> > >>> I will comment I am running on OpenSUSE. > >>> > >>> Quentin > >>> > >>> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley > wrote: > >>> > > >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >>> >> > >>> >> Hello Matthew and thanks for your quick response. > >>> >> > >>> >> I'm afraid I did try to snoop around the container and rerun PETSc's > >>> >> configure with the --with-hdf5 option, to absolutely no avail. > >>> >> > >>> >> I didn't see any errors during config or make, but it failed the > tests > >>> >> (which aren't included in the minimal container I suppose) > >>> > > >>> > > >>> > Failed which tests? What was the error? > >>> > > >>> > Thanks, > >>> > > >>> > Matt > >>> > > >>> >> > >>> >> Quentin > >>> >> > >>> >> > >>> >> > >>> >> Quentin CHEVALIER ? IA parcours recherche > >>> >> > >>> >> LadHyX - Ecole polytechnique > >>> >> > >>> >> __________ > >>> >> > >>> >> > >>> >> > >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley > wrote: > >>> >> > > >>> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >>> >> >> > >>> >> >> Hello PETSc users, > >>> >> >> > >>> >> >> This email is a duplicata of this gitlab issue, sorry for any > inconvenience caused. > >>> >> >> > >>> >> >> I want to compute a PETSc vector in real mode, than perform > calculations with it in complex mode. I want as much of this process to be > parallel as possible. Right now, I compile PETSc in real mode, compute my > vector and save it to a file, then switch to complex mode, read it, and > move on. > >>> >> >> > >>> >> >> This creates unexpected behaviour using MPIIO, so on Lisandro > Dalcinl's advice I'm moving to HDF5 format. My code is as follows (taking > inspiration from petsc4py doc, a bitbucket example and another one, all top > Google results for 'petsc hdf5') : > >>> >> >>> > >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) > >>> >> >>> q.load(viewer) > >>> >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, > mode=PETSc.ScatterMode.FORWARD) > >>> >> >> > >>> >> >> > >>> >> >> This crashes my code. 
I obtain traceback : > >>> >> >>> > >>> >> >>> File "/home/shared/code.py", line 121, in Load > >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', > COMM_WORLD) > >>> >> >>> File "PETSc/Viewer.pyx", line 182, in > petsc4py.PETSc.Viewer.createHDF5 > >>> >> >>> petsc4py.PETSc.Error: error code 86 > >>> >> >>> [0] PetscViewerSetType() at > /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 > >>> >> >>> [0] Unknown type. Check for miss-spelling or missing package: > https://petsc.org/release/install/install/#external-packages > >>> >> >>> [0] Unknown PetscViewer type given: hdf5 > >>> >> > > >>> >> > This means that PETSc has not been configured with HDF5 > (--with-hdf5 or --download-hdf5), so the container should be updated. > >>> >> > > >>> >> > THanks, > >>> >> > > >>> >> > Matt > >>> >> > > >>> >> >> > >>> >> >> I have petsc4py 3.16 from this docker container (list of > dependencies include PETSc and petsc4py). > >>> >> >> > >>> >> >> I'm pretty sure this is not intended behaviour. Any insight as > to how to fix this issue (I tried running ./configure --with-hdf5 to no > avail) or more generally to perform this jiggling between real and complex > would be much appreciated, > >>> >> >> > >>> >> >> Kind regards. > >>> >> >> > >>> >> >> Quentin > >>> >> > > >>> >> > > >>> >> > > >>> >> > -- > >>> >> > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > >>> >> > -- Norbert Wiener > >>> >> > > >>> >> > https://www.cse.buffalo.edu/~knepley/ > >>> > > >>> > > >>> > > >>> > -- > >>> > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > >>> > -- Norbert Wiener > >>> > > >>> > https://www.cse.buffalo.edu/~knepley/ > >> > >> > >> > >> -- > >> What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > >> -- Norbert Wiener > >> > >> https://www.cse.buffalo.edu/~knepley/ > > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From FERRANJ2 at my.erau.edu Mon Dec 6 13:02:30 2021 From: FERRANJ2 at my.erau.edu (Ferrand, Jesus A.) Date: Mon, 6 Dec 2021 19:02:30 +0000 Subject: [petsc-users] DM/DMPlex Issue from DM vector access Message-ID: Dear PETSc Team: I am a new PETSc user who is working on an FEA code and ran into an issue pertaining to DMPlex. I have a gmsh mesh file that I import using "DMPlexCreateGmshFromFile()." I then fetch the XYZ coordinates of the nodes from this mesh using "DMGetCoordinatesLocal()." Deeper into the code, I have a call to "DMPlexGetTransitiveClosure()" inside a loop that scans the cells (I think you guys refer to it as "depth-3") to reference nodes. Here's the catch: If I attempt to reference the array from the vector (Vec) in the call to "DMGetCoordinatesLocal()" by using something like "VecGetArray()," or "VecGetArrayRead()," the call to "DMPlexGetTransitiveClosure()" errors-out with a segmentation fault. I need access to that vector's stored XYZ-data because I'm using my own finite element scripts. I have no clue as to why this is happening. 
Maybe it is a newbie mistake and I am forgetting to restore some memory? Code, error message, and gmsh files are attached. Your help is much appreciated. Machine Type: HP Laptop C-compiler: Gnu C OS: Ubuntu 20.04 PETSc version: 3.16.0 MPI Implementation: MPICH Sincerely: J.A. Ferrand Embry-Riddle Aeronautical University - Daytona Beach FL M.Sc. Aerospace Engineering | May 2022 B.Sc. Aerospace Engineering B.Sc. Computational Mathematics Sigma Gamma Tau Tau Beta Pi Honors Program Phone: (386)-843-1829 Email(s): ferranj2 at my.erau.edu jesus.ferrand at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: gmsh.c URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: TopOptmesh2.msh Type: application/octet-stream Size: 111265 bytes Desc: TopOptmesh2.msh URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: error.txt URL: From liyuansen89 at gmail.com Mon Dec 6 17:03:55 2021 From: liyuansen89 at gmail.com (Ning Li) Date: Mon, 6 Dec 2021 17:03:55 -0600 Subject: [petsc-users] install PETSc on windows Message-ID: Howdy, I am trying to install PETSc on windows with cygwin but got an mpi error. Could you have a look at my issue and give me some instructions? Here is the information about my environment: 1. operation system: windows 11 2. visual studio version: 2019 3. intel one API toolkit is installed 4. Microsoft MS MPI is installed. 5. Intel MPI is uninstalled. 6. PETSc version: 3.16.1 this is my configuration: ./configure --with-cc='win32fe cl' --with-fc='win32fe ifort' --prefix=~/petsc-opt/ --PETSC_ARCH=windows-intel-opt --with-mpi-include=['/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Include','/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Include/x64'] --with-mpi-lib=['/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib','/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib'] --with-blas-lapack-lib=['/cygdrive/c/Program Files (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_intel_lp64.lib','/cygdrive/c/Program Files (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_core.lib','/cygdrive/c/Program Files (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_sequential.lib'] --with-scalapack-include='/cygdrive/c/Program Files (x86)/Intel/oneAPI/mkl/2021.3.0/include' --with-scalapack-lib=['/cygdrive/c/Program Files (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_scalapack_lp64.lib','/cygdrive/c/Program Files (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_blacs_msmpi_lp64.lib'] --with-fortran-interfaces=1 --with-debugging=0 attached is the configure.log file. Thanks Ning Li -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: configure.log Type: application/octet-stream Size: 1594481 bytes Desc: not available URL: From balay at mcs.anl.gov Mon Dec 6 17:21:05 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 6 Dec 2021 17:21:05 -0600 (CST) Subject: [petsc-users] install PETSc on windows In-Reply-To: References: Message-ID: <17158857-4047-ecc0-1331-645216114cbb@mcs.anl.gov> >>> Executing: /home/LiNin/petsc-3.16.1/lib/petsc/bin/win32fe/win32fe ifort -o /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/conftest.exe -MD -O3 -fpp /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/conftest.o /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ SDKs/MPI/Lib/x64/msmpi.lib /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ SDKs/MPI/Lib/x64/msmpifec.lib Ws2_32.lib stdout: LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs; use /NODEFAULTLIB:library <<< I'm not sure why this link command is giving this error. Can you retry with '--with-shared-libraries=0'? Satish On Mon, 6 Dec 2021, Ning Li wrote: > Howdy, > > I am trying to install PETSc on windows with cygwin but got an mpi error. > Could you have a look at my issue and give me some instructions? > > Here is the information about my environment: > 1. operation system: windows 11 > 2. visual studio version: 2019 > 3. intel one API toolkit is installed > 4. Microsoft MS MPI is installed. > 5. Intel MPI is uninstalled. > 6. PETSc version: 3.16.1 > > this is my configuration: > ./configure --with-cc='win32fe cl' --with-fc='win32fe ifort' > --prefix=~/petsc-opt/ --PETSC_ARCH=windows-intel-opt > --with-mpi-include=['/cygdrive/c/Program Files (x86)/Microsoft > SDKs/MPI/Include','/cygdrive/c/Program Files (x86)/Microsoft > SDKs/MPI/Include/x64'] --with-mpi-lib=['/cygdrive/c/Program Files > (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib','/cygdrive/c/Program Files > (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib'] > --with-blas-lapack-lib=['/cygdrive/c/Program Files > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_intel_lp64.lib','/cygdrive/c/Program > Files > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_core.lib','/cygdrive/c/Program > Files (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_sequential.lib'] > --with-scalapack-include='/cygdrive/c/Program Files > (x86)/Intel/oneAPI/mkl/2021.3.0/include' > --with-scalapack-lib=['/cygdrive/c/Program Files > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_scalapack_lp64.lib','/cygdrive/c/Program > Files > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_blacs_msmpi_lp64.lib'] > --with-fortran-interfaces=1 --with-debugging=0 > > attached is the configure.log file. > > Thanks > > Ning Li > From knepley at gmail.com Mon Dec 6 17:45:30 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Dec 2021 18:45:30 -0500 Subject: [petsc-users] DM/DMPlex Issue from DM vector access In-Reply-To: References: Message-ID: On Mon, Dec 6, 2021 at 6:06 PM Ferrand, Jesus A. wrote: > Dear PETSc Team: > > I am a new PETSc user who is working on an FEA code and ran into an issue > pertaining to DMPlex. > > I have a gmsh mesh file that I import using "DMPlexCreateGmshFromFile()." > I then fetch the XYZ coordinates of the nodes from this mesh using > "DMGetCoordinatesLocal()." Deeper into the code, I have a call to > "DMPlexGetTransitiveClosure()" inside a loop that scans the cells (I think > you guys refer to it as "depth-3") > It is more robust to use height 0 for cells. > to reference nodes. 
Here's the catch: If I attempt to reference the array > from the vector (Vec) in the call to "DMGetCoordinatesLocal()" by using > something like "VecGetArray()," or "VecGetArrayRead()," the call to > "DMPlexGetTransitiveClosure()" errors-out with a segmentation fault. I need > access to that vector's stored XYZ-data because I'm using my own finite > element scripts. I have no clue as to why this is happening. Maybe it is > a newbie mistake and I am forgetting to restore some memory? > 1) The First DMCreate() is unnecessary and leaks right now 2) PetscSectionDestroy(&s) after DMSetLocalSection() 3) Once you have set the section, you can use DMCreateGlobalVector() and DMCreateMatrix() instead of doing it by hand 4) DMPlexGetTransitiveClosure() has somewhat complicated memory management: a) The safest thing to do is initialize closure to NULL, and call Get/RestoreClosure() each iteration b) If you want to manage the memory yourself, then allocate closure[] coming in and every iteration reset size_closure to the array size on input c) You can combine these, so that you set closure to NULL on input, so that GetClosure() allocates the array for you. Then make the rest of your calls without changing size_closure and closure. Then after the loop ends call RestoreClosure() and it will deallocate. Note that this looks like elasticity. SNES ex17 is my idea of doing elasticity :) Thanks, Matt > Code, error message, and gmsh files are attached. > Your help is much appreciated. > > Machine Type: HP Laptop > C-compiler: Gnu C > OS: Ubuntu 20.04 > PETSc version: 3.16.0 > MPI Implementation: MPICH > > Sincerely: > > *J.A. Ferrand* > > Embry-Riddle Aeronautical University - Daytona Beach FL > > M.Sc. Aerospace Engineering | May 2022 > > B.Sc. Aerospace Engineering > > B.Sc. Computational Mathematics > > > > Sigma Gamma Tau > > Tau Beta Pi > > Honors Program > > > > *Phone:* (386)-843-1829 > > *Email(s):* ferranj2 at my.erau.edu > > jesus.ferrand at gmail.com > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From liyuansen89 at gmail.com Mon Dec 6 17:55:17 2021 From: liyuansen89 at gmail.com (Ning Li) Date: Mon, 6 Dec 2021 17:55:17 -0600 Subject: [petsc-users] install PETSc on windows In-Reply-To: <17158857-4047-ecc0-1331-645216114cbb@mcs.anl.gov> References: <17158857-4047-ecc0-1331-645216114cbb@mcs.anl.gov> Message-ID: Thanks for your reply. After I added '--with-shared-libraries=0', the configuration stage passed and now it is executing the 'make' command! Thanks very much On Mon, Dec 6, 2021 at 5:21 PM Satish Balay wrote: > >>> > Executing: /home/LiNin/petsc-3.16.1/lib/petsc/bin/win32fe/win32fe ifort > -o > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/conftest.exe > -MD -O3 -fpp > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/conftest.o > /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ SDKs/MPI/Lib/x64/msmpi.lib > /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ > SDKs/MPI/Lib/x64/msmpifec.lib Ws2_32.lib > stdout: LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of > other libs; use /NODEFAULTLIB:library > <<< > > I'm not sure why this link command is giving this error. Can you retry > with '--with-shared-libraries=0'? 
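For reference on the DMPlex exchange above: a minimal sketch, in the PETSc 3.16 C API, of the cell loop Matthew describes. The function and variable names (ReadCellVertexCoordinates, dm, coords) are illustrative and not taken from the attached gmsh.c; the point is only the NULL-initialized closure with its Get/Restore pairing, looping over cells at height 0, and reading coordinates through the coordinate section instead of indexing the array by hand.

#include <petscdmplex.h>

/* Sketch: loop over cells (height 0), take the transitive closure of each
   cell with a NULL-initialized closure array so PETSc allocates it, and
   read vertex coordinates from the local coordinate Vec. */
static PetscErrorCode ReadCellVertexCoordinates(DM dm)
{
  Vec                coords;
  const PetscScalar *xyz;
  PetscSection       csec;
  PetscInt           cStart, cEnd, vStart, vEnd, c;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr); /* cells    */
  ierr = DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);CHKERRQ(ierr);  /* vertices */
  ierr = DMGetCoordinateSection(dm, &csec);CHKERRQ(ierr);
  ierr = DMGetCoordinatesLocal(dm, &coords);CHKERRQ(ierr);
  ierr = VecGetArrayRead(coords, &xyz);CHKERRQ(ierr);
  for (c = cStart; c < cEnd; ++c) {
    PetscInt *closure = NULL; /* let DMPlexGetTransitiveClosure() allocate it */
    PetscInt  Ncl, i;

    ierr = DMPlexGetTransitiveClosure(dm, c, PETSC_TRUE, &Ncl, &closure);CHKERRQ(ierr);
    for (i = 0; i < 2*Ncl; i += 2) { /* entries come as (point, orientation) pairs */
      const PetscInt p = closure[i];
      PetscInt       dof, off;

      if (p < vStart || p >= vEnd) continue; /* keep only vertices */
      ierr = PetscSectionGetDof(csec, p, &dof);CHKERRQ(ierr);
      ierr = PetscSectionGetOffset(csec, p, &off);CHKERRQ(ierr);
      /* xyz[off] .. xyz[off+dof-1] are the coordinates of vertex p */
    }
    ierr = DMPlexRestoreTransitiveClosure(dm, c, PETSC_TRUE, &Ncl, &closure);CHKERRQ(ierr);
  }
  ierr = VecRestoreArrayRead(coords, &xyz);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}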
> > Satish > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > Howdy, > > > > I am trying to install PETSc on windows with cygwin but got an mpi error. > > Could you have a look at my issue and give me some instructions? > > > > Here is the information about my environment: > > 1. operation system: windows 11 > > 2. visual studio version: 2019 > > 3. intel one API toolkit is installed > > 4. Microsoft MS MPI is installed. > > 5. Intel MPI is uninstalled. > > 6. PETSc version: 3.16.1 > > > > this is my configuration: > > ./configure --with-cc='win32fe cl' --with-fc='win32fe ifort' > > --prefix=~/petsc-opt/ --PETSC_ARCH=windows-intel-opt > > --with-mpi-include=['/cygdrive/c/Program Files (x86)/Microsoft > > SDKs/MPI/Include','/cygdrive/c/Program Files (x86)/Microsoft > > SDKs/MPI/Include/x64'] --with-mpi-lib=['/cygdrive/c/Program Files > > (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib','/cygdrive/c/Program Files > > (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib'] > > --with-blas-lapack-lib=['/cygdrive/c/Program Files > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_intel_lp64.lib','/cygdrive/c/Program > > Files > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_core.lib','/cygdrive/c/Program > > Files (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_sequential.lib'] > > --with-scalapack-include='/cygdrive/c/Program Files > > (x86)/Intel/oneAPI/mkl/2021.3.0/include' > > --with-scalapack-lib=['/cygdrive/c/Program Files > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_scalapack_lp64.lib','/cygdrive/c/Program > > Files > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_blacs_msmpi_lp64.lib'] > > --with-fortran-interfaces=1 --with-debugging=0 > > > > attached is the configure.log file. > > > > Thanks > > > > Ning Li > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Dec 6 18:59:08 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 6 Dec 2021 18:59:08 -0600 (CST) Subject: [petsc-users] install PETSc on windows In-Reply-To: References: <17158857-4047-ecc0-1331-645216114cbb@mcs.anl.gov> Message-ID: <84672efd-113b-2daf-85d1-e58d3812421d@mcs.anl.gov> Glad it worked. Thanks for the update. Satish On Mon, 6 Dec 2021, Ning Li wrote: > Thanks for your reply. > > After I added '--with-shared-libraries=0', the configuration stage passed > and now it is executing the 'make' command! > > Thanks very much > > On Mon, Dec 6, 2021 at 5:21 PM Satish Balay wrote: > > > >>> > > Executing: /home/LiNin/petsc-3.16.1/lib/petsc/bin/win32fe/win32fe ifort > > -o > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/conftest.exe > > -MD -O3 -fpp > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/conftest.o > > /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ SDKs/MPI/Lib/x64/msmpi.lib > > /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ > > SDKs/MPI/Lib/x64/msmpifec.lib Ws2_32.lib > > stdout: LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of > > other libs; use /NODEFAULTLIB:library > > <<< > > > > I'm not sure why this link command is giving this error. Can you retry > > with '--with-shared-libraries=0'? > > > > Satish > > > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > Howdy, > > > > > > I am trying to install PETSc on windows with cygwin but got an mpi error. > > > Could you have a look at my issue and give me some instructions? > > > > > > Here is the information about my environment: > > > 1. operation system: windows 11 > > > 2. visual studio version: 2019 > > > 3. 
intel one API toolkit is installed > > > 4. Microsoft MS MPI is installed. > > > 5. Intel MPI is uninstalled. > > > 6. PETSc version: 3.16.1 > > > > > > this is my configuration: > > > ./configure --with-cc='win32fe cl' --with-fc='win32fe ifort' > > > --prefix=~/petsc-opt/ --PETSC_ARCH=windows-intel-opt > > > --with-mpi-include=['/cygdrive/c/Program Files (x86)/Microsoft > > > SDKs/MPI/Include','/cygdrive/c/Program Files (x86)/Microsoft > > > SDKs/MPI/Include/x64'] --with-mpi-lib=['/cygdrive/c/Program Files > > > (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib','/cygdrive/c/Program Files > > > (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib'] > > > --with-blas-lapack-lib=['/cygdrive/c/Program Files > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_intel_lp64.lib','/cygdrive/c/Program > > > Files > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_core.lib','/cygdrive/c/Program > > > Files (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_sequential.lib'] > > > --with-scalapack-include='/cygdrive/c/Program Files > > > (x86)/Intel/oneAPI/mkl/2021.3.0/include' > > > --with-scalapack-lib=['/cygdrive/c/Program Files > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_scalapack_lp64.lib','/cygdrive/c/Program > > > Files > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_blacs_msmpi_lp64.lib'] > > > --with-fortran-interfaces=1 --with-debugging=0 > > > > > > attached is the configure.log file. > > > > > > Thanks > > > > > > Ning Li > > > > > > > > From faraz_hussain at yahoo.com Mon Dec 6 22:04:29 2021 From: faraz_hussain at yahoo.com (Faraz Hussain) Date: Tue, 7 Dec 2021 04:04:29 +0000 (UTC) Subject: [petsc-users] Tips on integrating MPI ksp petsc into my application? References: <2030978811.184065.1638849869029.ref@mail.yahoo.com> Message-ID: <2030978811.184065.1638849869029@mail.yahoo.com> I am studying the examples but it seems all ranks read the full matrix. Is there an MPI example where only rank 0 reads the matrix? I don't want all ranks to read my input matrix and consume a lot of memory allocating data for the arrays. I have worked with Intel's cluster sparse solver and their documentation states: " Most of the input parameters must be set on the master MPI process only, and ignored on other processes. Other MPI processes get all required data from the master MPI process using the MPI communicator, comm. " From quentin.chevalier at polytechnique.edu Tue Dec 7 02:54:46 2021 From: quentin.chevalier at polytechnique.edu (Quentin Chevalier) Date: Tue, 7 Dec 2021 09:54:46 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: Hello Matthew, That would indeed make sense. Full log is attached, I grepped hdf5 in there and didn't find anything alarming. Cheers, Quentin [image: cid:image003.jpg at 01D690CB.3B3FDC10] Quentin CHEVALIER ? IA parcours recherche LadHyX - Ecole polytechnique __________ On Mon, 6 Dec 2021 at 21:39, Matthew Knepley wrote: > On Mon, Dec 6, 2021 at 3:27 PM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> Fine. MWE is unchanged : >> * Run this docker container >> * Do : python3 -c "from petsc4py import PETSc; >> PETSc.Viewer().createHDF5('dummy.h5')" >> >> Updated attempt at a fix : >> * cd /usr/local/petsc/ >> * ./configure PETSC_ARCH= linux-gnu-real-32 PETSC_DIR=/usr/local/petsc >> --with-hdf5 --force >> > > Did it find HDF5? If not, it will shut it off. You need to send us > > $PETSC_DIR/configure.log > > so we can see what happened in the configure run. 
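On the earlier question in this digest about having only one rank read the matrix: the usual PETSc pattern is to let MatLoad() distribute the matrix while it is read, so no single process ever holds the whole thing; whether only rank 0 touches the file with the default binary viewer is best confirmed against the MatLoad/PetscViewerBinary documentation for your version. A rough sketch, with the file name system.dat and the presence of an RHS vector in the same file purely illustrative:

#include <petscksp.h>

/* Sketch: load a matrix and RHS written with MatView/VecView in PETSc
   binary format, then solve. MatLoad hands out the row blocks across
   the communicator as the file is read. */
int main(int argc, char **argv)
{
  Mat            A;
  Vec            b, x;
  KSP            ksp;
  PetscViewer    viewer;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "system.dat", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatLoad(A, viewer);CHKERRQ(ierr);
  ierr = VecCreate(PETSC_COMM_WORLD, &b);CHKERRQ(ierr);
  ierr = VecLoad(b, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  ierr = VecDuplicate(b, &x);CHKERRQ(ierr);
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}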
> > Thanks, > > Matt > > >> * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= linux-gnu-real-32 all >> >> Still no joy. The same error remains. >> >> Quentin >> >> >> >> On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet wrote: >> > >> > >> > >> > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> > >> > The PETSC_DIR exactly corresponds to the previous one, so I guess that >> rules option b) out, except if a specific option is required to overwrite a >> previous installation of PETSc. As for a), well I thought reconfiguring >> pretty direct, you're welcome to give me a hint as to what could be wrong >> in the following process. >> > >> > Steps to reproduce this behaviour are as follows : >> > * Run this docker container >> > * Do : python3 -c "from petsc4py import PETSc; >> PETSc.Viewer().createHDF5('dummy.h5')" >> > >> > After you get the error Unknown PetscViewer type, feel free to try : >> > >> > * cd /usr/local/petsc/ >> > * ./configure --with-hfd5 >> > >> > >> > It?s hdf5, not hfd5. >> > It?s PETSC_ARCH, not PETSC-ARCH. >> > PETSC_ARCH is missing from your configure line. >> > >> > Thanks, >> > Pierre >> > >> > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all >> > >> > Then repeat the MWE and observe absolutely no behavioural change >> whatsoever. I'm afraid I don't know PETSc well enough to be surprised by >> that. >> > >> > Quentin >> > >> > >> > >> > Quentin CHEVALIER ? IA parcours recherche >> > >> > LadHyX - Ecole polytechnique >> > >> > __________ >> > >> > >> > >> > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley wrote: >> >> >> >> On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >>> >> >>> It failed all of the tests included in `make >> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 check`, with >> >>> the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No such file >> >>> or directory` >> >>> >> >>> I am therefore fairly confident this a "file absence" problem, and not >> >>> a compilation problem. >> >>> >> >>> I repeat that there was no error at compilation stage. The final stage >> >>> did present `gmake[3]: Nothing to be done for 'libs'.` but that's all. >> >>> >> >>> Again, running `./configure --with-hdf5` followed by a `make >> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` does not >> >>> change the problem. I get the same error at the same position as >> >>> before. >> >> >> >> >> >> If you reconfigured and rebuilt, it is impossible to get the same >> error, so >> >> >> >> a) You did not reconfigure >> >> >> >> b) Your new build is somewhere else on the machine >> >> >> >> Thanks, >> >> >> >> Matt >> >> >> >>> >> >>> I will comment I am running on OpenSUSE. >> >>> >> >>> Quentin >> >>> >> >>> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley >> wrote: >> >>> > >> >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >>> >> >> >>> >> Hello Matthew and thanks for your quick response. >> >>> >> >> >>> >> I'm afraid I did try to snoop around the container and rerun >> PETSc's >> >>> >> configure with the --with-hdf5 option, to absolutely no avail. >> >>> >> >> >>> >> I didn't see any errors during config or make, but it failed the >> tests >> >>> >> (which aren't included in the minimal container I suppose) >> >>> > >> >>> > >> >>> > Failed which tests? What was the error? 
>> >>> > >> >>> > Thanks, >> >>> > >> >>> > Matt >> >>> > >> >>> >> >> >>> >> Quentin >> >>> >> >> >>> >> >> >>> >> >> >>> >> Quentin CHEVALIER ? IA parcours recherche >> >>> >> >> >>> >> LadHyX - Ecole polytechnique >> >>> >> >> >>> >> __________ >> >>> >> >> >>> >> >> >>> >> >> >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley >> wrote: >> >>> >> > >> >>> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >>> >> >> >> >>> >> >> Hello PETSc users, >> >>> >> >> >> >>> >> >> This email is a duplicata of this gitlab issue, sorry for any >> inconvenience caused. >> >>> >> >> >> >>> >> >> I want to compute a PETSc vector in real mode, than perform >> calculations with it in complex mode. I want as much of this process to be >> parallel as possible. Right now, I compile PETSc in real mode, compute my >> vector and save it to a file, then switch to complex mode, read it, and >> move on. >> >>> >> >> >> >>> >> >> This creates unexpected behaviour using MPIIO, so on Lisandro >> Dalcinl's advice I'm moving to HDF5 format. My code is as follows (taking >> inspiration from petsc4py doc, a bitbucket example and another one, all top >> Google results for 'petsc hdf5') : >> >>> >> >>> >> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >> >>> >> >>> q.load(viewer) >> >>> >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, >> mode=PETSc.ScatterMode.FORWARD) >> >>> >> >> >> >>> >> >> >> >>> >> >> This crashes my code. I obtain traceback : >> >>> >> >>> >> >>> >> >>> File "/home/shared/code.py", line 121, in Load >> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', >> COMM_WORLD) >> >>> >> >>> File "PETSc/Viewer.pyx", line 182, in >> petsc4py.PETSc.Viewer.createHDF5 >> >>> >> >>> petsc4py.PETSc.Error: error code 86 >> >>> >> >>> [0] PetscViewerSetType() at >> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >> >>> >> >>> [0] Unknown type. Check for miss-spelling or missing package: >> https://petsc.org/release/install/install/#external-packages >> >>> >> >>> [0] Unknown PetscViewer type given: hdf5 >> >>> >> > >> >>> >> > This means that PETSc has not been configured with HDF5 >> (--with-hdf5 or --download-hdf5), so the container should be updated. >> >>> >> > >> >>> >> > THanks, >> >>> >> > >> >>> >> > Matt >> >>> >> > >> >>> >> >> >> >>> >> >> I have petsc4py 3.16 from this docker container (list of >> dependencies include PETSc and petsc4py). >> >>> >> >> >> >>> >> >> I'm pretty sure this is not intended behaviour. Any insight as >> to how to fix this issue (I tried running ./configure --with-hdf5 to no >> avail) or more generally to perform this jiggling between real and complex >> would be much appreciated, >> >>> >> >> >> >>> >> >> Kind regards. >> >>> >> >> >> >>> >> >> Quentin >> >>> >> > >> >>> >> > >> >>> >> > >> >>> >> > -- >> >>> >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> >>> >> > -- Norbert Wiener >> >>> >> > >> >>> >> > https://www.cse.buffalo.edu/~knepley/ >> >>> > >> >>> > >> >>> > >> >>> > -- >> >>> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. 
>> >>> > -- Norbert Wiener >> >>> > >> >>> > https://www.cse.buffalo.edu/~knepley/ >> >> >> >> >> >> >> >> -- >> >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> >> -- Norbert Wiener >> >> >> >> https://www.cse.buffalo.edu/~knepley/ >> > >> > >> > >> > >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 2044 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 4890722 bytes Desc: not available URL: From mfadams at lbl.gov Tue Dec 7 05:36:15 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 7 Dec 2021 06:36:15 -0500 Subject: [petsc-users] Tips on integrating MPI ksp petsc into my application? In-Reply-To: <2030978811.184065.1638849869029@mail.yahoo.com> References: <2030978811.184065.1638849869029.ref@mail.yahoo.com> <2030978811.184065.1638849869029@mail.yahoo.com> Message-ID: I assume you are using PETSc to load matices. What example are you looking at? On Mon, Dec 6, 2021 at 11:04 PM Faraz Hussain via petsc-users < petsc-users at mcs.anl.gov> wrote: > I am studying the examples but it seems all ranks read the full matrix. Is > there an MPI example where only rank 0 reads the matrix? > > I don't want all ranks to read my input matrix and consume a lot of memory > allocating data for the arrays. > > I have worked with Intel's cluster sparse solver and their documentation > states: > > " Most of the input parameters must be set on the master MPI process only, > and ignored on other processes. Other MPI processes get all required data > from the master MPI process using the MPI communicator, comm. " > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 7 06:58:46 2021 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 7 Dec 2021 07:58:46 -0500 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: On Tue, Dec 7, 2021 at 3:55 AM Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: > Hello Matthew, > > That would indeed make sense. > > Full log is attached, I grepped hdf5 in there and didn't find anything > alarming. > At the top of this log: Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions PETSC_ARCH=linux-gnu-complex-64 --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-make-np=2 --with-64-bit-indices=yes --with-debugging=no --with-fortran-bindings=no --with-shared-libraries --download-hypre --download-mumps --download-ptscotch --download-scalapack --download-suitesparse --download-superlu_dist --with-scalar-type=complex So the HDF5 option is not being specified. Thanks, Matt Cheers, > > Quentin > > > > > [image: cid:image003.jpg at 01D690CB.3B3FDC10] > > Quentin CHEVALIER ? 
IA parcours recherche > > LadHyX - Ecole polytechnique > > __________ > > > On Mon, 6 Dec 2021 at 21:39, Matthew Knepley wrote: > >> On Mon, Dec 6, 2021 at 3:27 PM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >>> Fine. MWE is unchanged : >>> * Run this docker container >>> * Do : python3 -c "from petsc4py import PETSc; >>> PETSc.Viewer().createHDF5('dummy.h5')" >>> >>> Updated attempt at a fix : >>> * cd /usr/local/petsc/ >>> * ./configure PETSC_ARCH= linux-gnu-real-32 PETSC_DIR=/usr/local/petsc >>> --with-hdf5 --force >>> >> >> Did it find HDF5? If not, it will shut it off. You need to send us >> >> $PETSC_DIR/configure.log >> >> so we can see what happened in the configure run. >> >> Thanks, >> >> Matt >> >> >>> * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= linux-gnu-real-32 all >>> >>> Still no joy. The same error remains. >>> >>> Quentin >>> >>> >>> >>> On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet wrote: >>> > >>> > >>> > >>> > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> > >>> > The PETSC_DIR exactly corresponds to the previous one, so I guess that >>> rules option b) out, except if a specific option is required to overwrite a >>> previous installation of PETSc. As for a), well I thought reconfiguring >>> pretty direct, you're welcome to give me a hint as to what could be wrong >>> in the following process. >>> > >>> > Steps to reproduce this behaviour are as follows : >>> > * Run this docker container >>> > * Do : python3 -c "from petsc4py import PETSc; >>> PETSc.Viewer().createHDF5('dummy.h5')" >>> > >>> > After you get the error Unknown PetscViewer type, feel free to try : >>> > >>> > * cd /usr/local/petsc/ >>> > * ./configure --with-hfd5 >>> > >>> > >>> > It?s hdf5, not hfd5. >>> > It?s PETSC_ARCH, not PETSC-ARCH. >>> > PETSC_ARCH is missing from your configure line. >>> > >>> > Thanks, >>> > Pierre >>> > >>> > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all >>> > >>> > Then repeat the MWE and observe absolutely no behavioural change >>> whatsoever. I'm afraid I don't know PETSc well enough to be surprised by >>> that. >>> > >>> > Quentin >>> > >>> > >>> > >>> > Quentin CHEVALIER ? IA parcours recherche >>> > >>> > LadHyX - Ecole polytechnique >>> > >>> > __________ >>> > >>> > >>> > >>> > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley >>> wrote: >>> >> >>> >> On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >>> >>> >>> It failed all of the tests included in `make >>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 check`, with >>> >>> the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No such >>> file >>> >>> or directory` >>> >>> >>> >>> I am therefore fairly confident this a "file absence" problem, and >>> not >>> >>> a compilation problem. >>> >>> >>> >>> I repeat that there was no error at compilation stage. The final >>> stage >>> >>> did present `gmake[3]: Nothing to be done for 'libs'.` but that's >>> all. >>> >>> >>> >>> Again, running `./configure --with-hdf5` followed by a `make >>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` does not >>> >>> change the problem. I get the same error at the same position as >>> >>> before. 
>>> >> >>> >> >>> >> If you reconfigured and rebuilt, it is impossible to get the same >>> error, so >>> >> >>> >> a) You did not reconfigure >>> >> >>> >> b) Your new build is somewhere else on the machine >>> >> >>> >> Thanks, >>> >> >>> >> Matt >>> >> >>> >>> >>> >>> I will comment I am running on OpenSUSE. >>> >>> >>> >>> Quentin >>> >>> >>> >>> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley >>> wrote: >>> >>> > >>> >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >>> >> >>> >>> >> Hello Matthew and thanks for your quick response. >>> >>> >> >>> >>> >> I'm afraid I did try to snoop around the container and rerun >>> PETSc's >>> >>> >> configure with the --with-hdf5 option, to absolutely no avail. >>> >>> >> >>> >>> >> I didn't see any errors during config or make, but it failed the >>> tests >>> >>> >> (which aren't included in the minimal container I suppose) >>> >>> > >>> >>> > >>> >>> > Failed which tests? What was the error? >>> >>> > >>> >>> > Thanks, >>> >>> > >>> >>> > Matt >>> >>> > >>> >>> >> >>> >>> >> Quentin >>> >>> >> >>> >>> >> >>> >>> >> >>> >>> >> Quentin CHEVALIER ? IA parcours recherche >>> >>> >> >>> >>> >> LadHyX - Ecole polytechnique >>> >>> >> >>> >>> >> __________ >>> >>> >> >>> >>> >> >>> >>> >> >>> >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley >>> wrote: >>> >>> >> > >>> >>> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >>> >> >> >>> >>> >> >> Hello PETSc users, >>> >>> >> >> >>> >>> >> >> This email is a duplicata of this gitlab issue, sorry for any >>> inconvenience caused. >>> >>> >> >> >>> >>> >> >> I want to compute a PETSc vector in real mode, than perform >>> calculations with it in complex mode. I want as much of this process to be >>> parallel as possible. Right now, I compile PETSc in real mode, compute my >>> vector and save it to a file, then switch to complex mode, read it, and >>> move on. >>> >>> >> >> >>> >>> >> >> This creates unexpected behaviour using MPIIO, so on Lisandro >>> Dalcinl's advice I'm moving to HDF5 format. My code is as follows (taking >>> inspiration from petsc4py doc, a bitbucket example and another one, all top >>> Google results for 'petsc hdf5') : >>> >>> >> >>> >>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >>> >>> >> >>> q.load(viewer) >>> >>> >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, >>> mode=PETSc.ScatterMode.FORWARD) >>> >>> >> >> >>> >>> >> >> >>> >>> >> >> This crashes my code. I obtain traceback : >>> >>> >> >>> >>> >>> >> >>> File "/home/shared/code.py", line 121, in Load >>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', >>> COMM_WORLD) >>> >>> >> >>> File "PETSc/Viewer.pyx", line 182, in >>> petsc4py.PETSc.Viewer.createHDF5 >>> >>> >> >>> petsc4py.PETSc.Error: error code 86 >>> >>> >> >>> [0] PetscViewerSetType() at >>> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >>> >>> >> >>> [0] Unknown type. Check for miss-spelling or missing package: >>> https://petsc.org/release/install/install/#external-packages >>> >>> >> >>> [0] Unknown PetscViewer type given: hdf5 >>> >>> >> > >>> >>> >> > This means that PETSc has not been configured with HDF5 >>> (--with-hdf5 or --download-hdf5), so the container should be updated. 
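For reference, once a build really does have HDF5 enabled (--with-hdf5 or --download-hdf5), the save/load cycle this thread is after looks roughly like the C sketch below; the petsc4py Viewer.createHDF5 and Vec.load calls wrap these routines. The file name dummy.h5 and the object name "q" are illustrative, and whether a complex build reads a file written by a real build cleanly is a separate question that this thread does not settle.

#include <petscvec.h>
#include <petscviewerhdf5.h>

/* Minimal write-then-read of a Vec through the HDF5 viewer.
   Only compiles and links against a PETSc build configured with HDF5. */
int main(int argc, char **argv)
{
  Vec            q;
  PetscViewer    viewer;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 100, &q);CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)q, "q");CHKERRQ(ierr); /* becomes the dataset name in the .h5 file */
  ierr = VecSet(q, 1.0);CHKERRQ(ierr);

  /* write */
  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "dummy.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
  ierr = VecView(q, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  /* read back; the object name must match the dataset written above */
  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "dummy.h5", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = VecLoad(q, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  ierr = VecDestroy(&q);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}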
>>> >>> >> > >>> >>> >> > THanks, >>> >>> >> > >>> >>> >> > Matt >>> >>> >> > >>> >>> >> >> >>> >>> >> >> I have petsc4py 3.16 from this docker container (list of >>> dependencies include PETSc and petsc4py). >>> >>> >> >> >>> >>> >> >> I'm pretty sure this is not intended behaviour. Any insight as >>> to how to fix this issue (I tried running ./configure --with-hdf5 to no >>> avail) or more generally to perform this jiggling between real and complex >>> would be much appreciated, >>> >>> >> >> >>> >>> >> >> Kind regards. >>> >>> >> >> >>> >>> >> >> Quentin >>> >>> >> > >>> >>> >> > >>> >>> >> > >>> >>> >> > -- >>> >>> >> > What most experimenters take for granted before they begin >>> their experiments is infinitely more interesting than any results to which >>> their experiments lead. >>> >>> >> > -- Norbert Wiener >>> >>> >> > >>> >>> >> > https://www.cse.buffalo.edu/~knepley/ >>> >>> > >>> >>> > >>> >>> > >>> >>> > -- >>> >>> > What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> >>> > -- Norbert Wiener >>> >>> > >>> >>> > https://www.cse.buffalo.edu/~knepley/ >>> >> >>> >> >>> >> >>> >> -- >>> >> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> >> -- Norbert Wiener >>> >> >>> >> https://www.cse.buffalo.edu/~knepley/ >>> > >>> > >>> > >>> > >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From quentin.chevalier at polytechnique.edu Tue Dec 7 07:26:30 2021 From: quentin.chevalier at polytechnique.edu (Quentin Chevalier) Date: Tue, 7 Dec 2021 14:26:30 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: Ok my bad, that log corresponded to a tentative --download-hdf5. This log corresponds to the commands given above and has --with-hdf5 in its options. The whole process still results in the same error. Quentin Quentin CHEVALIER ? IA parcours recherche LadHyX - Ecole polytechnique __________ On Tue, 7 Dec 2021 at 13:59, Matthew Knepley wrote: > > On Tue, Dec 7, 2021 at 3:55 AM Quentin Chevalier wrote: >> >> Hello Matthew, >> >> That would indeed make sense. >> >> Full log is attached, I grepped hdf5 in there and didn't find anything alarming. > > > At the top of this log: > > Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions PETSC_ARCH=linux-gnu-complex-64 --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-make-np=2 --with-64-bit-indices=yes --with-debugging=no --with-fortran-bindings=no --with-shared-libraries --download-hypre --download-mumps --download-ptscotch --download-scalapack --download-suitesparse --download-superlu_dist --with-scalar-type=complex > > > So the HDF5 option is not being specified. > > Thanks, > > Matt > >> Cheers, >> >> Quentin >> >> >> >> >> Quentin CHEVALIER ? 
IA parcours recherche >> >> LadHyX - Ecole polytechnique >> >> __________ >> >> >> >> On Mon, 6 Dec 2021 at 21:39, Matthew Knepley wrote: >>> >>> On Mon, Dec 6, 2021 at 3:27 PM Quentin Chevalier wrote: >>>> >>>> Fine. MWE is unchanged : >>>> * Run this docker container >>>> * Do : python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('dummy.h5')" >>>> >>>> Updated attempt at a fix : >>>> * cd /usr/local/petsc/ >>>> * ./configure PETSC_ARCH= linux-gnu-real-32 PETSC_DIR=/usr/local/petsc --with-hdf5 --force >>> >>> >>> Did it find HDF5? If not, it will shut it off. You need to send us >>> >>> $PETSC_DIR/configure.log >>> >>> so we can see what happened in the configure run. >>> >>> Thanks, >>> >>> Matt >>> >>>> >>>> * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= linux-gnu-real-32 all >>>> >>>> Still no joy. The same error remains. >>>> >>>> Quentin >>>> >>>> >>>> >>>> On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet wrote: >>>> > >>>> > >>>> > >>>> > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier wrote: >>>> > >>>> > The PETSC_DIR exactly corresponds to the previous one, so I guess that rules option b) out, except if a specific option is required to overwrite a previous installation of PETSc. As for a), well I thought reconfiguring pretty direct, you're welcome to give me a hint as to what could be wrong in the following process. >>>> > >>>> > Steps to reproduce this behaviour are as follows : >>>> > * Run this docker container >>>> > * Do : python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('dummy.h5')" >>>> > >>>> > After you get the error Unknown PetscViewer type, feel free to try : >>>> > >>>> > * cd /usr/local/petsc/ >>>> > * ./configure --with-hfd5 >>>> > >>>> > >>>> > It?s hdf5, not hfd5. >>>> > It?s PETSC_ARCH, not PETSC-ARCH. >>>> > PETSC_ARCH is missing from your configure line. >>>> > >>>> > Thanks, >>>> > Pierre >>>> > >>>> > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all >>>> > >>>> > Then repeat the MWE and observe absolutely no behavioural change whatsoever. I'm afraid I don't know PETSc well enough to be surprised by that. >>>> > >>>> > Quentin >>>> > >>>> > >>>> > >>>> > Quentin CHEVALIER ? IA parcours recherche >>>> > >>>> > LadHyX - Ecole polytechnique >>>> > >>>> > __________ >>>> > >>>> > >>>> > >>>> > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley wrote: >>>> >> >>>> >> On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier wrote: >>>> >>> >>>> >>> It failed all of the tests included in `make >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 check`, with >>>> >>> the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No such file >>>> >>> or directory` >>>> >>> >>>> >>> I am therefore fairly confident this a "file absence" problem, and not >>>> >>> a compilation problem. >>>> >>> >>>> >>> I repeat that there was no error at compilation stage. The final stage >>>> >>> did present `gmake[3]: Nothing to be done for 'libs'.` but that's all. >>>> >>> >>>> >>> Again, running `./configure --with-hdf5` followed by a `make >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` does not >>>> >>> change the problem. I get the same error at the same position as >>>> >>> before. >>>> >> >>>> >> >>>> >> If you reconfigured and rebuilt, it is impossible to get the same error, so >>>> >> >>>> >> a) You did not reconfigure >>>> >> >>>> >> b) Your new build is somewhere else on the machine >>>> >> >>>> >> Thanks, >>>> >> >>>> >> Matt >>>> >> >>>> >>> >>>> >>> I will comment I am running on OpenSUSE. 
>>>> >>> >>>> >>> Quentin >>>> >>> >>>> >>> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley wrote: >>>> >>> > >>>> >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier wrote: >>>> >>> >> >>>> >>> >> Hello Matthew and thanks for your quick response. >>>> >>> >> >>>> >>> >> I'm afraid I did try to snoop around the container and rerun PETSc's >>>> >>> >> configure with the --with-hdf5 option, to absolutely no avail. >>>> >>> >> >>>> >>> >> I didn't see any errors during config or make, but it failed the tests >>>> >>> >> (which aren't included in the minimal container I suppose) >>>> >>> > >>>> >>> > >>>> >>> > Failed which tests? What was the error? >>>> >>> > >>>> >>> > Thanks, >>>> >>> > >>>> >>> > Matt >>>> >>> > >>>> >>> >> >>>> >>> >> Quentin >>>> >>> >> >>>> >>> >> >>>> >>> >> >>>> >>> >> Quentin CHEVALIER ? IA parcours recherche >>>> >>> >> >>>> >>> >> LadHyX - Ecole polytechnique >>>> >>> >> >>>> >>> >> __________ >>>> >>> >> >>>> >>> >> >>>> >>> >> >>>> >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley wrote: >>>> >>> >> > >>>> >>> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier wrote: >>>> >>> >> >> >>>> >>> >> >> Hello PETSc users, >>>> >>> >> >> >>>> >>> >> >> This email is a duplicata of this gitlab issue, sorry for any inconvenience caused. >>>> >>> >> >> >>>> >>> >> >> I want to compute a PETSc vector in real mode, than perform calculations with it in complex mode. I want as much of this process to be parallel as possible. Right now, I compile PETSc in real mode, compute my vector and save it to a file, then switch to complex mode, read it, and move on. >>>> >>> >> >> >>>> >>> >> >> This creates unexpected behaviour using MPIIO, so on Lisandro Dalcinl's advice I'm moving to HDF5 format. My code is as follows (taking inspiration from petsc4py doc, a bitbucket example and another one, all top Google results for 'petsc hdf5') : >>>> >>> >> >>> >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >>>> >>> >> >>> q.load(viewer) >>>> >>> >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD) >>>> >>> >> >> >>>> >>> >> >> >>>> >>> >> >> This crashes my code. I obtain traceback : >>>> >>> >> >>> >>>> >>> >> >>> File "/home/shared/code.py", line 121, in Load >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >>>> >>> >> >>> File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 >>>> >>> >> >>> petsc4py.PETSc.Error: error code 86 >>>> >>> >> >>> [0] PetscViewerSetType() at /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >>>> >>> >> >>> [0] Unknown type. Check for miss-spelling or missing package: https://petsc.org/release/install/install/#external-packages >>>> >>> >> >>> [0] Unknown PetscViewer type given: hdf5 >>>> >>> >> > >>>> >>> >> > This means that PETSc has not been configured with HDF5 (--with-hdf5 or --download-hdf5), so the container should be updated. >>>> >>> >> > >>>> >>> >> > THanks, >>>> >>> >> > >>>> >>> >> > Matt >>>> >>> >> > >>>> >>> >> >> >>>> >>> >> >> I have petsc4py 3.16 from this docker container (list of dependencies include PETSc and petsc4py). >>>> >>> >> >> >>>> >>> >> >> I'm pretty sure this is not intended behaviour. Any insight as to how to fix this issue (I tried running ./configure --with-hdf5 to no avail) or more generally to perform this jiggling between real and complex would be much appreciated, >>>> >>> >> >> >>>> >>> >> >> Kind regards. 
>>>> >>> >> >> >>>> >>> >> >> Quentin >>>> >>> >> > >>>> >>> >> > >>>> >>> >> > >>>> >>> >> > -- >>>> >>> >> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> >>> >> > -- Norbert Wiener >>>> >>> >> > >>>> >>> >> > https://www.cse.buffalo.edu/~knepley/ >>>> >>> > >>>> >>> > >>>> >>> > >>>> >>> > -- >>>> >>> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> >>> > -- Norbert Wiener >>>> >>> > >>>> >>> > https://www.cse.buffalo.edu/~knepley/ >>>> >> >>>> >> >>>> >> >>>> >> -- >>>> >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> >> -- Norbert Wiener >>>> >> >>>> >> https://www.cse.buffalo.edu/~knepley/ >>>> > >>>> > >>>> > >>>> > >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 863767 bytes Desc: not available URL: From knepley at gmail.com Tue Dec 7 07:58:24 2021 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 7 Dec 2021 08:58:24 -0500 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: On Tue, Dec 7, 2021 at 8:26 AM Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: > Ok my bad, that log corresponded to a tentative --download-hdf5. This > log corresponds to the commands given above and has --with-hdf5 in its > options. > Okay, this configure was successful and found HDF5 > The whole process still results in the same error. > Now send me the complete error output with this PETSc. Thanks, Matt > Quentin > > > > Quentin CHEVALIER ? IA parcours recherche > > LadHyX - Ecole polytechnique > > __________ > > > > On Tue, 7 Dec 2021 at 13:59, Matthew Knepley wrote: > > > > On Tue, Dec 7, 2021 at 3:55 AM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> > >> Hello Matthew, > >> > >> That would indeed make sense. > >> > >> Full log is attached, I grepped hdf5 in there and didn't find anything > alarming. > > > > > > At the top of this log: > > > > Configure Options: --configModules=PETSc.Configure > --optionsModule=config.compilerOptions PETSC_ARCH=linux-gnu-complex-64 > --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-make-np=2 > --with-64-bit-indices=yes --with-debugging=no --with-fortran-bindings=no > --with-shared-libraries --download-hypre --download-mumps > --download-ptscotch --download-scalapack --download-suitesparse > --download-superlu_dist --with-scalar-type=complex > > > > > > So the HDF5 option is not being specified. > > > > Thanks, > > > > Matt > > > >> Cheers, > >> > >> Quentin > >> > >> > >> > >> > >> Quentin CHEVALIER ? 
IA parcours recherche > >> > >> LadHyX - Ecole polytechnique > >> > >> __________ > >> > >> > >> > >> On Mon, 6 Dec 2021 at 21:39, Matthew Knepley wrote: > >>> > >>> On Mon, Dec 6, 2021 at 3:27 PM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >>>> > >>>> Fine. MWE is unchanged : > >>>> * Run this docker container > >>>> * Do : python3 -c "from petsc4py import PETSc; > PETSc.Viewer().createHDF5('dummy.h5')" > >>>> > >>>> Updated attempt at a fix : > >>>> * cd /usr/local/petsc/ > >>>> * ./configure PETSC_ARCH= linux-gnu-real-32 > PETSC_DIR=/usr/local/petsc --with-hdf5 --force > >>> > >>> > >>> Did it find HDF5? If not, it will shut it off. You need to send us > >>> > >>> $PETSC_DIR/configure.log > >>> > >>> so we can see what happened in the configure run. > >>> > >>> Thanks, > >>> > >>> Matt > >>> > >>>> > >>>> * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= linux-gnu-real-32 all > >>>> > >>>> Still no joy. The same error remains. > >>>> > >>>> Quentin > >>>> > >>>> > >>>> > >>>> On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet wrote: > >>>> > > >>>> > > >>>> > > >>>> > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >>>> > > >>>> > The PETSC_DIR exactly corresponds to the previous one, so I guess > that rules option b) out, except if a specific option is required to > overwrite a previous installation of PETSc. As for a), well I thought > reconfiguring pretty direct, you're welcome to give me a hint as to what > could be wrong in the following process. > >>>> > > >>>> > Steps to reproduce this behaviour are as follows : > >>>> > * Run this docker container > >>>> > * Do : python3 -c "from petsc4py import PETSc; > PETSc.Viewer().createHDF5('dummy.h5')" > >>>> > > >>>> > After you get the error Unknown PetscViewer type, feel free to try : > >>>> > > >>>> > * cd /usr/local/petsc/ > >>>> > * ./configure --with-hfd5 > >>>> > > >>>> > > >>>> > It?s hdf5, not hfd5. > >>>> > It?s PETSC_ARCH, not PETSC-ARCH. > >>>> > PETSC_ARCH is missing from your configure line. > >>>> > > >>>> > Thanks, > >>>> > Pierre > >>>> > > >>>> > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all > >>>> > > >>>> > Then repeat the MWE and observe absolutely no behavioural change > whatsoever. I'm afraid I don't know PETSc well enough to be surprised by > that. > >>>> > > >>>> > Quentin > >>>> > > >>>> > > >>>> > > >>>> > Quentin CHEVALIER ? IA parcours recherche > >>>> > > >>>> > LadHyX - Ecole polytechnique > >>>> > > >>>> > __________ > >>>> > > >>>> > > >>>> > > >>>> > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley > wrote: > >>>> >> > >>>> >> On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >>>> >>> > >>>> >>> It failed all of the tests included in `make > >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 check`, > with > >>>> >>> the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No such > file > >>>> >>> or directory` > >>>> >>> > >>>> >>> I am therefore fairly confident this a "file absence" problem, > and not > >>>> >>> a compilation problem. > >>>> >>> > >>>> >>> I repeat that there was no error at compilation stage. The final > stage > >>>> >>> did present `gmake[3]: Nothing to be done for 'libs'.` but that's > all. > >>>> >>> > >>>> >>> Again, running `./configure --with-hdf5` followed by a `make > >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` does > not > >>>> >>> change the problem. 
I get the same error at the same position as > >>>> >>> before. > >>>> >> > >>>> >> > >>>> >> If you reconfigured and rebuilt, it is impossible to get the same > error, so > >>>> >> > >>>> >> a) You did not reconfigure > >>>> >> > >>>> >> b) Your new build is somewhere else on the machine > >>>> >> > >>>> >> Thanks, > >>>> >> > >>>> >> Matt > >>>> >> > >>>> >>> > >>>> >>> I will comment I am running on OpenSUSE. > >>>> >>> > >>>> >>> Quentin > >>>> >>> > >>>> >>> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley > wrote: > >>>> >>> > > >>>> >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >>>> >>> >> > >>>> >>> >> Hello Matthew and thanks for your quick response. > >>>> >>> >> > >>>> >>> >> I'm afraid I did try to snoop around the container and rerun > PETSc's > >>>> >>> >> configure with the --with-hdf5 option, to absolutely no avail. > >>>> >>> >> > >>>> >>> >> I didn't see any errors during config or make, but it failed > the tests > >>>> >>> >> (which aren't included in the minimal container I suppose) > >>>> >>> > > >>>> >>> > > >>>> >>> > Failed which tests? What was the error? > >>>> >>> > > >>>> >>> > Thanks, > >>>> >>> > > >>>> >>> > Matt > >>>> >>> > > >>>> >>> >> > >>>> >>> >> Quentin > >>>> >>> >> > >>>> >>> >> > >>>> >>> >> > >>>> >>> >> Quentin CHEVALIER ? IA parcours recherche > >>>> >>> >> > >>>> >>> >> LadHyX - Ecole polytechnique > >>>> >>> >> > >>>> >>> >> __________ > >>>> >>> >> > >>>> >>> >> > >>>> >>> >> > >>>> >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley < > knepley at gmail.com> wrote: > >>>> >>> >> > > >>>> >>> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >>>> >>> >> >> > >>>> >>> >> >> Hello PETSc users, > >>>> >>> >> >> > >>>> >>> >> >> This email is a duplicata of this gitlab issue, sorry for > any inconvenience caused. > >>>> >>> >> >> > >>>> >>> >> >> I want to compute a PETSc vector in real mode, than perform > calculations with it in complex mode. I want as much of this process to be > parallel as possible. Right now, I compile PETSc in real mode, compute my > vector and save it to a file, then switch to complex mode, read it, and > move on. > >>>> >>> >> >> > >>>> >>> >> >> This creates unexpected behaviour using MPIIO, so on > Lisandro Dalcinl's advice I'm moving to HDF5 format. My code is as follows > (taking inspiration from petsc4py doc, a bitbucket example and another one, > all top Google results for 'petsc hdf5') : > >>>> >>> >> >>> > >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', > COMM_WORLD) > >>>> >>> >> >>> q.load(viewer) > >>>> >>> >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, > mode=PETSc.ScatterMode.FORWARD) > >>>> >>> >> >> > >>>> >>> >> >> > >>>> >>> >> >> This crashes my code. I obtain traceback : > >>>> >>> >> >>> > >>>> >>> >> >>> File "/home/shared/code.py", line 121, in Load > >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', > COMM_WORLD) > >>>> >>> >> >>> File "PETSc/Viewer.pyx", line 182, in > petsc4py.PETSc.Viewer.createHDF5 > >>>> >>> >> >>> petsc4py.PETSc.Error: error code 86 > >>>> >>> >> >>> [0] PetscViewerSetType() at > /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 > >>>> >>> >> >>> [0] Unknown type. 
Check for miss-spelling or missing > package: https://petsc.org/release/install/install/#external-packages > >>>> >>> >> >>> [0] Unknown PetscViewer type given: hdf5 > >>>> >>> >> > > >>>> >>> >> > This means that PETSc has not been configured with HDF5 > (--with-hdf5 or --download-hdf5), so the container should be updated. > >>>> >>> >> > > >>>> >>> >> > THanks, > >>>> >>> >> > > >>>> >>> >> > Matt > >>>> >>> >> > > >>>> >>> >> >> > >>>> >>> >> >> I have petsc4py 3.16 from this docker container (list of > dependencies include PETSc and petsc4py). > >>>> >>> >> >> > >>>> >>> >> >> I'm pretty sure this is not intended behaviour. Any insight > as to how to fix this issue (I tried running ./configure --with-hdf5 to no > avail) or more generally to perform this jiggling between real and complex > would be much appreciated, > >>>> >>> >> >> > >>>> >>> >> >> Kind regards. > >>>> >>> >> >> > >>>> >>> >> >> Quentin > >>>> >>> >> > > >>>> >>> >> > > >>>> >>> >> > > >>>> >>> >> > -- > >>>> >>> >> > What most experimenters take for granted before they begin > their experiments is infinitely more interesting than any results to which > their experiments lead. > >>>> >>> >> > -- Norbert Wiener > >>>> >>> >> > > >>>> >>> >> > https://www.cse.buffalo.edu/~knepley/ > >>>> >>> > > >>>> >>> > > >>>> >>> > > >>>> >>> > -- > >>>> >>> > What most experimenters take for granted before they begin > their experiments is infinitely more interesting than any results to which > their experiments lead. > >>>> >>> > -- Norbert Wiener > >>>> >>> > > >>>> >>> > https://www.cse.buffalo.edu/~knepley/ > >>>> >> > >>>> >> > >>>> >> > >>>> >> -- > >>>> >> What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > >>>> >> -- Norbert Wiener > >>>> >> > >>>> >> https://www.cse.buffalo.edu/~knepley/ > >>>> > > >>>> > > >>>> > > >>>> > > >>> > >>> > >>> > >>> -- > >>> What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > >>> -- Norbert Wiener > >>> > >>> https://www.cse.buffalo.edu/~knepley/ > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From wence at gmx.li Tue Dec 7 07:59:10 2021 From: wence at gmx.li (Lawrence Mitchell) Date: Tue, 7 Dec 2021 13:59:10 +0000 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: > On 7 Dec 2021, at 13:26, Quentin Chevalier wrote: > > Ok my bad, that log corresponded to a tentative --download-hdf5. This > log corresponds to the commands given above and has --with-hdf5 in its > options. OK, so PETSc is configured with HDF5. I assume you have now built it (with make as instructed) You now need to _rebuild_ petsc4py to link against this new PETSc. 
Something like PETSC_DIR=/path/to/petsc PETSC_ARCH=whatever-arch pip install /path/to/petsc/src/binding/petsc4py I note that the container build cleans out the source tree (see https://github.com/FEniCS/dolfinx/blob/main/docker/Dockerfile#L338), so I think that you have only rerun configure and not make (so you don't yet have a libpetsc that is appropriately linked). Lawrence From quentin.chevalier at polytechnique.edu Tue Dec 7 08:43:15 2021 From: quentin.chevalier at polytechnique.edu (Quentin Chevalier) Date: Tue, 7 Dec 2021 15:43:15 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: @Matthew, as stated before, error output is unchanged, i.e.the python command below produces the same traceback : # python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('d.h5')" Traceback (most recent call last): File "", line 1, in File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 petsc4py.PETSc.Error: error code 86 [0] PetscViewerSetType() at /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 [0] Unknown type. Check for miss-spelling or missing package: https://petsc.org/release/install/install/#external-packages [0] Unknown PetscViewer type given: hdf5 @Wence that makes sense. I'd assumed that the original PETSc had been overwritten, and if the linking has gone wrong I'm surprised anything happens with petsc4py at all. Your tentative command gave : ERROR: Invalid requirement: '/usr/local/petsc/src/binding/petsc4py' Hint: It looks like a path. File '/usr/local/petsc/src/binding/petsc4py' does not exist. So I tested that global variables PETSC_ARCH & PETSC_DIR were correct then ran "pip install petsc4py" to restart petsc4py from scratch. This gives rise to a different error : # python3 -c "from petsc4py import PETSc" Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python3.9/dist-packages/petsc4py/PETSc.py", line 3, in PETSc = ImportPETSc(ARCH) File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", line 29, in ImportPETSc return Import('petsc4py', 'PETSc', path, arch) File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", line 73, in Import module = import_module(pkg, name, path, arch) File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", line 58, in import_module with f: return imp.load_module(fullname, f, fn, info) File "/usr/lib/python3.9/imp.py", line 242, in load_module return load_dynamic(name, filename, file) File "/usr/lib/python3.9/imp.py", line 342, in load_dynamic return _load(spec) ImportError: /usr/local/lib/python3.9/dist-packages/petsc4py/lib/linux-gnu-real-32/PETSc.cpython-39-x86_64-linux-gnu.so: undefined symbol: petscstack Not sure that it a step forward ; looks like petsc4py is broken now. Quentin On Tue, 7 Dec 2021 at 14:58, Matthew Knepley wrote: > > On Tue, Dec 7, 2021 at 8:26 AM Quentin Chevalier wrote: >> >> Ok my bad, that log corresponded to a tentative --download-hdf5. This >> log corresponds to the commands given above and has --with-hdf5 in its >> options. > > > Okay, this configure was successful and found HDF5 > >> >> The whole process still results in the same error. > > > Now send me the complete error output with this PETSc. > > Thanks, > > Matt > >> >> Quentin >> >> >> >> Quentin CHEVALIER ? 
IA parcours recherche >> >> LadHyX - Ecole polytechnique >> >> __________ >> >> >> >> On Tue, 7 Dec 2021 at 13:59, Matthew Knepley wrote: >> > >> > On Tue, Dec 7, 2021 at 3:55 AM Quentin Chevalier wrote: >> >> >> >> Hello Matthew, >> >> >> >> That would indeed make sense. >> >> >> >> Full log is attached, I grepped hdf5 in there and didn't find anything alarming. >> > >> > >> > At the top of this log: >> > >> > Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions PETSC_ARCH=linux-gnu-complex-64 --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-make-np=2 --with-64-bit-indices=yes --with-debugging=no --with-fortran-bindings=no --with-shared-libraries --download-hypre --download-mumps --download-ptscotch --download-scalapack --download-suitesparse --download-superlu_dist --with-scalar-type=complex >> > >> > >> > So the HDF5 option is not being specified. >> > >> > Thanks, >> > >> > Matt >> > >> >> Cheers, >> >> >> >> Quentin >> >> >> >> >> >> >> >> >> >> Quentin CHEVALIER ? IA parcours recherche >> >> >> >> LadHyX - Ecole polytechnique >> >> >> >> __________ >> >> >> >> >> >> >> >> On Mon, 6 Dec 2021 at 21:39, Matthew Knepley wrote: >> >>> >> >>> On Mon, Dec 6, 2021 at 3:27 PM Quentin Chevalier wrote: >> >>>> >> >>>> Fine. MWE is unchanged : >> >>>> * Run this docker container >> >>>> * Do : python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('dummy.h5')" >> >>>> >> >>>> Updated attempt at a fix : >> >>>> * cd /usr/local/petsc/ >> >>>> * ./configure PETSC_ARCH= linux-gnu-real-32 PETSC_DIR=/usr/local/petsc --with-hdf5 --force >> >>> >> >>> >> >>> Did it find HDF5? If not, it will shut it off. You need to send us >> >>> >> >>> $PETSC_DIR/configure.log >> >>> >> >>> so we can see what happened in the configure run. >> >>> >> >>> Thanks, >> >>> >> >>> Matt >> >>> >> >>>> >> >>>> * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= linux-gnu-real-32 all >> >>>> >> >>>> Still no joy. The same error remains. >> >>>> >> >>>> Quentin >> >>>> >> >>>> >> >>>> >> >>>> On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet wrote: >> >>>> > >> >>>> > >> >>>> > >> >>>> > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier wrote: >> >>>> > >> >>>> > The PETSC_DIR exactly corresponds to the previous one, so I guess that rules option b) out, except if a specific option is required to overwrite a previous installation of PETSc. As for a), well I thought reconfiguring pretty direct, you're welcome to give me a hint as to what could be wrong in the following process. >> >>>> > >> >>>> > Steps to reproduce this behaviour are as follows : >> >>>> > * Run this docker container >> >>>> > * Do : python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('dummy.h5')" >> >>>> > >> >>>> > After you get the error Unknown PetscViewer type, feel free to try : >> >>>> > >> >>>> > * cd /usr/local/petsc/ >> >>>> > * ./configure --with-hfd5 >> >>>> > >> >>>> > >> >>>> > It?s hdf5, not hfd5. >> >>>> > It?s PETSC_ARCH, not PETSC-ARCH. >> >>>> > PETSC_ARCH is missing from your configure line. >> >>>> > >> >>>> > Thanks, >> >>>> > Pierre >> >>>> > >> >>>> > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all >> >>>> > >> >>>> > Then repeat the MWE and observe absolutely no behavioural change whatsoever. I'm afraid I don't know PETSc well enough to be surprised by that. >> >>>> > >> >>>> > Quentin >> >>>> > >> >>>> > >> >>>> > >> >>>> > Quentin CHEVALIER ? 
IA parcours recherche >> >>>> > >> >>>> > LadHyX - Ecole polytechnique >> >>>> > >> >>>> > __________ >> >>>> > >> >>>> > >> >>>> > >> >>>> > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley wrote: >> >>>> >> >> >>>> >> On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier wrote: >> >>>> >>> >> >>>> >>> It failed all of the tests included in `make >> >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 check`, with >> >>>> >>> the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No such file >> >>>> >>> or directory` >> >>>> >>> >> >>>> >>> I am therefore fairly confident this a "file absence" problem, and not >> >>>> >>> a compilation problem. >> >>>> >>> >> >>>> >>> I repeat that there was no error at compilation stage. The final stage >> >>>> >>> did present `gmake[3]: Nothing to be done for 'libs'.` but that's all. >> >>>> >>> >> >>>> >>> Again, running `./configure --with-hdf5` followed by a `make >> >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` does not >> >>>> >>> change the problem. I get the same error at the same position as >> >>>> >>> before. >> >>>> >> >> >>>> >> >> >>>> >> If you reconfigured and rebuilt, it is impossible to get the same error, so >> >>>> >> >> >>>> >> a) You did not reconfigure >> >>>> >> >> >>>> >> b) Your new build is somewhere else on the machine >> >>>> >> >> >>>> >> Thanks, >> >>>> >> >> >>>> >> Matt >> >>>> >> >> >>>> >>> >> >>>> >>> I will comment I am running on OpenSUSE. >> >>>> >>> >> >>>> >>> Quentin >> >>>> >>> >> >>>> >>> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley wrote: >> >>>> >>> > >> >>>> >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier wrote: >> >>>> >>> >> >> >>>> >>> >> Hello Matthew and thanks for your quick response. >> >>>> >>> >> >> >>>> >>> >> I'm afraid I did try to snoop around the container and rerun PETSc's >> >>>> >>> >> configure with the --with-hdf5 option, to absolutely no avail. >> >>>> >>> >> >> >>>> >>> >> I didn't see any errors during config or make, but it failed the tests >> >>>> >>> >> (which aren't included in the minimal container I suppose) >> >>>> >>> > >> >>>> >>> > >> >>>> >>> > Failed which tests? What was the error? >> >>>> >>> > >> >>>> >>> > Thanks, >> >>>> >>> > >> >>>> >>> > Matt >> >>>> >>> > >> >>>> >>> >> >> >>>> >>> >> Quentin >> >>>> >>> >> >> >>>> >>> >> >> >>>> >>> >> >> >>>> >>> >> Quentin CHEVALIER ? IA parcours recherche >> >>>> >>> >> >> >>>> >>> >> LadHyX - Ecole polytechnique >> >>>> >>> >> >> >>>> >>> >> __________ >> >>>> >>> >> >> >>>> >>> >> >> >>>> >>> >> >> >>>> >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley wrote: >> >>>> >>> >> > >> >>>> >>> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier wrote: >> >>>> >>> >> >> >> >>>> >>> >> >> Hello PETSc users, >> >>>> >>> >> >> >> >>>> >>> >> >> This email is a duplicata of this gitlab issue, sorry for any inconvenience caused. >> >>>> >>> >> >> >> >>>> >>> >> >> I want to compute a PETSc vector in real mode, than perform calculations with it in complex mode. I want as much of this process to be parallel as possible. Right now, I compile PETSc in real mode, compute my vector and save it to a file, then switch to complex mode, read it, and move on. >> >>>> >>> >> >> >> >>>> >>> >> >> This creates unexpected behaviour using MPIIO, so on Lisandro Dalcinl's advice I'm moving to HDF5 format. 
My code is as follows (taking inspiration from petsc4py doc, a bitbucket example and another one, all top Google results for 'petsc hdf5') : >> >>>> >>> >> >>> >> >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >> >>>> >>> >> >>> q.load(viewer) >> >>>> >>> >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD) >> >>>> >>> >> >> >> >>>> >>> >> >> >> >>>> >>> >> >> This crashes my code. I obtain traceback : >> >>>> >>> >> >>> >> >>>> >>> >> >>> File "/home/shared/code.py", line 121, in Load >> >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >> >>>> >>> >> >>> File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 >> >>>> >>> >> >>> petsc4py.PETSc.Error: error code 86 >> >>>> >>> >> >>> [0] PetscViewerSetType() at /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >> >>>> >>> >> >>> [0] Unknown type. Check for miss-spelling or missing package: https://petsc.org/release/install/install/#external-packages >> >>>> >>> >> >>> [0] Unknown PetscViewer type given: hdf5 >> >>>> >>> >> > >> >>>> >>> >> > This means that PETSc has not been configured with HDF5 (--with-hdf5 or --download-hdf5), so the container should be updated. >> >>>> >>> >> > >> >>>> >>> >> > THanks, >> >>>> >>> >> > >> >>>> >>> >> > Matt >> >>>> >>> >> > >> >>>> >>> >> >> >> >>>> >>> >> >> I have petsc4py 3.16 from this docker container (list of dependencies include PETSc and petsc4py). >> >>>> >>> >> >> >> >>>> >>> >> >> I'm pretty sure this is not intended behaviour. Any insight as to how to fix this issue (I tried running ./configure --with-hdf5 to no avail) or more generally to perform this jiggling between real and complex would be much appreciated, >> >>>> >>> >> >> >> >>>> >>> >> >> Kind regards. >> >>>> >>> >> >> >> >>>> >>> >> >> Quentin >> >>>> >>> >> > >> >>>> >>> >> > >> >>>> >>> >> > >> >>>> >>> >> > -- >> >>>> >>> >> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> >>>> >>> >> > -- Norbert Wiener >> >>>> >>> >> > >> >>>> >>> >> > https://www.cse.buffalo.edu/~knepley/ >> >>>> >>> > >> >>>> >>> > >> >>>> >>> > >> >>>> >>> > -- >> >>>> >>> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> >>>> >>> > -- Norbert Wiener >> >>>> >>> > >> >>>> >>> > https://www.cse.buffalo.edu/~knepley/ >> >>>> >> >> >>>> >> >> >>>> >> >> >>>> >> -- >> >>>> >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> >>>> >> -- Norbert Wiener >> >>>> >> >> >>>> >> https://www.cse.buffalo.edu/~knepley/ >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>> >> >>> >> >>> >> >>> -- >> >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> >>> -- Norbert Wiener >> >>> >> >>> https://www.cse.buffalo.edu/~knepley/ >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
>> > -- Norbert Wiener >> > >> > https://www.cse.buffalo.edu/~knepley/ > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From bsmith at petsc.dev Tue Dec 7 09:13:15 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 7 Dec 2021 10:13:15 -0500 Subject: [petsc-users] Tips on integrating MPI ksp petsc into my application? In-Reply-To: <2030978811.184065.1638849869029@mail.yahoo.com> References: <2030978811.184065.1638849869029.ref@mail.yahoo.com> <2030978811.184065.1638849869029@mail.yahoo.com> Message-ID: If you use MatLoad() it never has the entire matrix on a single rank at the same time; it efficiently gets the matrix from the file spread out over all the ranks. > On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users wrote: > > I am studying the examples but it seems all ranks read the full matrix. Is there an MPI example where only rank 0 reads the matrix? > > I don't want all ranks to read my input matrix and consume a lot of memory allocating data for the arrays. > > I have worked with Intel's cluster sparse solver and their documentation states: > > " Most of the input parameters must be set on the master MPI process only, and ignored on other processes. Other MPI processes get all required data from the master MPI process using the MPI communicator, comm. " From wence at gmx.li Tue Dec 7 09:18:26 2021 From: wence at gmx.li (Lawrence Mitchell) Date: Tue, 7 Dec 2021 15:18:26 +0000 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: <42F559F9-8E4B-480E-BEAE-8AEBC24D927C@gmx.li> Comments inline below: > On 7 Dec 2021, at 14:43, Quentin Chevalier wrote: > > @Matthew, as stated before, error output is unchanged, i.e.the python > command below produces the same traceback : > > # python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('d.h5')" > Traceback (most recent call last): > File "", line 1, in > File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 > petsc4py.PETSc.Error: error code 86 > [0] PetscViewerSetType() at > /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 > [0] Unknown type. Check for miss-spelling or missing package: > https://petsc.org/release/install/install/#external-packages > [0] Unknown PetscViewer type given: hdf5 > > @Wence that makes sense. I'd assumed that the original PETSc had been > overwritten, and if the linking has gone wrong I'm surprised anything > happens with petsc4py at all. > > Your tentative command gave : > > ERROR: Invalid requirement: '/usr/local/petsc/src/binding/petsc4py' > Hint: It looks like a path. File > '/usr/local/petsc/src/binding/petsc4py' does not exist. > > So I tested that global variables PETSC_ARCH & PETSC_DIR were correct > then ran "pip install petsc4py" to restart petsc4py from scratch. This downloads petsc4py from pypi. It is not guaranteed to give you a version that matches the PETSc version you have installed (which is the source of your error below) > This > gives rise to a different error : > [...] > ImportError: /usr/local/lib/python3.9/dist-packages/petsc4py/lib/linux-gnu-real-32/PETSc.cpython-39-x86_64-linux-gnu.so: > undefined symbol: petscstack > > Not sure that it a step forward ; looks like petsc4py is broken now. The steps to build PETSc with HDF5 support and then get a compatible petsc4py are: 1. 
Download the PETSc source somehow (https://petsc.org/release/download/) I now assume that this source tree lives in .../petsc 2. cd .../petsc 3. ./configure --with-hdf5 --any-other --configure-flags --you-want 4. Run the appropriate "make" command as suggested by configure 5. Run the appropriate "make check" command as suggested by configure 6. Set PETSC_DIR and PETSC_ARCH appropriately 7. pip install src/binding/petsc4py If you are working the docker container from dolfinx/dolfinx, you can see the commands that are run to install PETSc, and then petsc4py, here https://github.com/FEniCS/dolfinx/blob/main/docker/Dockerfile#L243 If you want to reproduce these versions of PETSc but with the addition of HDF5 support, just add --with-hdf5 to all of the relevant configure lines. Lawrence From liyuansen89 at gmail.com Tue Dec 7 09:25:25 2021 From: liyuansen89 at gmail.com (liyuansen89 at gmail.com) Date: Tue, 7 Dec 2021 09:25:25 -0600 Subject: [petsc-users] install PETSc on windows In-Reply-To: <84672efd-113b-2daf-85d1-e58d3812421d@mcs.anl.gov> References: <17158857-4047-ecc0-1331-645216114cbb@mcs.anl.gov> <84672efd-113b-2daf-85d1-e58d3812421d@mcs.anl.gov> Message-ID: <005201d7eb7e$a8c83a20$fa58ae60$@gmail.com> Hi Satish, I have another question. After I run the check command, I got the following output (the attached file), have I successfully installed the library? Is there any error? Thanks -----Original Message----- From: Satish Balay Sent: Monday, December 6, 2021 6:59 PM To: Ning Li Cc: petsc-users Subject: Re: [petsc-users] install PETSc on windows Glad it worked. Thanks for the update. Satish On Mon, 6 Dec 2021, Ning Li wrote: > Thanks for your reply. > > After I added '--with-shared-libraries=0', the configuration stage > passed and now it is executing the 'make' command! > > Thanks very much > > On Mon, Dec 6, 2021 at 5:21 PM Satish Balay wrote: > > > >>> > > Executing: /home/LiNin/petsc-3.16.1/lib/petsc/bin/win32fe/win32fe > > ifort -o > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/c onftest.exe > > -MD -O3 -fpp > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.lib > > raries/conftest.o /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ > > SDKs/MPI/Lib/x64/msmpi.lib /cygdrive/c/Program\ Files\ > > \(x86\)/Microsoft\ SDKs/MPI/Lib/x64/msmpifec.lib Ws2_32.lib > > stdout: LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with > > use of other libs; use /NODEFAULTLIB:library <<< > > > > I'm not sure why this link command is giving this error. Can you > > retry with '--with-shared-libraries=0'? > > > > Satish > > > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > Howdy, > > > > > > I am trying to install PETSc on windows with cygwin but got an mpi error. > > > Could you have a look at my issue and give me some instructions? > > > > > > Here is the information about my environment: > > > 1. operation system: windows 11 > > > 2. visual studio version: 2019 > > > 3. intel one API toolkit is installed 4. Microsoft MS MPI is > > > installed. > > > 5. Intel MPI is uninstalled. > > > 6. 
PETSc version: 3.16.1 > > > > > > this is my configuration: > > > ./configure --with-cc='win32fe cl' --with-fc='win32fe ifort' > > > --prefix=~/petsc-opt/ --PETSC_ARCH=windows-intel-opt > > > --with-mpi-include=['/cygdrive/c/Program Files (x86)/Microsoft > > > SDKs/MPI/Include','/cygdrive/c/Program Files (x86)/Microsoft > > > SDKs/MPI/Include/x64'] --with-mpi-lib=['/cygdrive/c/Program Files > > > (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib','/cygdrive/c/Program > > > Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib'] > > > --with-blas-lapack-lib=['/cygdrive/c/Program Files > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_intel_lp64.lib','/cy > > gdrive/c/Program > > > Files > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_core.lib','/cygdrive > > /c/Program > > > Files > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_sequential.lib'] > > > --with-scalapack-include='/cygdrive/c/Program Files > > > (x86)/Intel/oneAPI/mkl/2021.3.0/include' > > > --with-scalapack-lib=['/cygdrive/c/Program Files > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_scalapack_lp64.lib', > > '/cygdrive/c/Program > > > Files > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_blacs_msmpi_lp64.l > > > ib'] > > > --with-fortran-interfaces=1 --with-debugging=0 > > > > > > attached is the configure.log file. > > > > > > Thanks > > > > > > Ning Li > > > > > > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: ss.png Type: image/png Size: 250022 bytes Desc: not available URL: From balay at mcs.anl.gov Tue Dec 7 09:31:10 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 7 Dec 2021 09:31:10 -0600 (CST) Subject: [petsc-users] install PETSc on windows In-Reply-To: <005201d7eb7e$a8c83a20$fa58ae60$@gmail.com> References: <17158857-4047-ecc0-1331-645216114cbb@mcs.anl.gov> <84672efd-113b-2daf-85d1-e58d3812421d@mcs.anl.gov> <005201d7eb7e$a8c83a20$fa58ae60$@gmail.com> Message-ID: Your build is with msmpi - but mpiexec from openmpi got used. You can try compiling and running examples manually [with the correct mpiexec] Satish On Tue, 7 Dec 2021, liyuansen89 at gmail.com wrote: > Hi Satish, > > I have another question. After I run the check command, I got the following > output (the attached file), have I successfully installed the library? Is > there any error? > > Thanks > > > > -----Original Message----- > From: Satish Balay > Sent: Monday, December 6, 2021 6:59 PM > To: Ning Li > Cc: petsc-users > Subject: Re: [petsc-users] install PETSc on windows > > Glad it worked. Thanks for the update. > > Satish > > On Mon, 6 Dec 2021, Ning Li wrote: > > > Thanks for your reply. > > > > After I added '--with-shared-libraries=0', the configuration stage > > passed and now it is executing the 'make' command! > > > > Thanks very much > > > > On Mon, Dec 6, 2021 at 5:21 PM Satish Balay wrote: > > > > > >>> > > > Executing: /home/LiNin/petsc-3.16.1/lib/petsc/bin/win32fe/win32fe > > > ifort -o > > > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/c > onftest.exe > > > -MD -O3 -fpp > > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.lib > > > raries/conftest.o /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ > > > SDKs/MPI/Lib/x64/msmpi.lib /cygdrive/c/Program\ Files\ > > > \(x86\)/Microsoft\ SDKs/MPI/Lib/x64/msmpifec.lib Ws2_32.lib > > > stdout: LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with > > > use of other libs; use /NODEFAULTLIB:library <<< > > > > > > I'm not sure why this link command is giving this error. 
Can you > > > retry with '--with-shared-libraries=0'? > > > > > > Satish > > > > > > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > > > Howdy, > > > > > > > > I am trying to install PETSc on windows with cygwin but got an mpi > error. > > > > Could you have a look at my issue and give me some instructions? > > > > > > > > Here is the information about my environment: > > > > 1. operation system: windows 11 > > > > 2. visual studio version: 2019 > > > > 3. intel one API toolkit is installed 4. Microsoft MS MPI is > > > > installed. > > > > 5. Intel MPI is uninstalled. > > > > 6. PETSc version: 3.16.1 > > > > > > > > this is my configuration: > > > > ./configure --with-cc='win32fe cl' --with-fc='win32fe ifort' > > > > --prefix=~/petsc-opt/ --PETSC_ARCH=windows-intel-opt > > > > --with-mpi-include=['/cygdrive/c/Program Files (x86)/Microsoft > > > > SDKs/MPI/Include','/cygdrive/c/Program Files (x86)/Microsoft > > > > SDKs/MPI/Include/x64'] --with-mpi-lib=['/cygdrive/c/Program Files > > > > (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib','/cygdrive/c/Program > > > > Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib'] > > > > --with-blas-lapack-lib=['/cygdrive/c/Program Files > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_intel_lp64.lib','/cy > > > gdrive/c/Program > > > > Files > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_core.lib','/cygdrive > > > /c/Program > > > > Files > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_sequential.lib'] > > > > --with-scalapack-include='/cygdrive/c/Program Files > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/include' > > > > --with-scalapack-lib=['/cygdrive/c/Program Files > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_scalapack_lp64.lib', > > > '/cygdrive/c/Program > > > > Files > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_blacs_msmpi_lp64.l > > > > ib'] > > > > --with-fortran-interfaces=1 --with-debugging=0 > > > > > > > > attached is the configure.log file. > > > > > > > > Thanks > > > > > > > > Ning Li > > > > > > > > > > > > > > From k.sagiyama at imperial.ac.uk Tue Dec 7 09:50:09 2021 From: k.sagiyama at imperial.ac.uk (Sagiyama, Koki) Date: Tue, 7 Dec 2021 15:50:09 +0000 Subject: [petsc-users] DMView and DMLoad In-Reply-To: References: <56ce2135-9757-4292-e33b-c7eea8fb7b2e@ovgu.de> <056E066F-D596-4254-A44A-60BFFD30FE82@erdw.ethz.ch> <6c4e0656-db99-e9da-000f-ab9f7dd62c07@ovgu.de> Message-ID: Hi Berend, I made some small changes to your code to successfully compile it and defined a periodic dm using DMPlexCreateBoxMesh(), but otherwise your code worked fine. I think we would like to see a complete minimal failing example. Can you make the working example that I pasted in earlier email fail just by modifying the dm (i.e., using the periodic mesh you are actually using)? Thanks, Koki ________________________________ From: Berend van Wachem Sent: Monday, December 6, 2021 3:39 PM To: Sagiyama, Koki ; Hapla Vaclav ; PETSc users list ; Lawrence Mitchell Subject: Re: [petsc-users] DMView and DMLoad Dear Koki, Thanks for your email. In the example of your last email DMPlexCoordinatesLoad() takes sF0 (PetscSF) as a third argument. In our code this modification does not fix the error when loading a periodic dm. Are we doing something wrong? I've included an example code at the bottom of this email, including the error output. 
Thanks and best regards, Berend /**** Write DM + Vec restart ****/ PetscViewerHDF5Open(PETSC_COMM_WORLD, "result", FILE_MODE_WRITE, &H5Viewer); PetscObjectSetName((PetscObject)dm, "plexA"); PetscViewerPushFormat(H5Viewer, PETSC_VIEWER_HDF5_PETSC); DMPlexTopologyView(dm, H5Viewer); DMPlexLabelsView(dm, H5Viewer); DMPlexCoordinatesView(dm, H5Viewer); PetscViewerPopFormat(H5Viewer); DM sdm; PetscSection s; DMClone(dm, &sdm); PetscObjectSetName((PetscObject)sdm, "dmA"); DMGetGlobalSection(dm, &s); DMSetGlobalSection(sdm, s); DMPlexSectionView(dm, H5Viewer, sdm); Vec vec, vecOld; PetscScalar *array, *arrayOld, *xVecArray, *xVecArrayOld; PetscInt numPoints; DMGetGlobalVector(sdm, &vec); DMGetGlobalVector(sdm, &vecOld); /*** Fill the vectors vec and vecOld ***/ VecGetArray(vec, &array); VecGetArray(vecOld, &arrayOld); VecGetLocalSize(xGlobalVector, &numPoints); VecGetArray(xGlobalVector, &xVecArray); VecGetArray(xOldGlobalVector, &xVecArrayOld); for (i = 0; i < numPoints; i++) /* Loop over all internal mesh points */ { array[i] = xVecArray[i]; arrayOld[i] = xVecArrayOld[i]; } VecRestoreArray(vec, &array); VecRestoreArray(vecOld, &arrayOld); VecRestoreArray(xGlobalVector, &xVecArray); VecRestoreArray(xOldGlobalVector, &xVecArrayOld); PetscObjectSetName((PetscObject)vec, "vecA"); PetscObjectSetName((PetscObject)vecOld, "vecB"); DMPlexGlobalVectorView(dm, H5Viewer, sdm, vec); DMPlexGlobalVectorView(dm, H5Viewer, sdm, vecOld); PetscViewerDestroy(&H5Viewer); /*** end of writing ****/ /*** Load ***/ PetscViewerHDF5Open(PETSC_COMM_WORLD, "result", FILE_MODE_READ, &H5Viewer); DMCreate(PETSC_COMM_WORLD, &dm); DMSetType(dm, DMPLEX); PetscObjectSetName((PetscObject)dm, "plexA"); PetscViewerPushFormat(H5Viewer, PETSC_VIEWER_HDF5_PETSC); DMPlexTopologyLoad(dm, H5Viewer, &sfO); DMPlexLabelsLoad(dm, H5Viewer); DMPlexCoordinatesLoad(dm, H5Viewer, sfO); PetscViewerPopFormat(H5Viewer); DMPlexDistribute(dm, Options->Mesh.overlap, &sfDist, &distributedDM); if (distributedDM) { DMDestroy(&dm); dm = distributedDM; PetscObjectSetName((PetscObject)dm, "plexA"); } PetscSFCompose(sfO, sfDist, &sf); PetscSFDestroy(&sfO); PetscSFDestroy(&sfDist); DMClone(dm, &sdm); PetscObjectSetName((PetscObject)sdm, "dmA"); DMPlexSectionLoad(dm, H5Viewer, sdm, sf, &globalDataSF, &localDataSF); /** Load the Vectors **/ DMGetGlobalVector(sdm, &Restart_xGlobalVector); VecSet(Restart_xGlobalVector,0.0); PetscObjectSetName((PetscObject)Restart_xGlobalVector, "vecA"); DMPlexGlobalVectorLoad(dm, H5Viewer, sdm, globalDataSF,Restart_xGlobalVector); DMGetGlobalVector(sdm, &Restart_xOldGlobalVector); VecSet(Restart_xOldGlobalVector,0.0); PetscObjectSetName((PetscObject)Restart_xOldGlobalVector, "vecB"); DMPlexGlobalVectorLoad(dm, H5Viewer, sdm, globalDataSF, Restart_xOldGlobalVector); PetscViewerDestroy(&H5Viewer); /**** The error message when loading is the following ************/ Creating and distributing mesh [0]PETSC ERROR: --------------------- Error Message -------------------------- [0]PETSC ERROR: Invalid argument [0]PETSC ERROR: Number of coordinates loaded 17128 does not match number of vertices 8000 [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.16.1-435-g007f11b901 GIT Date: 2021-12-01 14:31:21 +0000 [0]PETSC ERROR: ./MF3 on a linux-gcc-openmpi-opt named ivt24.ads.uni-magdeburg.de by berend Mon Dec 6 16:11:21 2021 [0]PETSC ERROR: Configure options --with-p4est=yes --with-partemis --with-metis --with-debugging=no --download-metis=yes --download-parmetis=yes --with-errorchecking=no --download-hdf5 --download-zlib --download-p4est [0]PETSC ERROR: #1 DMPlexCoordinatesLoad_HDF5_V0_Private() at /home/berend/src/petsc_main/src/dm/impls/plex/plexhdf5.c:1387 [0]PETSC ERROR: #2 DMPlexCoordinatesLoad_HDF5_Internal() at /home/berend/src/petsc_main/src/dm/impls/plex/plexhdf5.c:1419 [0]PETSC ERROR: #3 DMPlexCoordinatesLoad() at /home/berend/src/petsc_main/src/dm/impls/plex/plex.c:2070 [0]PETSC ERROR: #4 RestartMeshDM() at /home/berend/src/eclipseworkspace/multiflow/src/io/restartmesh.c:81 [0]PETSC ERROR: #5 CreateMeshDM() at /home/berend/src/eclipseworkspace/multiflow/src/mesh/createmesh.c:61 [0]PETSC ERROR: #6 main() at /home/berend/src/eclipseworkspace/multiflow/src/general/main.c:132 [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: --download-hdf5 [0]PETSC ERROR: --download-metis=yes [0]PETSC ERROR: --download-p4est [0]PETSC ERROR: --download-parmetis=yes [0]PETSC ERROR: --download-zlib [0]PETSC ERROR: --with-debugging=no [0]PETSC ERROR: --with-errorchecking=no [0]PETSC ERROR: --with-metis [0]PETSC ERROR: --with-p4est=yes [0]PETSC ERROR: --with-partemis [0]PETSC ERROR: -d results [0]PETSC ERROR: -o run.mf [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 62. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- On 11/19/21 00:26, Sagiyama, Koki wrote: > Hi Berend, > > I was not able to reproduce the issue you are having, but the following > 1D example (and similar 2D examples) worked fine for me using the latest > PETSc. Please note that DMPlexCoordinatesLoad() now takes a PetscSF > object as the third argument, but the default behavior is unchanged. 
> > /* test_periodic_io.c */ > > #include > #include > #include > > int main(int argc, char **argv) > { > DM dm; > Vec coordinates; > PetscViewer viewer; > PetscViewerFormat format = PETSC_VIEWER_HDF5_PETSC; > PetscSF sfO; > PetscErrorCode ierr; > > ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr; > /* Save */ > ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "periodic_example.h5", > FILE_MODE_WRITE, &viewer);CHKERRQ(ierr); > { > DM pdm; > PetscInt dim = 1; > const PetscInt faces[1] = {4}; > DMBoundaryType periodicity[] = {DM_BOUNDARY_PERIODIC}; > PetscInt overlap = 1; > > ierr = DMPlexCreateBoxMesh(PETSC_COMM_WORLD, dim, PETSC_FALSE, > faces, NULL, NULL, periodicity, PETSC_TRUE, &dm);CHKERRQ(ierr); > ierr = DMPlexDistribute(dm, overlap, NULL, &pdm);CHKERRQ(ierr); > ierr = DMDestroy(&dm);CHKERRQ(ierr); > dm = pdm; > ierr = PetscObjectSetName((PetscObject)dm, "periodicDM");CHKERRQ(ierr); > } > ierr = DMGetCoordinates(dm, &coordinates);CHKERRQ(ierr); > ierr = PetscPrintf(PETSC_COMM_WORLD, "Coordinates before > saving:\n");CHKERRQ(ierr); > ierr = VecView(coordinates, NULL);CHKERRQ(ierr); > ierr = PetscViewerPushFormat(viewer, format);CHKERRQ(ierr); > ierr = DMPlexTopologyView(dm, viewer);CHKERRQ(ierr); > ierr = DMPlexCoordinatesView(dm, viewer);CHKERRQ(ierr); > ierr = PetscViewerPopFormat(viewer);CHKERRQ(ierr); > ierr = DMDestroy(&dm);CHKERRQ(ierr); > ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr); > /* Load */ > ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "periodic_example.h5", > FILE_MODE_READ, &viewer);CHKERRQ(ierr); > ierr = DMCreate(PETSC_COMM_WORLD, &dm);CHKERRQ(ierr); > ierr = DMSetType(dm, DMPLEX);CHKERRQ(ierr); > ierr = PetscObjectSetName((PetscObject)dm, "periodicDM");CHKERRQ(ierr); > ierr = PetscViewerPushFormat(viewer, format);CHKERRQ(ierr); > ierr = DMPlexTopologyLoad(dm, viewer, &sfO);CHKERRQ(ierr); > ierr = DMPlexCoordinatesLoad(dm, viewer, sfO);CHKERRQ(ierr); > ierr = PetscViewerPopFormat(viewer);CHKERRQ(ierr); > ierr = DMGetCoordinates(dm, &coordinates);CHKERRQ(ierr); > ierr = PetscPrintf(PETSC_COMM_WORLD, "Coordinates after > loading:\n");CHKERRQ(ierr); > ierr = VecView(coordinates, NULL);CHKERRQ(ierr); > ierr = PetscSFDestroy(&sfO);CHKERRQ(ierr); > ierr = DMDestroy(&dm);CHKERRQ(ierr); > ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr); > ierr = PetscFinalize(); > return ierr; > } > > mpiexec -n 2 ./test_periodic_io > > Coordinates before saving: > Vec Object: coordinates 2 MPI processes > type: mpi > Process [0] > 0. > Process [1] > 0.25 > 0.5 > 0.75 > Coordinates after loading: > Vec Object: vertices 2 MPI processes > type: mpi > Process [0] > 0. > 0.25 > 0.5 > 0.75 > Process [1] > > I would also like to note that, with the latest update, we can > optionally load coordinates directly on the distributed dm as (using > your notation): > > /* Distribute dm */ > ... > PetscSFCompose(sfO, sfDist, &sf); > DMPlexCoordinatesLoad(dm, viewer, sf); > > To use this feature, we need to pass "-dm_plex_view_hdf5_storage_version > 2.0.0" option when saving topology/coordinates. > > > Thanks, > Koki > ------------------------------------------------------------------------ > *From:* Berend van Wachem > *Sent:* Wednesday, November 17, 2021 3:16 PM > *To:* Hapla Vaclav ; PETSc users list > ; Lawrence Mitchell ; Sagiyama, > Koki > *Subject:* Re: [petsc-users] DMView and DMLoad > > ******************* > This email originates from outside Imperial. Do not click on links and > attachments unless you recognise the sender. 
> If you trust the sender, add them to your safe senders list > https://spam.ic.ac.uk/SpamConsole/Senders.aspx > to disable email > stamping for this address. > ******************* > Dear Vaclav, Lawrence, Koki, > > Thanks for your help! Following your advice and following your example > (https://petsc.org/main/docs/manual/dmplex/#saving-and-loading-data-with-hdf5 > ) > > we are able to save and load the DM with a wrapped Vector in h5 format > (PETSC_VIEWER_HDF5_PETSC) successfully. > > For saving, we use something similar to: > > DMPlexTopologyView(dm, viewer); > DMClone(dm, &sdm); > ... > DMPlexSectionView(dm, viewer, sdm); > DMGetLocalVector(sdm, &vec); > ... > DMPlexLocalVectorView(dm, viewer, sdm, vec); > > and for loading: > > DMCreate(PETSC_COMM_WORLD, &dm); > DMSetType(dm, DMPLEX); > ... > PetscViewerPushFormat(viewer, PETSC_VIEWER_HDF5_PETSC); > DMPlexTopologyLoad(dm, viewer, &sfO); > DMPlexLabelsLoad(dm, viewer); > DMPlexCoordinatesLoad(dm, viewer); > PetscViewerPopFormat(viewer); > ... > PetscSFCompose(sfO, sfDist, &sf); > ... > DMClone(dm, &sdm); > DMPlexSectionLoad(dm, viewer, sdm, sf, &globalDataSF, &localDataSF); > DMGetLocalVector(sdm, &vec); > ... > DMPlexLocalVectorLoad(dm, viewer, sdm, localDataSF, vec); > > > This works fine for non-periodic DMs but for periodic cases the line: > > DMPlexCoordinatesLoad(dm, H5Viewer); > > delivers the error message: invalid argument and the number of loaded > coordinates does not match the number of vertices. > > Is this a known shortcoming, or have we forgotten something to load > periodic DMs? > > Best regards, > > Berend. > > > > On 9/22/21 20:59, Hapla Vaclav wrote: >> To avoid confusions here, Berend seems to be specifically demanding XDMF >> (PETSC_VIEWER_HDF5_XDMF). The stuff we are now working on is parallel >> checkpointing in our own HDF5 format (PETSC_VIEWER_HDF5_PETSC), I will >> make a series of MRs on this topic in the following days. >> >> For XDMF, we are specifically missing the ability to write/load DMLabels >> properly. XDMF uses specific cell-local numbering for faces for >> specification of face sets, and face-local numbering for specification >> of edge sets, which is not great wrt DMPlex design. And ParaView doesn't >> show any of these properly so it's hard to debug. Matt, we should talk >> about this soon. >> >> Berend, for now, could you just load the mesh initially from XDMF and >> then use our PETSC_VIEWER_HDF5_PETSC format for subsequent saving/loading? >> >> Thanks, >> >> Vaclav >> >>> On 17 Sep 2021, at 15:46, Lawrence Mitchell >> >> wrote: >>> >>> Hi Berend, >>> >>>> On 14 Sep 2021, at 12:23, Matthew Knepley >>> >> wrote: >>>> >>>> On Tue, Sep 14, 2021 at 5:15 AM Berend van Wachem >>>> >> wrote: >>>> Dear PETSc-team, >>>> >>>> We are trying to save and load distributed DMPlex and its associated >>>> physical fields (created with DMCreateGlobalVector) (Uvelocity, >>>> VVelocity, ...) in HDF5_XDMF format. To achieve this, we do the >>>> following: >>>> >>>> 1) save in the same xdmf.h5 file: >>>> DMView( DM , H5_XDMF_Viewer ); >>>> VecView( UVelocity, H5_XDMF_Viewer ); >>>> >>>> 2) load the dm: >>>> DMPlexCreateFromfile(PETSC_COMM_WORLD, Filename, PETSC_TRUE, DM); >>>> >>>> 3) load the physical field: >>>> VecLoad( UVelocity, H5_XDMF_Viewer ); >>>> >>>> There are no errors in the execution, but the loaded DM is distributed >>>> differently to the original one, which results in the incorrect >>>> placement of the values of the physical fields (UVelocity etc.) in the >>>> domain. 
>>>> >>>> This approach is used to restart the simulation with the last saved DM. >>>> Is there something we are missing, or there exists alternative routes to >>>> this goal? Can we somehow get the IS of the redistribution, so we can >>>> re-distribute the vector data as well? >>>> >>>> Many thanks, best regards, >>>> >>>> Hi Berend, >>>> >>>> We are in the midst of rewriting this. We want to support saving >>>> multiple meshes, with fields attached to each, >>>> and preserving the discretization (section) information, and allowing >>>> us to load up on a different number of >>>> processes. We plan to be done by October. Vaclav and I are doing this >>>> in collaboration with Koki Sagiyama, >>>> David Ham, and Lawrence Mitchell from the Firedrake team. >>> >>> The core load/save cycle functionality is now in PETSc main. So if >>> you're using main rather than a release, you can get access to it now. >>> This section of the manual shows an example of how to do >>> thingshttps://petsc.org/main/docs/manual/dmplex/#saving-and-loading-data-with-hdf5 >>> > >>> >>> Let us know if things aren't clear! >>> >>> Thanks, >>> >>> Lawrence >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Dec 7 09:52:34 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 7 Dec 2021 10:52:34 -0500 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: <42F559F9-8E4B-480E-BEAE-8AEBC24D927C@gmx.li> References: <42F559F9-8E4B-480E-BEAE-8AEBC24D927C@gmx.li> Message-ID: <3BC576A0-2EB4-40B8-8AE2-9EE21A7C023B@petsc.dev> You can also just add --with-petsc4py to your PETSc configure command and it will manage automatically the petsc4py install. So ./configure --with-hdf5 --any-other --configure-flags --you-want --with-petsc4py Some people don't like this approach, I don't understand exactly why not; it should be equivalent (if it is not equivalent then perhaps it could be fixed?). > On Dec 7, 2021, at 10:18 AM, Lawrence Mitchell wrote: > > Comments inline below: > >> On 7 Dec 2021, at 14:43, Quentin Chevalier wrote: >> >> @Matthew, as stated before, error output is unchanged, i.e.the python >> command below produces the same traceback : >> >> # python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('d.h5')" >> Traceback (most recent call last): >> File "", line 1, in >> File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 >> petsc4py.PETSc.Error: error code 86 >> [0] PetscViewerSetType() at >> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >> [0] Unknown type. Check for miss-spelling or missing package: >> https://petsc.org/release/install/install/#external-packages >> [0] Unknown PetscViewer type given: hdf5 >> >> @Wence that makes sense. I'd assumed that the original PETSc had been >> overwritten, and if the linking has gone wrong I'm surprised anything >> happens with petsc4py at all. >> >> Your tentative command gave : >> >> ERROR: Invalid requirement: '/usr/local/petsc/src/binding/petsc4py' >> Hint: It looks like a path. File >> '/usr/local/petsc/src/binding/petsc4py' does not exist. >> >> So I tested that global variables PETSC_ARCH & PETSC_DIR were correct >> then ran "pip install petsc4py" to restart petsc4py from scratch. > > This downloads petsc4py from pypi. It is not guaranteed to give you a version that matches the PETSc version you have installed (which is the source of your error below) > > >> This >> gives rise to a different error : >> > [...] 
>> ImportError: /usr/local/lib/python3.9/dist-packages/petsc4py/lib/linux-gnu-real-32/PETSc.cpython-39-x86_64-linux-gnu.so: >> undefined symbol: petscstack >> >> Not sure that it a step forward ; looks like petsc4py is broken now. > > The steps to build PETSc with HDF5 support and then get a compatible petsc4py are: > > 1. Download the PETSc source somehow (https://petsc.org/release/download/) > > I now assume that this source tree lives in .../petsc > > 2. cd .../petsc > > 3. ./configure --with-hdf5 --any-other --configure-flags --you-want > > 4. Run the appropriate "make" command as suggested by configure > > 5. Run the appropriate "make check" command as suggested by configure > > 6. Set PETSC_DIR and PETSC_ARCH appropriately > > 7. pip install src/binding/petsc4py > > If you are working the docker container from dolfinx/dolfinx, you can see the commands that are run to install PETSc, and then petsc4py, here https://github.com/FEniCS/dolfinx/blob/main/docker/Dockerfile#L243 > > If you want to reproduce these versions of PETSc but with the addition of HDF5 support, just add --with-hdf5 to all of the relevant configure lines. > > Lawrence > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Tue Dec 7 09:55:05 2021 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Tue, 7 Dec 2021 16:55:05 +0100 Subject: [petsc-users] CV_CONV_FAILURE with TSSUNDIALS in PETSc In-Reply-To: References: Message-ID: Note that if you have a call to TSSetFromOptions() in your code (again, see TS ex19 in src/ts/tutorials/ex19.c) then you can supply command line options and see a lot of information about the timestepper. If you run the example with the -help argument you'll get a big list of options which may apply. Here's an example with ex19 which sets a small initial timestep and then lets TSBDF adapt: $ ./ex19 -ts_view -ts_monitor -ts_type bdf -ts_initial_timestep 0.0001 -ts_dt 1e-8 0 TS dt 1e-08 time 0. 1 TS dt 2e-08 time 1e-08 2 TS dt 4e-08 time 3e-08 3 TS dt 8e-08 time 7e-08 4 TS dt 1.6e-07 time 1.5e-07 5 TS dt 3.2e-07 time 3.1e-07 6 TS dt 6.4e-07 time 6.3e-07 7 TS dt 1.28e-06 time 1.27e-06 8 TS dt 2.56e-06 time 2.55e-06 9 TS dt 5.12e-06 time 5.11e-06 10 TS dt 1.024e-05 time 1.023e-05 11 TS dt 2.048e-05 time 2.047e-05 12 TS dt 4.096e-05 time 4.095e-05 13 TS dt 8.192e-05 time 8.191e-05 14 TS dt 0.00016384 time 0.00016383 15 TS dt 0.00032768 time 0.00032767 16 TS dt 0.00065536 time 0.00065535 17 TS dt 0.00131072 time 0.00131071 18 TS dt 0.00262144 time 0.00262143 19 TS dt 0.00524288 time 0.00524287 20 TS dt 0.0104858 time 0.0104858 21 TS dt 0.0209715 time 0.0209715 22 TS dt 0.041943 time 0.041943 23 TS dt 0.0838861 time 0.0838861 24 TS dt 0.167772 time 0.167772 25 TS dt 0.177781 time 0.335544 26 TS dt 0.14402 time 0.482242 27 TS dt 0.130984 time 0.626262 TS Object: 1 MPI processes type: bdf Order=2 maximum time=0.5 total number of I function evaluations=120 total number of I Jacobian evaluations=91 total number of nonlinear solver iterations=91 total number of linear solver iterations=91 total number of nonlinear solve failures=0 total number of rejected steps=1 using relative error tolerance of 0.0001, using absolute error tolerance of 0.0001 TSAdapt Object: 1 MPI processes type: basic safety factor 0.9 extra safety factor after step rejection 0.5 clip fastest increase 2. clip fastest decrease 0.1 maximum allowed timestep 1e+20 minimum allowed timestep 1e-20 maximum solution absolute value to be ignored -1. 
SNES Object: 1 MPI processes type: newtonls maximum iterations=50, maximum function evaluations=10000 tolerances: relative=1e-08, absolute=1e-50, solution=1e-08 total number of linear solver iterations=13 total number of function evaluations=14 norm schedule ALWAYS SNESLineSearch Object: 1 MPI processes type: bt interpolation: cubic alpha=1.000000e-04 maxstep=1.000000e+08, minlambda=1.000000e-12 tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 maximum iterations=40 KSP Object: 1 MPI processes type: gmres restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: ilu out-of-place factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 matrix ordering: natural factor fill ratio given 1., needed 1. Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=2, cols=2 package used to perform factorization: petsc total: nonzeros=4, allocated nonzeros=4 using I-node routines: found 1 nodes, limit used is 5 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=2, cols=2 total: nonzeros=4, allocated nonzeros=4 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 1 nodes, limit used is 5 steps 27, ftime 0.626262 Vec Object: 1 MPI processes type: seq -0.635304 -1.98947 As a bit of an aside, we actually just merged an option for TSSUNDIALS into the main branch which might possibly give extra data here. If you supply the -ts_sundials_use_dense option, it will perform a dense linear solve instead of an iterative one. This is of course inefficient in most cases and only works in serial, but it can be very helpful to eliminate the possibility that the issue arises from the parameters of the iterative linear solve. Am Mo., 6. Dez. 2021 um 20:30 Uhr schrieb Matthew Knepley : > On Mon, Dec 6, 2021 at 2:04 PM Sanjoy Kumar Mazumder > wrote: > >> Thank you for your suggestion. I will look into the documentation of >> ts_type bdf and try implementing the same. However, it seems like there is >> not much example on TSBDF available. If you can kindly share with me an >> example on the implementation of TSBDF it would be helpful. >> > > If your problem is defined using IFunction, then it works like any other > implicit solver. You can look at any example that uses -ts_type bdf, such > as TS ex19. > > Thanks, > > Matt > > >> Thank You >> >> Sanjoy >> >> Get Outlook for Android >> ------------------------------ >> *From:* Matthew Knepley >> *Sent:* Monday, December 6, 2021 1:00:49 PM >> *To:* Sanjoy Kumar Mazumder >> *Cc:* petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] CV_CONV_FAILURE with TSSUNDIALS in PETSc >> >> On Mon, Dec 6, 2021 at 12:17 PM Sanjoy Kumar Mazumder < >> mazumder at purdue.edu> wrote: >> >> Hi all, >> >> I am trying to solve a set of coupled stiff ODEs in parallel using >> TSSUNDIALS with SUNDIALS_BDF as 'TSSundialsSetType' in PETSc. I am using a >> sparse Jacobian matrix of type MATMPIAIJ with no preconditioner. It runs >> for some time with a very small initial timestep (~10^-18) and then >> terminates abruptly with the following error: >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. 
>> >> Is there anything I am missing out or not doing properly? Given below is >> the complete error that is showing up after the termination. >> >> >> It is hard to know for us. BDF is implicit, so CVODE has to solve a >> (non)linear system, and it looks like this is what fails. I would send it >> to the CVODE team. >> Alternatively, you can run with -ts_type bdf and we would have more >> information to help you with. >> >> Thanks, >> >> Matt >> >> >> >> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> [5]PETSC ERROR: >> [CVODE ERROR] CVode >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> [7]PETSC ERROR: >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> [16]PETSC ERROR: >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> >> [CVODE ERROR] CVode >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. 
>> >> [19]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [19]PETSC ERROR: Error in external library >> [19]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [19]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [19]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [19]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [19]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [19]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [19]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [19]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [19]PETSC ERROR: #4 User provided function() line 0 in User file >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Error in external library >> [0]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [0]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [0]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [0]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [0]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [0]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [0]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [0]PETSC ERROR: #4 User provided function() line 0 in User file >> [1]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [1]PETSC ERROR: Error in external library >> [1]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [1]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [1]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [1]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [1]PETSC ERROR: [2]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [2]PETSC ERROR: Error in external library >> [2]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. 
>> [2]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [2]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [2]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [2]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [2]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [2]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [2]PETSC ERROR: #4 User provided function() line 0 in User file >> [3]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [3]PETSC ERROR: Error in external library >> [3]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [3]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [3]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [3]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [3]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [3]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [3]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [3]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [3]PETSC ERROR: #4 User provided function() line 0 in User file >> [4]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [4]PETSC ERROR: Error in external library >> [4]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [4]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [4]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [4]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [4]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [4]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> --------------------- Error Message >> -------------------------------------------------------------- >> [5]PETSC ERROR: Error in external library >> [5]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [5]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. 
>> [5]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [5]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [5]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [5]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [5]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [5]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [5]PETSC ERROR: #4 User provided function() line 0 in User file >> --------------------- Error Message >> -------------------------------------------------------------- >> [7]PETSC ERROR: Error in external library >> [7]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [7]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [7]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [7]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [7]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [7]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [7]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [7]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [7]PETSC ERROR: #4 User provided function() line 0 in User file >> [8]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [8]PETSC ERROR: Error in external library >> [8]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [8]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [8]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [8]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [9]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [9]PETSC ERROR: Error in external library >> [9]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [9]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. 
>> [9]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [9]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [9]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [9]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [9]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [9]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [10]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [10]PETSC ERROR: Error in external library >> [10]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [10]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [10]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [10]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [10]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [10]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [10]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [10]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [10]PETSC ERROR: #4 User provided function() line 0 in User file >> [11]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [11]PETSC ERROR: Error in external library >> [11]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [11]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [11]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [11]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [11]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [11]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [11]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [11]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [11]PETSC ERROR: #4 User provided function() line 0 in User file >> [12]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [12]PETSC ERROR: Error in external library >> [12]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [12]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. 
>> [12]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [12]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [12]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [12]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [12]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [12]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [12]PETSC ERROR: #4 User provided function() line 0 in User file >> [14]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [14]PETSC ERROR: Error in external library >> [14]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [14]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [14]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [14]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [14]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [14]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [15]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [15]PETSC ERROR: Error in external library >> [15]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [15]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [15]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> --------------------- Error Message >> -------------------------------------------------------------- >> [16]PETSC ERROR: Error in external library >> [16]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [16]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [16]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [16]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [16]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [16]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [16]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [16]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [16]PETSC ERROR: #4 User provided function() line 0 in User file >> [17]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [17]PETSC ERROR: Error in external library >> [17]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [17]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. 
>> [17]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [17]PETSC ERROR: [18]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [18]PETSC ERROR: Error in external library >> [18]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [18]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [18]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [18]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [18]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [17]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [17]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [17]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [17]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [17]PETSC ERROR: #4 User provided function() line 0 in User file >> Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 >> --download-fblaslapack --download-sundials=yes --with-debugging >> [1]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [1]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [1]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [1]PETSC ERROR: #4 User provided function() line 0 in User file >> [4]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [4]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [4]PETSC ERROR: #4 User provided function() line 0 in User file >> [8]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [8]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [8]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [8]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [8]PETSC ERROR: #4 User provided function() line 0 in User file >> [9]PETSC ERROR: #4 User provided function() line 0 in User file >> [13]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [13]PETSC ERROR: Error in external library >> [13]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [13]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. 
>> [13]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [13]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [13]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [13]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [13]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [13]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [13]PETSC ERROR: #4 User provided function() line 0 in User file >> [14]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [14]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [14]PETSC ERROR: #4 User provided function() line 0 in User file >> [15]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [15]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [15]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [15]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [15]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [15]PETSC ERROR: #4 User provided function() line 0 in User file >> Configure options --with-cc-mpicc --with-cxx=mpicxx --with-fc=mpif90 >> --download-fblaslapack --download-sundials=yes --with-debugging >> [18]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [18]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [18]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [18]PETSC ERROR: #4 User provided function() line 0 in User file >> At t = 1.83912e-06 and h = 3.74248e-13, the corrector convergence test >> failed repeatedly or with |h| = hmin. >> >> [6]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [6]PETSC ERROR: Error in external library >> [6]PETSC ERROR: CVode() fails, CV_CONV_FAILURE >> [6]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [6]PETSC ERROR: Petsc Release Version 3.14.5, Mar 03, 2021 >> [6]PETSC ERROR: ./ThO2_CD2_P on a arch-linux-c-debug named >> bell-a027.rcac.purdue.edu by mazumder Mon Dec 6 11:45:05 2021 >> [6]PETSC ERROR: Configure options --with-cc-mpicc --with-cxx=mpicxx >> --with-fc=mpif90 --download-fblaslapack --download-sundials=yes >> --with-debugging >> [6]PETSC ERROR: #1 TSStep_Sundials() line 156 in >> /home/mazumder/petsc-3.14.5/src/ts/impls/implicit/sundials/sundials.c >> [6]PETSC ERROR: #2 TSStep() line 3759 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [6]PETSC ERROR: #3 TSSolve() line 4156 in >> /home/mazumder/petsc-3.14.5/src/ts/interface/ts.c >> [6]PETSC ERROR: #4 User provided function() line 0 in User file >> -------------------------------------------------------------------------- >> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF >> with errorcode 76. 
>> >> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. >> You may or may not see output from other processes, depending on >> exactly when Open MPI kills them. >> -------------------------------------------------------------------------- >> [bell-a027.rcac.purdue.edu:29752] 15 more processes have sent help >> message help-mpi-api.txt / mpi-abort >> [bell-a027.rcac.purdue.edu:29752] Set MCA parameter >> "orte_base_help_aggregate" to 0 to see all help / error messages >> >> Thanks >> >> With regards, >> Sanjoy >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From quentin.chevalier at polytechnique.edu Tue Dec 7 10:15:46 2021 From: quentin.chevalier at polytechnique.edu (Quentin Chevalier) Date: Tue, 7 Dec 2021 17:15:46 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: <3BC576A0-2EB4-40B8-8AE2-9EE21A7C023B@petsc.dev> References: <42F559F9-8E4B-480E-BEAE-8AEBC24D927C@gmx.li> <3BC576A0-2EB4-40B8-8AE2-9EE21A7C023B@petsc.dev> Message-ID: @Lawrence, thanks for the details, but point 7 fails with : ERROR: Invalid requirement: '/usr/local/petsc/src/binding/petsc4py' Hint: It looks like a path. File '/usr/local/petsc/src/binding/petsc4py' does not exist. Just like @wence's tentative command. I've changed the dockerfile of dolfinx to add the --with-hdf5 flag and I'm trying to build a new optimised docker image with : echo '{ "cffi_extra_compile_args" : ["-O2", "-march=native" ] }' > dolfinx_jit_parameters.json docker build --target dolfinx --file Dockerfile --build-arg PETSC_SLEPC_OPTFLAGS="-O2 -march=native" --build-arg DOLFINX_CMAKE_CXX_FLAGS="-march=native" . It's slowgoing, but it might eventually do the trick I guess. @bsmith, the --with-petsc4py flag changes nothing. MWE is unchanged : * Run this docker container * Do : python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('dummy.h5')" Updated attempt at a fix : * cd /usr/local/petsc/ * ./configure --with-hdf5 --with-petsc4py --force (turns out the container sets PETSC_ARCH and PETSC_DIR as environment variables by default) * make all This still produces the same error : Traceback (most recent call last): File "", line 1, in File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 petsc4py.PETSc.Error: error code 86 [0] PetscViewerSetType() at /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 [0] Unknown type. Check for miss-spelling or missing package: https://petsc.org/release/install/install/#external-packages [0] Unknown PetscViewer type given: hdf5 Quentin [image: cid:image003.jpg at 01D690CB.3B3FDC10] Quentin CHEVALIER ? IA parcours recherche LadHyX - Ecole polytechnique __________ On Tue, 7 Dec 2021 at 16:52, Barry Smith wrote: > > You can also just add --with-petsc4py to your PETSc configure command > and it will manage automatically the petsc4py install. 
So > > ./configure --with-hdf5 --any-other --configure-flags --you-want > --with-petsc4py > > > Some people don't like this approach, I don't understand exactly why not; > it should be equivalent (if it is not equivalent then perhaps it could be > fixed?). > > On Dec 7, 2021, at 10:18 AM, Lawrence Mitchell wrote: > > Comments inline below: > > On 7 Dec 2021, at 14:43, Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > > @Matthew, as stated before, error output is unchanged, i.e.the python > command below produces the same traceback : > > # python3 -c "from petsc4py import PETSc; > PETSc.Viewer().createHDF5('d.h5')" > Traceback (most recent call last): > File "", line 1, in > File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 > petsc4py.PETSc.Error: error code 86 > [0] PetscViewerSetType() at > /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 > [0] Unknown type. Check for miss-spelling or missing package: > https://petsc.org/release/install/install/#external-packages > [0] Unknown PetscViewer type given: hdf5 > > @Wence that makes sense. I'd assumed that the original PETSc had been > overwritten, and if the linking has gone wrong I'm surprised anything > happens with petsc4py at all. > > Your tentative command gave : > > ERROR: Invalid requirement: '/usr/local/petsc/src/binding/petsc4py' > Hint: It looks like a path. File > '/usr/local/petsc/src/binding/petsc4py' does not exist. > > So I tested that global variables PETSC_ARCH & PETSC_DIR were correct > then ran "pip install petsc4py" to restart petsc4py from scratch. > > > This downloads petsc4py from pypi. It is not guaranteed to give you a > version that matches the PETSc version you have installed (which is the > source of your error below) > > > This > gives rise to a different error : > > [...] > > ImportError: > /usr/local/lib/python3.9/dist-packages/petsc4py/lib/linux-gnu-real-32/ > PETSc.cpython-39-x86_64-linux-gnu.so: > undefined symbol: petscstack > > Not sure that it a step forward ; looks like petsc4py is broken now. > > > The steps to build PETSc with HDF5 support and then get a compatible > petsc4py are: > > 1. Download the PETSc source somehow (https://petsc.org/release/download/) > > I now assume that this source tree lives in .../petsc > > 2. cd .../petsc > > 3. ./configure --with-hdf5 --any-other --configure-flags --you-want > > 4. Run the appropriate "make" command as suggested by configure > > 5. Run the appropriate "make check" command as suggested by configure > > 6. Set PETSC_DIR and PETSC_ARCH appropriately > > 7. pip install src/binding/petsc4py > > If you are working the docker container from dolfinx/dolfinx, you can see > the commands that are run to install PETSc, and then petsc4py, here > https://github.com/FEniCS/dolfinx/blob/main/docker/Dockerfile#L243 > > If you want to reproduce these versions of PETSc but with the addition of > HDF5 support, just add --with-hdf5 to all of the relevant configure lines. > > Lawrence > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 2044 bytes Desc: not available URL: From marco.cisternino at optimad.it Tue Dec 7 10:19:33 2021 From: marco.cisternino at optimad.it (Marco Cisternino) Date: Tue, 7 Dec 2021 16:19:33 +0000 Subject: [petsc-users] Nullspaces Message-ID: Good morning, I'm still struggling with the Poisson equation with Neumann BCs. 
I discretize the equation by finite volume method and I divide every line of the linear system by the volume of the cell. I could avoid this division, but I'm trying to understand. My mesh is not uniform, i.e. cells have different volumes (it is an octree mesh). Moreover, in my computational domain there are 2 separated sub-domains. I build the null space and then I use MatNullSpaceTest to check it. If I do this: MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace); It works If I do this: Vec nsp; VecDuplicate(m_rhs, &nsp); VecSet(nsp,1.0); VecNormalize(nsp, nullptr); MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace); It does not work Probably, I have wrong expectations, but should not it be the same? Thanks Marco Cisternino, PhD marco.cisternino at optimad.it ______________________ Optimad Engineering Srl Via Bligny 5, Torino, Italia. +3901119719782 www.optimad.it -------------- next part -------------- An HTML attachment was scrubbed... URL: From liyuansen89 at gmail.com Tue Dec 7 10:19:35 2021 From: liyuansen89 at gmail.com (Ning Li) Date: Tue, 7 Dec 2021 10:19:35 -0600 Subject: [petsc-users] install PETSc on windows In-Reply-To: References: <17158857-4047-ecc0-1331-645216114cbb@mcs.anl.gov> <84672efd-113b-2daf-85d1-e58d3812421d@mcs.anl.gov> <005201d7eb7e$a8c83a20$fa58ae60$@gmail.com> Message-ID: I tried to use this new PETSc in my application, and got this HYPRE related error when I built a solution in visual studio. [image: image.png] I have installed the latest HYPRE on my laptop and linked it to my application, but I disabled MPI option when I configured HYPRE . Is this why this error occurred? On Tue, Dec 7, 2021 at 9:31 AM Satish Balay wrote: > Your build is with msmpi - but mpiexec from openmpi got used. > > You can try compiling and running examples manually [with the correct > mpiexec] > > Satish > > On Tue, 7 Dec 2021, liyuansen89 at gmail.com wrote: > > > Hi Satish, > > > > I have another question. After I run the check command, I got the > following > > output (the attached file), have I successfully installed the library? Is > > there any error? > > > > Thanks > > > > > > > > -----Original Message----- > > From: Satish Balay > > Sent: Monday, December 6, 2021 6:59 PM > > To: Ning Li > > Cc: petsc-users > > Subject: Re: [petsc-users] install PETSc on windows > > > > Glad it worked. Thanks for the update. > > > > Satish > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > Thanks for your reply. > > > > > > After I added '--with-shared-libraries=0', the configuration stage > > > passed and now it is executing the 'make' command! > > > > > > Thanks very much > > > > > > On Mon, Dec 6, 2021 at 5:21 PM Satish Balay wrote: > > > > > > > >>> > > > > Executing: /home/LiNin/petsc-3.16.1/lib/petsc/bin/win32fe/win32fe > > > > ifort -o > > > > > > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/c > > onftest.exe > > > > -MD -O3 -fpp > > > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.lib > > > > raries/conftest.o /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ > > > > SDKs/MPI/Lib/x64/msmpi.lib /cygdrive/c/Program\ Files\ > > > > \(x86\)/Microsoft\ SDKs/MPI/Lib/x64/msmpifec.lib Ws2_32.lib > > > > stdout: LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with > > > > use of other libs; use /NODEFAULTLIB:library <<< > > > > > > > > I'm not sure why this link command is giving this error. Can you > > > > retry with '--with-shared-libraries=0'? 
> > > > > > > > Satish > > > > > > > > > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > > > > > Howdy, > > > > > > > > > > I am trying to install PETSc on windows with cygwin but got an mpi > > error. > > > > > Could you have a look at my issue and give me some instructions? > > > > > > > > > > Here is the information about my environment: > > > > > 1. operation system: windows 11 > > > > > 2. visual studio version: 2019 > > > > > 3. intel one API toolkit is installed 4. Microsoft MS MPI is > > > > > installed. > > > > > 5. Intel MPI is uninstalled. > > > > > 6. PETSc version: 3.16.1 > > > > > > > > > > this is my configuration: > > > > > ./configure --with-cc='win32fe cl' --with-fc='win32fe ifort' > > > > > --prefix=~/petsc-opt/ --PETSC_ARCH=windows-intel-opt > > > > > --with-mpi-include=['/cygdrive/c/Program Files (x86)/Microsoft > > > > > SDKs/MPI/Include','/cygdrive/c/Program Files (x86)/Microsoft > > > > > SDKs/MPI/Include/x64'] --with-mpi-lib=['/cygdrive/c/Program Files > > > > > (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib','/cygdrive/c/Program > > > > > Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib'] > > > > > --with-blas-lapack-lib=['/cygdrive/c/Program Files > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_intel_lp64.lib','/cy > > > > gdrive/c/Program > > > > > Files > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_core.lib','/cygdrive > > > > /c/Program > > > > > Files > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_sequential.lib'] > > > > > --with-scalapack-include='/cygdrive/c/Program Files > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/include' > > > > > --with-scalapack-lib=['/cygdrive/c/Program Files > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_scalapack_lp64.lib', > > > > '/cygdrive/c/Program > > > > > Files > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_blacs_msmpi_lp64.l > > > > > ib'] > > > > > --with-fortran-interfaces=1 --with-debugging=0 > > > > > > > > > > attached is the configure.log file. > > > > > > > > > > Thanks > > > > > > > > > > Ning Li > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 29744 bytes Desc: not available URL: From balay at mcs.anl.gov Tue Dec 7 10:28:07 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 7 Dec 2021 10:28:07 -0600 (CST) Subject: [petsc-users] install PETSc on windows In-Reply-To: References: <17158857-4047-ecc0-1331-645216114cbb@mcs.anl.gov> <84672efd-113b-2daf-85d1-e58d3812421d@mcs.anl.gov> <005201d7eb7e$a8c83a20$fa58ae60$@gmail.com> Message-ID: Yes - you need to build hypre with the same mpi, compilers (compiler options) as petsc. Satish On Tue, 7 Dec 2021, Ning Li wrote: > I tried to use this new PETSc in my application, and got this HYPRE related > error when I built a solution in visual studio. > [image: image.png] > I have installed the latest HYPRE on my laptop and linked it to my > application, but I disabled MPI option when I configured HYPRE . > Is this why this error occurred? > > On Tue, Dec 7, 2021 at 9:31 AM Satish Balay wrote: > > > Your build is with msmpi - but mpiexec from openmpi got used. > > > > You can try compiling and running examples manually [with the correct > > mpiexec] > > > > Satish > > > > On Tue, 7 Dec 2021, liyuansen89 at gmail.com wrote: > > > > > Hi Satish, > > > > > > I have another question. 
After I run the check command, I got the > > following > > > output (the attached file), have I successfully installed the library? Is > > > there any error? > > > > > > Thanks > > > > > > > > > > > > -----Original Message----- > > > From: Satish Balay > > > Sent: Monday, December 6, 2021 6:59 PM > > > To: Ning Li > > > Cc: petsc-users > > > Subject: Re: [petsc-users] install PETSc on windows > > > > > > Glad it worked. Thanks for the update. > > > > > > Satish > > > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > > > Thanks for your reply. > > > > > > > > After I added '--with-shared-libraries=0', the configuration stage > > > > passed and now it is executing the 'make' command! > > > > > > > > Thanks very much > > > > > > > > On Mon, Dec 6, 2021 at 5:21 PM Satish Balay wrote: > > > > > > > > > >>> > > > > > Executing: /home/LiNin/petsc-3.16.1/lib/petsc/bin/win32fe/win32fe > > > > > ifort -o > > > > > > > > > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/c > > > onftest.exe > > > > > -MD -O3 -fpp > > > > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.lib > > > > > raries/conftest.o /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ > > > > > SDKs/MPI/Lib/x64/msmpi.lib /cygdrive/c/Program\ Files\ > > > > > \(x86\)/Microsoft\ SDKs/MPI/Lib/x64/msmpifec.lib Ws2_32.lib > > > > > stdout: LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with > > > > > use of other libs; use /NODEFAULTLIB:library <<< > > > > > > > > > > I'm not sure why this link command is giving this error. Can you > > > > > retry with '--with-shared-libraries=0'? > > > > > > > > > > Satish > > > > > > > > > > > > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > > > > > > > Howdy, > > > > > > > > > > > > I am trying to install PETSc on windows with cygwin but got an mpi > > > error. > > > > > > Could you have a look at my issue and give me some instructions? > > > > > > > > > > > > Here is the information about my environment: > > > > > > 1. operation system: windows 11 > > > > > > 2. visual studio version: 2019 > > > > > > 3. intel one API toolkit is installed 4. Microsoft MS MPI is > > > > > > installed. > > > > > > 5. Intel MPI is uninstalled. > > > > > > 6. 
PETSc version: 3.16.1 > > > > > > > > > > > > this is my configuration: > > > > > > ./configure --with-cc='win32fe cl' --with-fc='win32fe ifort' > > > > > > --prefix=~/petsc-opt/ --PETSC_ARCH=windows-intel-opt > > > > > > --with-mpi-include=['/cygdrive/c/Program Files (x86)/Microsoft > > > > > > SDKs/MPI/Include','/cygdrive/c/Program Files (x86)/Microsoft > > > > > > SDKs/MPI/Include/x64'] --with-mpi-lib=['/cygdrive/c/Program Files > > > > > > (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib','/cygdrive/c/Program > > > > > > Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib'] > > > > > > --with-blas-lapack-lib=['/cygdrive/c/Program Files > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_intel_lp64.lib','/cy > > > > > gdrive/c/Program > > > > > > Files > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_core.lib','/cygdrive > > > > > /c/Program > > > > > > Files > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_sequential.lib'] > > > > > > --with-scalapack-include='/cygdrive/c/Program Files > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/include' > > > > > > --with-scalapack-lib=['/cygdrive/c/Program Files > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_scalapack_lp64.lib', > > > > > '/cygdrive/c/Program > > > > > > Files > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_blacs_msmpi_lp64.l > > > > > > ib'] > > > > > > --with-fortran-interfaces=1 --with-debugging=0 > > > > > > > > > > > > attached is the configure.log file. > > > > > > > > > > > > Thanks > > > > > > > > > > > > Ning Li > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From mfadams at lbl.gov Tue Dec 7 10:46:53 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 7 Dec 2021 11:46:53 -0500 Subject: [petsc-users] Nullspaces In-Reply-To: References: Message-ID: Can you please give more details on what 'does not work' is. More detail on how you judge what works would also be useful. Mark On Tue, Dec 7, 2021 at 11:19 AM Marco Cisternino < marco.cisternino at optimad.it> wrote: > Good morning, > > I?m still struggling with the Poisson equation with Neumann BCs. > > I discretize the equation by finite volume method and I divide every line > of the linear system by the volume of the cell. I could avoid this > division, but I?m trying to understand. > > My mesh is not uniform, i.e. cells have different volumes (it is an octree > mesh). > > Moreover, in my computational domain there are 2 separated sub-domains. > > I build the null space and then I use MatNullSpaceTest to check it. > > > > If I do this: > > MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace); > > It works > > > > If I do this: > > Vec nsp; > > VecDuplicate(m_rhs, &nsp); > > VecSet(nsp,1.0); > > VecNormalize(nsp, nullptr); > > MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace); > > It does not work > > > > Probably, I have wrong expectations, but should not it be the same? > > > > Thanks > > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > > ______________________ > > Optimad Engineering Srl > > Via Bligny 5, Torino, Italia. > +3901119719782 > www.optimad.it > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Dec 7 10:47:26 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 7 Dec 2021 11:47:26 -0500 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? 
In-Reply-To: References: <42F559F9-8E4B-480E-BEAE-8AEBC24D927C@gmx.li> <3BC576A0-2EB4-40B8-8AE2-9EE21A7C023B@petsc.dev> Message-ID: <041E9A8A-B6EC-4D8B-A321-9417BC959B59@petsc.dev> Something is really wrong with the process, you should not have to waste all this time on a simple build that in theory is super simple given it is using docker. We debug these things by looking at the output files from PETSc: configure.log make.log and the output from make check. Are these files available when you do the docker install business? Without these files we are in the dark just completely guessing what is happening. If the docker is using a prebuilt PETSc inside iteself then there is no way to add hdf5 after the fact. Barry > On Dec 7, 2021, at 11:15 AM, Quentin Chevalier wrote: > > @Lawrence, thanks for the details, but point 7 fails with : > > ERROR: Invalid requirement: '/usr/local/petsc/src/binding/petsc4py' > Hint: It looks like a path. File '/usr/local/petsc/src/binding/petsc4py' does not exist. > > Just like @wence's tentative command. I've changed the dockerfile of dolfinx to add the --with-hdf5 flag and I'm trying to build a new optimised docker image with : > > echo '{ "cffi_extra_compile_args" : ["-O2", "-march=native" ] }' > dolfinx_jit_parameters.json > docker build --target dolfinx --file Dockerfile --build-arg PETSC_SLEPC_OPTFLAGS="-O2 -march=native" --build-arg DOLFINX_CMAKE_CXX_FLAGS="-march=native" . > > It's slowgoing, but it might eventually do the trick I guess. > > @bsmith, the --with-petsc4py flag changes nothing. > > MWE is unchanged : > * Run this docker container > * Do : python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('dummy.h5')" > > Updated attempt at a fix : > * cd /usr/local/petsc/ > * ./configure --with-hdf5 --with-petsc4py --force (turns out the container sets PETSC_ARCH and PETSC_DIR as environment variables by default) > * make all > > This still produces the same error : > > Traceback (most recent call last): > File "", line 1, in > File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 > petsc4py.PETSc.Error: error code 86 > [0] PetscViewerSetType() at /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 > [0] Unknown type. Check for miss-spelling or missing package: https://petsc.org/release/install/install/#external-packages > [0] Unknown PetscViewer type given: hdf5 > > Quentin > > > > > > Quentin CHEVALIER ? IA parcours recherche > LadHyX - Ecole polytechnique > > __________ > > > > On Tue, 7 Dec 2021 at 16:52, Barry Smith > wrote: > > You can also just add --with-petsc4py to your PETSc configure command and it will manage automatically the petsc4py install. So > > ./configure --with-hdf5 --any-other --configure-flags --you-want --with-petsc4py > > > Some people don't like this approach, I don't understand exactly why not; it should be equivalent (if it is not equivalent then perhaps it could be fixed?). 
> >> On Dec 7, 2021, at 10:18 AM, Lawrence Mitchell > wrote:
>>
>> Comments inline below:
>>
>>> On 7 Dec 2021, at 14:43, Quentin Chevalier > wrote:
>>>
>>> @Matthew, as stated before, error output is unchanged, i.e. the python
>>> command below produces the same traceback :
>>>
>>> # python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('d.h5')"
>>> Traceback (most recent call last):
>>> File "", line 1, in
>>> File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5
>>> petsc4py.PETSc.Error: error code 86
>>> [0] PetscViewerSetType() at
>>> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442
>>> [0] Unknown type. Check for miss-spelling or missing package:
>>> https://petsc.org/release/install/install/#external-packages
>>> [0] Unknown PetscViewer type given: hdf5
>>>
>>> @Wence that makes sense. I'd assumed that the original PETSc had been
>>> overwritten, and if the linking has gone wrong I'm surprised anything
>>> happens with petsc4py at all.
>>>
>>> Your tentative command gave :
>>>
>>> ERROR: Invalid requirement: '/usr/local/petsc/src/binding/petsc4py'
>>> Hint: It looks like a path. File
>>> '/usr/local/petsc/src/binding/petsc4py' does not exist.
>>>
>>> So I tested that global variables PETSC_ARCH & PETSC_DIR were correct
>>> then ran "pip install petsc4py" to restart petsc4py from scratch.
>>
>> This downloads petsc4py from pypi. It is not guaranteed to give you a version that matches the PETSc version you have installed (which is the source of your error below)
>>
>>
>>> This
>>> gives rise to a different error :
>>>
>> [...]
>>> ImportError: /usr/local/lib/python3.9/dist-packages/petsc4py/lib/linux-gnu-real-32/PETSc.cpython-39-x86_64-linux-gnu.so :
>>> undefined symbol: petscstack
>>>
>>> Not sure that it's a step forward; looks like petsc4py is broken now.
>>
>> The steps to build PETSc with HDF5 support and then get a compatible petsc4py are:
>>
>> 1. Download the PETSc source somehow (https://petsc.org/release/download/ )
>>
>> I now assume that this source tree lives in .../petsc
>>
>> 2. cd .../petsc
>>
>> 3. ./configure --with-hdf5 --any-other --configure-flags --you-want
>>
>> 4. Run the appropriate "make" command as suggested by configure
>>
>> 5. Run the appropriate "make check" command as suggested by configure
>>
>> 6. Set PETSC_DIR and PETSC_ARCH appropriately
>>
>> 7. pip install src/binding/petsc4py
>>
>> If you are working with the docker container from dolfinx/dolfinx, you can see the commands that are run to install PETSc, and then petsc4py, here https://github.com/FEniCS/dolfinx/blob/main/docker/Dockerfile#L243
>>
>> If you want to reproduce these versions of PETSc but with the addition of HDF5 support, just add --with-hdf5 to all of the relevant configure lines.
>>
>> Lawrence
>>
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From marco.cisternino at optimad.it Tue Dec 7 10:52:27 2021
From: marco.cisternino at optimad.it (Marco Cisternino)
Date: Tue, 7 Dec 2021 16:52:27 +0000
Subject: [petsc-users] Nullspaces
In-Reply-To: 
References: 
Message-ID: 

I'm sorry, I believed it was clear:
"it works" means MatNullSpaceTest returns true
"it does not work" means MatNullSpaceTest returns false
Is it enough?

Thanks

Marco Cisternino, PhD
marco.cisternino at optimad.it
______________________
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it

From: Mark Adams 
Sent: martedì
7 dicembre 2021 17:47
To: Marco Cisternino 
Cc: petsc-users 
Subject: Re: [petsc-users] Nullspaces

Can you please give more details on what 'does not work' is. More detail on how you judge what works would also be useful.
Mark

On Tue, Dec 7, 2021 at 11:19 AM Marco Cisternino > wrote:
Good morning,
I'm still struggling with the Poisson equation with Neumann BCs.
I discretize the equation by finite volume method and I divide every line of the linear system by the volume of the cell. I could avoid this division, but I'm trying to understand.
My mesh is not uniform, i.e. cells have different volumes (it is an octree mesh).
Moreover, in my computational domain there are 2 separated sub-domains.
I build the null space and then I use MatNullSpaceTest to check it.

If I do this:
MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);
It works

If I do this:
Vec nsp;
VecDuplicate(m_rhs, &nsp);
VecSet(nsp,1.0);
VecNormalize(nsp, nullptr);
MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace);
It does not work

Probably, I have wrong expectations, but should not it be the same?

Thanks

Marco Cisternino, PhD
marco.cisternino at optimad.it
______________________
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From bsmith at petsc.dev Tue Dec 7 10:53:03 2021
From: bsmith at petsc.dev (Barry Smith)
Date: Tue, 7 Dec 2021 11:53:03 -0500
Subject: [petsc-users] Nullspaces
In-Reply-To: 
References: 
Message-ID: <031CCD92-2E61-44C3-9348-8D554C5F2FF7@petsc.dev>

A side note: The MatNullSpaceTest tells you that the null space you provided is in the null space of the operator, it does not say if you have found the entire null space. In your case with two subdomains the null space is actually two dimensional; all constant on one domain (0 on the other) and 0 on the first domain and all constant on the second. So you need to pass two vectors into the MatNullSpaceCreate(). But regardless the test should work for all constant on both domains.

> On Dec 7, 2021, at 11:19 AM, Marco Cisternino wrote:
>
> Good morning,
> I'm still struggling with the Poisson equation with Neumann BCs.
> I discretize the equation by finite volume method and I divide every line of the linear system by the volume of the cell. I could avoid this division, but I'm trying to understand.
> My mesh is not uniform, i.e. cells have different volumes (it is an octree mesh).
> Moreover, in my computational domain there are 2 separated sub-domains.
> I build the null space and then I use MatNullSpaceTest to check it.
>
> If I do this:
> MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);
> It works
>
> If I do this:
> Vec nsp;
> VecDuplicate(m_rhs, &nsp);
> VecSet(nsp,1.0);
> VecNormalize(nsp, nullptr);
> MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace);
> It does not work
>
> Probably, I have wrong expectations, but should not it be the same?
>
> Thanks
>
> Marco Cisternino, PhD
> marco.cisternino at optimad.it
> ______________________
> Optimad Engineering Srl
> Via Bligny 5, Torino, Italia.
> +3901119719782
> www.optimad.it
-------------- next part --------------
An HTML attachment was scrubbed...
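For readers following this thread, a minimal sketch of the two-vector null space Barry describes might look like the following. This is an illustration, not code from the thread: A is the assembled operator, rhs is any vector with the same parallel layout, and isFirstDomain[] is a hypothetical flag with one entry per locally owned row marking the first sub-domain. Because the two vectors have disjoint supports, normalizing them gives the orthonormal basis MatNullSpaceCreate() expects.

#include <petscmat.h>

/* Sketch only: build the two-dimensional null space for two disconnected
   sub-domains (constant on one, zero on the other, and vice versa). */
PetscErrorCode BuildTwoDomainNullSpace(Mat A, Vec rhs, const PetscBool *isFirstDomain, MatNullSpace *nullspace)
{
  Vec            basis[2];
  PetscInt       i, rstart, rend;
  PetscBool      isNull;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = VecDuplicate(rhs, &basis[0]);CHKERRQ(ierr);
  ierr = VecDuplicate(rhs, &basis[1]);CHKERRQ(ierr);
  ierr = VecSet(basis[0], 0.0);CHKERRQ(ierr);
  ierr = VecSet(basis[1], 0.0);CHKERRQ(ierr);
  ierr = VecGetOwnershipRange(rhs, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    if (isFirstDomain[i - rstart]) { ierr = VecSetValue(basis[0], i, 1.0, INSERT_VALUES);CHKERRQ(ierr); }
    else                           { ierr = VecSetValue(basis[1], i, 1.0, INSERT_VALUES);CHKERRQ(ierr); }
  }
  for (i = 0; i < 2; i++) {
    ierr = VecAssemblyBegin(basis[i]);CHKERRQ(ierr);
    ierr = VecAssemblyEnd(basis[i]);CHKERRQ(ierr);
    ierr = VecNormalize(basis[i], NULL);CHKERRQ(ierr); /* disjoint supports + normalization = orthonormal basis */
  }
  ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)A), PETSC_FALSE, 2, basis, nullspace);CHKERRQ(ierr);
  ierr = MatNullSpaceTest(*nullspace, A, &isNull);CHKERRQ(ierr);
  ierr = PetscPrintf(PetscObjectComm((PetscObject)A), "MatNullSpaceTest: %s\n", isNull ? "passed" : "failed");CHKERRQ(ierr);
  ierr = VecDestroy(&basis[0]);CHKERRQ(ierr); /* the null space keeps its own references */
  ierr = VecDestroy(&basis[1]);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

As Barry notes above, MatNullSpaceTest() only verifies that the supplied vectors are annihilated by the operator; it cannot tell you whether they span the entire null space.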
URL: From marco.cisternino at optimad.it Tue Dec 7 11:01:38 2021 From: marco.cisternino at optimad.it (Marco Cisternino) Date: Tue, 7 Dec 2021 17:01:38 +0000 Subject: [petsc-users] Nullspaces In-Reply-To: <031CCD92-2E61-44C3-9348-8D554C5F2FF7@petsc.dev> References: <031CCD92-2E61-44C3-9348-8D554C5F2FF7@petsc.dev> Message-ID: Thanks Barry. I already did it, I mean a constant vec per domain with 1 on rows of that domain and zero on rows of the other one, and exactly I did this: Vec *constants = nullptr; PetscInt nConstants = nActiveDomains; VecDuplicateVecs(m_solution, nConstants, &constants); for (PetscInt i = 0; i < nConstants; ++i) { VecSet(constants[i],0.0); } std::vector rawConstants(nConstants); for (PetscInt i = 0; i < nConstants; ++i) { VecGetArray(constants[i], &rawConstants[i]); } long nRows = getRowCount(); const CBaseLinearSolverMapping *cellRowMapping = getMapping(); for (long cellRow = 0; cellRow < nRows; ++cellRow) { std::size_t cellRawId = cellRowMapping->getRowRawCell(cellRow); int cellDomain = m_grid->getCellStorage().getDomains().rawAt(cellRawId); if (m_grid->isCellDomainActive(cellDomain)) { std::size_t activeDomainId = activeDomainIds[cellDomain]; rawConstants[activeDomainId][cellRow] = 1.; } } for (PetscInt i = 0; i < nConstants; ++i) { VecRestoreArray(constants[i], &rawConstants[i]); VecAssemblyBegin(constants[i]); VecAssemblyEnd(constants[i]); VecNormalize(constants[i], nullptr); } MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, nConstants, constants, &nullspace); But this does not pass the test if I divide by cell volume every row of the linear system. It works if I do not perform the division Thanks Marco Cisternino, PhD marco.cisternino at optimad.it ______________________ Optimad Engineering Srl Via Bligny 5, Torino, Italia. +3901119719782 www.optimad.it From: Barry Smith Sent: marted? 7 dicembre 2021 17:53 To: Marco Cisternino Cc: petsc-users Subject: Re: [petsc-users] Nullspaces A side note: The MatNullSpaceTest tells you that the null space you provided is in the null space of the operator, it does not say if you have found the entire null space. In your case with two subdomains the null space is actually two dimensional; all constant on one domain (0 on the other) and 0 on the first domain and all constant on the second. So you need to pass two vectors into the MatNullSpaceCreate(). But regardless the test should work for all constant on both domains. On Dec 7, 2021, at 11:19 AM, Marco Cisternino > wrote: Good morning, I?m still struggling with the Poisson equation with Neumann BCs. I discretize the equation by finite volume method and I divide every line of the linear system by the volume of the cell. I could avoid this division, but I?m trying to understand. My mesh is not uniform, i.e. cells have different volumes (it is an octree mesh). Moreover, in my computational domain there are 2 separated sub-domains. I build the null space and then I use MatNullSpaceTest to check it. If I do this: MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace); It works If I do this: Vec nsp; VecDuplicate(m_rhs, &nsp); VecSet(nsp,1.0); VecNormalize(nsp, nullptr); MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace); It does not work Probably, I have wrong expectations, but should not it be the same? Thanks Marco Cisternino, PhD marco.cisternino at optimad.it ______________________ Optimad Engineering Srl Via Bligny 5, Torino, Italia. 
+3901119719782 www.optimad.it -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Tue Dec 7 11:39:01 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 7 Dec 2021 12:39:01 -0500 Subject: [petsc-users] Nullspaces In-Reply-To: References: <031CCD92-2E61-44C3-9348-8D554C5F2FF7@petsc.dev> Message-ID: > for (PetscInt i = 0; i < nConstants; ++i) { > > VecRestoreArray(constants[i], &rawConstants[i]); > > VecAssemblyBegin(constants[i]); > > VecAssemblyEnd(constants[i]); > > VecNormalize(constants[i], nullptr); > This does not look correct. > } > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fischega at westinghouse.com Tue Dec 7 11:44:27 2021 From: fischega at westinghouse.com (Fischer, Greg A.) Date: Tue, 7 Dec 2021 17:44:27 +0000 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: <8F6A0614-C133-4E41-9F3D-E64E33603630@petsc.dev> References: <874k7ld5x1.fsf@jedbrown.org> <8F6A0614-C133-4E41-9F3D-E64E33603630@petsc.dev> Message-ID: Attached are outputs with these options. What should I make of these? From: Barry Smith Sent: Monday, December 6, 2021 2:16 PM To: Fischer, Greg A. via petsc-users Cc: Fischer, Greg A. Subject: Re: [petsc-users] KSPBuildResidual and KSPType compatibility [External Email] What do you get for -ksp_type ibcgs -ksp_monitor -ksp_monitor_true_residual with and without -ksp_pc_side right ? On Dec 6, 2021, at 12:00 PM, Jed Brown > wrote: "Fischer, Greg A. via petsc-users" > writes: Hello petsc-users, I would like to check convergence against the infinity norm, so I defined my own convergence test routine with KSPSetConvergenceTest. (I understand that it may be computationally expensive.) I would like to do this with the "ibcgs" method. When I use KSPBuildResidual and calculate the NORM_2 against the output vector, I get a value that differs from the 2-norm that gets passed into the convergence test function. However, when I switch to the "gcr" method, the value I calculate matches the function input value. IBCGS uses the preconditioned norm by default while GCR uses the unpreconditioned norm. You can use -ksp_norm_type unpreconditioned or KSPSetNormType() to make IBCGS use unpreconditioned. Is the KSPBuildResidual function only compatible with a subset of the KSPType methods? If I want to evaluate convergence against the infinity norm, do I need to set KSPSetInitialGuessNonzero and continually re-start the solver with a lower tolerance values until I get a satisfactory value of the infinity norm? Thanks, Greg ________________________________ This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). ________________________________ This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. 
If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ksp-monitor_output.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ksp-monitor_output-pc_side_right.txt URL: From samuelestes91 at gmail.com Tue Dec 7 11:54:04 2021 From: samuelestes91 at gmail.com (Samuel Estes) Date: Tue, 7 Dec 2021 11:54:04 -0600 Subject: [petsc-users] PETSc object creation/destruction with adaptive grid Message-ID: Hi, I have a code implementing a finite element method with an adaptive grid. Rather than destroying the PETSc objects (the Jacobian matrix and RHS residual vector) every time the code refines the grid, we want to (over-)allocate some "padding" for these objects so that we only destroy/create new PETSc objects when the number of new nodes exceeds the padding. In order to solve the linear system with padding, we just fill the matrix with ones on the diagonal and fill the residual with zeros for the padded parts. Is this a reasonable way to do things? I'm sure that this problem has come up before but I haven't found anything about it. I would like to know what other solutions people have come up with when using PETSc for adaptive problems since PETSc does not support dynamic reallocation of objects (i.e. I don't think you are allowed to change the size of a PETSc matrix). If this problem has come up before, can you please point me to a link to the discussion? In particular, I'm curious if there is any way to solve the padded linear system without having to fill in values for the unused parts of the system. The linear solver obviously complains if there are rows of the matrix without any values, I'm just wondering if it's possible to get the linear solver to ignore them? Thanks! Sam -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 7 12:16:30 2021 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 7 Dec 2021 13:16:30 -0500 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: On Tue, Dec 7, 2021 at 9:43 AM Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: > @Matthew, as stated before, error output is unchanged, i.e.the python > command below produces the same traceback : > > # python3 -c "from petsc4py import PETSc; > PETSc.Viewer().createHDF5('d.h5')" > Traceback (most recent call last): > File "", line 1, in > File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 > petsc4py.PETSc.Error: error code 86 > [0] PetscViewerSetType() at > /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 > [0] Unknown type. Check for miss-spelling or missing package: > https://petsc.org/release/install/install/#external-packages > [0] Unknown PetscViewer type given: hdf5 > The reason I wanted the output was that the C output shows the configure options that the PETSc library was built with, However, Python seems to be eating this, so I cannot check. It seems like using this container is counter-productive. If it was built correctly, making these changes would be trivial. Send mail to FEniCS (I am guessing Chris Richardson maintains this), and ask how they intend people to change these options. Thanks, Matt. 
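Independently of petsc4py, one quick way to settle whether the PETSc a container ships was configured with HDF5 is to test the PETSC_HAVE_HDF5 macro that configure writes into the build's petscconf.h (the same flag can also be grepped directly in that header). A small, hypothetical check program compiled against the installed PETSc:

#include <petsc.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
#if defined(PETSC_HAVE_HDF5)
  ierr = PetscPrintf(PETSC_COMM_WORLD, "This PETSc build was configured with HDF5\n");CHKERRQ(ierr);
#else
  ierr = PetscPrintf(PETSC_COMM_WORLD, "No HDF5 support in this PETSc build\n");CHKERRQ(ierr);
#endif
  ierr = PetscFinalize();
  return ierr;
}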
> @Wence that makes sense. I'd assumed that the original PETSc had been > overwritten, and if the linking has gone wrong I'm surprised anything > happens with petsc4py at all. > > Your tentative command gave : > > ERROR: Invalid requirement: '/usr/local/petsc/src/binding/petsc4py' > Hint: It looks like a path. File > '/usr/local/petsc/src/binding/petsc4py' does not exist. > > So I tested that global variables PETSC_ARCH & PETSC_DIR were correct > then ran "pip install petsc4py" to restart petsc4py from scratch. This > gives rise to a different error : > # python3 -c "from petsc4py import PETSc" > Traceback (most recent call last): > File "", line 1, in > File "/usr/local/lib/python3.9/dist-packages/petsc4py/PETSc.py", > line 3, in > PETSc = ImportPETSc(ARCH) > File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", > line 29, in ImportPETSc > return Import('petsc4py', 'PETSc', path, arch) > File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", > line 73, in Import > module = import_module(pkg, name, path, arch) > File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", > line 58, in import_module > with f: return imp.load_module(fullname, f, fn, info) > File "/usr/lib/python3.9/imp.py", line 242, in load_module > return load_dynamic(name, filename, file) > File "/usr/lib/python3.9/imp.py", line 342, in load_dynamic > return _load(spec) > ImportError: > /usr/local/lib/python3.9/dist-packages/petsc4py/lib/linux-gnu-real-32/ > PETSc.cpython-39-x86_64-linux-gnu.so: > undefined symbol: petscstack > > Not sure that it a step forward ; looks like petsc4py is broken now. > > Quentin > > On Tue, 7 Dec 2021 at 14:58, Matthew Knepley wrote: > > > > On Tue, Dec 7, 2021 at 8:26 AM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> > >> Ok my bad, that log corresponded to a tentative --download-hdf5. This > >> log corresponds to the commands given above and has --with-hdf5 in its > >> options. > > > > > > Okay, this configure was successful and found HDF5 > > > >> > >> The whole process still results in the same error. > > > > > > Now send me the complete error output with this PETSc. > > > > Thanks, > > > > Matt > > > >> > >> Quentin > >> > >> > >> > >> Quentin CHEVALIER ? IA parcours recherche > >> > >> LadHyX - Ecole polytechnique > >> > >> __________ > >> > >> > >> > >> On Tue, 7 Dec 2021 at 13:59, Matthew Knepley wrote: > >> > > >> > On Tue, Dec 7, 2021 at 3:55 AM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> >> > >> >> Hello Matthew, > >> >> > >> >> That would indeed make sense. > >> >> > >> >> Full log is attached, I grepped hdf5 in there and didn't find > anything alarming. > >> > > >> > > >> > At the top of this log: > >> > > >> > Configure Options: --configModules=PETSc.Configure > --optionsModule=config.compilerOptions PETSC_ARCH=linux-gnu-complex-64 > --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-make-np=2 > --with-64-bit-indices=yes --with-debugging=no --with-fortran-bindings=no > --with-shared-libraries --download-hypre --download-mumps > --download-ptscotch --download-scalapack --download-suitesparse > --download-superlu_dist --with-scalar-type=complex > >> > > >> > > >> > So the HDF5 option is not being specified. > >> > > >> > Thanks, > >> > > >> > Matt > >> > > >> >> Cheers, > >> >> > >> >> Quentin > >> >> > >> >> > >> >> > >> >> > >> >> Quentin CHEVALIER ? 
IA parcours recherche > >> >> > >> >> LadHyX - Ecole polytechnique > >> >> > >> >> __________ > >> >> > >> >> > >> >> > >> >> On Mon, 6 Dec 2021 at 21:39, Matthew Knepley > wrote: > >> >>> > >> >>> On Mon, Dec 6, 2021 at 3:27 PM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> >>>> > >> >>>> Fine. MWE is unchanged : > >> >>>> * Run this docker container > >> >>>> * Do : python3 -c "from petsc4py import PETSc; > PETSc.Viewer().createHDF5('dummy.h5')" > >> >>>> > >> >>>> Updated attempt at a fix : > >> >>>> * cd /usr/local/petsc/ > >> >>>> * ./configure PETSC_ARCH= linux-gnu-real-32 > PETSC_DIR=/usr/local/petsc --with-hdf5 --force > >> >>> > >> >>> > >> >>> Did it find HDF5? If not, it will shut it off. You need to send us > >> >>> > >> >>> $PETSC_DIR/configure.log > >> >>> > >> >>> so we can see what happened in the configure run. > >> >>> > >> >>> Thanks, > >> >>> > >> >>> Matt > >> >>> > >> >>>> > >> >>>> * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= linux-gnu-real-32 all > >> >>>> > >> >>>> Still no joy. The same error remains. > >> >>>> > >> >>>> Quentin > >> >>>> > >> >>>> > >> >>>> > >> >>>> On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet > wrote: > >> >>>> > > >> >>>> > > >> >>>> > > >> >>>> > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> >>>> > > >> >>>> > The PETSC_DIR exactly corresponds to the previous one, so I > guess that rules option b) out, except if a specific option is required to > overwrite a previous installation of PETSc. As for a), well I thought > reconfiguring pretty direct, you're welcome to give me a hint as to what > could be wrong in the following process. > >> >>>> > > >> >>>> > Steps to reproduce this behaviour are as follows : > >> >>>> > * Run this docker container > >> >>>> > * Do : python3 -c "from petsc4py import PETSc; > PETSc.Viewer().createHDF5('dummy.h5')" > >> >>>> > > >> >>>> > After you get the error Unknown PetscViewer type, feel free to > try : > >> >>>> > > >> >>>> > * cd /usr/local/petsc/ > >> >>>> > * ./configure --with-hfd5 > >> >>>> > > >> >>>> > > >> >>>> > It?s hdf5, not hfd5. > >> >>>> > It?s PETSC_ARCH, not PETSC-ARCH. > >> >>>> > PETSC_ARCH is missing from your configure line. > >> >>>> > > >> >>>> > Thanks, > >> >>>> > Pierre > >> >>>> > > >> >>>> > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 > all > >> >>>> > > >> >>>> > Then repeat the MWE and observe absolutely no behavioural change > whatsoever. I'm afraid I don't know PETSc well enough to be surprised by > that. > >> >>>> > > >> >>>> > Quentin > >> >>>> > > >> >>>> > > >> >>>> > > >> >>>> > Quentin CHEVALIER ? IA parcours recherche > >> >>>> > > >> >>>> > LadHyX - Ecole polytechnique > >> >>>> > > >> >>>> > __________ > >> >>>> > > >> >>>> > > >> >>>> > > >> >>>> > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley > wrote: > >> >>>> >> > >> >>>> >> On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> >>>> >>> > >> >>>> >>> It failed all of the tests included in `make > >> >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 > check`, with > >> >>>> >>> the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No > such file > >> >>>> >>> or directory` > >> >>>> >>> > >> >>>> >>> I am therefore fairly confident this a "file absence" problem, > and not > >> >>>> >>> a compilation problem. > >> >>>> >>> > >> >>>> >>> I repeat that there was no error at compilation stage. 
The > final stage > >> >>>> >>> did present `gmake[3]: Nothing to be done for 'libs'.` but > that's all. > >> >>>> >>> > >> >>>> >>> Again, running `./configure --with-hdf5` followed by a `make > >> >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` > does not > >> >>>> >>> change the problem. I get the same error at the same position > as > >> >>>> >>> before. > >> >>>> >> > >> >>>> >> > >> >>>> >> If you reconfigured and rebuilt, it is impossible to get the > same error, so > >> >>>> >> > >> >>>> >> a) You did not reconfigure > >> >>>> >> > >> >>>> >> b) Your new build is somewhere else on the machine > >> >>>> >> > >> >>>> >> Thanks, > >> >>>> >> > >> >>>> >> Matt > >> >>>> >> > >> >>>> >>> > >> >>>> >>> I will comment I am running on OpenSUSE. > >> >>>> >>> > >> >>>> >>> Quentin > >> >>>> >>> > >> >>>> >>> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley < > knepley at gmail.com> wrote: > >> >>>> >>> > > >> >>>> >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> >>>> >>> >> > >> >>>> >>> >> Hello Matthew and thanks for your quick response. > >> >>>> >>> >> > >> >>>> >>> >> I'm afraid I did try to snoop around the container and > rerun PETSc's > >> >>>> >>> >> configure with the --with-hdf5 option, to absolutely no > avail. > >> >>>> >>> >> > >> >>>> >>> >> I didn't see any errors during config or make, but it > failed the tests > >> >>>> >>> >> (which aren't included in the minimal container I suppose) > >> >>>> >>> > > >> >>>> >>> > > >> >>>> >>> > Failed which tests? What was the error? > >> >>>> >>> > > >> >>>> >>> > Thanks, > >> >>>> >>> > > >> >>>> >>> > Matt > >> >>>> >>> > > >> >>>> >>> >> > >> >>>> >>> >> Quentin > >> >>>> >>> >> > >> >>>> >>> >> > >> >>>> >>> >> > >> >>>> >>> >> Quentin CHEVALIER ? IA parcours recherche > >> >>>> >>> >> > >> >>>> >>> >> LadHyX - Ecole polytechnique > >> >>>> >>> >> > >> >>>> >>> >> __________ > >> >>>> >>> >> > >> >>>> >>> >> > >> >>>> >>> >> > >> >>>> >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley < > knepley at gmail.com> wrote: > >> >>>> >>> >> > > >> >>>> >>> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> >>>> >>> >> >> > >> >>>> >>> >> >> Hello PETSc users, > >> >>>> >>> >> >> > >> >>>> >>> >> >> This email is a duplicata of this gitlab issue, sorry > for any inconvenience caused. > >> >>>> >>> >> >> > >> >>>> >>> >> >> I want to compute a PETSc vector in real mode, than > perform calculations with it in complex mode. I want as much of this > process to be parallel as possible. Right now, I compile PETSc in real > mode, compute my vector and save it to a file, then switch to complex mode, > read it, and move on. > >> >>>> >>> >> >> > >> >>>> >>> >> >> This creates unexpected behaviour using MPIIO, so on > Lisandro Dalcinl's advice I'm moving to HDF5 format. My code is as follows > (taking inspiration from petsc4py doc, a bitbucket example and another one, > all top Google results for 'petsc hdf5') : > >> >>>> >>> >> >>> > >> >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', > COMM_WORLD) > >> >>>> >>> >> >>> q.load(viewer) > >> >>>> >>> >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, > mode=PETSc.ScatterMode.FORWARD) > >> >>>> >>> >> >> > >> >>>> >>> >> >> > >> >>>> >>> >> >> This crashes my code. 
I obtain traceback : > >> >>>> >>> >> >>> > >> >>>> >>> >> >>> File "/home/shared/code.py", line 121, in Load > >> >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', > COMM_WORLD) > >> >>>> >>> >> >>> File "PETSc/Viewer.pyx", line 182, in > petsc4py.PETSc.Viewer.createHDF5 > >> >>>> >>> >> >>> petsc4py.PETSc.Error: error code 86 > >> >>>> >>> >> >>> [0] PetscViewerSetType() at > /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 > >> >>>> >>> >> >>> [0] Unknown type. Check for miss-spelling or missing > package: https://petsc.org/release/install/install/#external-packages > >> >>>> >>> >> >>> [0] Unknown PetscViewer type given: hdf5 > >> >>>> >>> >> > > >> >>>> >>> >> > This means that PETSc has not been configured with HDF5 > (--with-hdf5 or --download-hdf5), so the container should be updated. > >> >>>> >>> >> > > >> >>>> >>> >> > THanks, > >> >>>> >>> >> > > >> >>>> >>> >> > Matt > >> >>>> >>> >> > > >> >>>> >>> >> >> > >> >>>> >>> >> >> I have petsc4py 3.16 from this docker container (list of > dependencies include PETSc and petsc4py). > >> >>>> >>> >> >> > >> >>>> >>> >> >> I'm pretty sure this is not intended behaviour. Any > insight as to how to fix this issue (I tried running ./configure > --with-hdf5 to no avail) or more generally to perform this jiggling between > real and complex would be much appreciated, > >> >>>> >>> >> >> > >> >>>> >>> >> >> Kind regards. > >> >>>> >>> >> >> > >> >>>> >>> >> >> Quentin > >> >>>> >>> >> > > >> >>>> >>> >> > > >> >>>> >>> >> > > >> >>>> >>> >> > -- > >> >>>> >>> >> > What most experimenters take for granted before they > begin their experiments is infinitely more interesting than any results to > which their experiments lead. > >> >>>> >>> >> > -- Norbert Wiener > >> >>>> >>> >> > > >> >>>> >>> >> > https://www.cse.buffalo.edu/~knepley/ > >> >>>> >>> > > >> >>>> >>> > > >> >>>> >>> > > >> >>>> >>> > -- > >> >>>> >>> > What most experimenters take for granted before they begin > their experiments is infinitely more interesting than any results to which > their experiments lead. > >> >>>> >>> > -- Norbert Wiener > >> >>>> >>> > > >> >>>> >>> > https://www.cse.buffalo.edu/~knepley/ > >> >>>> >> > >> >>>> >> > >> >>>> >> > >> >>>> >> -- > >> >>>> >> What most experimenters take for granted before they begin > their experiments is infinitely more interesting than any results to which > their experiments lead. > >> >>>> >> -- Norbert Wiener > >> >>>> >> > >> >>>> >> https://www.cse.buffalo.edu/~knepley/ > >> >>>> > > >> >>>> > > >> >>>> > > >> >>>> > > >> >>> > >> >>> > >> >>> > >> >>> -- > >> >>> What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > >> >>> -- Norbert Wiener > >> >>> > >> >>> https://www.cse.buffalo.edu/~knepley/ > >> > > >> > > >> > > >> > -- > >> > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > >> > -- Norbert Wiener > >> > > >> > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 7 12:25:43 2021 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 7 Dec 2021 13:25:43 -0500 Subject: [petsc-users] Nullspaces In-Reply-To: References: Message-ID: On Tue, Dec 7, 2021 at 11:19 AM Marco Cisternino < marco.cisternino at optimad.it> wrote: > Good morning, > > I?m still struggling with the Poisson equation with Neumann BCs. > > I discretize the equation by finite volume method and I divide every line > of the linear system by the volume of the cell. I could avoid this > division, but I?m trying to understand. > > My mesh is not uniform, i.e. cells have different volumes (it is an octree > mesh). > > Moreover, in my computational domain there are 2 separated sub-domains. > > I build the null space and then I use MatNullSpaceTest to check it. > > > > If I do this: > > MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace); > > It works > This produces the normalized constant vector. > If I do this: > > Vec nsp; > > VecDuplicate(m_rhs, &nsp); > > VecSet(nsp,1.0); > > VecNormalize(nsp, nullptr); > > MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace); > > It does not work > This is also the normalized constant vector. So you are saying that these two vectors give different results with MatNullSpaceTest()? Something must be wrong in the code. Can you send a minimal example of this? I will go through and debug it. Thanks, Matt > Probably, I have wrong expectations, but should not it be the same? > > > > Thanks > > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > > ______________________ > > Optimad Engineering Srl > > Via Bligny 5, Torino, Italia. > +3901119719782 > www.optimad.it > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From marco.cisternino at optimad.it Tue Dec 7 12:36:11 2021 From: marco.cisternino at optimad.it (Marco Cisternino) Date: Tue, 7 Dec 2021 18:36:11 +0000 Subject: [petsc-users] Nullspaces In-Reply-To: References: Message-ID: I will, as soon as possible... Scarica Outlook per Android ________________________________ From: Matthew Knepley Sent: Tuesday, December 7, 2021 7:25:43 PM To: Marco Cisternino Cc: petsc-users Subject: Re: [petsc-users] Nullspaces On Tue, Dec 7, 2021 at 11:19 AM Marco Cisternino > wrote: Good morning, I?m still struggling with the Poisson equation with Neumann BCs. I discretize the equation by finite volume method and I divide every line of the linear system by the volume of the cell. I could avoid this division, but I?m trying to understand. My mesh is not uniform, i.e. cells have different volumes (it is an octree mesh). Moreover, in my computational domain there are 2 separated sub-domains. I build the null space and then I use MatNullSpaceTest to check it. If I do this: MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace); It works This produces the normalized constant vector. If I do this: Vec nsp; VecDuplicate(m_rhs, &nsp); VecSet(nsp,1.0); VecNormalize(nsp, nullptr); MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace); It does not work This is also the normalized constant vector. 
So you are saying that these two vectors give different results with MatNullSpaceTest()? Something must be wrong in the code. Can you send a minimal example of this? I will go through and debug it. Thanks, Matt Probably, I have wrong expectations, but should not it be the same? Thanks Marco Cisternino, PhD marco.cisternino at optimad.it ______________________ Optimad Engineering Srl Via Bligny 5, Torino, Italia. +3901119719782 www.optimad.it -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 7 13:08:34 2021 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 7 Dec 2021 14:08:34 -0500 Subject: [petsc-users] PETSc object creation/destruction with adaptive grid In-Reply-To: References: Message-ID: On Tue, Dec 7, 2021 at 12:54 PM Samuel Estes wrote: > Hi, > > I have a code implementing a finite element method with an adaptive grid. > Rather than destroying the PETSc objects (the Jacobian matrix and RHS > residual vector) every time the code refines the grid, we want to > (over-)allocate some "padding" for these objects so that we only > destroy/create new PETSc objects when the number of new nodes exceeds the > padding. In order to solve the linear system with padding, we just fill the > matrix with ones on the diagonal and fill the residual with zeros for the > padded parts. > > Is this a reasonable way to do things? > It can be. You have to be careful that the 1 on the diagonal is not too much different in magnitude from the other rows. > I'm sure that this problem has come up before but I haven't found anything > about it. I would like to know what other solutions people have come up > with when using PETSc for adaptive problems since PETSc does not support > dynamic reallocation of objects (i.e. I don't think you are allowed to > change the size of a PETSc matrix). > What I do is just rebuild the structures when the mesh changes. I am mostly looking at implicit things, so the solver and mesh adaptation cost swamp allocation and copying. > If this problem has come up before, can you please point me to a link to > the discussion? > I can't recall having it before, but maybe Satish can remember. > In particular, I'm curious if there is any way to solve the padded linear > system without having to fill in values for the unused parts of the system. > The linear solver obviously complains if there are rows of the matrix > without any values, I'm just wondering if it's possible to get the linear > solver to ignore them? > Actually, we do have a thing that does something similar right now. SNESVI_RS "freezes" the rows of the Jacobian associated with active constraints, We could do this same freezing with your rows I think. However, that is as much copying and allocation as just eliminating those rows, so maybe it does not buy you much. Thanks Matt > Thanks! > > Sam > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
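As an illustration of the padding idea discussed above (a sketch under stated assumptions, not a recommendation from the thread): suppose the padded rows are simply the global rows at or beyond a hypothetical count nActiveGlobal, and diagValue is chosen near the magnitude of the active diagonal entries, following the caution about the 1 on the diagonal. Then the padded part of the system can be filled like this:

#include <petscmat.h>

/* Sketch only, with hypothetical names: give every padded row a lone diagonal
   entry and a zero right-hand side so the solver returns zero there. */
PetscErrorCode FillPaddedRows(Mat A, Vec b, PetscInt nActiveGlobal, PetscScalar diagValue)
{
  PetscInt       i, rstart, rend;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    if (i < nActiveGlobal) continue;                                     /* a real unknown: leave it alone */
    ierr = MatSetValue(A, i, i, diagValue, INSERT_VALUES);CHKERRQ(ierr); /* lone diagonal entry */
    ierr = VecSetValue(b, i, 0.0, INSERT_VALUES);CHKERRQ(ierr);          /* zero right-hand side */
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = VecAssemblyBegin(b);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(b);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

For a matrix that is already assembled, MatZeroRows() gives much the same effect: it wipes the listed rows, keeps a chosen value on the diagonal, and can adjust the right-hand side accordingly.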
URL: From bsmith at petsc.dev Tue Dec 7 14:44:07 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 7 Dec 2021 15:44:07 -0500 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: References: <874k7ld5x1.fsf@jedbrown.org> <8F6A0614-C133-4E41-9F3D-E64E33603630@petsc.dev> Message-ID: When the preconditioned and non-preconditioner residual norms as so hugely different this usually indicates something is badly scaled or something "bad" is happening within the preconditioner. 0 KSP preconditioned resid norm 2.434689662304e-01 true resid norm 1.413757649058e+09 ||r(i)||/||b|| 1.442188243865e-04 .... 19 KSP preconditioned resid norm 4.505576228085e-15 true resid norm 1.810377537457e-02 ||r(i)||/||b|| 1.846784138157e-15 notice that at the end the true residual is not particularly small. Is your b really big and does the matrix have really large values in it? What are the two runs? Left and right preconditioning with ibcgs or something else? > On Dec 7, 2021, at 12:44 PM, Fischer, Greg A. wrote: > > Attached are outputs with these options. > > What should I make of these? > > From: Barry Smith > > Sent: Monday, December 6, 2021 2:16 PM > To: Fischer, Greg A. via petsc-users > > Cc: Fischer, Greg A. > > Subject: Re: [petsc-users] KSPBuildResidual and KSPType compatibility > > [External Email] > > What do you get for -ksp_type ibcgs -ksp_monitor -ksp_monitor_true_residual > > with and without -ksp_pc_side right ? > > > > On Dec 6, 2021, at 12:00 PM, Jed Brown > wrote: > > "Fischer, Greg A. via petsc-users" > writes: > > > Hello petsc-users, > > I would like to check convergence against the infinity norm, so I defined my own convergence test routine with KSPSetConvergenceTest. (I understand that it may be computationally expensive.) > > I would like to do this with the "ibcgs" method. When I use KSPBuildResidual and calculate the NORM_2 against the output vector, I get a value that differs from the 2-norm that gets passed into the convergence test function. However, when I switch to the "gcr" method, the value I calculate matches the function input value. > > IBCGS uses the preconditioned norm by default while GCR uses the unpreconditioned norm. You can use -ksp_norm_type unpreconditioned or KSPSetNormType() to make IBCGS use unpreconditioned. > > > Is the KSPBuildResidual function only compatible with a subset of the KSPType methods? If I want to evaluate convergence against the infinity norm, do I need to set KSPSetInitialGuessNonzero and continually re-start the solver with a lower tolerance values until I get a satisfactory value of the infinity norm? > > Thanks, > Greg > > > ________________________________ > > This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). > > > > This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. 
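For reference, the infinity-norm check discussed at the start of this thread can be written as a custom convergence test along these lines. This is a sketch, not Greg's actual code; the context struct and its name are made up. It ignores the rnorm the solver passes in, which for ibcgs defaults to a preconditioned 2-norm, and measures the true residual instead, typically at the cost of an extra matrix-vector product per call.

#include <petscksp.h>

typedef struct {
  PetscReal atol; /* user-chosen infinity-norm tolerance */
} InfNormTestCtx;

static PetscErrorCode KSPConvergedInfNorm(KSP ksp, PetscInt it, PetscReal rnorm, KSPConvergedReason *reason, void *ctx)
{
  InfNormTestCtx *user = (InfNormTestCtx *)ctx;
  Vec             r;
  PetscReal       rmax;
  PetscErrorCode  ierr;

  PetscFunctionBegin;
  ierr = KSPBuildResidual(ksp, NULL, NULL, &r);CHKERRQ(ierr); /* b - A x for the current iterate */
  ierr = VecNorm(r, NORM_INFINITY, &rmax);CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  *reason = (rmax < user->atol) ? KSP_CONVERGED_ATOL : KSP_CONVERGED_ITERATING;
  PetscFunctionReturn(0);
}

/* Registration, e.g. after KSPSetFromOptions():
     InfNormTestCtx ctx = {1.e-8};
     KSPSetConvergenceTest(ksp, KSPConvergedInfNorm, &ctx, NULL);
   Handling of the iteration limit is omitted here for brevity. */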
The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). > -------------- next part -------------- An HTML attachment was scrubbed... URL: From liyuansen89 at gmail.com Tue Dec 7 14:50:13 2021 From: liyuansen89 at gmail.com (Ning Li) Date: Tue, 7 Dec 2021 14:50:13 -0600 Subject: [petsc-users] install PETSc on windows In-Reply-To: References: <17158857-4047-ecc0-1331-645216114cbb@mcs.anl.gov> <84672efd-113b-2daf-85d1-e58d3812421d@mcs.anl.gov> <005201d7eb7e$a8c83a20$fa58ae60$@gmail.com> Message-ID: I rebuilt HYPRE with cmake on my windows laptop. But this time I got more linker problems. Could you have a look at these linker errors? Thanks [image: image.png] On Tue, Dec 7, 2021 at 10:28 AM Satish Balay wrote: > Yes - you need to build hypre with the same mpi, compilers (compiler > options) as petsc. > > Satish > > On Tue, 7 Dec 2021, Ning Li wrote: > > > I tried to use this new PETSc in my application, and got this HYPRE > related > > error when I built a solution in visual studio. > > [image: image.png] > > I have installed the latest HYPRE on my laptop and linked it to my > > application, but I disabled MPI option when I configured HYPRE . > > Is this why this error occurred? > > > > On Tue, Dec 7, 2021 at 9:31 AM Satish Balay wrote: > > > > > Your build is with msmpi - but mpiexec from openmpi got used. > > > > > > You can try compiling and running examples manually [with the correct > > > mpiexec] > > > > > > Satish > > > > > > On Tue, 7 Dec 2021, liyuansen89 at gmail.com wrote: > > > > > > > Hi Satish, > > > > > > > > I have another question. After I run the check command, I got the > > > following > > > > output (the attached file), have I successfully installed the > library? Is > > > > there any error? > > > > > > > > Thanks > > > > > > > > > > > > > > > > -----Original Message----- > > > > From: Satish Balay > > > > Sent: Monday, December 6, 2021 6:59 PM > > > > To: Ning Li > > > > Cc: petsc-users > > > > Subject: Re: [petsc-users] install PETSc on windows > > > > > > > > Glad it worked. Thanks for the update. > > > > > > > > Satish > > > > > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > > > > > Thanks for your reply. > > > > > > > > > > After I added '--with-shared-libraries=0', the configuration stage > > > > > passed and now it is executing the 'make' command! > > > > > > > > > > Thanks very much > > > > > > > > > > On Mon, Dec 6, 2021 at 5:21 PM Satish Balay > wrote: > > > > > > > > > > > >>> > > > > > > Executing: /home/LiNin/petsc-3.16.1/lib/petsc/bin/win32fe/win32fe > > > > > > ifort -o > > > > > > > > > > > > > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/c > > > > onftest.exe > > > > > > -MD -O3 -fpp > > > > > > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.lib > > > > > > raries/conftest.o /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ > > > > > > SDKs/MPI/Lib/x64/msmpi.lib /cygdrive/c/Program\ Files\ > > > > > > \(x86\)/Microsoft\ SDKs/MPI/Lib/x64/msmpifec.lib Ws2_32.lib > > > > > > stdout: LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts > with > > > > > > use of other libs; use /NODEFAULTLIB:library <<< > > > > > > > > > > > > I'm not sure why this link command is giving this error. Can you > > > > > > retry with '--with-shared-libraries=0'? 
> > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > > > > > > > > > Howdy, > > > > > > > > > > > > > > I am trying to install PETSc on windows with cygwin but got an > mpi > > > > error. > > > > > > > Could you have a look at my issue and give me some > instructions? > > > > > > > > > > > > > > Here is the information about my environment: > > > > > > > 1. operation system: windows 11 > > > > > > > 2. visual studio version: 2019 > > > > > > > 3. intel one API toolkit is installed 4. Microsoft MS MPI is > > > > > > > installed. > > > > > > > 5. Intel MPI is uninstalled. > > > > > > > 6. PETSc version: 3.16.1 > > > > > > > > > > > > > > this is my configuration: > > > > > > > ./configure --with-cc='win32fe cl' --with-fc='win32fe ifort' > > > > > > > --prefix=~/petsc-opt/ --PETSC_ARCH=windows-intel-opt > > > > > > > --with-mpi-include=['/cygdrive/c/Program Files (x86)/Microsoft > > > > > > > SDKs/MPI/Include','/cygdrive/c/Program Files (x86)/Microsoft > > > > > > > SDKs/MPI/Include/x64'] --with-mpi-lib=['/cygdrive/c/Program > Files > > > > > > > (x86)/Microsoft > SDKs/MPI/Lib/x64/msmpi.lib','/cygdrive/c/Program > > > > > > > Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib'] > > > > > > > --with-blas-lapack-lib=['/cygdrive/c/Program Files > > > > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_intel_lp64.lib','/cy > > > > > > gdrive/c/Program > > > > > > > Files > > > > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_core.lib','/cygdrive > > > > > > /c/Program > > > > > > > Files > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_sequential.lib'] > > > > > > > --with-scalapack-include='/cygdrive/c/Program Files > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/include' > > > > > > > --with-scalapack-lib=['/cygdrive/c/Program Files > > > > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_scalapack_lp64.lib', > > > > > > '/cygdrive/c/Program > > > > > > > Files > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_blacs_msmpi_lp64.l > > > > > > > ib'] > > > > > > > --with-fortran-interfaces=1 --with-debugging=0 > > > > > > > > > > > > > > attached is the configure.log file. > > > > > > > > > > > > > > Thanks > > > > > > > > > > > > > > Ning Li > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 113371 bytes Desc: not available URL: From fischega at westinghouse.com Tue Dec 7 15:25:15 2021 From: fischega at westinghouse.com (Fischer, Greg A.) Date: Tue, 7 Dec 2021 21:25:15 +0000 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: References: <874k7ld5x1.fsf@jedbrown.org> <8F6A0614-C133-4E41-9F3D-E64E33603630@petsc.dev> Message-ID: I've attached typical examples of "A" and "b". Do these qualify as large values? The two runs are with and without "-ksp_pc_side right". From: Barry Smith Sent: Tuesday, December 7, 2021 3:44 PM To: Fischer, Greg A. Cc: Fischer, Greg A. via petsc-users Subject: Re: [petsc-users] KSPBuildResidual and KSPType compatibility [External Email] When the preconditioned and non-preconditioner residual norms as so hugely different this usually indicates something is badly scaled or something "bad" is happening within the preconditioner. 
0 KSP preconditioned resid norm 2.434689662304e-01 true resid norm 1.413757649058e+09 ||r(i)||/||b|| 1.442188243865e-04 .... 19 KSP preconditioned resid norm 4.505576228085e-15 true resid norm 1.810377537457e-02 ||r(i)||/||b|| 1.846784138157e-15 notice that at the end the true residual is not particularly small. Is your b really big and does the matrix have really large values in it? What are the two runs? Left and right preconditioning with ibcgs or something else? On Dec 7, 2021, at 12:44 PM, Fischer, Greg A. > wrote: Attached are outputs with these options. What should I make of these? From: Barry Smith > Sent: Monday, December 6, 2021 2:16 PM To: Fischer, Greg A. via petsc-users > Cc: Fischer, Greg A. > Subject: Re: [petsc-users] KSPBuildResidual and KSPType compatibility [External Email] What do you get for -ksp_type ibcgs -ksp_monitor -ksp_monitor_true_residual with and without -ksp_pc_side right ? On Dec 6, 2021, at 12:00 PM, Jed Brown > wrote: "Fischer, Greg A. via petsc-users" > writes: Hello petsc-users, I would like to check convergence against the infinity norm, so I defined my own convergence test routine with KSPSetConvergenceTest. (I understand that it may be computationally expensive.) I would like to do this with the "ibcgs" method. When I use KSPBuildResidual and calculate the NORM_2 against the output vector, I get a value that differs from the 2-norm that gets passed into the convergence test function. However, when I switch to the "gcr" method, the value I calculate matches the function input value. IBCGS uses the preconditioned norm by default while GCR uses the unpreconditioned norm. You can use -ksp_norm_type unpreconditioned or KSPSetNormType() to make IBCGS use unpreconditioned. Is the KSPBuildResidual function only compatible with a subset of the KSPType methods? If I want to evaluate convergence against the infinity norm, do I need to set KSPSetInitialGuessNonzero and continually re-start the solver with a lower tolerance values until I get a satisfactory value of the infinity norm? Thanks, Greg ________________________________ This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). ________________________________ This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). ________________________________ This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. 
The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: typ_A.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: typ_b.txt URL: From jed at jedbrown.org Tue Dec 7 15:46:19 2021 From: jed at jedbrown.org (Jed Brown) Date: Tue, 07 Dec 2021 14:46:19 -0700 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: References: <874k7ld5x1.fsf@jedbrown.org> <8F6A0614-C133-4E41-9F3D-E64E33603630@petsc.dev> Message-ID: <87ilw0ay1g.fsf@jedbrown.org> "Fischer, Greg A. via petsc-users" writes: > I've attached typical examples of "A" and "b". Do these qualify as large values? Yeah, the matrix values are of order 1e12. Usually modelers choose units so that these are closer to order 1. (One can do this by formal non-dimensionalization, but just choosing representative mass, length, and time units is often more convenient.) From bsmith at petsc.dev Tue Dec 7 15:48:26 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 7 Dec 2021 16:48:26 -0500 Subject: [petsc-users] KSPBuildResidual and KSPType compatibility In-Reply-To: References: <874k7ld5x1.fsf@jedbrown.org> <8F6A0614-C133-4E41-9F3D-E64E33603630@petsc.dev> Message-ID: <15396A20-FFE3-47B4-B7D7-C1BE32E08E8D@petsc.dev> So left preconditioning converges fine, the true residual gets as small as one could expect. Right preconditioning fails completely; I have no explanation for this, could be a bug in the PETSc implementation for right preconditioning with this method. What do you get with -ksp_type bcgs both left and right preconditioning? Barry > On Dec 7, 2021, at 4:25 PM, Fischer, Greg A. wrote: > > I?ve attached typical examples of ?A? and ?b?. Do these qualify as large values? > > The two runs are with and without ?-ksp_pc_side right?. > > From: Barry Smith > > Sent: Tuesday, December 7, 2021 3:44 PM > To: Fischer, Greg A. > > Cc: Fischer, Greg A. via petsc-users > > Subject: Re: [petsc-users] KSPBuildResidual and KSPType compatibility > > [External Email] > > When the preconditioned and non-preconditioner residual norms as so hugely different this usually indicates something is badly scaled or something "bad" is happening within the preconditioner. > > 0 KSP preconditioned resid norm 2.434689662304e-01 true resid norm 1.413757649058e+09 ||r(i)||/||b|| 1.442188243865e-04 > .... > 19 KSP preconditioned resid norm 4.505576228085e-15 true resid norm 1.810377537457e-02 ||r(i)||/||b|| 1.846784138157e-15 > > notice that at the end the true residual is not particularly small. Is your b really big and does the matrix have really large values in it? > > What are the two runs? Left and right preconditioning with ibcgs or something else? > > > On Dec 7, 2021, at 12:44 PM, Fischer, Greg A. > wrote: > > Attached are outputs with these options. > > What should I make of these? > > From: Barry Smith > > Sent: Monday, December 6, 2021 2:16 PM > To: Fischer, Greg A. via petsc-users > > Cc: Fischer, Greg A. 
> > Subject: Re: [petsc-users] KSPBuildResidual and KSPType compatibility > > [External Email] > > What do you get for -ksp_type ibcgs -ksp_monitor -ksp_monitor_true_residual > > with and without -ksp_pc_side right ? > > > > > On Dec 6, 2021, at 12:00 PM, Jed Brown > wrote: > > "Fischer, Greg A. via petsc-users" > writes: > > > > Hello petsc-users, > > I would like to check convergence against the infinity norm, so I defined my own convergence test routine with KSPSetConvergenceTest. (I understand that it may be computationally expensive.) > > I would like to do this with the "ibcgs" method. When I use KSPBuildResidual and calculate the NORM_2 against the output vector, I get a value that differs from the 2-norm that gets passed into the convergence test function. However, when I switch to the "gcr" method, the value I calculate matches the function input value. > > IBCGS uses the preconditioned norm by default while GCR uses the unpreconditioned norm. You can use -ksp_norm_type unpreconditioned or KSPSetNormType() to make IBCGS use unpreconditioned. > > > > Is the KSPBuildResidual function only compatible with a subset of the KSPType methods? If I want to evaluate convergence against the infinity norm, do I need to set KSPSetInitialGuessNonzero and continually re-start the solver with a lower tolerance values until I get a satisfactory value of the infinity norm? > > Thanks, > Greg > > > ________________________________ > > This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). > > > > This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). > > > > > This e-mail may contain proprietary information of the sending organization. Any unauthorized or improper disclosure, copying, distribution, or use of the contents of this e-mail and attached document(s) is prohibited. The information contained in this e-mail and attached document(s) is intended only for the personal and private use of the recipient(s) named above. If you have received this communication in error, please notify the sender immediately by email and delete the original e-mail and attached document(s). > -------------- next part -------------- An HTML attachment was scrubbed... URL: From faraz_hussain at yahoo.com Tue Dec 7 21:05:51 2021 From: faraz_hussain at yahoo.com (Faraz Hussain) Date: Wed, 8 Dec 2021 03:05:51 +0000 (UTC) Subject: [petsc-users] Tips on integrating MPI ksp petsc into my application? 
In-Reply-To: References: <2030978811.184065.1638849869029.ref@mail.yahoo.com> <2030978811.184065.1638849869029@mail.yahoo.com> Message-ID: <2025431869.573432.1638932751081@mail.yahoo.com> Thanks, I took a look at ex10.c in ksp/tutorials . It seems to do as you wrote, "it efficiently gets the matrix from the file spread out over all the ranks.". However, in my application I only want rank 0 to read and assemble the matrix. I do not want other ranks trying to get the matrix data. The reason is the matrix is already in memory when my application is ready to call the petsc solver. So if I am running with multiple ranks, I don't want all ranks assembling the matrix.? This would require a total re-write of my application which is not possible . I realize this may sounds confusing. If so, I'll see if I can create an example that shows the issue. On Tuesday, December 7, 2021, 10:13:17 AM EST, Barry Smith wrote: ? If you use MatLoad() it never has the entire matrix on a single rank at the same time; it efficiently gets the matrix from the file spread out over all the ranks. > On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users wrote: > > I am studying the examples but it seems all ranks read the full matrix. Is there an MPI example where only rank 0 reads the matrix? > > I don't want all ranks to read my input matrix and consume a lot of memory allocating data for the arrays. > > I have worked with Intel's cluster sparse solver and their documentation states: > > " Most of the input parameters must be set on the master MPI process only, and ignored on other processes. Other MPI processes get all required data from the master MPI process using the MPI communicator, comm. " From knepley at gmail.com Tue Dec 7 21:17:53 2021 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 7 Dec 2021 22:17:53 -0500 Subject: [petsc-users] Tips on integrating MPI ksp petsc into my application? In-Reply-To: <2025431869.573432.1638932751081@mail.yahoo.com> References: <2030978811.184065.1638849869029.ref@mail.yahoo.com> <2030978811.184065.1638849869029@mail.yahoo.com> <2025431869.573432.1638932751081@mail.yahoo.com> Message-ID: On Tue, Dec 7, 2021 at 10:06 PM Faraz Hussain via petsc-users < petsc-users at mcs.anl.gov> wrote: > Thanks, I took a look at ex10.c in ksp/tutorials . It seems to do as you > wrote, "it efficiently gets the matrix from the file spread out over all > the ranks.". > > However, in my application I only want rank 0 to read and assemble the > matrix. I do not want other ranks trying to get the matrix data. The reason > is the matrix is already in memory when my application is ready to call the > petsc solver. > > So if I am running with multiple ranks, I don't want all ranks assembling > the matrix. This would require a total re-write of my application which is > not possible . I realize this may sounds confusing. If so, I'll see if I > can create an example that shows the issue. > MPI is distributed memory parallelism. If we want to use multiple ranks, then parts of the matrix must be in the different memories of the different processes. If you already assemble your matrix on process 0, then you need to communicate it to the other processes, perhaps using MatGetSubmatrix(). THanks, Matt > On Tuesday, December 7, 2021, 10:13:17 AM EST, Barry Smith < > bsmith at petsc.dev> wrote: > > > > > > > If you use MatLoad() it never has the entire matrix on a single rank at > the same time; it efficiently gets the matrix from the file spread out over > all the ranks. 
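For reference, the MatLoad() path that ex10.c follows looks roughly like the sketch below; the file name "matrix.dat" and the bare PETSC_COMM_WORLD/AIJ choices are placeholders rather than anything from this exchange. The point is that each rank receives only its own block of rows while the file is being read:

Mat            A;
PetscViewer    fd;
PetscErrorCode ierr;

ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"matrix.dat",FILE_MODE_READ,&fd);CHKERRQ(ierr);
ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
ierr = MatSetType(A,MATAIJ);CHKERRQ(ierr);          /* becomes MPIAIJ on more than one rank */
ierr = MatSetFromOptions(A);CHKERRQ(ierr);
ierr = MatLoad(A,fd);CHKERRQ(ierr);                 /* rows are distributed as they are read */
ierr = PetscViewerDestroy(&fd);CHKERRQ(ierr);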
> > > On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > > > I am studying the examples but it seems all ranks read the full matrix. > Is there an MPI example where only rank 0 reads the matrix? > > > > I don't want all ranks to read my input matrix and consume a lot of > memory allocating data for the arrays. > > > > I have worked with Intel's cluster sparse solver and their documentation > states: > > > > " Most of the input parameters must be set on the master MPI process > only, and ignored on other processes. Other MPI processes get all required > data from the master MPI process using the MPI communicator, comm. " > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From faraz_hussain at yahoo.com Tue Dec 7 21:25:11 2021 From: faraz_hussain at yahoo.com (Faraz Hussain) Date: Wed, 8 Dec 2021 03:25:11 +0000 (UTC) Subject: [petsc-users] Tips on integrating MPI ksp petsc into my application? In-Reply-To: References: <2030978811.184065.1638849869029.ref@mail.yahoo.com> <2030978811.184065.1638849869029@mail.yahoo.com> <2025431869.573432.1638932751081@mail.yahoo.com> Message-ID: <612622419.570224.1638933911247@mail.yahoo.com> Thanks, that makes sense. I guess I was hoping petsc ksp is like intel's cluster sparse solver where it handles distributing the matrix to the other ranks for you. It sounds like that is not the case and I need to manually distribute the matrix to the ranks? On Tuesday, December 7, 2021, 10:18:04 PM EST, Matthew Knepley wrote: On Tue, Dec 7, 2021 at 10:06 PM Faraz Hussain via petsc-users wrote: > Thanks, I took a look at ex10.c in ksp/tutorials . It seems to do as you wrote, "it efficiently gets the matrix from the file spread out over all the ranks.". > > However, in my application I only want rank 0 to read and assemble the matrix. I do not want other ranks trying to get the matrix data. The reason is the matrix is already in memory when my application is ready to call the petsc solver. > > So if I am running with multiple ranks, I don't want all ranks assembling the matrix.? This would require a total re-write of my application which is not possible . I realize this may sounds confusing. If so, I'll see if I can create an example that shows the issue. MPI is distributed memory parallelism. If we want to use multiple ranks, then parts of the matrix must be in the different memories of the different processes. If you already?assemble your matrix on process 0, then you need to communicate it to the other processes, perhaps using MatGetSubmatrix(). ? THanks, ? ? Matt ? >??On Tuesday, December 7, 2021, 10:13:17 AM EST, Barry Smith wrote: > > > > > > > ? If you use MatLoad() it never has the entire matrix on a single rank at the same time; it efficiently gets the matrix from the file spread out over all the ranks. > >> On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users wrote: >> >> I am studying the examples but it seems all ranks read the full matrix. Is there an MPI example where only rank 0 reads the matrix? >> >> I don't want all ranks to read my input matrix and consume a lot of memory allocating data for the arrays. 
>> >> I have worked with Intel's cluster sparse solver and their documentation states: >> >> " Most of the input parameters must be set on the master MPI process only, and ignored on other processes. Other MPI processes get all required data from the master MPI process using the MPI communicator, comm. " > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ From junchao.zhang at gmail.com Tue Dec 7 21:33:12 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 7 Dec 2021 21:33:12 -0600 Subject: [petsc-users] Tips on integrating MPI ksp petsc into my application? In-Reply-To: <2025431869.573432.1638932751081@mail.yahoo.com> References: <2030978811.184065.1638849869029.ref@mail.yahoo.com> <2030978811.184065.1638849869029@mail.yahoo.com> <2025431869.573432.1638932751081@mail.yahoo.com> Message-ID: On Tue, Dec 7, 2021 at 9:06 PM Faraz Hussain via petsc-users < petsc-users at mcs.anl.gov> wrote: > Thanks, I took a look at ex10.c in ksp/tutorials . It seems to do as you > wrote, "it efficiently gets the matrix from the file spread out over all > the ranks.". > > However, in my application I only want rank 0 to read and assemble the > matrix. I do not want other ranks trying to get the matrix data. The reason > is the matrix is already in memory when my application is ready to call the > petsc solver. What is the data structure of your matrix in memory? > > > So if I am running with multiple ranks, I don't want all ranks assembling > the matrix. This would require a total re-write of my application which is > not possible . I realize this may sounds confusing. If so, I'll see if I > can create an example that shows the issue. > > > > > > On Tuesday, December 7, 2021, 10:13:17 AM EST, Barry Smith < > bsmith at petsc.dev> wrote: > > > > > > > If you use MatLoad() it never has the entire matrix on a single rank at > the same time; it efficiently gets the matrix from the file spread out over > all the ranks. > > > On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > > > I am studying the examples but it seems all ranks read the full matrix. > Is there an MPI example where only rank 0 reads the matrix? > > > > I don't want all ranks to read my input matrix and consume a lot of > memory allocating data for the arrays. > > > > I have worked with Intel's cluster sparse solver and their documentation > states: > > > > " Most of the input parameters must be set on the master MPI process > only, and ignored on other processes. Other MPI processes get all required > data from the master MPI process using the MPI communicator, comm. " > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 7 21:42:01 2021 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 7 Dec 2021 22:42:01 -0500 Subject: [petsc-users] Tips on integrating MPI ksp petsc into my application? In-Reply-To: <612622419.570224.1638933911247@mail.yahoo.com> References: <2030978811.184065.1638849869029.ref@mail.yahoo.com> <2030978811.184065.1638849869029@mail.yahoo.com> <2025431869.573432.1638932751081@mail.yahoo.com> <612622419.570224.1638933911247@mail.yahoo.com> Message-ID: On Tue, Dec 7, 2021 at 10:25 PM Faraz Hussain wrote: > Thanks, that makes sense. 
I guess I was hoping petsc ksp is like intel's > cluster sparse solver where it handles distributing the matrix to the other > ranks for you. > > It sounds like that is not the case and I need to manually distribute the > matrix to the ranks? > You can call https://petsc.org/main/docs/manualpages/Mat/MatCreateSubMatricesMPI.html to distribute the matrix. Thanks, Matt > On Tuesday, December 7, 2021, 10:18:04 PM EST, Matthew Knepley < > knepley at gmail.com> wrote: > > > > > > On Tue, Dec 7, 2021 at 10:06 PM Faraz Hussain via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Thanks, I took a look at ex10.c in ksp/tutorials . It seems to do as you > wrote, "it efficiently gets the matrix from the file spread out over all > the ranks.". > > > > However, in my application I only want rank 0 to read and assemble the > matrix. I do not want other ranks trying to get the matrix data. The reason > is the matrix is already in memory when my application is ready to call the > petsc solver. > > > > So if I am running with multiple ranks, I don't want all ranks > assembling the matrix. This would require a total re-write of my > application which is not possible . I realize this may sounds confusing. If > so, I'll see if I can create an example that shows the issue. > > MPI is distributed memory parallelism. If we want to use multiple ranks, > then parts of the matrix must > be in the different memories of the different processes. If you > already assemble your matrix on process > 0, then you need to communicate it to the other processes, perhaps using > MatGetSubmatrix(). > > THanks, > > Matt > > > On Tuesday, December 7, 2021, 10:13:17 AM EST, Barry Smith < > bsmith at petsc.dev> wrote: > > > > > > > > > > > > > > If you use MatLoad() it never has the entire matrix on a single rank > at the same time; it efficiently gets the matrix from the file spread out > over all the ranks. > > > >> On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> > >> I am studying the examples but it seems all ranks read the full matrix. > Is there an MPI example where only rank 0 reads the matrix? > >> > >> I don't want all ranks to read my input matrix and consume a lot of > memory allocating data for the arrays. > >> > >> I have worked with Intel's cluster sparse solver and their > documentation states: > >> > >> " Most of the input parameters must be set on the master MPI process > only, and ignored on other processes. Other MPI processes get all required > data from the master MPI process using the MPI communicator, comm. " > > > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From faraz_hussain at yahoo.com Tue Dec 7 22:04:28 2021 From: faraz_hussain at yahoo.com (Faraz Hussain) Date: Wed, 8 Dec 2021 04:04:28 +0000 (UTC) Subject: [petsc-users] Tips on integrating MPI ksp petsc into my application? 
In-Reply-To: References: <2030978811.184065.1638849869029.ref@mail.yahoo.com> <2030978811.184065.1638849869029@mail.yahoo.com> <2025431869.573432.1638932751081@mail.yahoo.com> Message-ID: <1421478133.591975.1638936268044@mail.yahoo.com> The matrix in memory is in IJV (Spooles ) or CSR3 ( Pardiso ). The application was written to use a variety of different direct solvers but Spooles and Pardiso are what I am most familiar with. On Tuesday, December 7, 2021, 10:33:24 PM EST, Junchao Zhang wrote: On Tue, Dec 7, 2021 at 9:06 PM Faraz Hussain via petsc-users wrote: > Thanks, I took a look at ex10.c in ksp/tutorials . It seems to do as you wrote, "it efficiently gets the matrix from the file spread out over all the ranks.". > > However, in my application I only want rank 0 to read and assemble the matrix. I do not want other ranks trying to get the matrix data. The reason is the matrix is already in memory when my application is ready to call the petsc solver. What is the data structure of your matrix in memory? ? >?? > > So if I am running with multiple ranks, I don't want all ranks assembling the matrix.? This would require a total re-write of my application which is not possible . I realize this may sounds confusing. If so, I'll see if I can create an example that shows the issue. > > > > > > On Tuesday, December 7, 2021, 10:13:17 AM EST, Barry Smith wrote: > > > > > > > ? If you use MatLoad() it never has the entire matrix on a single rank at the same time; it efficiently gets the matrix from the file spread out over all the ranks. > >> On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users wrote: >> >> I am studying the examples but it seems all ranks read the full matrix. Is there an MPI example where only rank 0 reads the matrix? >> >> I don't want all ranks to read my input matrix and consume a lot of memory allocating data for the arrays. >> >> I have worked with Intel's cluster sparse solver and their documentation states: >> >> " Most of the input parameters must be set on the master MPI process only, and ignored on other processes. Other MPI processes get all required data from the master MPI process using the MPI communicator, comm. " > > From junchao.zhang at gmail.com Tue Dec 7 23:32:08 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 7 Dec 2021 23:32:08 -0600 Subject: [petsc-users] Tips on integrating MPI ksp petsc into my application? In-Reply-To: <1421478133.591975.1638936268044@mail.yahoo.com> References: <2030978811.184065.1638849869029.ref@mail.yahoo.com> <2030978811.184065.1638849869029@mail.yahoo.com> <2025431869.573432.1638932751081@mail.yahoo.com> <1421478133.591975.1638936268044@mail.yahoo.com> Message-ID: On Tue, Dec 7, 2021 at 10:04 PM Faraz Hussain wrote: > The matrix in memory is in IJV (Spooles ) or CSR3 ( Pardiso ). The > application was written to use a variety of different direct solvers but > Spooles and Pardiso are what I am most familiar with. > I assume the CSR3 has the a, i, j arrays used in petsc's MATAIJ. You can create a MPIAIJ matrix A with MatCreateMPIAIJWithArrays , with only rank 0 providing data (i.e., other ranks just have m=n=0, i=j=a=NULL) Then you call MatGetSubMatrix (A,isrow,iscol,reuse,&B) to redistribute the imbalanced A to a balanced matrix B. You can use PetscLayoutCreate() and friends to create a row map and a column map (as if they are B's) and use them to get ranges of rows/cols each rank wants to own, and then build the isrow, iscol with ISCreateStride() My approach is kind of verbose. 
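A rough sketch of the approach just described, assuming rank 0 holds 0-based CSR arrays ia/ja/vals for an N x N matrix (Pardiso-style CSR3 is often 1-based, so the indices may need shifting first). The helper name DistributeCSRFromRankZero is made up for illustration; MatCreateSubMatrix() is the current name of MatGetSubMatrix(), and PetscSplitOwnership() stands in for the PetscLayout bookkeeping mentioned above:

#include <petscmat.h>

/* Redistribute a CSR matrix held entirely on rank 0 into a load-balanced
   MPIAIJ matrix B.  Only rank 0 needs to pass real ia/ja/vals arrays; the
   arrays are copied by MatCreateMPIAIJWithArrays() and may be freed after. */
PetscErrorCode DistributeCSRFromRankZero(MPI_Comm comm,PetscInt N,const PetscInt *ia,const PetscInt *ja,const PetscScalar *vals,Mat *B)
{
  Mat            A;
  IS             isrow,iscol;
  PetscInt       mloc = PETSC_DECIDE,rstart,rend,rowptr0[1] = {0};
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MPI_Comm_rank(comm,&rank);CHKERRMPI(ierr);
  if (rank == 0) {  /* all N rows (and columns) initially owned by rank 0 */
    ierr = MatCreateMPIAIJWithArrays(comm,N,N,N,N,ia,ja,vals,&A);CHKERRQ(ierr);
  } else {          /* empty local part; the row-pointer array still needs one entry, 0 */
    ierr = MatCreateMPIAIJWithArrays(comm,0,0,N,N,rowptr0,NULL,NULL,&A);CHKERRQ(ierr);
  }

  /* Balanced ownership range for B, the same split a PetscLayout would pick */
  ierr = PetscSplitOwnership(comm,&mloc,&N);CHKERRQ(ierr);
  ierr = MPI_Scan(&mloc,&rend,1,MPIU_INT,MPI_SUM,comm);CHKERRMPI(ierr);
  rstart = rend - mloc;

  ierr = ISCreateStride(comm,mloc,rstart,1,&isrow);CHKERRQ(ierr);   /* rows this rank takes */
  ierr = ISCreateStride(comm,mloc,rstart,1,&iscol);CHKERRQ(ierr);   /* columns it will own in B */
  ierr = MatCreateSubMatrix(A,isrow,iscol,MAT_INITIAL_MATRIX,B);CHKERRQ(ierr);

  ierr = ISDestroy(&isrow);CHKERRQ(ierr);
  ierr = ISDestroy(&iscol);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The intermediate matrix A is deliberately imbalanced (everything sits on rank 0); it exists only long enough for MatCreateSubMatrix() to scatter its entries into the balanced matrix B.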
I would let Jed and Matt comment whether there are better ones. > > > > > > On Tuesday, December 7, 2021, 10:33:24 PM EST, Junchao Zhang < > junchao.zhang at gmail.com> wrote: > > > > > > > > On Tue, Dec 7, 2021 at 9:06 PM Faraz Hussain via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Thanks, I took a look at ex10.c in ksp/tutorials . It seems to do as you > wrote, "it efficiently gets the matrix from the file spread out over all > the ranks.". > > > > However, in my application I only want rank 0 to read and assemble the > matrix. I do not want other ranks trying to get the matrix data. The reason > is the matrix is already in memory when my application is ready to call the > petsc solver. > What is the data structure of your matrix in memory? > > > > > > > So if I am running with multiple ranks, I don't want all ranks > assembling the matrix. This would require a total re-write of my > application which is not possible . I realize this may sounds confusing. If > so, I'll see if I can create an example that shows the issue. > > > > > > > > > > > > On Tuesday, December 7, 2021, 10:13:17 AM EST, Barry Smith < > bsmith at petsc.dev> wrote: > > > > > > > > > > > > > > If you use MatLoad() it never has the entire matrix on a single rank > at the same time; it efficiently gets the matrix from the file spread out > over all the ranks. > > > >> On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> > >> I am studying the examples but it seems all ranks read the full matrix. > Is there an MPI example where only rank 0 reads the matrix? > >> > >> I don't want all ranks to read my input matrix and consume a lot of > memory allocating data for the arrays. > >> > >> I have worked with Intel's cluster sparse solver and their > documentation states: > >> > >> " Most of the input parameters must be set on the master MPI process > only, and ignored on other processes. Other MPI processes get all required > data from the master MPI process using the MPI communicator, comm. " > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From quentin.chevalier at polytechnique.edu Wed Dec 8 03:07:33 2021 From: quentin.chevalier at polytechnique.edu (Quentin Chevalier) Date: Wed, 8 Dec 2021 10:07:33 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: @all thanks for your time it's heartening to see a lively community. @Barry I've restarted the container and grabbed the .log file directly after the docker magic. I've tried a make check, it unsurprisingly spews the same answer as before : Running check examples to verify correct installation Using PETSC_DIR=/usr/local/petsc and PETSC_ARCH=linux-gnu-real-32 /usr/bin/bash: line 1: cd: src/snes/tutorials: No such file or directory /usr/bin/bash: line 1: cd: src/snes/tutorials: No such file or directory gmake[3]: *** No rule to make target 'testex19'. Stop. gmake[2]: *** [makefile:155: check_build] Error 2 @Matthew ok will do, but I think @Lawrence has already provided that answer. It's possible to change the dockerfile and recompute the dolfinx image with hdf5, only it is a time-consuming process. Quentin [image: cid:image003.jpg at 01D690CB.3B3FDC10] Quentin CHEVALIER ? 
IA parcours recherche LadHyX - Ecole polytechnique __________ On Tue, 7 Dec 2021 at 19:16, Matthew Knepley wrote: > On Tue, Dec 7, 2021 at 9:43 AM Quentin Chevalier < > quentin.chevalier at polytechnique.edu> wrote: > >> @Matthew, as stated before, error output is unchanged, i.e.the python >> command below produces the same traceback : >> >> # python3 -c "from petsc4py import PETSc; >> PETSc.Viewer().createHDF5('d.h5')" >> Traceback (most recent call last): >> File "", line 1, in >> File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 >> petsc4py.PETSc.Error: error code 86 >> [0] PetscViewerSetType() at >> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >> [0] Unknown type. Check for miss-spelling or missing package: >> https://petsc.org/release/install/install/#external-packages >> [0] Unknown PetscViewer type given: hdf5 >> > > The reason I wanted the output was that the C output shows the configure > options that the PETSc library > was built with, However, Python seems to be eating this, so I cannot check. > > It seems like using this container is counter-productive. If it was built > correctly, making these changes would be trivial. > Send mail to FEniCS (I am guessing Chris Richardson maintains this), and > ask how they intend people to change these > options. > > Thanks, > > Matt. > > >> @Wence that makes sense. I'd assumed that the original PETSc had been >> overwritten, and if the linking has gone wrong I'm surprised anything >> happens with petsc4py at all. >> >> Your tentative command gave : >> >> ERROR: Invalid requirement: '/usr/local/petsc/src/binding/petsc4py' >> Hint: It looks like a path. File >> '/usr/local/petsc/src/binding/petsc4py' does not exist. >> >> So I tested that global variables PETSC_ARCH & PETSC_DIR were correct >> then ran "pip install petsc4py" to restart petsc4py from scratch. This >> gives rise to a different error : >> # python3 -c "from petsc4py import PETSc" >> Traceback (most recent call last): >> File "", line 1, in >> File "/usr/local/lib/python3.9/dist-packages/petsc4py/PETSc.py", >> line 3, in >> PETSc = ImportPETSc(ARCH) >> File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >> line 29, in ImportPETSc >> return Import('petsc4py', 'PETSc', path, arch) >> File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >> line 73, in Import >> module = import_module(pkg, name, path, arch) >> File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >> line 58, in import_module >> with f: return imp.load_module(fullname, f, fn, info) >> File "/usr/lib/python3.9/imp.py", line 242, in load_module >> return load_dynamic(name, filename, file) >> File "/usr/lib/python3.9/imp.py", line 342, in load_dynamic >> return _load(spec) >> ImportError: >> /usr/local/lib/python3.9/dist-packages/petsc4py/lib/linux-gnu-real-32/ >> PETSc.cpython-39-x86_64-linux-gnu.so: >> undefined symbol: petscstack >> >> Not sure that it a step forward ; looks like petsc4py is broken now. >> >> Quentin >> >> On Tue, 7 Dec 2021 at 14:58, Matthew Knepley wrote: >> > >> > On Tue, Dec 7, 2021 at 8:26 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> Ok my bad, that log corresponded to a tentative --download-hdf5. This >> >> log corresponds to the commands given above and has --with-hdf5 in its >> >> options. >> > >> > >> > Okay, this configure was successful and found HDF5 >> > >> >> >> >> The whole process still results in the same error. 
>> > >> > >> > Now send me the complete error output with this PETSc. >> > >> > Thanks, >> > >> > Matt >> > >> >> >> >> Quentin >> >> >> >> >> >> >> >> Quentin CHEVALIER ? IA parcours recherche >> >> >> >> LadHyX - Ecole polytechnique >> >> >> >> __________ >> >> >> >> >> >> >> >> On Tue, 7 Dec 2021 at 13:59, Matthew Knepley >> wrote: >> >> > >> >> > On Tue, Dec 7, 2021 at 3:55 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> >> >> Hello Matthew, >> >> >> >> >> >> That would indeed make sense. >> >> >> >> >> >> Full log is attached, I grepped hdf5 in there and didn't find >> anything alarming. >> >> > >> >> > >> >> > At the top of this log: >> >> > >> >> > Configure Options: --configModules=PETSc.Configure >> --optionsModule=config.compilerOptions PETSC_ARCH=linux-gnu-complex-64 >> --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-make-np=2 >> --with-64-bit-indices=yes --with-debugging=no --with-fortran-bindings=no >> --with-shared-libraries --download-hypre --download-mumps >> --download-ptscotch --download-scalapack --download-suitesparse >> --download-superlu_dist --with-scalar-type=complex >> >> > >> >> > >> >> > So the HDF5 option is not being specified. >> >> > >> >> > Thanks, >> >> > >> >> > Matt >> >> > >> >> >> Cheers, >> >> >> >> >> >> Quentin >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> Quentin CHEVALIER ? IA parcours recherche >> >> >> >> >> >> LadHyX - Ecole polytechnique >> >> >> >> >> >> __________ >> >> >> >> >> >> >> >> >> >> >> >> On Mon, 6 Dec 2021 at 21:39, Matthew Knepley >> wrote: >> >> >>> >> >> >>> On Mon, Dec 6, 2021 at 3:27 PM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >>>> >> >> >>>> Fine. MWE is unchanged : >> >> >>>> * Run this docker container >> >> >>>> * Do : python3 -c "from petsc4py import PETSc; >> PETSc.Viewer().createHDF5('dummy.h5')" >> >> >>>> >> >> >>>> Updated attempt at a fix : >> >> >>>> * cd /usr/local/petsc/ >> >> >>>> * ./configure PETSC_ARCH= linux-gnu-real-32 >> PETSC_DIR=/usr/local/petsc --with-hdf5 --force >> >> >>> >> >> >>> >> >> >>> Did it find HDF5? If not, it will shut it off. You need to send us >> >> >>> >> >> >>> $PETSC_DIR/configure.log >> >> >>> >> >> >>> so we can see what happened in the configure run. >> >> >>> >> >> >>> Thanks, >> >> >>> >> >> >>> Matt >> >> >>> >> >> >>>> >> >> >>>> * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= linux-gnu-real-32 >> all >> >> >>>> >> >> >>>> Still no joy. The same error remains. >> >> >>>> >> >> >>>> Quentin >> >> >>>> >> >> >>>> >> >> >>>> >> >> >>>> On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet >> wrote: >> >> >>>> > >> >> >>>> > >> >> >>>> > >> >> >>>> > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >>>> > >> >> >>>> > The PETSC_DIR exactly corresponds to the previous one, so I >> guess that rules option b) out, except if a specific option is required to >> overwrite a previous installation of PETSc. As for a), well I thought >> reconfiguring pretty direct, you're welcome to give me a hint as to what >> could be wrong in the following process. 
>> >> >>>> > >> >> >>>> > Steps to reproduce this behaviour are as follows : >> >> >>>> > * Run this docker container >> >> >>>> > * Do : python3 -c "from petsc4py import PETSc; >> PETSc.Viewer().createHDF5('dummy.h5')" >> >> >>>> > >> >> >>>> > After you get the error Unknown PetscViewer type, feel free to >> try : >> >> >>>> > >> >> >>>> > * cd /usr/local/petsc/ >> >> >>>> > * ./configure --with-hfd5 >> >> >>>> > >> >> >>>> > >> >> >>>> > It?s hdf5, not hfd5. >> >> >>>> > It?s PETSC_ARCH, not PETSC-ARCH. >> >> >>>> > PETSC_ARCH is missing from your configure line. >> >> >>>> > >> >> >>>> > Thanks, >> >> >>>> > Pierre >> >> >>>> > >> >> >>>> > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 >> all >> >> >>>> > >> >> >>>> > Then repeat the MWE and observe absolutely no behavioural >> change whatsoever. I'm afraid I don't know PETSc well enough to be >> surprised by that. >> >> >>>> > >> >> >>>> > Quentin >> >> >>>> > >> >> >>>> > >> >> >>>> > >> >> >>>> > Quentin CHEVALIER ? IA parcours recherche >> >> >>>> > >> >> >>>> > LadHyX - Ecole polytechnique >> >> >>>> > >> >> >>>> > __________ >> >> >>>> > >> >> >>>> > >> >> >>>> > >> >> >>>> > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley >> wrote: >> >> >>>> >> >> >> >>>> >> On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >>>> >>> >> >> >>>> >>> It failed all of the tests included in `make >> >> >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 >> check`, with >> >> >>>> >>> the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No >> such file >> >> >>>> >>> or directory` >> >> >>>> >>> >> >> >>>> >>> I am therefore fairly confident this a "file absence" >> problem, and not >> >> >>>> >>> a compilation problem. >> >> >>>> >>> >> >> >>>> >>> I repeat that there was no error at compilation stage. The >> final stage >> >> >>>> >>> did present `gmake[3]: Nothing to be done for 'libs'.` but >> that's all. >> >> >>>> >>> >> >> >>>> >>> Again, running `./configure --with-hdf5` followed by a `make >> >> >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` >> does not >> >> >>>> >>> change the problem. I get the same error at the same position >> as >> >> >>>> >>> before. >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> If you reconfigured and rebuilt, it is impossible to get the >> same error, so >> >> >>>> >> >> >> >>>> >> a) You did not reconfigure >> >> >>>> >> >> >> >>>> >> b) Your new build is somewhere else on the machine >> >> >>>> >> >> >> >>>> >> Thanks, >> >> >>>> >> >> >> >>>> >> Matt >> >> >>>> >> >> >> >>>> >>> >> >> >>>> >>> I will comment I am running on OpenSUSE. >> >> >>>> >>> >> >> >>>> >>> Quentin >> >> >>>> >>> >> >> >>>> >>> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley < >> knepley at gmail.com> wrote: >> >> >>>> >>> > >> >> >>>> >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >>>> >>> >> >> >> >>>> >>> >> Hello Matthew and thanks for your quick response. >> >> >>>> >>> >> >> >> >>>> >>> >> I'm afraid I did try to snoop around the container and >> rerun PETSc's >> >> >>>> >>> >> configure with the --with-hdf5 option, to absolutely no >> avail. >> >> >>>> >>> >> >> >> >>>> >>> >> I didn't see any errors during config or make, but it >> failed the tests >> >> >>>> >>> >> (which aren't included in the minimal container I suppose) >> >> >>>> >>> > >> >> >>>> >>> > >> >> >>>> >>> > Failed which tests? What was the error? 
>> >> >>>> >>> > >> >> >>>> >>> > Thanks, >> >> >>>> >>> > >> >> >>>> >>> > Matt >> >> >>>> >>> > >> >> >>>> >>> >> >> >> >>>> >>> >> Quentin >> >> >>>> >>> >> >> >> >>>> >>> >> >> >> >>>> >>> >> >> >> >>>> >>> >> Quentin CHEVALIER ? IA parcours recherche >> >> >>>> >>> >> >> >> >>>> >>> >> LadHyX - Ecole polytechnique >> >> >>>> >>> >> >> >> >>>> >>> >> __________ >> >> >>>> >>> >> >> >> >>>> >>> >> >> >> >>>> >>> >> >> >> >>>> >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley < >> knepley at gmail.com> wrote: >> >> >>>> >>> >> > >> >> >>>> >>> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >>>> >>> >> >> >> >> >>>> >>> >> >> Hello PETSc users, >> >> >>>> >>> >> >> >> >> >>>> >>> >> >> This email is a duplicata of this gitlab issue, sorry >> for any inconvenience caused. >> >> >>>> >>> >> >> >> >> >>>> >>> >> >> I want to compute a PETSc vector in real mode, than >> perform calculations with it in complex mode. I want as much of this >> process to be parallel as possible. Right now, I compile PETSc in real >> mode, compute my vector and save it to a file, then switch to complex mode, >> read it, and move on. >> >> >>>> >>> >> >> >> >> >>>> >>> >> >> This creates unexpected behaviour using MPIIO, so on >> Lisandro Dalcinl's advice I'm moving to HDF5 format. My code is as follows >> (taking inspiration from petsc4py doc, a bitbucket example and another one, >> all top Google results for 'petsc hdf5') : >> >> >>>> >>> >> >>> >> >> >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', >> COMM_WORLD) >> >> >>>> >>> >> >>> q.load(viewer) >> >> >>>> >>> >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, >> mode=PETSc.ScatterMode.FORWARD) >> >> >>>> >>> >> >> >> >> >>>> >>> >> >> >> >> >>>> >>> >> >> This crashes my code. I obtain traceback : >> >> >>>> >>> >> >>> >> >> >>>> >>> >> >>> File "/home/shared/code.py", line 121, in Load >> >> >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', >> COMM_WORLD) >> >> >>>> >>> >> >>> File "PETSc/Viewer.pyx", line 182, in >> petsc4py.PETSc.Viewer.createHDF5 >> >> >>>> >>> >> >>> petsc4py.PETSc.Error: error code 86 >> >> >>>> >>> >> >>> [0] PetscViewerSetType() at >> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >> >> >>>> >>> >> >>> [0] Unknown type. Check for miss-spelling or missing >> package: https://petsc.org/release/install/install/#external-packages >> >> >>>> >>> >> >>> [0] Unknown PetscViewer type given: hdf5 >> >> >>>> >>> >> > >> >> >>>> >>> >> > This means that PETSc has not been configured with HDF5 >> (--with-hdf5 or --download-hdf5), so the container should be updated. >> >> >>>> >>> >> > >> >> >>>> >>> >> > THanks, >> >> >>>> >>> >> > >> >> >>>> >>> >> > Matt >> >> >>>> >>> >> > >> >> >>>> >>> >> >> >> >> >>>> >>> >> >> I have petsc4py 3.16 from this docker container (list >> of dependencies include PETSc and petsc4py). >> >> >>>> >>> >> >> >> >> >>>> >>> >> >> I'm pretty sure this is not intended behaviour. Any >> insight as to how to fix this issue (I tried running ./configure >> --with-hdf5 to no avail) or more generally to perform this jiggling between >> real and complex would be much appreciated, >> >> >>>> >>> >> >> >> >> >>>> >>> >> >> Kind regards. 
>> >> >>>> >>> >> >> >> >> >>>> >>> >> >> Quentin >> >> >>>> >>> >> > >> >> >>>> >>> >> > >> >> >>>> >>> >> > >> >> >>>> >>> >> > -- >> >> >>>> >>> >> > What most experimenters take for granted before they >> begin their experiments is infinitely more interesting than any results to >> which their experiments lead. >> >> >>>> >>> >> > -- Norbert Wiener >> >> >>>> >>> >> > >> >> >>>> >>> >> > https://www.cse.buffalo.edu/~knepley/ >> >> >>>> >>> > >> >> >>>> >>> > >> >> >>>> >>> > >> >> >>>> >>> > -- >> >> >>>> >>> > What most experimenters take for granted before they begin >> their experiments is infinitely more interesting than any results to which >> their experiments lead. >> >> >>>> >>> > -- Norbert Wiener >> >> >>>> >>> > >> >> >>>> >>> > https://www.cse.buffalo.edu/~knepley/ >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> -- >> >> >>>> >> What most experimenters take for granted before they begin >> their experiments is infinitely more interesting than any results to which >> their experiments lead. >> >> >>>> >> -- Norbert Wiener >> >> >>>> >> >> >> >>>> >> https://www.cse.buffalo.edu/~knepley/ >> >> >>>> > >> >> >>>> > >> >> >>>> > >> >> >>>> > >> >> >>> >> >> >>> >> >> >>> >> >> >>> -- >> >> >>> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> >> >>> -- Norbert Wiener >> >> >>> >> >> >>> https://www.cse.buffalo.edu/~knepley/ >> >> > >> >> > >> >> > >> >> > -- >> >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> >> > -- Norbert Wiener >> >> > >> >> > https://www.cse.buffalo.edu/~knepley/ >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> > >> > https://www.cse.buffalo.edu/~knepley/ >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 2044 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.log Type: text/x-log Size: 69447 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 4890722 bytes Desc: not available URL: From quentin.chevalier at polytechnique.edu Wed Dec 8 04:03:54 2021 From: quentin.chevalier at polytechnique.edu (Quentin Chevalier) Date: Wed, 8 Dec 2021 11:03:54 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? 
In-Reply-To: References: Message-ID: For the record, I tried changing the dockerfile with the --with-hdf5 flag and rebuilding a docker image, this changes the traceback to : HDF5-DIAG: Error detected in HDF5 (1.12.1) MPI-process 0: #000: H5F.c line 620 in H5Fopen(): unable to open file major: File accessibility minor: Unable to open file #001: H5VLcallback.c line 3501 in H5VL_file_open(): failed to iterate over available VOL connector plugins major: Virtual Object Layer minor: Iteration failed #002: H5PLpath.c line 578 in H5PL__path_table_iterate(): can't iterate over plugins in plugin path '(null)' major: Plugin for dynamically loaded library minor: Iteration failed #003: H5PLpath.c line 620 in H5PL__path_table_iterate_process_path(): can't open directory: /usr/local/hdf5/lib/plugin major: Plugin for dynamically loaded library minor: Can't open directory or file #004: H5VLcallback.c line 3351 in H5VL__file_open(): open failed major: Virtual Object Layer minor: Can't open object #005: H5VLnative_file.c line 97 in H5VL__native_file_open(): unable to open file major: File accessibility minor: Unable to open file #006: H5Fint.c line 1834 in H5F_open(): unable to open file: name = 'dummy.h5', tent_flags = 0 major: File accessibility minor: Unable to open file #007: H5FD.c line 723 in H5FD_open(): open failed major: Virtual File Layer minor: Unable to initialize object #008: H5FDmpio.c line 850 in H5FD__mpio_open(): MPI_File_open failed: MPI error string is 'File does not exist, error stack: ADIOI_UFS_OPEN(37): File dummy.h5 does not exist' major: Internal error (too specific to document in detail) minor: Some MPI function failed HDF5-DIAG: Error detected in HDF5 (1.12.1) MPI-process 0: #000: H5F.c line 707 in H5Fclose(): not a file ID major: Invalid arguments to routine minor: Inappropriate type petsc4py.PETSc.Error: error code 76 [0] PetscObjectDestroy() at /usr/local/petsc/src/sys/objects/destroy.c:59 [0] PetscViewerDestroy() at /usr/local/petsc/src/sys/classes/viewer/interface/view.c:119 [0] PetscViewerDestroy_HDF5() at /usr/local/petsc/src/sys/classes/viewer/impls/hdf5/hdf5v.c:93 [0] PetscViewerFileClose_HDF5() at /usr/local/petsc/src/sys/classes/viewer/impls/hdf5/hdf5v.c:82 [0] Error in external library [0] Error in HDF5 call H5Fclose() Status -1 Exception ignored in: 'petsc4py.PETSc.Object.__dealloc__' Traceback (most recent call last): File "", line 1, in petsc4py.PETSc.Error: error code 76 [0] PetscObjectDestroy() at /usr/local/petsc/src/sys/objects/destroy.c:59 [0] PetscViewerDestroy() at /usr/local/petsc/src/sys/classes/viewer/interface/view.c:119 [0] PetscViewerDestroy_HDF5() at /usr/local/petsc/src/sys/classes/viewer/impls/hdf5/hdf5v.c:93 [0] PetscViewerFileClose_HDF5() at /usr/local/petsc/src/sys/classes/viewer/impls/hdf5/hdf5v.c:82 [0] Error in external library [0] Error in HDF5 call H5Fclose() Status -1 Traceback (most recent call last): File "", line 1, in File "PETSc/Viewer.pyx", line 184, in petsc4py.PETSc.Viewer.createHDF5 petsc4py.PETSc.Error: error code 76 [0] PetscObjectDestroy() at /usr/local/petsc/src/sys/objects/destroy.c:59 [0] PetscViewerDestroy() at /usr/local/petsc/src/sys/classes/viewer/interface/view.c:119 [0] PetscViewerDestroy_HDF5() at /usr/local/petsc/src/sys/classes/viewer/impls/hdf5/hdf5v.c:93 [0] PetscViewerFileClose_HDF5() at /usr/local/petsc/src/sys/classes/viewer/impls/hdf5/hdf5v.c:82 [0] Error in external library [0] Error in HDF5 call H5Fclose() Status -1 Since I'm calling upon the routine to create the file, and I have root permissions over the 
container, it's got me confused. Quentin Quentin CHEVALIER ? IA parcours recherche LadHyX - Ecole polytechnique __________ On Wed, 8 Dec 2021 at 10:07, Quentin Chevalier wrote: > > @all thanks for your time it's heartening to see a lively community. > > @Barry I've restarted the container and grabbed the .log file directly after the docker magic. I've tried a make check, it unsurprisingly spews the same answer as before : > > Running check examples to verify correct installation > Using PETSC_DIR=/usr/local/petsc and PETSC_ARCH=linux-gnu-real-32 > /usr/bin/bash: line 1: cd: src/snes/tutorials: No such file or directory > /usr/bin/bash: line 1: cd: src/snes/tutorials: No such file or directory > gmake[3]: *** No rule to make target 'testex19'. Stop. > gmake[2]: *** [makefile:155: check_build] Error 2 > > > @Matthew ok will do, but I think @Lawrence has already provided that answer. It's possible to change the dockerfile and recompute the dolfinx image with hdf5, only it is a time-consuming process. > > Quentin > > > > Quentin CHEVALIER ? IA parcours recherche > > LadHyX - Ecole polytechnique > > __________ > > > > On Tue, 7 Dec 2021 at 19:16, Matthew Knepley wrote: >> >> On Tue, Dec 7, 2021 at 9:43 AM Quentin Chevalier wrote: >>> >>> @Matthew, as stated before, error output is unchanged, i.e.the python >>> command below produces the same traceback : >>> >>> # python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('d.h5')" >>> Traceback (most recent call last): >>> File "", line 1, in >>> File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 >>> petsc4py.PETSc.Error: error code 86 >>> [0] PetscViewerSetType() at >>> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >>> [0] Unknown type. Check for miss-spelling or missing package: >>> https://petsc.org/release/install/install/#external-packages >>> [0] Unknown PetscViewer type given: hdf5 >> >> >> The reason I wanted the output was that the C output shows the configure options that the PETSc library >> was built with, However, Python seems to be eating this, so I cannot check. >> >> It seems like using this container is counter-productive. If it was built correctly, making these changes would be trivial. >> Send mail to FEniCS (I am guessing Chris Richardson maintains this), and ask how they intend people to change these >> options. >> >> Thanks, >> >> Matt. >> >>> >>> @Wence that makes sense. I'd assumed that the original PETSc had been >>> overwritten, and if the linking has gone wrong I'm surprised anything >>> happens with petsc4py at all. >>> >>> Your tentative command gave : >>> >>> ERROR: Invalid requirement: '/usr/local/petsc/src/binding/petsc4py' >>> Hint: It looks like a path. File >>> '/usr/local/petsc/src/binding/petsc4py' does not exist. >>> >>> So I tested that global variables PETSC_ARCH & PETSC_DIR were correct >>> then ran "pip install petsc4py" to restart petsc4py from scratch. 
This >>> gives rise to a different error : >>> # python3 -c "from petsc4py import PETSc" >>> Traceback (most recent call last): >>> File "", line 1, in >>> File "/usr/local/lib/python3.9/dist-packages/petsc4py/PETSc.py", >>> line 3, in >>> PETSc = ImportPETSc(ARCH) >>> File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >>> line 29, in ImportPETSc >>> return Import('petsc4py', 'PETSc', path, arch) >>> File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >>> line 73, in Import >>> module = import_module(pkg, name, path, arch) >>> File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >>> line 58, in import_module >>> with f: return imp.load_module(fullname, f, fn, info) >>> File "/usr/lib/python3.9/imp.py", line 242, in load_module >>> return load_dynamic(name, filename, file) >>> File "/usr/lib/python3.9/imp.py", line 342, in load_dynamic >>> return _load(spec) >>> ImportError: /usr/local/lib/python3.9/dist-packages/petsc4py/lib/linux-gnu-real-32/PETSc.cpython-39-x86_64-linux-gnu.so: >>> undefined symbol: petscstack >>> >>> Not sure that it a step forward ; looks like petsc4py is broken now. >>> >>> Quentin >>> >>> On Tue, 7 Dec 2021 at 14:58, Matthew Knepley wrote: >>> > >>> > On Tue, Dec 7, 2021 at 8:26 AM Quentin Chevalier wrote: >>> >> >>> >> Ok my bad, that log corresponded to a tentative --download-hdf5. This >>> >> log corresponds to the commands given above and has --with-hdf5 in its >>> >> options. >>> > >>> > >>> > Okay, this configure was successful and found HDF5 >>> > >>> >> >>> >> The whole process still results in the same error. >>> > >>> > >>> > Now send me the complete error output with this PETSc. >>> > >>> > Thanks, >>> > >>> > Matt >>> > >>> >> >>> >> Quentin >>> >> >>> >> >>> >> >>> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >>> >> LadHyX - Ecole polytechnique >>> >> >>> >> __________ >>> >> >>> >> >>> >> >>> >> On Tue, 7 Dec 2021 at 13:59, Matthew Knepley wrote: >>> >> > >>> >> > On Tue, Dec 7, 2021 at 3:55 AM Quentin Chevalier wrote: >>> >> >> >>> >> >> Hello Matthew, >>> >> >> >>> >> >> That would indeed make sense. >>> >> >> >>> >> >> Full log is attached, I grepped hdf5 in there and didn't find anything alarming. >>> >> > >>> >> > >>> >> > At the top of this log: >>> >> > >>> >> > Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions PETSC_ARCH=linux-gnu-complex-64 --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-make-np=2 --with-64-bit-indices=yes --with-debugging=no --with-fortran-bindings=no --with-shared-libraries --download-hypre --download-mumps --download-ptscotch --download-scalapack --download-suitesparse --download-superlu_dist --with-scalar-type=complex >>> >> > >>> >> > >>> >> > So the HDF5 option is not being specified. >>> >> > >>> >> > Thanks, >>> >> > >>> >> > Matt >>> >> > >>> >> >> Cheers, >>> >> >> >>> >> >> Quentin >>> >> >> >>> >> >> >>> >> >> >>> >> >> >>> >> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >> >>> >> >> LadHyX - Ecole polytechnique >>> >> >> >>> >> >> __________ >>> >> >> >>> >> >> >>> >> >> >>> >> >> On Mon, 6 Dec 2021 at 21:39, Matthew Knepley wrote: >>> >> >>> >>> >> >>> On Mon, Dec 6, 2021 at 3:27 PM Quentin Chevalier wrote: >>> >> >>>> >>> >> >>>> Fine. 
MWE is unchanged : >>> >> >>>> * Run this docker container >>> >> >>>> * Do : python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('dummy.h5')" >>> >> >>>> >>> >> >>>> Updated attempt at a fix : >>> >> >>>> * cd /usr/local/petsc/ >>> >> >>>> * ./configure PETSC_ARCH= linux-gnu-real-32 PETSC_DIR=/usr/local/petsc --with-hdf5 --force >>> >> >>> >>> >> >>> >>> >> >>> Did it find HDF5? If not, it will shut it off. You need to send us >>> >> >>> >>> >> >>> $PETSC_DIR/configure.log >>> >> >>> >>> >> >>> so we can see what happened in the configure run. >>> >> >>> >>> >> >>> Thanks, >>> >> >>> >>> >> >>> Matt >>> >> >>> >>> >> >>>> >>> >> >>>> * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= linux-gnu-real-32 all >>> >> >>>> >>> >> >>>> Still no joy. The same error remains. >>> >> >>>> >>> >> >>>> Quentin >>> >> >>>> >>> >> >>>> >>> >> >>>> >>> >> >>>> On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet wrote: >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier wrote: >>> >> >>>> > >>> >> >>>> > The PETSC_DIR exactly corresponds to the previous one, so I guess that rules option b) out, except if a specific option is required to overwrite a previous installation of PETSc. As for a), well I thought reconfiguring pretty direct, you're welcome to give me a hint as to what could be wrong in the following process. >>> >> >>>> > >>> >> >>>> > Steps to reproduce this behaviour are as follows : >>> >> >>>> > * Run this docker container >>> >> >>>> > * Do : python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('dummy.h5')" >>> >> >>>> > >>> >> >>>> > After you get the error Unknown PetscViewer type, feel free to try : >>> >> >>>> > >>> >> >>>> > * cd /usr/local/petsc/ >>> >> >>>> > * ./configure --with-hfd5 >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > It?s hdf5, not hfd5. >>> >> >>>> > It?s PETSC_ARCH, not PETSC-ARCH. >>> >> >>>> > PETSC_ARCH is missing from your configure line. >>> >> >>>> > >>> >> >>>> > Thanks, >>> >> >>>> > Pierre >>> >> >>>> > >>> >> >>>> > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all >>> >> >>>> > >>> >> >>>> > Then repeat the MWE and observe absolutely no behavioural change whatsoever. I'm afraid I don't know PETSc well enough to be surprised by that. >>> >> >>>> > >>> >> >>>> > Quentin >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > Quentin CHEVALIER ? IA parcours recherche >>> >> >>>> > >>> >> >>>> > LadHyX - Ecole polytechnique >>> >> >>>> > >>> >> >>>> > __________ >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley wrote: >>> >> >>>> >> >>> >> >>>> >> On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier wrote: >>> >> >>>> >>> >>> >> >>>> >>> It failed all of the tests included in `make >>> >> >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 check`, with >>> >> >>>> >>> the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No such file >>> >> >>>> >>> or directory` >>> >> >>>> >>> >>> >> >>>> >>> I am therefore fairly confident this a "file absence" problem, and not >>> >> >>>> >>> a compilation problem. >>> >> >>>> >>> >>> >> >>>> >>> I repeat that there was no error at compilation stage. The final stage >>> >> >>>> >>> did present `gmake[3]: Nothing to be done for 'libs'.` but that's all. >>> >> >>>> >>> >>> >> >>>> >>> Again, running `./configure --with-hdf5` followed by a `make >>> >> >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` does not >>> >> >>>> >>> change the problem. 
I get the same error at the same position as >>> >> >>>> >>> before. >>> >> >>>> >> >>> >> >>>> >> >>> >> >>>> >> If you reconfigured and rebuilt, it is impossible to get the same error, so >>> >> >>>> >> >>> >> >>>> >> a) You did not reconfigure >>> >> >>>> >> >>> >> >>>> >> b) Your new build is somewhere else on the machine >>> >> >>>> >> >>> >> >>>> >> Thanks, >>> >> >>>> >> >>> >> >>>> >> Matt >>> >> >>>> >> >>> >> >>>> >>> >>> >> >>>> >>> I will comment I am running on OpenSUSE. >>> >> >>>> >>> >>> >> >>>> >>> Quentin >>> >> >>>> >>> >>> >> >>>> >>> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley wrote: >>> >> >>>> >>> > >>> >> >>>> >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier wrote: >>> >> >>>> >>> >> >>> >> >>>> >>> >> Hello Matthew and thanks for your quick response. >>> >> >>>> >>> >> >>> >> >>>> >>> >> I'm afraid I did try to snoop around the container and rerun PETSc's >>> >> >>>> >>> >> configure with the --with-hdf5 option, to absolutely no avail. >>> >> >>>> >>> >> >>> >> >>>> >>> >> I didn't see any errors during config or make, but it failed the tests >>> >> >>>> >>> >> (which aren't included in the minimal container I suppose) >>> >> >>>> >>> > >>> >> >>>> >>> > >>> >> >>>> >>> > Failed which tests? What was the error? >>> >> >>>> >>> > >>> >> >>>> >>> > Thanks, >>> >> >>>> >>> > >>> >> >>>> >>> > Matt >>> >> >>>> >>> > >>> >> >>>> >>> >> >>> >> >>>> >>> >> Quentin >>> >> >>>> >>> >> >>> >> >>>> >>> >> >>> >> >>>> >>> >> >>> >> >>>> >>> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >>>> >>> >> >>> >> >>>> >>> >> LadHyX - Ecole polytechnique >>> >> >>>> >>> >> >>> >> >>>> >>> >> __________ >>> >> >>>> >>> >> >>> >> >>>> >>> >> >>> >> >>>> >>> >> >>> >> >>>> >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley wrote: >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier wrote: >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> Hello PETSc users, >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> This email is a duplicata of this gitlab issue, sorry for any inconvenience caused. >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> I want to compute a PETSc vector in real mode, than perform calculations with it in complex mode. I want as much of this process to be parallel as possible. Right now, I compile PETSc in real mode, compute my vector and save it to a file, then switch to complex mode, read it, and move on. >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> This creates unexpected behaviour using MPIIO, so on Lisandro Dalcinl's advice I'm moving to HDF5 format. My code is as follows (taking inspiration from petsc4py doc, a bitbucket example and another one, all top Google results for 'petsc hdf5') : >>> >> >>>> >>> >> >>> >>> >> >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >>> >> >>>> >>> >> >>> q.load(viewer) >>> >> >>>> >>> >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD) >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> This crashes my code. I obtain traceback : >>> >> >>>> >>> >> >>> >>> >> >>>> >>> >> >>> File "/home/shared/code.py", line 121, in Load >>> >> >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >>> >> >>>> >>> >> >>> File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 >>> >> >>>> >>> >> >>> petsc4py.PETSc.Error: error code 86 >>> >> >>>> >>> >> >>> [0] PetscViewerSetType() at /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >>> >> >>>> >>> >> >>> [0] Unknown type. 
Check for miss-spelling or missing package: https://petsc.org/release/install/install/#external-packages >>> >> >>>> >>> >> >>> [0] Unknown PetscViewer type given: hdf5 >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > This means that PETSc has not been configured with HDF5 (--with-hdf5 or --download-hdf5), so the container should be updated. >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > THanks, >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > Matt >>> >> >>>> >>> >> > >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> I have petsc4py 3.16 from this docker container (list of dependencies include PETSc and petsc4py). >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> I'm pretty sure this is not intended behaviour. Any insight as to how to fix this issue (I tried running ./configure --with-hdf5 to no avail) or more generally to perform this jiggling between real and complex would be much appreciated, >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> Kind regards. >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> Quentin >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > -- >>> >> >>>> >>> >> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> >> >>>> >>> >> > -- Norbert Wiener >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > https://www.cse.buffalo.edu/~knepley/ >>> >> >>>> >>> > >>> >> >>>> >>> > >>> >> >>>> >>> > >>> >> >>>> >>> > -- >>> >> >>>> >>> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> >> >>>> >>> > -- Norbert Wiener >>> >> >>>> >>> > >>> >> >>>> >>> > https://www.cse.buffalo.edu/~knepley/ >>> >> >>>> >> >>> >> >>>> >> >>> >> >>>> >> >>> >> >>>> >> -- >>> >> >>>> >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> >> >>>> >> -- Norbert Wiener >>> >> >>>> >> >>> >> >>>> >> https://www.cse.buffalo.edu/~knepley/ >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>> >>> >> >>> >>> >> >>> >>> >> >>> -- >>> >> >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> >> >>> -- Norbert Wiener >>> >> >>> >>> >> >>> https://www.cse.buffalo.edu/~knepley/ >>> >> > >>> >> > >>> >> > >>> >> > -- >>> >> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> >> > -- Norbert Wiener >>> >> > >>> >> > https://www.cse.buffalo.edu/~knepley/ >>> > >>> > >>> > >>> > -- >>> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> > -- Norbert Wiener >>> > >>> > https://www.cse.buffalo.edu/~knepley/ >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ From d.scott at epcc.ed.ac.uk Wed Dec 8 05:11:19 2021 From: d.scott at epcc.ed.ac.uk (David Scott) Date: Wed, 8 Dec 2021 11:11:19 +0000 Subject: [petsc-users] Error Writing Large HDF5 File In-Reply-To: References: Message-ID: <01317123-b35e-5c5b-bbc9-9964c04acb1c@epcc.ed.ac.uk> Hello Lawrence, Thanks for the swift response. 
I am sorry that mine has been much slower. The information that you provided was useful. I had been trying to generate the initial state using just one MPI process. I rewrote my code slightly so that I could use more than one process. Once I had done this I succeeded in generating an initial state by employing two MPI processes. I do not understand in detail why one process was insufficient because it seems to me that I was within the stated limits (the maximum size of file generated using two processes is 2.6GB and the number of elements is less than 310,000,000) but I do not doubt that your suspicion is correct. Best wishes, David On 26/11/2021 15:25, Lawrence Mitchell wrote: > This email was sent to you by someone outside the University. > You should only click on links or attachments if you are certain that > the email is genuine and the content is safe. > This is failing setting the chunksize: > https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/da/gr2.c#L517 > > It is hard for me to follow this code, but it looks like the chunk is > just set to the total extent of the DA (on the process?). This can > grow too large for HDF5, which has limits described here > https://support.hdfgroup.org/HDF5/doc/Advanced/Chunking/ > > chunks must have fewer than 2^32-1 elements and a max size of 4GB > > I suspect you?re hitting that limit. > > I guess this part of the code should be fully refactored to set > sensible chunk sizes. That same document suggests 1MB is a good option. > > Lawrence > > On Fri, 26 Nov 2021 at 15:10, David Scott wrote: > > Hello, > > I am trying to write out HDF5 files. The program I have works all > right > up to a point but then fails when I double the size of the file that I > am trying to write out. I have attached a file containing the error > message and another file containing the short program that I used to > generate it. > > The first of these files reports that Petsc 3.14.2 was used. I > know that > this is old but I have obtained the same result with Petsc 3.16.1. > Also, > I have observed the same behaviour on two different machines. I tried > using 64 bit indices but it made no difference. > > The program runs successfully with the following command line options > > -global_dim_x 1344 -global_dim_y 336 -global_dim_z 336 > > but fails with these > > -global_dim_x 2688 -global_dim_y 336 -global_dim_z 336 > > All the best, > > David > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. Is e buidheann > carthannais a th? ann an Oilthigh Dh?n ?ideann, cl?raichte an > Alba, ?ireamh cl?raidh SC005336. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Dec 8 06:22:52 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 8 Dec 2021 07:22:52 -0500 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? In-Reply-To: References: Message-ID: On Wed, Dec 8, 2021 at 4:08 AM Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: > @all thanks for your time it's heartening to see a lively community. > > @Barry I've restarted the container and grabbed the .log file directly > after the docker magic. 
I've tried a make check, it unsurprisingly spews > the same answer as before : > > Running check examples to verify correct installation > Using PETSC_DIR=/usr/local/petsc and PETSC_ARCH=linux-gnu-real-32 > /usr/bin/bash: line 1: cd: src/snes/tutorials: No such file or directory > /usr/bin/bash: line 1: cd: src/snes/tutorials: No such file or directory > gmake[3]: *** No rule to make target 'testex19'. Stop. > gmake[2]: *** [makefile:155: check_build] Error 2 > This happens if you run 'make check' without defining PETSC_DIR in your environment, since we are including makefiles with PETSC_DIR in the path and make does not allow proper error messages in that case. Thanks, Matt > @Matthew ok will do, but I think @Lawrence has already provided that > answer. It's possible to change the dockerfile and recompute the dolfinx > image with hdf5, only it is a time-consuming process. > > Quentin > > > > [image: cid:image003.jpg at 01D690CB.3B3FDC10] > > Quentin CHEVALIER ? IA parcours recherche > > LadHyX - Ecole polytechnique > > __________ > > > On Tue, 7 Dec 2021 at 19:16, Matthew Knepley wrote: > >> On Tue, Dec 7, 2021 at 9:43 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >>> @Matthew, as stated before, error output is unchanged, i.e.the python >>> command below produces the same traceback : >>> >>> # python3 -c "from petsc4py import PETSc; >>> PETSc.Viewer().createHDF5('d.h5')" >>> Traceback (most recent call last): >>> File "", line 1, in >>> File "PETSc/Viewer.pyx", line 182, in petsc4py.PETSc.Viewer.createHDF5 >>> petsc4py.PETSc.Error: error code 86 >>> [0] PetscViewerSetType() at >>> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >>> [0] Unknown type. Check for miss-spelling or missing package: >>> https://petsc.org/release/install/install/#external-packages >>> [0] Unknown PetscViewer type given: hdf5 >>> >> >> The reason I wanted the output was that the C output shows the configure >> options that the PETSc library >> was built with, However, Python seems to be eating this, so I cannot >> check. >> >> It seems like using this container is counter-productive. If it was built >> correctly, making these changes would be trivial. >> Send mail to FEniCS (I am guessing Chris Richardson maintains this), and >> ask how they intend people to change these >> options. >> >> Thanks, >> >> Matt. >> >> >>> @Wence that makes sense. I'd assumed that the original PETSc had been >>> overwritten, and if the linking has gone wrong I'm surprised anything >>> happens with petsc4py at all. >>> >>> Your tentative command gave : >>> >>> ERROR: Invalid requirement: '/usr/local/petsc/src/binding/petsc4py' >>> Hint: It looks like a path. File >>> '/usr/local/petsc/src/binding/petsc4py' does not exist. >>> >>> So I tested that global variables PETSC_ARCH & PETSC_DIR were correct >>> then ran "pip install petsc4py" to restart petsc4py from scratch. 
This >>> gives rise to a different error : >>> # python3 -c "from petsc4py import PETSc" >>> Traceback (most recent call last): >>> File "", line 1, in >>> File "/usr/local/lib/python3.9/dist-packages/petsc4py/PETSc.py", >>> line 3, in >>> PETSc = ImportPETSc(ARCH) >>> File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >>> line 29, in ImportPETSc >>> return Import('petsc4py', 'PETSc', path, arch) >>> File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >>> line 73, in Import >>> module = import_module(pkg, name, path, arch) >>> File "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >>> line 58, in import_module >>> with f: return imp.load_module(fullname, f, fn, info) >>> File "/usr/lib/python3.9/imp.py", line 242, in load_module >>> return load_dynamic(name, filename, file) >>> File "/usr/lib/python3.9/imp.py", line 342, in load_dynamic >>> return _load(spec) >>> ImportError: >>> /usr/local/lib/python3.9/dist-packages/petsc4py/lib/linux-gnu-real-32/ >>> PETSc.cpython-39-x86_64-linux-gnu.so: >>> undefined symbol: petscstack >>> >>> Not sure that it a step forward ; looks like petsc4py is broken now. >>> >>> Quentin >>> >>> On Tue, 7 Dec 2021 at 14:58, Matthew Knepley wrote: >>> > >>> > On Tue, Dec 7, 2021 at 8:26 AM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >>> >> Ok my bad, that log corresponded to a tentative --download-hdf5. This >>> >> log corresponds to the commands given above and has --with-hdf5 in its >>> >> options. >>> > >>> > >>> > Okay, this configure was successful and found HDF5 >>> > >>> >> >>> >> The whole process still results in the same error. >>> > >>> > >>> > Now send me the complete error output with this PETSc. >>> > >>> > Thanks, >>> > >>> > Matt >>> > >>> >> >>> >> Quentin >>> >> >>> >> >>> >> >>> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >>> >> LadHyX - Ecole polytechnique >>> >> >>> >> __________ >>> >> >>> >> >>> >> >>> >> On Tue, 7 Dec 2021 at 13:59, Matthew Knepley >>> wrote: >>> >> > >>> >> > On Tue, Dec 7, 2021 at 3:55 AM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >> >>> >> >> Hello Matthew, >>> >> >> >>> >> >> That would indeed make sense. >>> >> >> >>> >> >> Full log is attached, I grepped hdf5 in there and didn't find >>> anything alarming. >>> >> > >>> >> > >>> >> > At the top of this log: >>> >> > >>> >> > Configure Options: --configModules=PETSc.Configure >>> --optionsModule=config.compilerOptions PETSC_ARCH=linux-gnu-complex-64 >>> --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-make-np=2 >>> --with-64-bit-indices=yes --with-debugging=no --with-fortran-bindings=no >>> --with-shared-libraries --download-hypre --download-mumps >>> --download-ptscotch --download-scalapack --download-suitesparse >>> --download-superlu_dist --with-scalar-type=complex >>> >> > >>> >> > >>> >> > So the HDF5 option is not being specified. >>> >> > >>> >> > Thanks, >>> >> > >>> >> > Matt >>> >> > >>> >> >> Cheers, >>> >> >> >>> >> >> Quentin >>> >> >> >>> >> >> >>> >> >> >>> >> >> >>> >> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >> >>> >> >> LadHyX - Ecole polytechnique >>> >> >> >>> >> >> __________ >>> >> >> >>> >> >> >>> >> >> >>> >> >> On Mon, 6 Dec 2021 at 21:39, Matthew Knepley >>> wrote: >>> >> >>> >>> >> >>> On Mon, Dec 6, 2021 at 3:27 PM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >>>> >>> >> >>>> Fine. 
MWE is unchanged : >>> >> >>>> * Run this docker container >>> >> >>>> * Do : python3 -c "from petsc4py import PETSc; >>> PETSc.Viewer().createHDF5('dummy.h5')" >>> >> >>>> >>> >> >>>> Updated attempt at a fix : >>> >> >>>> * cd /usr/local/petsc/ >>> >> >>>> * ./configure PETSC_ARCH= linux-gnu-real-32 >>> PETSC_DIR=/usr/local/petsc --with-hdf5 --force >>> >> >>> >>> >> >>> >>> >> >>> Did it find HDF5? If not, it will shut it off. You need to send us >>> >> >>> >>> >> >>> $PETSC_DIR/configure.log >>> >> >>> >>> >> >>> so we can see what happened in the configure run. >>> >> >>> >>> >> >>> Thanks, >>> >> >>> >>> >> >>> Matt >>> >> >>> >>> >> >>>> >>> >> >>>> * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= linux-gnu-real-32 >>> all >>> >> >>>> >>> >> >>>> Still no joy. The same error remains. >>> >> >>>> >>> >> >>>> Quentin >>> >> >>>> >>> >> >>>> >>> >> >>>> >>> >> >>>> On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet >>> wrote: >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >>>> > >>> >> >>>> > The PETSC_DIR exactly corresponds to the previous one, so I >>> guess that rules option b) out, except if a specific option is required to >>> overwrite a previous installation of PETSc. As for a), well I thought >>> reconfiguring pretty direct, you're welcome to give me a hint as to what >>> could be wrong in the following process. >>> >> >>>> > >>> >> >>>> > Steps to reproduce this behaviour are as follows : >>> >> >>>> > * Run this docker container >>> >> >>>> > * Do : python3 -c "from petsc4py import PETSc; >>> PETSc.Viewer().createHDF5('dummy.h5')" >>> >> >>>> > >>> >> >>>> > After you get the error Unknown PetscViewer type, feel free to >>> try : >>> >> >>>> > >>> >> >>>> > * cd /usr/local/petsc/ >>> >> >>>> > * ./configure --with-hfd5 >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > It?s hdf5, not hfd5. >>> >> >>>> > It?s PETSC_ARCH, not PETSC-ARCH. >>> >> >>>> > PETSC_ARCH is missing from your configure line. >>> >> >>>> > >>> >> >>>> > Thanks, >>> >> >>>> > Pierre >>> >> >>>> > >>> >> >>>> > * make PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 >>> all >>> >> >>>> > >>> >> >>>> > Then repeat the MWE and observe absolutely no behavioural >>> change whatsoever. I'm afraid I don't know PETSc well enough to be >>> surprised by that. >>> >> >>>> > >>> >> >>>> > Quentin >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > Quentin CHEVALIER ? IA parcours recherche >>> >> >>>> > >>> >> >>>> > LadHyX - Ecole polytechnique >>> >> >>>> > >>> >> >>>> > __________ >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley < >>> knepley at gmail.com> wrote: >>> >> >>>> >> >>> >> >>>> >> On Mon, Dec 6, 2021 at 1:22 PM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >>>> >>> >>> >> >>>> >>> It failed all of the tests included in `make >>> >> >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 >>> check`, with >>> >> >>>> >>> the error `/usr/bin/bash: line 1: cd: src/snes/tutorials: No >>> such file >>> >> >>>> >>> or directory` >>> >> >>>> >>> >>> >> >>>> >>> I am therefore fairly confident this a "file absence" >>> problem, and not >>> >> >>>> >>> a compilation problem. >>> >> >>>> >>> >>> >> >>>> >>> I repeat that there was no error at compilation stage. The >>> final stage >>> >> >>>> >>> did present `gmake[3]: Nothing to be done for 'libs'.` but >>> that's all. 
>>> >> >>>> >>> >>> >> >>>> >>> Again, running `./configure --with-hdf5` followed by a `make >>> >> >>>> >>> PETSC_DIR=/usr/local/petsc PETSC-ARCH=linux-gnu-real-32 all` >>> does not >>> >> >>>> >>> change the problem. I get the same error at the same >>> position as >>> >> >>>> >>> before. >>> >> >>>> >> >>> >> >>>> >> >>> >> >>>> >> If you reconfigured and rebuilt, it is impossible to get the >>> same error, so >>> >> >>>> >> >>> >> >>>> >> a) You did not reconfigure >>> >> >>>> >> >>> >> >>>> >> b) Your new build is somewhere else on the machine >>> >> >>>> >> >>> >> >>>> >> Thanks, >>> >> >>>> >> >>> >> >>>> >> Matt >>> >> >>>> >> >>> >> >>>> >>> >>> >> >>>> >>> I will comment I am running on OpenSUSE. >>> >> >>>> >>> >>> >> >>>> >>> Quentin >>> >> >>>> >>> >>> >> >>>> >>> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley < >>> knepley at gmail.com> wrote: >>> >> >>>> >>> > >>> >> >>>> >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >>>> >>> >> >>> >> >>>> >>> >> Hello Matthew and thanks for your quick response. >>> >> >>>> >>> >> >>> >> >>>> >>> >> I'm afraid I did try to snoop around the container and >>> rerun PETSc's >>> >> >>>> >>> >> configure with the --with-hdf5 option, to absolutely no >>> avail. >>> >> >>>> >>> >> >>> >> >>>> >>> >> I didn't see any errors during config or make, but it >>> failed the tests >>> >> >>>> >>> >> (which aren't included in the minimal container I suppose) >>> >> >>>> >>> > >>> >> >>>> >>> > >>> >> >>>> >>> > Failed which tests? What was the error? >>> >> >>>> >>> > >>> >> >>>> >>> > Thanks, >>> >> >>>> >>> > >>> >> >>>> >>> > Matt >>> >> >>>> >>> > >>> >> >>>> >>> >> >>> >> >>>> >>> >> Quentin >>> >> >>>> >>> >> >>> >> >>>> >>> >> >>> >> >>>> >>> >> >>> >> >>>> >>> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >>>> >>> >> >>> >> >>>> >>> >> LadHyX - Ecole polytechnique >>> >> >>>> >>> >> >>> >> >>>> >>> >> __________ >>> >> >>>> >>> >> >>> >> >>>> >>> >> >>> >> >>>> >>> >> >>> >> >>>> >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew Knepley < >>> knepley at gmail.com> wrote: >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> Hello PETSc users, >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> This email is a duplicata of this gitlab issue, sorry >>> for any inconvenience caused. >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> I want to compute a PETSc vector in real mode, than >>> perform calculations with it in complex mode. I want as much of this >>> process to be parallel as possible. Right now, I compile PETSc in real >>> mode, compute my vector and save it to a file, then switch to complex mode, >>> read it, and move on. >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> This creates unexpected behaviour using MPIIO, so on >>> Lisandro Dalcinl's advice I'm moving to HDF5 format. My code is as follows >>> (taking inspiration from petsc4py doc, a bitbucket example and another one, >>> all top Google results for 'petsc hdf5') : >>> >> >>>> >>> >> >>> >>> >> >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, 'r', >>> COMM_WORLD) >>> >> >>>> >>> >> >>> q.load(viewer) >>> >> >>>> >>> >> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, >>> mode=PETSc.ScatterMode.FORWARD) >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> This crashes my code. 
I obtain traceback : >>> >> >>>> >>> >> >>> >>> >> >>>> >>> >> >>> File "/home/shared/code.py", line 121, in Load >>> >> >>>> >>> >> >>> viewer = PETSc.Viewer().createHDF5(file_name, >>> 'r', COMM_WORLD) >>> >> >>>> >>> >> >>> File "PETSc/Viewer.pyx", line 182, in >>> petsc4py.PETSc.Viewer.createHDF5 >>> >> >>>> >>> >> >>> petsc4py.PETSc.Error: error code 86 >>> >> >>>> >>> >> >>> [0] PetscViewerSetType() at >>> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >>> >> >>>> >>> >> >>> [0] Unknown type. Check for miss-spelling or missing >>> package: https://petsc.org/release/install/install/#external-packages >>> >> >>>> >>> >> >>> [0] Unknown PetscViewer type given: hdf5 >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > This means that PETSc has not been configured with HDF5 >>> (--with-hdf5 or --download-hdf5), so the container should be updated. >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > THanks, >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > Matt >>> >> >>>> >>> >> > >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> I have petsc4py 3.16 from this docker container (list >>> of dependencies include PETSc and petsc4py). >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> I'm pretty sure this is not intended behaviour. Any >>> insight as to how to fix this issue (I tried running ./configure >>> --with-hdf5 to no avail) or more generally to perform this jiggling between >>> real and complex would be much appreciated, >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> Kind regards. >>> >> >>>> >>> >> >> >>> >> >>>> >>> >> >> Quentin >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > -- >>> >> >>>> >>> >> > What most experimenters take for granted before they >>> begin their experiments is infinitely more interesting than any results to >>> which their experiments lead. >>> >> >>>> >>> >> > -- Norbert Wiener >>> >> >>>> >>> >> > >>> >> >>>> >>> >> > https://www.cse.buffalo.edu/~knepley/ >>> >> >>>> >>> > >>> >> >>>> >>> > >>> >> >>>> >>> > >>> >> >>>> >>> > -- >>> >> >>>> >>> > What most experimenters take for granted before they begin >>> their experiments is infinitely more interesting than any results to which >>> their experiments lead. >>> >> >>>> >>> > -- Norbert Wiener >>> >> >>>> >>> > >>> >> >>>> >>> > https://www.cse.buffalo.edu/~knepley/ >>> >> >>>> >> >>> >> >>>> >> >>> >> >>>> >> >>> >> >>>> >> -- >>> >> >>>> >> What most experimenters take for granted before they begin >>> their experiments is infinitely more interesting than any results to which >>> their experiments lead. >>> >> >>>> >> -- Norbert Wiener >>> >> >>>> >> >>> >> >>>> >> https://www.cse.buffalo.edu/~knepley/ >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>> >>> >> >>> >>> >> >>> >>> >> >>> -- >>> >> >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> >> >>> -- Norbert Wiener >>> >> >>> >>> >> >>> https://www.cse.buffalo.edu/~knepley/ >>> >> > >>> >> > >>> >> > >>> >> > -- >>> >> > What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> >> > -- Norbert Wiener >>> >> > >>> >> > https://www.cse.buffalo.edu/~knepley/ >>> > >>> > >>> > >>> > -- >>> > What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. 
>>> > -- Norbert Wiener >>> > >>> > https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Dec 8 12:31:40 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 8 Dec 2021 12:31:40 -0600 (CST) Subject: [petsc-users] install PETSc on windows In-Reply-To: References: <17158857-4047-ecc0-1331-645216114cbb@mcs.anl.gov> <84672efd-113b-2daf-85d1-e58d3812421d@mcs.anl.gov> <005201d7eb7e$a8c83a20$fa58ae60$@gmail.com> Message-ID: Do you get these errors with a petsc example? BTW: Best to send text [via copy/paste] - instead if images. Satish On Tue, 7 Dec 2021, Ning Li wrote: > I rebuilt HYPRE with cmake on my windows laptop. But this time I got more > linker problems. Could you have a look at these linker errors? Thanks > [image: image.png] > > > On Tue, Dec 7, 2021 at 10:28 AM Satish Balay wrote: > > > Yes - you need to build hypre with the same mpi, compilers (compiler > > options) as petsc. > > > > Satish > > > > On Tue, 7 Dec 2021, Ning Li wrote: > > > > > I tried to use this new PETSc in my application, and got this HYPRE > > related > > > error when I built a solution in visual studio. > > > [image: image.png] > > > I have installed the latest HYPRE on my laptop and linked it to my > > > application, but I disabled MPI option when I configured HYPRE . > > > Is this why this error occurred? > > > > > > On Tue, Dec 7, 2021 at 9:31 AM Satish Balay wrote: > > > > > > > Your build is with msmpi - but mpiexec from openmpi got used. > > > > > > > > You can try compiling and running examples manually [with the correct > > > > mpiexec] > > > > > > > > Satish > > > > > > > > On Tue, 7 Dec 2021, liyuansen89 at gmail.com wrote: > > > > > > > > > Hi Satish, > > > > > > > > > > I have another question. After I run the check command, I got the > > > > following > > > > > output (the attached file), have I successfully installed the > > library? Is > > > > > there any error? > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > > > > > -----Original Message----- > > > > > From: Satish Balay > > > > > Sent: Monday, December 6, 2021 6:59 PM > > > > > To: Ning Li > > > > > Cc: petsc-users > > > > > Subject: Re: [petsc-users] install PETSc on windows > > > > > > > > > > Glad it worked. Thanks for the update. > > > > > > > > > > Satish > > > > > > > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > > > > > > > Thanks for your reply. > > > > > > > > > > > > After I added '--with-shared-libraries=0', the configuration stage > > > > > > passed and now it is executing the 'make' command! 
> > > > > > > > > > > > Thanks very much > > > > > > > > > > > > On Mon, Dec 6, 2021 at 5:21 PM Satish Balay > > wrote: > > > > > > > > > > > > > >>> > > > > > > > Executing: /home/LiNin/petsc-3.16.1/lib/petsc/bin/win32fe/win32fe > > > > > > > ifort -o > > > > > > > > > > > > > > > > > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/c > > > > > onftest.exe > > > > > > > -MD -O3 -fpp > > > > > > > > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.lib > > > > > > > raries/conftest.o /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ > > > > > > > SDKs/MPI/Lib/x64/msmpi.lib /cygdrive/c/Program\ Files\ > > > > > > > \(x86\)/Microsoft\ SDKs/MPI/Lib/x64/msmpifec.lib Ws2_32.lib > > > > > > > stdout: LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts > > with > > > > > > > use of other libs; use /NODEFAULTLIB:library <<< > > > > > > > > > > > > > > I'm not sure why this link command is giving this error. Can you > > > > > > > retry with '--with-shared-libraries=0'? > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > > > > > > > > > > > Howdy, > > > > > > > > > > > > > > > > I am trying to install PETSc on windows with cygwin but got an > > mpi > > > > > error. > > > > > > > > Could you have a look at my issue and give me some > > instructions? > > > > > > > > > > > > > > > > Here is the information about my environment: > > > > > > > > 1. operation system: windows 11 > > > > > > > > 2. visual studio version: 2019 > > > > > > > > 3. intel one API toolkit is installed 4. Microsoft MS MPI is > > > > > > > > installed. > > > > > > > > 5. Intel MPI is uninstalled. > > > > > > > > 6. PETSc version: 3.16.1 > > > > > > > > > > > > > > > > this is my configuration: > > > > > > > > ./configure --with-cc='win32fe cl' --with-fc='win32fe ifort' > > > > > > > > --prefix=~/petsc-opt/ --PETSC_ARCH=windows-intel-opt > > > > > > > > --with-mpi-include=['/cygdrive/c/Program Files (x86)/Microsoft > > > > > > > > SDKs/MPI/Include','/cygdrive/c/Program Files (x86)/Microsoft > > > > > > > > SDKs/MPI/Include/x64'] --with-mpi-lib=['/cygdrive/c/Program > > Files > > > > > > > > (x86)/Microsoft > > SDKs/MPI/Lib/x64/msmpi.lib','/cygdrive/c/Program > > > > > > > > Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib'] > > > > > > > > --with-blas-lapack-lib=['/cygdrive/c/Program Files > > > > > > > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_intel_lp64.lib','/cy > > > > > > > gdrive/c/Program > > > > > > > > Files > > > > > > > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_core.lib','/cygdrive > > > > > > > /c/Program > > > > > > > > Files > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_sequential.lib'] > > > > > > > > --with-scalapack-include='/cygdrive/c/Program Files > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/include' > > > > > > > > --with-scalapack-lib=['/cygdrive/c/Program Files > > > > > > > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_scalapack_lp64.lib', > > > > > > > '/cygdrive/c/Program > > > > > > > > Files > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_blacs_msmpi_lp64.l > > > > > > > > ib'] > > > > > > > > --with-fortran-interfaces=1 --with-debugging=0 > > > > > > > > > > > > > > > > attached is the configure.log file. 
> > > > > > > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > Ning Li > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From liyuansen89 at gmail.com Wed Dec 8 14:16:14 2021 From: liyuansen89 at gmail.com (Ning Li) Date: Wed, 8 Dec 2021 14:16:14 -0600 Subject: [petsc-users] install PETSc on windows In-Reply-To: References: <17158857-4047-ecc0-1331-645216114cbb@mcs.anl.gov> <84672efd-113b-2daf-85d1-e58d3812421d@mcs.anl.gov> <005201d7eb7e$a8c83a20$fa58ae60$@gmail.com> Message-ID: No, I did not get these errors with petsc examples. Actually I do not know that we have petsc examples and how to run them. I got this error when I used PETSc in my own application. Now I have solved this problem. it is caused as I did not use the right path for "Additional include directories". Thanks On Wed, Dec 8, 2021 at 12:31 PM Satish Balay wrote: > Do you get these errors with a petsc example? > > BTW: Best to send text [via copy/paste] - instead if images. > > Satish > > On Tue, 7 Dec 2021, Ning Li wrote: > > > I rebuilt HYPRE with cmake on my windows laptop. But this time I got more > > linker problems. Could you have a look at these linker errors? Thanks > > [image: image.png] > > > > > > On Tue, Dec 7, 2021 at 10:28 AM Satish Balay wrote: > > > > > Yes - you need to build hypre with the same mpi, compilers (compiler > > > options) as petsc. > > > > > > Satish > > > > > > On Tue, 7 Dec 2021, Ning Li wrote: > > > > > > > I tried to use this new PETSc in my application, and got this HYPRE > > > related > > > > error when I built a solution in visual studio. > > > > [image: image.png] > > > > I have installed the latest HYPRE on my laptop and linked it to my > > > > application, but I disabled MPI option when I configured HYPRE . > > > > Is this why this error occurred? > > > > > > > > On Tue, Dec 7, 2021 at 9:31 AM Satish Balay > wrote: > > > > > > > > > Your build is with msmpi - but mpiexec from openmpi got used. > > > > > > > > > > You can try compiling and running examples manually [with the > correct > > > > > mpiexec] > > > > > > > > > > Satish > > > > > > > > > > On Tue, 7 Dec 2021, liyuansen89 at gmail.com wrote: > > > > > > > > > > > Hi Satish, > > > > > > > > > > > > I have another question. After I run the check command, I got the > > > > > following > > > > > > output (the attached file), have I successfully installed the > > > library? Is > > > > > > there any error? > > > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > > > > > > > > > -----Original Message----- > > > > > > From: Satish Balay > > > > > > Sent: Monday, December 6, 2021 6:59 PM > > > > > > To: Ning Li > > > > > > Cc: petsc-users > > > > > > Subject: Re: [petsc-users] install PETSc on windows > > > > > > > > > > > > Glad it worked. Thanks for the update. > > > > > > > > > > > > Satish > > > > > > > > > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > > > > > > > > > Thanks for your reply. > > > > > > > > > > > > > > After I added '--with-shared-libraries=0', the configuration > stage > > > > > > > passed and now it is executing the 'make' command! 
> > > > > > > > > > > > > > Thanks very much > > > > > > > > > > > > > > On Mon, Dec 6, 2021 at 5:21 PM Satish Balay > > > > wrote: > > > > > > > > > > > > > > > >>> > > > > > > > > Executing: > /home/LiNin/petsc-3.16.1/lib/petsc/bin/win32fe/win32fe > > > > > > > > ifort -o > > > > > > > > > > > > > > > > > > > > > > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.libraries/c > > > > > > onftest.exe > > > > > > > > -MD -O3 -fpp > > > > > > > > > > > /cygdrive/c/Users/LiNin/AppData/Local/Temp/petsc-5ly209eh/config.lib > > > > > > > > raries/conftest.o /cygdrive/c/Program\ Files\ > \(x86\)/Microsoft\ > > > > > > > > SDKs/MPI/Lib/x64/msmpi.lib /cygdrive/c/Program\ Files\ > > > > > > > > \(x86\)/Microsoft\ SDKs/MPI/Lib/x64/msmpifec.lib Ws2_32.lib > > > > > > > > stdout: LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts > > > with > > > > > > > > use of other libs; use /NODEFAULTLIB:library <<< > > > > > > > > > > > > > > > > I'm not sure why this link command is giving this error. Can > you > > > > > > > > retry with '--with-shared-libraries=0'? > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > > > On Mon, 6 Dec 2021, Ning Li wrote: > > > > > > > > > > > > > > > > > Howdy, > > > > > > > > > > > > > > > > > > I am trying to install PETSc on windows with cygwin but > got an > > > mpi > > > > > > error. > > > > > > > > > Could you have a look at my issue and give me some > > > instructions? > > > > > > > > > > > > > > > > > > Here is the information about my environment: > > > > > > > > > 1. operation system: windows 11 > > > > > > > > > 2. visual studio version: 2019 > > > > > > > > > 3. intel one API toolkit is installed 4. Microsoft MS MPI > is > > > > > > > > > installed. > > > > > > > > > 5. Intel MPI is uninstalled. > > > > > > > > > 6. 
PETSc version: 3.16.1 > > > > > > > > > > > > > > > > > > this is my configuration: > > > > > > > > > ./configure --with-cc='win32fe cl' --with-fc='win32fe > ifort' > > > > > > > > > --prefix=~/petsc-opt/ --PETSC_ARCH=windows-intel-opt > > > > > > > > > --with-mpi-include=['/cygdrive/c/Program Files > (x86)/Microsoft > > > > > > > > > SDKs/MPI/Include','/cygdrive/c/Program Files > (x86)/Microsoft > > > > > > > > > SDKs/MPI/Include/x64'] --with-mpi-lib=['/cygdrive/c/Program > > > Files > > > > > > > > > (x86)/Microsoft > > > SDKs/MPI/Lib/x64/msmpi.lib','/cygdrive/c/Program > > > > > > > > > Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib'] > > > > > > > > > --with-blas-lapack-lib=['/cygdrive/c/Program Files > > > > > > > > > > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_intel_lp64.lib','/cy > > > > > > > > gdrive/c/Program > > > > > > > > > Files > > > > > > > > > > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_core.lib','/cygdrive > > > > > > > > /c/Program > > > > > > > > > Files > > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_sequential.lib'] > > > > > > > > > --with-scalapack-include='/cygdrive/c/Program Files > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/include' > > > > > > > > > --with-scalapack-lib=['/cygdrive/c/Program Files > > > > > > > > > > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_scalapack_lp64.lib', > > > > > > > > '/cygdrive/c/Program > > > > > > > > > Files > > > > > > > > > > > > (x86)/Intel/oneAPI/mkl/2021.3.0/lib/intel64/mkl_blacs_msmpi_lp64.l > > > > > > > > > ib'] > > > > > > > > > --with-fortran-interfaces=1 --with-debugging=0 > > > > > > > > > > > > > > > > > > attached is the configure.log file. > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > > > Ning Li > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Vladislav.Pimanov at skoltech.ru Thu Dec 9 17:57:33 2021 From: Vladislav.Pimanov at skoltech.ru (Vladislav Pimanov) Date: Thu, 9 Dec 2021 23:57:33 +0000 Subject: [petsc-users] SLEPc error "The inner product is not well defined: indefinite matrix" when solving generalized EVP with positive semidefinite B Message-ID: Dear PETSc community! Could you possibly help me with SLEPc generalized eigenproblem issue. I am trying to solve Ax = \lambda Bx where A and B are positive semi-definite matrices, both have only constant vector in nullspace. To be precise, A is the Schur complement of the Stokes matrix and B is the Laplacian matrix (classical SIMPLE preconditioner). I specified EPS_GHEP problem type and tried the Krylov-Schur and Lanczos methods. However, they resulted in error "The inner product is not well defined: indefinite matrix". I used EPSSetDeflationSpace() to specify the constant nullspace following slepc manual, and the result were correct for problems Ax = \lambda x and Bx = \lambda x separately. I also tried EPS_GHIEP problem type, but got strange results. P.S full error message: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: The inner product is not well defined: indefinite matrix [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.13.4, Aug 01, 2020 [0]PETSC ERROR: ./scopes_slepc on a linux-gnu-real-32 named 608add14733f by hydra Thu Dec 9 23:38:43 2021 [0]PETSC ERROR: Configure options PETSC_ARCH=linux-gnu-real-32 --COPTFLAGS=-O3 -march=native -mtune=native --CXXOPTFLAGS=-O3 -march=native -mtune=native --FOPTFLAGS=-O3 -march=native -mtune=native --with-debugging=0 --with-fortran-bindings=0 --with-shared-libraries --download-hdf5 --download-blacs --download-hypre --download-metis --download-parmetis --download-mumps --download-ptscotch --download-scalapack --download-spai --download-suitesparse --with-scalar-type=real [0]PETSC ERROR: #1 BV_SafeSqrt() line 128 in /usr/local/slepc/include/slepc/private/bvimpl.h [0]PETSC ERROR: #2 BVNorm_Private() line 473 in /usr/local/slepc/src/sys/classes/bv/interface/bvglobal.c [0]PETSC ERROR: #3 BVNormVec() line 586 in /usr/local/slepc/src/sys/classes/bv/interface/bvglobal.c [0]PETSC ERROR: #4 BVOrthogonalizeMGS1() line 65 in /usr/local/slepc/src/sys/classes/bv/interface/bvorthog.c [0]PETSC ERROR: #5 BVOrthogonalizeGS() line 188 in /usr/local/slepc/src/sys/classes/bv/interface/bvorthog.c [0]PETSC ERROR: #6 BVOrthogonalizeColumn() line 348 in /usr/local/slepc/src/sys/classes/bv/interface/bvorthog.c [0]PETSC ERROR: #7 BVInsertVecs() line 363 in /usr/local/slepc/src/sys/classes/bv/interface/bvfunc.c [0]PETSC ERROR: #8 BVInsertConstraints() line 434 in /usr/local/slepc/src/sys/classes/bv/interface/bvfunc.c [0]PETSC ERROR: #9 EPSSetUp() line 308 in /usr/local/slepc/src/eps/interface/epssetup.c [0]PETSC ERROR: #10 EPSSolve() line 136 in /usr/local/slepc/src/eps/interface/epssolve.c [0]PETSC ERROR: #11 computeCond() line 1634 in scopes_slepc.cpp Thank you in advance! Sincerely, Vladislav Pimanov -------------- next part -------------- An HTML attachment was scrubbed... URL: From amartin at cimne.upc.edu Thu Dec 9 18:52:56 2021 From: amartin at cimne.upc.edu (=?UTF-8?Q?Alberto_F=2e_Mart=c3=adn?=) Date: Fri, 10 Dec 2021 11:52:56 +1100 Subject: [petsc-users] Internal memory MUMPS during numerical factorization Message-ID: Dear PETSc users, I am experiencing the following error during numerical factorization while using the MUMPS package. It seems an internal memory error in the DMUMPS_ALLOC_CB function. Has anyone experienced this before? Any clue how this can be solved? Thanks! Best regards, ?Alberto. Entering DMUMPS 5.3.5 from C interface with JOB, N =?? 1???? 2359298 ????? executing #MPI =????? 2 and #OMP =???? 24 ?================================================= ?MUMPS compiled with option -DGEMMT_AVAILABLE ?MUMPS compiled with option -Dmetis ?MUMPS compiled with option -Dparmetis ?MUMPS compiled with option -Dptscotch ?MUMPS compiled with option -Dscotch ?MUMPS compiled with option -DBLR_MT ?This MUMPS version includes code for SAVE_RESTORE ?This MUMPS version includes code for DIST_RHS ?================================================= L U Solver for unsymmetric matrices Type of parallelism: Working host ?****** ANALYSIS STEP ******** ?** Maximum transversal (ICNTL(6)) not allowed because matrix is distributed Using ParMETIS for parallel ordering. Structural symmetry is: 92% ?A root of estimated size???????? 3480? has been selected for Scalapack. Leaving analysis phase with? ... ?INFOG(1)?????????????????????????????????????? = 0 ?INFOG(2)?????????????????????????????????????? = 0 ?-- (20) Number of entries in factors (estim.)? = 548486696 ?--? (3) Real space for factors??? (estimated)? = 548486696 ?--? 
(4) Integer space for factors (estimated)? = 23032397 ?--? (5) Maximum frontal size????? (estimated)? = 4991 ?--? (6) Number of nodes in the tree??????????? = 173314 ?-- (32) Type of analysis effectively used????? = 2 ?--? (7) Ordering option effectively used?????? = 2 ?ICNTL(6) Maximum transversal option??????????? = 0 ?ICNTL(7) Pivot order option??????????????????? = 7 ?ICNTL(14) Percentage of memory relaxation????? = 50000 ?Number of level 2 nodes??????????????????????? = 2 ?Number of split nodes????????????????????????? = 0 ?RINFOG(1) Operations during elimination (estim)= 5.292D+11 ?Distributed matrix entry format (ICNTL(18))??? = 3 ?MEMORY ESTIMATIONS ... ?Estimations with standard Full-Rank (FR) factorization: ??? Maximum estim. space in Mbytes, IC facto. (INFOG(16)):???? 1246959 ??? Total space in MBytes, IC factorization (INFOG(17)):???? 2416245 ??? Maximum estim. space in Mbytes, OOC facto. (INFOG(26)):????? 190319 ??? Total space in MBytes,? OOC factorization (INFOG(27)):????? 379897 ?Elapsed time in analysis driver=????? 30.7962 Entering DMUMPS 5.3.5 from C interface with JOB, N =?? 2 2359298 ????? executing #MPI =????? 2 and #OMP =???? 24 ****** FACTORIZATION STEP ******** > > ?GLOBAL STATISTICS PRIOR NUMERICAL FACTORIZATION ... > ?Number of working processes??????????????? =?????????????? 2 > ?ICNTL(22) Out-of-core option?????????????? =?????????????? 0 > ?ICNTL(35) BLR activation (eff. choice)???? =?????????????? 0 > ?ICNTL(14) Memory relaxation??????????????? =?????????? 50000 > ?INFOG(3) Real space for factors (estimated)=?????? 548486696 > ?INFOG(4) Integer space for factors (estim.)=??????? 23032397 > ?Maximum frontal size (estimated)?????????? =??????????? 4991 > ?Number of nodes in the tree??????????????? =????????? 173314 > ?Memory allowed (MB -- 0: N/A )???????????? =?????????????? 0 > ?Memory provided by user, sum of LWK_USER?? =?????????????? 0 > ?Relative threshold for pivoting, CNTL(1)?? =????? 0.1000D-01 > ?Max difference from 1 after scaling the entries for ONE-NORM (option > 7/8)?? = 0.95D+00 > ?Average Effective size of S???? (based on INFO(39))= 150930257964 > > ?Redistrib: total data local/sent?????????? = 37911993??????? 60007025 > > ?Elapsed time to reformat/distribute matrix =????? 2.3306 > ?Problem with integer stack size?????????? 1 1????????? 13 > ?Internal error in DMUMPS_ALLOC_CB? T????????? 15 6090000 -- Alberto F. Mart?n-Huertas Senior Researcher, PhD. Computational Science Centre Internacional de M?todes Num?rics a l'Enginyeria (CIMNE) Parc Mediterrani de la Tecnologia, UPC Esteve Terradas 5, Building C3, Office 215, 08860 Castelldefels (Barcelona, Spain) Tel.: (+34) 9341 34223 e-mail:amartin at cimne.upc.edu FEMPAR project co-founder web: http://www.fempar.org ********************** IMPORTANT ANNOUNCEMENT The information contained in this message and / or attached file (s), sent from CENTRO INTERNACIONAL DE METODES NUMERICS EN ENGINYERIA-CIMNE, is confidential / privileged and is intended to be read only by the person (s) to the one (s) that is directed. Your data has been incorporated into the treatment system of CENTRO INTERNACIONAL DE METODES NUMERICS EN ENGINYERIA-CIMNE by virtue of its status as client, user of the website, provider and / or collaborator in order to contact you and send you information that may be of your interest and resolve your queries. 
You can exercise your rights of access, rectification, limitation of treatment, deletion, and opposition / revocation, in the terms established by the current regulations on data protection, directing your request to the postal address C / Gran Capit?, s / n Building C1 - 2nd Floor - Office C15 -Campus Nord - UPC 08034 Barcelona or via email to dpo at cimne.upc.edu If you read this message and it is not the designated recipient, or you have received this communication in error, we inform you that it is totally prohibited, and may be illegal, any disclosure, distribution or reproduction of this communication, and please notify us immediately. and return the original message to the address mentioned above. From knepley at gmail.com Thu Dec 9 19:53:15 2021 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 9 Dec 2021 20:53:15 -0500 Subject: [petsc-users] Internal memory MUMPS during numerical factorization In-Reply-To: References: Message-ID: On Thu, Dec 9, 2021 at 7:53 PM Alberto F. Mart?n wrote: > Dear PETSc users, > > I am experiencing the following error during numerical factorization > while using the MUMPS package. > > It seems an internal memory error in the DMUMPS_ALLOC_CB function. Has > anyone experienced this before? Any clue how this can be solved? > I have not seen that before. It looks like an allocation failed for MUMPS, but it is not a nice error, and does not succumb to Google. I would mail the MUMPS developers for this one. Thanks, Matt > Thanks! > > Best regards, > > Alberto. > > Entering DMUMPS 5.3.5 from C interface with JOB, N = 1 2359298 > executing #MPI = 2 and #OMP = 24 > > ================================================= > MUMPS compiled with option -DGEMMT_AVAILABLE > MUMPS compiled with option -Dmetis > MUMPS compiled with option -Dparmetis > MUMPS compiled with option -Dptscotch > MUMPS compiled with option -Dscotch > MUMPS compiled with option -DBLR_MT > This MUMPS version includes code for SAVE_RESTORE > This MUMPS version includes code for DIST_RHS > ================================================= > L U Solver for unsymmetric matrices > Type of parallelism: Working host > > ****** ANALYSIS STEP ******** > > ** Maximum transversal (ICNTL(6)) not allowed because matrix is > distributed > Using ParMETIS for parallel ordering. > Structural symmetry is: 92% > A root of estimated size 3480 has been selected for Scalapack. > > Leaving analysis phase with ... > INFOG(1) = 0 > INFOG(2) = 0 > -- (20) Number of entries in factors (estim.) = 548486696 > -- (3) Real space for factors (estimated) = 548486696 > -- (4) Integer space for factors (estimated) = 23032397 > -- (5) Maximum frontal size (estimated) = 4991 > -- (6) Number of nodes in the tree = 173314 > -- (32) Type of analysis effectively used = 2 > -- (7) Ordering option effectively used = 2 > ICNTL(6) Maximum transversal option = 0 > ICNTL(7) Pivot order option = 7 > ICNTL(14) Percentage of memory relaxation = 50000 > Number of level 2 nodes = 2 > Number of split nodes = 0 > RINFOG(1) Operations during elimination (estim)= 5.292D+11 > Distributed matrix entry format (ICNTL(18)) = 3 > > MEMORY ESTIMATIONS ... > Estimations with standard Full-Rank (FR) factorization: > Maximum estim. space in Mbytes, IC facto. (INFOG(16)): 1246959 > Total space in MBytes, IC factorization (INFOG(17)): 2416245 > Maximum estim. space in Mbytes, OOC facto. 
(INFOG(26)): 190319 > Total space in MBytes, OOC factorization (INFOG(27)): 379897 > > Elapsed time in analysis driver= 30.7962 > > Entering DMUMPS 5.3.5 from C interface with JOB, N = 2 2359298 > executing #MPI = 2 and #OMP = 24 > > ****** FACTORIZATION STEP ******** > > > > > GLOBAL STATISTICS PRIOR NUMERICAL FACTORIZATION ... > > Number of working processes = 2 > > ICNTL(22) Out-of-core option = 0 > > ICNTL(35) BLR activation (eff. choice) = 0 > > ICNTL(14) Memory relaxation = 50000 > > INFOG(3) Real space for factors (estimated)= 548486696 > > INFOG(4) Integer space for factors (estim.)= 23032397 > > Maximum frontal size (estimated) = 4991 > > Number of nodes in the tree = 173314 > > Memory allowed (MB -- 0: N/A ) = 0 > > Memory provided by user, sum of LWK_USER = 0 > > Relative threshold for pivoting, CNTL(1) = 0.1000D-01 > > Max difference from 1 after scaling the entries for ONE-NORM (option > > 7/8) = 0.95D+00 > > Average Effective size of S (based on INFO(39))= 150930257964 > > > > Redistrib: total data local/sent = 37911993 60007025 > > > > Elapsed time to reformat/distribute matrix = 2.3306 > > Problem with integer stack size 1 1 13 > > Internal error in DMUMPS_ALLOC_CB T 15 6090000 > > > > > -- > Alberto F. Mart?n-Huertas > Senior Researcher, PhD. Computational Science > Centre Internacional de M?todes Num?rics a l'Enginyeria (CIMNE) > Parc Mediterrani de la Tecnologia, UPC > Esteve Terradas 5, Building C3, Office 215, > 08860 Castelldefels (Barcelona, Spain) > Tel.: (+34) 9341 34223 > e-mail:amartin at cimne.upc.edu > > FEMPAR project co-founder > web: http://www.fempar.org > > ********************** > IMPORTANT ANNOUNCEMENT > > The information contained in this message and / or attached file (s), sent > from CENTRO INTERNACIONAL DE METODES NUMERICS EN ENGINYERIA-CIMNE, > is confidential / privileged and is intended to be read only by the person > (s) to the one (s) that is directed. Your data has been incorporated > into the treatment system of CENTRO INTERNACIONAL DE METODES NUMERICS EN > ENGINYERIA-CIMNE by virtue of its status as client, user of the website, > provider and / or collaborator in order to contact you and send you > information that may be of your interest and resolve your queries. > You can exercise your rights of access, rectification, limitation of > treatment, deletion, and opposition / revocation, in the terms established > by the current regulations on data protection, directing your request to > the postal address C / Gran Capit?, s / n Building C1 - 2nd Floor - > Office C15 -Campus Nord - UPC 08034 Barcelona or via email to > dpo at cimne.upc.edu > > If you read this message and it is not the designated recipient, or you > have received this communication in error, we inform you that it is > totally prohibited, and may be illegal, any disclosure, distribution or > reproduction of this communication, and please notify us immediately. > and return the original message to the address mentioned above. > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From quentin.chevalier at polytechnique.edu Fri Dec 10 03:02:32 2021 From: quentin.chevalier at polytechnique.edu (Quentin Chevalier) Date: Fri, 10 Dec 2021 10:02:32 +0100 Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ? 
In-Reply-To: References: Message-ID: Ok Matthew, I'll list my steps : *> sudo docker run -itv myfolder:/home/shared -w /home/shared/ --rm dolfinx/dolfinxroot at container:/home/shared# echo $PETSCH_ARCH $PETSCH_DIR* linux-gnu-real-32 /usr/local/petsc *root at container:/home/shared# cd /usr/local/petsc* *root at container:/usr/local/petsc# ./configure --with-hdf5 --with-petsc4py --force* +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The version of PETSc you are using is out-of-date, we recommend updating to the new release Available Version: 3.16.2 Installed Version: 3.16 https://petsc.org/release/download/ +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ============================================================================================= Configuring PETSc to compile on your system ============================================================================================= TESTING: configureLibrary from config.packages.petsc4py(config/BuildSystem/config/packages/petsc4py.py:116) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- PETSc4py requires Python with "cython" module(s) installed! Please install using package managers - for ex: "apt" or "dnf" (on linux), or with "pip" using: /usr/bin/python3 -m pip install cython ******************************************************************************* *root at container:/usr/local/petsc# pip install cython* Collecting cython Downloading Cython-0.29.25-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (1.9 MB) |????????????????????????????????| 1.9 MB 10.8 MB/s Installing collected packages: cython Successfully installed cython-0.29.25 *root at container:/usr/local/petsc# ./configure --with-hdf5 --with-petsc4py --force* ============================================================================================= Configuring PETSc to compile on your system ============================================================================================= Compilers: C Compiler: mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-misleading-indentation -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -g3 -O0 Version: gcc (Ubuntu 11.2.0-7ubuntu2) 11.2.0 C++ Compiler: mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O0 -fPIC -std=gnu++17 Version: g++ (Ubuntu 11.2.0-7ubuntu2) 11.2.0 Fortran Compiler: mpif90 -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O0 Version: GNU Fortran (Ubuntu 11.2.0-7ubuntu2) 11.2.0 Linkers: Shared linker: mpicc -shared -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-misleading-indentation -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -g3 -O0 Dynamic linker: mpicc -shared -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-misleading-indentation -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -g3 -O0 Libraries linked against: -lquadmath -lstdc++ -ldl BlasLapack: Library: -llapack -lblas Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> yourprogram -log_view) uses 4 byte integers MPI: Version: 3 mpiexec: mpiexec Implementation: mpich3 MPICH_NUMVERSION: 30402300 X: Library: -lX11 pthread: hdf5: Version: 1.12.1 Library: 
-lhdf5_hl -lhdf5 cmake: Version: 3.18.4 /usr/bin/cmake regex: Language used to compile PETSc: C PETSc: PETSC_ARCH: linux-gnu-real-32 PETSC_DIR: /usr/local/petsc Prefix: Scalar type: real Precision: double Support for __float128 Integer size: 4 bytes Single library: yes Shared libraries: yes Memory alignment from malloc(): 16 bytes Using GNU make: /usr/bin/gmake xxx=========================================================================xxx Configure stage complete. Now build PETSc libraries with: make PETSC_DIR=/usr/local/petsc PETSC_ARCH=linux-gnu-real-32 all xxx=========================================================================xxx *root at 17fbe5936a5d:/usr/local/petsc# make all* /usr/bin/python3 ./config/gmakegen.py --petsc-arch=linux-gnu-real-32 /usr/bin/python3 /usr/local/petsc/config/gmakegentest.py --petsc-dir=/usr/local/petsc --petsc-arch=linux-gnu-real-32 --testdir=./linux-gnu-real-32/tests ========================================== See documentation/faq.html and documentation/bugreporting.html for help with installation problems. Please send EVERYTHING printed out below when reporting problems. Please check the mailing list archives and consider subscribing. https://petsc.org/release/community/mailing/ ========================================== Starting make run on 17fbe5936a5d at Fri, 10 Dec 2021 08:28:10 +0000 Machine characteristics: Linux 17fbe5936a5d 5.3.18d01-lp152.63-default #1 SMP PREEMPT Fri Feb 5 19:19:17 CET 2021 x86_64 x86_64 x86_64 GNU/Linux ----------------------------------------- Using PETSc directory: /usr/local/petsc Using PETSc arch: linux-gnu-real-32 ----------------------------------------- PETSC_VERSION_RELEASE 1 PETSC_VERSION_MAJOR 3 PETSC_VERSION_MINOR 16 PETSC_VERSION_SUBMINOR 0 PETSC_VERSION_PATCH 0 PETSC_VERSION_DATE "Sep 29, 2021" PETSC_VERSION_GIT "v3.16.0" PETSC_VERSION_DATE_GIT "2021-09-29 18:30:02 -0500" PETSC_VERSION_EQ(MAJOR,MINOR,SUBMINOR) \ PETSC_VERSION_ PETSC_VERSION_EQ PETSC_VERSION_LT(MAJOR,MINOR,SUBMINOR) \ PETSC_VERSION_LE(MAJOR,MINOR,SUBMINOR) \ PETSC_VERSION_GT(MAJOR,MINOR,SUBMINOR) \ PETSC_VERSION_GE(MAJOR,MINOR,SUBMINOR) \ ----------------------------------------- Using configure Options: --with-hdf5 --with-petsc4py --force Using configuration flags: #define INCLUDED_PETSCCONF_H #define PETSC_ARCH "linux-gnu-real-32" #define PETSC_ATTRIBUTEALIGNED(size) __attribute((aligned(size))) #define PETSC_Alignx(a,b) #define PETSC_BLASLAPACK_UNDERSCORE 1 #define PETSC_CLANGUAGE_C 1 #define PETSC_CXX_INLINE inline #define PETSC_CXX_RESTRICT __restrict #define PETSC_C_INLINE inline #define PETSC_C_RESTRICT __restrict #define PETSC_DEPRECATED_ENUM(why) __attribute((deprecated)) #define PETSC_DEPRECATED_FUNCTION(why) __attribute((deprecated)) #define PETSC_DEPRECATED_MACRO(why) _Pragma(why) #define PETSC_DEPRECATED_TYPEDEF(why) __attribute((deprecated)) #define PETSC_DIR "/usr/local/petsc" #define PETSC_DIR_SEPARATOR '/' #define PETSC_FORTRAN_CHARLEN_T size_t #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 #define PETSC_FUNCTION_NAME_C __func__ #define PETSC_FUNCTION_NAME_CXX __func__ #define PETSC_HAVE_ACCESS 1 #define PETSC_HAVE_ATOLL 1 #define PETSC_HAVE_ATTRIBUTEALIGNED 1 #define PETSC_HAVE_BUILTIN_EXPECT 1 #define PETSC_HAVE_BZERO 1 #define PETSC_HAVE_C99_COMPLEX 1 #define PETSC_HAVE_CLOCK 1 #define PETSC_HAVE_CXX 1 #define PETSC_HAVE_CXX_COMPLEX 1 #define PETSC_HAVE_CXX_COMPLEX_FIX 1 #define PETSC_HAVE_CXX_DIALECT_CXX03 1 #define PETSC_HAVE_CXX_DIALECT_CXX11 1 #define PETSC_HAVE_CXX_DIALECT_CXX14 1 #define PETSC_HAVE_CXX_DIALECT_CXX17 1 
#define PETSC_HAVE_DLADDR 1 #define PETSC_HAVE_DLCLOSE 1 #define PETSC_HAVE_DLERROR 1 #define PETSC_HAVE_DLFCN_H 1 #define PETSC_HAVE_DLOPEN 1 #define PETSC_HAVE_DLSYM 1 #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 #define PETSC_HAVE_DRAND48 1 #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 #define PETSC_HAVE_ERF 1 #define PETSC_HAVE_FCNTL_H 1 #define PETSC_HAVE_FENV_H 1 #define PETSC_HAVE_FLOAT_H 1 #define PETSC_HAVE_FORK 1 #define PETSC_HAVE_FORTRAN 1 #define PETSC_HAVE_FORTRAN_FLUSH 1 #define PETSC_HAVE_FORTRAN_GET_COMMAND_ARGUMENT 1 #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 #define PETSC_HAVE_FORTRAN_UNDERSCORE 1 #define PETSC_HAVE_GETCWD 1 #define PETSC_HAVE_GETDOMAINNAME 1 #define PETSC_HAVE_GETHOSTBYNAME 1 #define PETSC_HAVE_GETHOSTNAME 1 #define PETSC_HAVE_GETPAGESIZE 1 #define PETSC_HAVE_GETRUSAGE 1 #define PETSC_HAVE_HDF5 1 #define PETSC_HAVE_IMMINTRIN_H 1 #define PETSC_HAVE_INTTYPES_H 1 #define PETSC_HAVE_ISINF 1 #define PETSC_HAVE_ISNAN 1 #define PETSC_HAVE_ISNORMAL 1 #define PETSC_HAVE_LGAMMA 1 #define PETSC_HAVE_LOG2 1 #define PETSC_HAVE_LSEEK 1 #define PETSC_HAVE_MALLOC_H 1 #define PETSC_HAVE_MEMALIGN 1 #define PETSC_HAVE_MEMMOVE 1 #define PETSC_HAVE_MMAP 1 #define PETSC_HAVE_MPICH_NUMVERSION 30402300 #define PETSC_HAVE_MPIEXEC_ENVIRONMENTAL_VARIABLE MPIR_CVAR_CH3 #define PETSC_HAVE_MPIIO 1 #define PETSC_HAVE_MPI_COMBINER_CONTIGUOUS 1 #define PETSC_HAVE_MPI_COMBINER_DUP 1 #define PETSC_HAVE_MPI_COMBINER_NAMED 1 #define PETSC_HAVE_MPI_EXSCAN 1 #define PETSC_HAVE_MPI_F90MODULE 1 #define PETSC_HAVE_MPI_F90MODULE_VISIBILITY 1 #define PETSC_HAVE_MPI_FEATURE_DYNAMIC_WINDOW 1 #define PETSC_HAVE_MPI_FINALIZED 1 #define PETSC_HAVE_MPI_GET_ACCUMULATE 1 #define PETSC_HAVE_MPI_GET_LIBRARY_VERSION 1 #define PETSC_HAVE_MPI_IALLREDUCE 1 #define PETSC_HAVE_MPI_IBARRIER 1 #define PETSC_HAVE_MPI_INIT_THREAD 1 #define PETSC_HAVE_MPI_INT64_T 1 #define PETSC_HAVE_MPI_IN_PLACE 1 #define PETSC_HAVE_MPI_LONG_DOUBLE 1 #define PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES 1 #define PETSC_HAVE_MPI_NONBLOCKING_COLLECTIVES 1 #define PETSC_HAVE_MPI_ONE_SIDED 1 #define PETSC_HAVE_MPI_PROCESS_SHARED_MEMORY 1 #define PETSC_HAVE_MPI_REDUCE_LOCAL 1 #define PETSC_HAVE_MPI_REDUCE_SCATTER 1 #define PETSC_HAVE_MPI_REDUCE_SCATTER_BLOCK 1 #define PETSC_HAVE_MPI_RGET 1 #define PETSC_HAVE_MPI_TYPE_DUP 1 #define PETSC_HAVE_MPI_TYPE_GET_ENVELOPE 1 #define PETSC_HAVE_MPI_WIN_CREATE 1 #define PETSC_HAVE_NANOSLEEP 1 #define PETSC_HAVE_NETDB_H 1 #define PETSC_HAVE_NETINET_IN_H 1 #define PETSC_HAVE_PACKAGES ":blaslapack:hdf5:mathlib:mpi:pthread:regex:x11:" #define PETSC_HAVE_PETSC4PY 1 #define PETSC_HAVE_POPEN 1 #define PETSC_HAVE_PTHREAD 1 #define PETSC_HAVE_PTHREAD_BARRIER_T 1 #define PETSC_HAVE_PTHREAD_H 1 #define PETSC_HAVE_PWD_H 1 #define PETSC_HAVE_RAND 1 #define PETSC_HAVE_READLINK 1 #define PETSC_HAVE_REALPATH 1 #define PETSC_HAVE_REAL___FLOAT128 1 #define PETSC_HAVE_REGEX 1 #define PETSC_HAVE_RTLD_GLOBAL 1 #define PETSC_HAVE_RTLD_LAZY 1 #define PETSC_HAVE_RTLD_LOCAL 1 #define PETSC_HAVE_RTLD_NOW 1 #define PETSC_HAVE_SCHED_CPU_SET_T 1 #define PETSC_HAVE_SETJMP_H 1 #define PETSC_HAVE_SLEEP 1 #define PETSC_HAVE_SNPRINTF 1 #define PETSC_HAVE_SOCKET 1 #define PETSC_HAVE_SO_REUSEADDR 1 #define PETSC_HAVE_STDINT_H 1 #define PETSC_HAVE_STRCASECMP 1 #define PETSC_HAVE_STRINGS_H 1 #define PETSC_HAVE_STRUCT_SIGACTION 1 #define PETSC_HAVE_SYSINFO 1 #define PETSC_HAVE_SYS_PARAM_H 1 #define PETSC_HAVE_SYS_PROCFS_H 1 #define PETSC_HAVE_SYS_RESOURCE_H 1 #define PETSC_HAVE_SYS_SOCKET_H 1 #define PETSC_HAVE_SYS_SYSINFO_H 1 #define PETSC_HAVE_SYS_TIMES_H 
1 #define PETSC_HAVE_SYS_TIME_H 1 #define PETSC_HAVE_SYS_TYPES_H 1 #define PETSC_HAVE_SYS_UTSNAME_H 1 #define PETSC_HAVE_SYS_WAIT_H 1 #define PETSC_HAVE_TGAMMA 1 #define PETSC_HAVE_TIME 1 #define PETSC_HAVE_TIME_H 1 #define PETSC_HAVE_UNAME 1 #define PETSC_HAVE_UNISTD_H 1 #define PETSC_HAVE_USLEEP 1 #define PETSC_HAVE_VA_COPY 1 #define PETSC_HAVE_VSNPRINTF 1 #define PETSC_HAVE_X 1 #define PETSC_HAVE_XMMINTRIN_H 1 #define PETSC_HDF5_HAVE_PARALLEL 1 #define PETSC_HDF5_HAVE_ZLIB 1 #define PETSC_IS_COLORING_MAX USHRT_MAX #define PETSC_IS_COLORING_VALUE_TYPE short #define PETSC_IS_COLORING_VALUE_TYPE_F integer2 #define PETSC_LEVEL1_DCACHE_LINESIZE 64 #define PETSC_LIB_DIR "/usr/local/petsc/linux-gnu-real-32/lib" #define PETSC_MAX_PATH_LEN 4096 #define PETSC_MEMALIGN 16 #define PETSC_MPICC_SHOW "gcc -I/usr/local/include -L/usr/local/lib -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -lmpi" #define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT #define PETSC_PETSC4PY_INSTALL_PATH "/usr/local/petsc/linux-gnu-real-32/lib" #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 #define PETSC_PYTHON_EXE "/usr/bin/python3" #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) #define PETSC_REPLACE_DIR_SEPARATOR '\\' #define PETSC_SIGNAL_CAST #define PETSC_SIZEOF_ENUM 4 #define PETSC_SIZEOF_INT 4 #define PETSC_SIZEOF_LONG 8 #define PETSC_SIZEOF_LONG_LONG 8 #define PETSC_SIZEOF_SHORT 2 #define PETSC_SIZEOF_SIZE_T 8 #define PETSC_SIZEOF_VOID_P 8 #define PETSC_SLSUFFIX "so" #define PETSC_UINTPTR_T uintptr_t #define PETSC_UNUSED __attribute((unused)) #define PETSC_USE_AVX512_KERNELS 1 #define PETSC_USE_BACKWARD_LOOP 1 #define PETSC_USE_CTABLE 1 #define PETSC_USE_DEBUG 1 #define PETSC_USE_DEBUGGER "gdb" #define PETSC_USE_INFO 1 #define PETSC_USE_ISATTY 1 #define PETSC_USE_LOG 1 #define PETSC_USE_PROC_FOR_SIZE 1 #define PETSC_USE_REAL_DOUBLE 1 #define PETSC_USE_SHARED_LIBRARIES 1 #define PETSC_USE_SINGLE_LIBRARY 1 #define PETSC_USE_SOCKET_VIEWER 1 #define PETSC_USE_VISIBILITY_C 1 #define PETSC_USE_VISIBILITY_CXX 1 #define PETSC_USING_64BIT_PTR 1 #define PETSC_USING_F2003 1 #define PETSC_USING_F90FREEFORM 1 #define PETSC__BSD_SOURCE 1 #define PETSC__DEFAULT_SOURCE 1 #define PETSC__GNU_SOURCE 1 ----------------------------------------- Using C compile: mpicc -o .o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-misleading-indentation -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -g3 -O0 mpicc -show: gcc -I/usr/local/include -L/usr/local/lib -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -lmpi C compiler version: gcc (Ubuntu 11.2.0-7ubuntu2) 11.2.0 Using C++ compile: mpicxx -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O0 -fPIC -std=gnu++17 -I/usr/local/petsc/include -I/usr/local/petsc/linux-gnu-real-32/include mpicxx -show: g++ -I/usr/local/include -L/usr/local/lib -lmpicxx -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -lmpi C++ compiler version: g++ (Ubuntu 11.2.0-7ubuntu2) 11.2.0 Using Fortran compile: mpif90 -o .o -c -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O0 -I/usr/local/petsc/include -I/usr/local/petsc/linux-gnu-real-32/include mpif90 -show: gfortran -I/usr/local/include -I/usr/local/include -L/usr/local/lib -lmpifort -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -lmpi Fortran compiler version: GNU Fortran 
(Ubuntu 11.2.0-7ubuntu2) 11.2.0
-----------------------------------------
Using C/C++ linker: mpicc
Using C/C++ flags: -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-misleading-indentation -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -g3 -O0
Using Fortran linker: mpif90
Using Fortran flags: -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O0
-----------------------------------------
Using system modules:
Using mpi.h: # 1 "/usr/local/include/mpi.h" 1 3
-----------------------------------------
Using libraries: -Wl,-rpath,/usr/local/petsc/linux-gnu-real-32/lib -L/usr/local/petsc/linux-gnu-real-32/lib -Wl,-rpath,/usr/local/lib -L/usr/local/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11 -L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -llapack -lblas -lhdf5_hl -lhdf5 -lm -lX11 -lstdc++ -ldl -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl
------------------------------------------
Using mpiexec: mpiexec
------------------------------------------
Using MAKE: /usr/bin/gmake
Using MAKEFLAGS: -j10 -l18.0 --no-print-directory -- PETSC_DIR=/usr/local/petsc PETSC_ARCH=linux-gnu-real-32
==========================================
gmake[3]: Nothing to be done for 'libs'.
*** Building petsc4py ***
/usr/bin/bash: line 1: cd: src/binding/petsc4py: No such file or directory
**************************ERROR*************************************
Error building petsc4py.
********************************************************************
gmake[2]: *** [/usr/local/petsc/linux-gnu-real-32/lib/petsc/conf/petscrules:45: petsc4pybuild] Error 1
**************************ERROR*************************************
Error during compile, check linux-gnu-real-32/lib/petsc/conf/make.log
Send it and linux-gnu-real-32/lib/petsc/conf/configure.log to petsc-maint at mcs.anl.gov
********************************************************************
gmake[1]: *** [makefile:40: all] Error 1
make: *** [GNUmakefile:9: all] Error 2

Sorry for the massive text blob, but that was definitely unexpected behaviour. I know I mailed before saying the --with-petsc4py option didn't change a thing, but now it seems to be breaking everything. I searched a bit through make.log and didn't find anything useful.

Quentin

Quentin CHEVALIER - IA parcours recherche
LadHyX - Ecole polytechnique
__________

On Thu, 9 Dec 2021 at 18:02, Matthew Knepley wrote:
> On Thu, Dec 9, 2021 at 11:40 AM Quentin Chevalier <
> quentin.chevalier at polytechnique.edu> wrote:
>
>> Hello Matthew,
>>
>> You're absolutely right ! Editor error, my apologies. Running full
>> process 1-5) in container gives :
>>
>> root at container_id: ./ex19 -da_refine 3 -pc_type mg -ksp_type fgmres
>> lid velocity = 0.0016, prandtl # = 1., grashof # = 1.
>> Number of SNES iterations = 2
>>
>> So I guess it is passing the test.
>
>
> Yes, that is right.
>
>
>> I guess it is a site-wide
>> installation then. Is that good news or bad news when it comes to
>> adding HDF5 on top of things ?
>>
>
> It is fine either way. You should now be able to get further. petsc4py
> should be just using the shared
> libraries, so you should now be able to run an HDF5 thing from there. Does
> it work?
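
For reference, the kind of "HDF5 thing" suggested above is a short petsc4py round trip. A minimal sketch, assuming a PETSc build that really was configured with HDF5; the file name dummy.h5 and the vector size are only illustrative:

from petsc4py import PETSc

# Write a named Vec to an HDF5 file, then read it back.
x = PETSc.Vec().createMPI(8, comm=PETSc.COMM_WORLD)
x.setName("x")                # HDF5 datasets are looked up by the object's name
x.set(1.0)

viewer = PETSc.Viewer().createHDF5("dummy.h5", "w", PETSc.COMM_WORLD)
x.view(viewer)                # write
viewer.destroy()

y = PETSc.Vec().createMPI(8, comm=PETSc.COMM_WORLD)
y.setName("x")                # must match the name used when writing
viewer = PETSc.Viewer().createHDF5("dummy.h5", "r", PETSc.COMM_WORLD)
y.load(viewer)                # read back
viewer.destroy()

If the library underneath was built without HDF5, the createHDF5 call is exactly where this fails with the "Unknown PetscViewer type given: hdf5" error quoted elsewhere in the thread.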
> > Thanks, > > Matt > > >> Thank you for your time, >> >> Quentin >> >> On Wed, 8 Dec 2021 at 19:12, Matthew Knepley wrote: >> > >> > On Wed, Dec 8, 2021 at 11:05 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> Sorry Matthew, I had a correct tabulation. The attached file gives the >> >> same error. >> > >> > >> > The file you attached has 6 spaces in front of ${CLINKER} rather than a >> tab character. >> > >> > Thanks, >> > >> > Matt >> > >> >> >> >> Quentin >> >> >> >> >> >> >> >> Quentin CHEVALIER ? IA parcours recherche >> >> >> >> LadHyX - Ecole polytechnique >> >> >> >> __________ >> >> >> >> >> >> >> >> On Wed, 8 Dec 2021 at 16:51, Matthew Knepley >> wrote: >> >> > >> >> > On Wed, Dec 8, 2021 at 9:39 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> >> >> Step 4) fails. Traceback is : makefile:2: *** missing separator. >> Stop. >> >> > >> >> > >> >> > Makefiles require a tab character at the beginning of every action >> line. When you cut & pasted from email >> >> > the tab got eaten. Once you put it back, the makefile will work. >> >> > >> >> >> >> >> >> echo $CLINKER return an empty line. Same for PETSC_LIB. It would >> seem >> >> >> the docker container has no such environment variables. Or did you >> >> >> expect me to replace these by an environment specific linker ? >> >> > >> >> > >> >> > Those variables are defined in the included makefile >> >> > >> >> > include ${PETSC_DIR}/lib/petsc/conf/variables >> >> > >> >> > Thanks, >> >> > >> >> > Matt >> >> > >> >> >> >> >> >> Quentin >> >> >> >> >> >> >> >> >> >> >> >> Quentin CHEVALIER ? IA parcours recherche >> >> >> >> >> >> LadHyX - Ecole polytechnique >> >> >> >> >> >> __________ >> >> >> >> >> >> >> >> >> >> >> >> On Wed, 8 Dec 2021 at 15:14, Matthew Knepley >> wrote: >> >> >> > >> >> >> > On Wed, Dec 8, 2021 at 9:07 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> >> >> >> >> I'm not sure I understand what you're saying... But it's my >> theory >> >> >> >> from the beginning that make check fails because PETSc examples >> were >> >> >> >> taken out of the docker image. >> >> >> > >> >> >> > >> >> >> > It appears so. There is no 'src' directory. Here is a simple way >> to test with an install like this. >> >> >> > >> >> >> > 1) mkdir test; cd test >> >> >> > >> >> >> > 2) Download the source: >> https://gitlab.com/petsc/petsc/-/raw/main/src/snes/tutorials/ex19.c?inline=false >> >> >> > >> >> >> > 3) Create a simple makefile: >> >> >> > >> >> >> > ex19: ex19.o >> >> >> > ${CLINKER} -o ex19 ex19.o ${PETSC_LIB} >> >> >> > >> >> >> > include ${PETSC_DIR}/lib/petsc/conf/variables >> >> >> > include ${PETSC_DIR}/lib/petsc/conf/rules >> >> >> > >> >> >> > 4) make ex19 >> >> >> > >> >> >> > 5) ./ex19 -da_refine 3 -pc_type mg -ksp_type fgmres >> >> >> > >> >> >> > Thanks, >> >> >> > >> >> >> > Matt >> >> >> > >> >> >> >> I'm unsure what would be the correct way to discriminate between >> >> >> >> source and partial install, I tried a find . --name "testex19" >> from >> >> >> >> PETSC_DIR to no avail. 
A ls from PETSC_DIR yields : >> CODE_OF_CONDUCT.md >> >> >> >> CONTRIBUTING GNUmakefile LICENSE PKG-INFO config >> configtest.mod >> >> >> >> configure configure.log configure.log.bkp gmakefile >> gmakefile.test >> >> >> >> include interfaces lib linux-gnu-complex-32 >> linux-gnu-complex-64 >> >> >> >> linux-gnu-real-32 linux-gnu-real-64 make.log makefile >> petscdir.mk >> >> >> >> setup.py >> >> >> >> >> >> >> >> Quentin >> >> >> >> On Wed, 8 Dec 2021 at 14:38, Matthew Knepley >> wrote: >> >> >> >> > >> >> >> >> > On Wed, Dec 8, 2021 at 8:29 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> >> >> >> >> >> >> @Matthew Knepley I'm confused by your comment. Judging from >> the >> >> >> >> >> traceback, PETSC_DIR is set and correct. You think it should >> be >> >> >> >> >> specified in the command line as well ? >> >> >> >> > >> >> >> >> > >> >> >> >> > You are right, usually this message >> >> >> >> > >> >> >> >> > >> /usr/bin/bash: line 1: cd: src/snes/tutorials: No such file >> or directory >> >> >> >> > >> >> >> >> > means that PETSC_DIR is undefined, and thus you cannot find >> the tutorials. However, it could be that >> >> >> >> > Your PETSC_DIR refers to a site-wide installation, which >> /usr/local/petsc seems like. There the source >> >> >> >> > is not installed and we cannot run 'make check' because only >> the headers and libraries are there. Is this >> >> >> >> > what is happening here? >> >> >> >> > >> >> >> >> > Thanks, >> >> >> >> > >> >> >> >> > Matt >> >> >> >> > >> >> >> >> >> >> >> >> >> >> Quentin >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> Quentin CHEVALIER ? IA parcours recherche >> >> >> >> >> >> >> >> >> >> LadHyX - Ecole polytechnique >> >> >> >> >> >> >> >> >> >> __________ >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> On Wed, 8 Dec 2021 at 13:23, Matthew Knepley < >> knepley at gmail.com> wrote: >> >> >> >> >> > >> >> >> >> >> > On Wed, Dec 8, 2021 at 4:08 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> >> >> >> >> >> >> >> >> @all thanks for your time it's heartening to see a lively >> community. >> >> >> >> >> >> >> >> >> >> >> >> @Barry I've restarted the container and grabbed the .log >> file directly after the docker magic. I've tried a make check, it >> unsurprisingly spews the same answer as before : >> >> >> >> >> >> >> >> >> >> >> >> Running check examples to verify correct installation >> >> >> >> >> >> Using PETSC_DIR=/usr/local/petsc and >> PETSC_ARCH=linux-gnu-real-32 >> >> >> >> >> >> /usr/bin/bash: line 1: cd: src/snes/tutorials: No such >> file or directory >> >> >> >> >> >> /usr/bin/bash: line 1: cd: src/snes/tutorials: No such >> file or directory >> >> >> >> >> >> gmake[3]: *** No rule to make target 'testex19'. Stop. >> >> >> >> >> >> gmake[2]: *** [makefile:155: check_build] Error 2 >> >> >> >> >> > >> >> >> >> >> > >> >> >> >> >> > This happens if you run 'make check' without defining >> PETSC_DIR in your environment, since we are including >> >> >> >> >> > makefiles with PETSC_DIR in the path and make does not >> allow proper error messages in that case. >> >> >> >> >> > >> >> >> >> >> > Thanks, >> >> >> >> >> > >> >> >> >> >> > Matt >> >> >> >> >> > >> >> >> >> >> >> >> >> >> >> >> >> @Matthew ok will do, but I think @Lawrence has already >> provided that answer. It's possible to change the dockerfile and recompute >> the dolfinx image with hdf5, only it is a time-consuming process. 
>> >> >> >> >> >> >> >> >> >> >> >> Quentin >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> Quentin CHEVALIER ? IA parcours recherche >> >> >> >> >> >> >> >> >> >> >> >> LadHyX - Ecole polytechnique >> >> >> >> >> >> >> >> >> >> >> >> __________ >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> On Tue, 7 Dec 2021 at 19:16, Matthew Knepley < >> knepley at gmail.com> wrote: >> >> >> >> >> >>> >> >> >> >> >> >>> On Tue, Dec 7, 2021 at 9:43 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> >> >>>> >> >> >> >> >> >>>> @Matthew, as stated before, error output is unchanged, >> i.e.the python >> >> >> >> >> >>>> command below produces the same traceback : >> >> >> >> >> >>>> >> >> >> >> >> >>>> # python3 -c "from petsc4py import PETSc; >> PETSc.Viewer().createHDF5('d.h5')" >> >> >> >> >> >>>> Traceback (most recent call last): >> >> >> >> >> >>>> File "", line 1, in >> >> >> >> >> >>>> File "PETSc/Viewer.pyx", line 182, in >> petsc4py.PETSc.Viewer.createHDF5 >> >> >> >> >> >>>> petsc4py.PETSc.Error: error code 86 >> >> >> >> >> >>>> [0] PetscViewerSetType() at >> >> >> >> >> >>>> >> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >> >> >> >> >> >>>> [0] Unknown type. Check for miss-spelling or missing >> package: >> >> >> >> >> >>>> >> https://petsc.org/release/install/install/#external-packages >> >> >> >> >> >>>> [0] Unknown PetscViewer type given: hdf5 >> >> >> >> >> >>> >> >> >> >> >> >>> >> >> >> >> >> >>> The reason I wanted the output was that the C output >> shows the configure options that the PETSc library >> >> >> >> >> >>> was built with, However, Python seems to be eating this, >> so I cannot check. >> >> >> >> >> >>> >> >> >> >> >> >>> It seems like using this container is counter-productive. >> If it was built correctly, making these changes would be trivial. >> >> >> >> >> >>> Send mail to FEniCS (I am guessing Chris Richardson >> maintains this), and ask how they intend people to change these >> >> >> >> >> >>> options. >> >> >> >> >> >>> >> >> >> >> >> >>> Thanks, >> >> >> >> >> >>> >> >> >> >> >> >>> Matt. >> >> >> >> >> >>> >> >> >> >> >> >>>> >> >> >> >> >> >>>> @Wence that makes sense. I'd assumed that the original >> PETSc had been >> >> >> >> >> >>>> overwritten, and if the linking has gone wrong I'm >> surprised anything >> >> >> >> >> >>>> happens with petsc4py at all. >> >> >> >> >> >>>> >> >> >> >> >> >>>> Your tentative command gave : >> >> >> >> >> >>>> >> >> >> >> >> >>>> ERROR: Invalid requirement: >> '/usr/local/petsc/src/binding/petsc4py' >> >> >> >> >> >>>> Hint: It looks like a path. File >> >> >> >> >> >>>> '/usr/local/petsc/src/binding/petsc4py' does not exist. >> >> >> >> >> >>>> >> >> >> >> >> >>>> So I tested that global variables PETSC_ARCH & PETSC_DIR >> were correct >> >> >> >> >> >>>> then ran "pip install petsc4py" to restart petsc4py from >> scratch. 
This >> >> >> >> >> >>>> gives rise to a different error : >> >> >> >> >> >>>> # python3 -c "from petsc4py import PETSc" >> >> >> >> >> >>>> Traceback (most recent call last): >> >> >> >> >> >>>> File "", line 1, in >> >> >> >> >> >>>> File >> "/usr/local/lib/python3.9/dist-packages/petsc4py/PETSc.py", >> >> >> >> >> >>>> line 3, in >> >> >> >> >> >>>> PETSc = ImportPETSc(ARCH) >> >> >> >> >> >>>> File >> "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >> >> >> >> >> >>>> line 29, in ImportPETSc >> >> >> >> >> >>>> return Import('petsc4py', 'PETSc', path, arch) >> >> >> >> >> >>>> File >> "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >> >> >> >> >> >>>> line 73, in Import >> >> >> >> >> >>>> module = import_module(pkg, name, path, arch) >> >> >> >> >> >>>> File >> "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >> >> >> >> >> >>>> line 58, in import_module >> >> >> >> >> >>>> with f: return imp.load_module(fullname, f, fn, info) >> >> >> >> >> >>>> File "/usr/lib/python3.9/imp.py", line 242, in >> load_module >> >> >> >> >> >>>> return load_dynamic(name, filename, file) >> >> >> >> >> >>>> File "/usr/lib/python3.9/imp.py", line 342, in >> load_dynamic >> >> >> >> >> >>>> return _load(spec) >> >> >> >> >> >>>> ImportError: >> /usr/local/lib/python3.9/dist-packages/petsc4py/lib/linux-gnu-real-32/ >> PETSc.cpython-39-x86_64-linux-gnu.so: >> >> >> >> >> >>>> undefined symbol: petscstack >> >> >> >> >> >>>> >> >> >> >> >> >>>> Not sure that it a step forward ; looks like petsc4py is >> broken now. >> >> >> >> >> >>>> >> >> >> >> >> >>>> Quentin >> >> >> >> >> >>>> >> >> >> >> >> >>>> On Tue, 7 Dec 2021 at 14:58, Matthew Knepley < >> knepley at gmail.com> wrote: >> >> >> >> >> >>>> > >> >> >> >> >> >>>> > On Tue, Dec 7, 2021 at 8:26 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> >> >>>> >> >> >> >> >> >> >>>> >> Ok my bad, that log corresponded to a tentative >> --download-hdf5. This >> >> >> >> >> >>>> >> log corresponds to the commands given above and has >> --with-hdf5 in its >> >> >> >> >> >>>> >> options. >> >> >> >> >> >>>> > >> >> >> >> >> >>>> > >> >> >> >> >> >>>> > Okay, this configure was successful and found HDF5 >> >> >> >> >> >>>> > >> >> >> >> >> >>>> >> >> >> >> >> >> >>>> >> The whole process still results in the same error. >> >> >> >> >> >>>> > >> >> >> >> >> >>>> > >> >> >> >> >> >>>> > Now send me the complete error output with this PETSc. >> >> >> >> >> >>>> > >> >> >> >> >> >>>> > Thanks, >> >> >> >> >> >>>> > >> >> >> >> >> >>>> > Matt >> >> >> >> >> >>>> > >> >> >> >> >> >>>> >> >> >> >> >> >> >>>> >> Quentin >> >> >> >> >> >>>> >> >> >> >> >> >> >>>> >> >> >> >> >> >> >>>> >> >> >> >> >> >> >>>> >> Quentin CHEVALIER ? IA parcours recherche >> >> >> >> >> >>>> >> >> >> >> >> >> >>>> >> LadHyX - Ecole polytechnique >> >> >> >> >> >>>> >> >> >> >> >> >> >>>> >> __________ >> >> >> >> >> >>>> >> >> >> >> >> >> >>>> >> >> >> >> >> >> >>>> >> >> >> >> >> >> >>>> >> On Tue, 7 Dec 2021 at 13:59, Matthew Knepley < >> knepley at gmail.com> wrote: >> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> > On Tue, Dec 7, 2021 at 3:55 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> Hello Matthew, >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> That would indeed make sense. >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> Full log is attached, I grepped hdf5 in there and >> didn't find anything alarming. 
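
An "undefined symbol: petscstack" at import time usually means the pip-built petsc4py module and the libpetsc it picks up at run time come from different builds. A small diagnostic sketch, assuming the get_config() helper shipped with petsc4py and the Sys.getVersion() binding, both of which recent releases provide:

import petsc4py
print(petsc4py.get_config())   # PETSC_DIR / PETSC_ARCH recorded when the module was built

petsc4py.init()
from petsc4py import PETSc
print(PETSc.Sys.getVersion())  # version of the PETSc shared library actually loaded

If these disagree with the PETSC_DIR and PETSC_ARCH used in the configure runs above, the module is linking against the wrong library.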
>> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> > At the top of this log: >> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> > Configure Options: --configModules=PETSc.Configure >> --optionsModule=config.compilerOptions PETSC_ARCH=linux-gnu-complex-64 >> --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-make-np=2 >> --with-64-bit-indices=yes --with-debugging=no --with-fortran-bindings=no >> --with-shared-libraries --download-hypre --download-mumps >> --download-ptscotch --download-scalapack --download-suitesparse >> --download-superlu_dist --with-scalar-type=complex >> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> > So the HDF5 option is not being specified. >> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> > Thanks, >> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> > Matt >> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> >> Cheers, >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> Quentin >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> Quentin CHEVALIER ? IA parcours recherche >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> LadHyX - Ecole polytechnique >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> __________ >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> >> >> >> >> >> >>>> >> >> On Mon, 6 Dec 2021 at 21:39, Matthew Knepley < >> knepley at gmail.com> wrote: >> >> >> >> >> >>>> >> >>> >> >> >> >> >> >>>> >> >>> On Mon, Dec 6, 2021 at 3:27 PM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >>>> >> >>>> Fine. MWE is unchanged : >> >> >> >> >> >>>> >> >>>> * Run this docker container >> >> >> >> >> >>>> >> >>>> * Do : python3 -c "from petsc4py import PETSc; >> PETSc.Viewer().createHDF5('dummy.h5')" >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >>>> >> >>>> Updated attempt at a fix : >> >> >> >> >> >>>> >> >>>> * cd /usr/local/petsc/ >> >> >> >> >> >>>> >> >>>> * ./configure PETSC_ARCH= linux-gnu-real-32 >> PETSC_DIR=/usr/local/petsc --with-hdf5 --force >> >> >> >> >> >>>> >> >>> >> >> >> >> >> >>>> >> >>> >> >> >> >> >> >>>> >> >>> Did it find HDF5? If not, it will shut it off. >> You need to send us >> >> >> >> >> >>>> >> >>> >> >> >> >> >> >>>> >> >>> $PETSC_DIR/configure.log >> >> >> >> >> >>>> >> >>> >> >> >> >> >> >>>> >> >>> so we can see what happened in the configure run. >> >> >> >> >> >>>> >> >>> >> >> >> >> >> >>>> >> >>> Thanks, >> >> >> >> >> >>>> >> >>> >> >> >> >> >> >>>> >> >>> Matt >> >> >> >> >> >>>> >> >>> >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >>>> >> >>>> * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= >> linux-gnu-real-32 all >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >>>> >> >>>> Still no joy. The same error remains. >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >>>> >> >>>> Quentin >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >>>> >> >>>> On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet < >> pierre at joliv.et> wrote: >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > The PETSC_DIR exactly corresponds to the >> previous one, so I guess that rules option b) out, except if a specific >> option is required to overwrite a previous installation of PETSc. 
As for >> a), well I thought reconfiguring pretty direct, you're welcome to give me a >> hint as to what could be wrong in the following process. >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > Steps to reproduce this behaviour are as >> follows : >> >> >> >> >> >>>> >> >>>> > * Run this docker container >> >> >> >> >> >>>> >> >>>> > * Do : python3 -c "from petsc4py import PETSc; >> PETSc.Viewer().createHDF5('dummy.h5')" >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > After you get the error Unknown PetscViewer >> type, feel free to try : >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > * cd /usr/local/petsc/ >> >> >> >> >> >>>> >> >>>> > * ./configure --with-hfd5 >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > It?s hdf5, not hfd5. >> >> >> >> >> >>>> >> >>>> > It?s PETSC_ARCH, not PETSC-ARCH. >> >> >> >> >> >>>> >> >>>> > PETSC_ARCH is missing from your configure line. >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > Thanks, >> >> >> >> >> >>>> >> >>>> > Pierre >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > * make PETSC_DIR=/usr/local/petsc >> PETSC-ARCH=linux-gnu-real-32 all >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > Then repeat the MWE and observe absolutely no >> behavioural change whatsoever. I'm afraid I don't know PETSc well enough to >> be surprised by that. >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > Quentin >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > Quentin CHEVALIER ? IA parcours recherche >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > LadHyX - Ecole polytechnique >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > __________ >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley < >> knepley at gmail.com> wrote: >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >> >>>> >> >>>> >> On Mon, Dec 6, 2021 at 1:22 PM Quentin >> Chevalier wrote: >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >>>> >> >>>> >>> It failed all of the tests included in `make >> >> >> >> >> >>>> >> >>>> >>> PETSC_DIR=/usr/local/petsc >> PETSC-ARCH=linux-gnu-real-32 check`, with >> >> >> >> >> >>>> >> >>>> >>> the error `/usr/bin/bash: line 1: cd: >> src/snes/tutorials: No such file >> >> >> >> >> >>>> >> >>>> >>> or directory` >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >>>> >> >>>> >>> I am therefore fairly confident this a "file >> absence" problem, and not >> >> >> >> >> >>>> >> >>>> >>> a compilation problem. >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >>>> >> >>>> >>> I repeat that there was no error at >> compilation stage. The final stage >> >> >> >> >> >>>> >> >>>> >>> did present `gmake[3]: Nothing to be done >> for 'libs'.` but that's all. >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >>>> >> >>>> >>> Again, running `./configure --with-hdf5` >> followed by a `make >> >> >> >> >> >>>> >> >>>> >>> PETSC_DIR=/usr/local/petsc >> PETSC-ARCH=linux-gnu-real-32 all` does not >> >> >> >> >> >>>> >> >>>> >>> change the problem. I get the same error at >> the same position as >> >> >> >> >> >>>> >> >>>> >>> before. 
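
A slightly friendlier variant of that MWE reports whether the hdf5 viewer type is registered instead of dying on the exception. A sketch, assuming the PETSc error code is exposed as e.ierr, as in current petsc4py:

from petsc4py import PETSc

try:
    v = PETSc.Viewer().createHDF5("probe.h5", "w", PETSc.COMM_SELF)
    v.destroy()
    print("hdf5 viewer available: this build was configured with HDF5")
except PETSc.Error as e:
    print("hdf5 viewer missing, PETSc error code", e.ierr)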
>> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >> >>>> >> >>>> >> If you reconfigured and rebuilt, it is >> impossible to get the same error, so >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >> >>>> >> >>>> >> a) You did not reconfigure >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >> >>>> >> >>>> >> b) Your new build is somewhere else on the >> machine >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >> >>>> >> >>>> >> Thanks, >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >> >>>> >> >>>> >> Matt >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >>>> >> >>>> >>> I will comment I am running on OpenSUSE. >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >>>> >> >>>> >>> Quentin >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >>>> >> >>>> >>> On Mon, 6 Dec 2021 at 19:09, Matthew Knepley >> wrote: >> >> >> >> >> >>>> >> >>>> >>> > >> >> >> >> >> >>>> >> >>>> >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin >> Chevalier wrote: >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >>>> >> >>>> >>> >> Hello Matthew and thanks for your quick >> response. >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >>>> >> >>>> >>> >> I'm afraid I did try to snoop around the >> container and rerun PETSc's >> >> >> >> >> >>>> >> >>>> >>> >> configure with the --with-hdf5 option, to >> absolutely no avail. >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >>>> >> >>>> >>> >> I didn't see any errors during config or >> make, but it failed the tests >> >> >> >> >> >>>> >> >>>> >>> >> (which aren't included in the minimal >> container I suppose) >> >> >> >> >> >>>> >> >>>> >>> > >> >> >> >> >> >>>> >> >>>> >>> > >> >> >> >> >> >>>> >> >>>> >>> > Failed which tests? What was the error? >> >> >> >> >> >>>> >> >>>> >>> > >> >> >> >> >> >>>> >> >>>> >>> > Thanks, >> >> >> >> >> >>>> >> >>>> >>> > >> >> >> >> >> >>>> >> >>>> >>> > Matt >> >> >> >> >> >>>> >> >>>> >>> > >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >>>> >> >>>> >>> >> Quentin >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >>>> >> >>>> >>> >> Quentin CHEVALIER ? IA parcours recherche >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >>>> >> >>>> >>> >> LadHyX - Ecole polytechnique >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >>>> >> >>>> >>> >> __________ >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >>>> >> >>>> >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew >> Knepley wrote: >> >> >> >> >> >>>> >> >>>> >>> >> > >> >> >> >> >> >>>> >> >>>> >>> >> > On Mon, Dec 6, 2021 at 11:28 AM Quentin >> Chevalier wrote: >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> Hello PETSc users, >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> This email is a duplicata of this >> gitlab issue, sorry for any inconvenience caused. >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> I want to compute a PETSc vector in >> real mode, than perform calculations with it in complex mode. I want as >> much of this process to be parallel as possible. Right now, I compile PETSc >> in real mode, compute my vector and save it to a file, then switch to >> complex mode, read it, and move on. >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> This creates unexpected behaviour >> using MPIIO, so on Lisandro Dalcinl's advice I'm moving to HDF5 format. 
My >> code is as follows (taking inspiration from petsc4py doc, a bitbucket >> example and another one, all top Google results for 'petsc hdf5') : >> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >> >>>> >> >>>> >>> >> >>> viewer = >> PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >> >> >> >> >> >>>> >> >>>> >>> >> >>> q.load(viewer) >> >> >> >> >> >>>> >> >>>> >>> >> >>> >> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD) >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> This crashes my code. I obtain >> traceback : >> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >> >>>> >> >>>> >>> >> >>> File "/home/shared/code.py", line >> 121, in Load >> >> >> >> >> >>>> >> >>>> >>> >> >>> viewer = >> PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >> >> >> >> >> >>>> >> >>>> >>> >> >>> File "PETSc/Viewer.pyx", line 182, >> in petsc4py.PETSc.Viewer.createHDF5 >> >> >> >> >> >>>> >> >>>> >>> >> >>> petsc4py.PETSc.Error: error code 86 >> >> >> >> >> >>>> >> >>>> >>> >> >>> [0] PetscViewerSetType() at >> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >> >> >> >> >> >>>> >> >>>> >>> >> >>> [0] Unknown type. Check for >> miss-spelling or missing package: >> https://petsc.org/release/install/install/#external-packages >> >> >> >> >> >>>> >> >>>> >>> >> >>> [0] Unknown PetscViewer type given: >> hdf5 >> >> >> >> >> >>>> >> >>>> >>> >> > >> >> >> >> >> >>>> >> >>>> >>> >> > This means that PETSc has not been >> configured with HDF5 (--with-hdf5 or --download-hdf5), so the container >> should be updated. >> >> >> >> >> >>>> >> >>>> >>> >> > >> >> >> >> >> >>>> >> >>>> >>> >> > THanks, >> >> >> >> >> >>>> >> >>>> >>> >> > >> >> >> >> >> >>>> >> >>>> >>> >> > Matt >> >> >> >> >> >>>> >> >>>> >>> >> > >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> I have petsc4py 3.16 from this docker >> container (list of dependencies include PETSc and petsc4py). >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> I'm pretty sure this is not intended >> behaviour. Any insight as to how to fix this issue (I tried running >> ./configure --with-hdf5 to no avail) or more generally to perform this >> jiggling between real and complex would be much appreciated, >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> Kind regards. >> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >> >> >> >>>> >> >>>> >>> >> >> Quentin >> >> >> >> >> >>>> >> >>>> >>> >> > >> >> >> >> >> >>>> >> >>>> >>> >> > >> >> >> >> >> >>>> >> >>>> >>> >> > >> >> >> >> >> >>>> >> >>>> >>> >> > -- >> >> >> >> >> >>>> >> >>>> >>> >> > What most experimenters take for >> granted before they begin their experiments is infinitely more interesting >> than any results to which their experiments lead. >> >> >> >> >> >>>> >> >>>> >>> >> > -- Norbert Wiener >> >> >> >> >> >>>> >> >>>> >>> >> > >> >> >> >> >> >>>> >> >>>> >>> >> > https://www.cse.buffalo.edu/~knepley/ >> >> >> >> >> >>>> >> >>>> >>> > >> >> >> >> >> >>>> >> >>>> >>> > >> >> >> >> >> >>>> >> >>>> >>> > >> >> >> >> >> >>>> >> >>>> >>> > -- >> >> >> >> >> >>>> >> >>>> >>> > What most experimenters take for granted >> before they begin their experiments is infinitely more interesting than any >> results to which their experiments lead. 
>> >> >> >> >> >>>> >> >>>> >>> > -- Norbert Wiener >> >> >> >> >> >>>> >> >>>> >>> > >> >> >> >> >> >>>> >> >>>> >>> > https://www.cse.buffalo.edu/~knepley/ >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >> >>>> >> >>>> >> -- >> >> >> >> >> >>>> >> >>>> >> What most experimenters take for granted >> before they begin their experiments is infinitely more interesting than any >> results to which their experiments lead. >> >> >> >> >> >>>> >> >>>> >> -- Norbert Wiener >> >> >> >> >> >>>> >> >>>> >> >> >> >> >> >> >>>> >> >>>> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>>> > >> >> >> >> >> >>>> >> >>> >> >> >> >> >> >>>> >> >>> >> >> >> >> >> >>>> >> >>> >> >> >> >> >> >>>> >> >>> -- >> >> >> >> >> >>>> >> >>> What most experimenters take for granted before >> they begin their experiments is infinitely more interesting than any >> results to which their experiments lead. >> >> >> >> >> >>>> >> >>> -- Norbert Wiener >> >> >> >> >> >>>> >> >>> >> >> >> >> >> >>>> >> >>> https://www.cse.buffalo.edu/~knepley/ >> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> > -- >> >> >> >> >> >>>> >> > What most experimenters take for granted before >> they begin their experiments is infinitely more interesting than any >> results to which their experiments lead. >> >> >> >> >> >>>> >> > -- Norbert Wiener >> >> >> >> >> >>>> >> > >> >> >> >> >> >>>> >> > https://www.cse.buffalo.edu/~knepley/ >> >> >> >> >> >>>> > >> >> >> >> >> >>>> > >> >> >> >> >> >>>> > >> >> >> >> >> >>>> > -- >> >> >> >> >> >>>> > What most experimenters take for granted before they >> begin their experiments is infinitely more interesting than any results to >> which their experiments lead. >> >> >> >> >> >>>> > -- Norbert Wiener >> >> >> >> >> >>>> > >> >> >> >> >> >>>> > https://www.cse.buffalo.edu/~knepley/ >> >> >> >> >> >>> >> >> >> >> >> >>> >> >> >> >> >> >>> >> >> >> >> >> >>> -- >> >> >> >> >> >>> What most experimenters take for granted before they >> begin their experiments is infinitely more interesting than any results to >> which their experiments lead. >> >> >> >> >> >>> -- Norbert Wiener >> >> >> >> >> >>> >> >> >> >> >> >>> https://www.cse.buffalo.edu/~knepley/ >> >> >> >> >> > >> >> >> >> >> > >> >> >> >> >> > >> >> >> >> >> > -- >> >> >> >> >> > What most experimenters take for granted before they begin >> their experiments is infinitely more interesting than any results to which >> their experiments lead. >> >> >> >> >> > -- Norbert Wiener >> >> >> >> >> > >> >> >> >> >> > https://www.cse.buffalo.edu/~knepley/ >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > -- >> >> >> >> > What most experimenters take for granted before they begin >> their experiments is infinitely more interesting than any results to which >> their experiments lead. >> >> >> >> > -- Norbert Wiener >> >> >> >> > >> >> >> >> > https://www.cse.buffalo.edu/~knepley/ >> >> >> > >> >> >> > >> >> >> > >> >> >> > -- >> >> >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. 
>> >> >> > -- Norbert Wiener
>> >> >> >
>> >> >> > https://www.cse.buffalo.edu/~knepley/

From jeremy at seamplex.com Fri Dec 10 04:05:18 2021
From: jeremy at seamplex.com (Jeremy Theler)
Date: Fri, 10 Dec 2021 07:05:18 -0300
Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ?
In-Reply-To: 
References: 
Message-ID: 

On Fri, 2021-12-10 at 10:02 +0100, Quentin Chevalier wrote:
> root at container:/home/shared# echo $PETSCH_ARCH $PETSCH_DIR

PETSCH_ARCH -> PETSC_ARCH
PETSCH_DIR -> PETSC_DIR

From knepley at gmail.com Fri Dec 10 06:14:35 2021
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 10 Dec 2021 07:14:35 -0500
Subject: [petsc-users] How to read/write a HDF5 file using petsc4py ?
In-Reply-To: References: Message-ID: On Fri, Dec 10, 2021 at 4:02 AM Quentin Chevalier < quentin.chevalier at polytechnique.edu> wrote: > Ok Matthew, > > I'll list my steps : > > *> sudo docker run -itv myfolder:/home/shared -w /home/shared/ --rm > dolfinx/dolfinxroot at container:/home/shared# echo $PETSCH_ARCH $PETSCH_DIR* > linux-gnu-real-32 /usr/local/petsc > *root at container:/home/shared# cd /usr/local/petsc* > *root at container:/usr/local/petsc# ./configure --with-hdf5 --with-petsc4py > --force* > > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > The version of PETSc you are using is out-of-date, we recommend updating > to the new release > Available Version: 3.16.2 Installed Version: 3.16 > https://petsc.org/release/download/ > > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > > ============================================================================================= > Configuring PETSc to compile on your system > > > ============================================================================================= > TESTING: configureLibrary from > config.packages.petsc4py(config/BuildSystem/config/packages/petsc4py.py:116) > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > > ------------------------------------------------------------------------------- > PETSc4py requires Python with "cython" module(s) installed! > Please install using package managers - for ex: "apt" or "dnf" (on linux), > or with "pip" using: /usr/bin/python3 -m pip install cython > > ******************************************************************************* > *root at container:/usr/local/petsc# pip install cython* > Collecting cython > Downloading > Cython-0.29.25-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl > (1.9 MB) > |????????????????????????????????| 1.9 MB 10.8 MB/s > Installing collected packages: cython > Successfully installed cython-0.29.25 > *root at container:/usr/local/petsc# ./configure --with-hdf5 --with-petsc4py > --force* > > ============================================================================================= > Configuring PETSc to compile on your system > > > ============================================================================================= > Compilers: > > > C Compiler: mpicc -fPIC -Wall -Wwrite-strings > -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-misleading-indentation > -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -g3 -O0 > Version: gcc (Ubuntu 11.2.0-7ubuntu2) 11.2.0 > C++ Compiler: mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O0 -fPIC > -std=gnu++17 > Version: g++ (Ubuntu 11.2.0-7ubuntu2) 11.2.0 > Fortran Compiler: mpif90 -fPIC -Wall -ffree-line-length-0 > -Wno-unused-dummy-argument -g -O0 > Version: GNU Fortran (Ubuntu 11.2.0-7ubuntu2) 11.2.0 > Linkers: > Shared linker: mpicc -shared -fPIC -Wall -Wwrite-strings > -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-misleading-indentation > -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -g3 -O0 > Dynamic linker: mpicc -shared -fPIC -Wall -Wwrite-strings > -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-misleading-indentation > -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -g3 -O0 > Libraries linked against: -lquadmath -lstdc++ -ldl > BlasLapack: > Library: -llapack 
-lblas > Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> > yourprogram -log_view) > uses 4 byte integers > MPI: > Version: 3 > mpiexec: mpiexec > Implementation: mpich3 > MPICH_NUMVERSION: 30402300 > X: > Library: -lX11 > pthread: > hdf5: > Version: 1.12.1 > Library: -lhdf5_hl -lhdf5 > cmake: > Version: 3.18.4 > /usr/bin/cmake > regex: > Language used to compile PETSc: C > PETSc: > PETSC_ARCH: linux-gnu-real-32 > PETSC_DIR: /usr/local/petsc > Prefix: > Scalar type: real > Precision: double > Support for __float128 > Integer size: 4 bytes > Single library: yes > Shared libraries: yes > Memory alignment from malloc(): 16 bytes > Using GNU make: /usr/bin/gmake > > xxx=========================================================================xxx > Configure stage complete. Now build PETSc libraries with: > make PETSC_DIR=/usr/local/petsc PETSC_ARCH=linux-gnu-real-32 all > > xxx=========================================================================xxx > *root at 17fbe5936a5d:/usr/local/petsc# make all* > /usr/bin/python3 ./config/gmakegen.py --petsc-arch=linux-gnu-real-32 > /usr/bin/python3 /usr/local/petsc/config/gmakegentest.py > --petsc-dir=/usr/local/petsc --petsc-arch=linux-gnu-real-32 > --testdir=./linux-gnu-real-32/tests > ========================================== > > See documentation/faq.html and documentation/bugreporting.html > for help with installation problems. Please send EVERYTHING > printed out below when reporting problems. Please check the > mailing list archives and consider subscribing. > > https://petsc.org/release/community/mailing/ > > ========================================== > Starting make run on 17fbe5936a5d at Fri, 10 Dec 2021 08:28:10 +0000 > Machine characteristics: Linux 17fbe5936a5d 5.3.18d01-lp152.63-default #1 > SMP PREEMPT Fri Feb 5 19:19:17 CET 2021 x86_64 x86_64 x86_64 GNU/Linux > ----------------------------------------- > Using PETSc directory: /usr/local/petsc > Using PETSc arch: linux-gnu-real-32 > ----------------------------------------- > PETSC_VERSION_RELEASE 1 > PETSC_VERSION_MAJOR 3 > PETSC_VERSION_MINOR 16 > PETSC_VERSION_SUBMINOR 0 > PETSC_VERSION_PATCH 0 > PETSC_VERSION_DATE "Sep 29, 2021" > PETSC_VERSION_GIT "v3.16.0" > PETSC_VERSION_DATE_GIT "2021-09-29 18:30:02 -0500" > PETSC_VERSION_EQ(MAJOR,MINOR,SUBMINOR) \ > PETSC_VERSION_ PETSC_VERSION_EQ > PETSC_VERSION_LT(MAJOR,MINOR,SUBMINOR) \ > PETSC_VERSION_LE(MAJOR,MINOR,SUBMINOR) \ > PETSC_VERSION_GT(MAJOR,MINOR,SUBMINOR) \ > PETSC_VERSION_GE(MAJOR,MINOR,SUBMINOR) \ > ----------------------------------------- > Using configure Options: --with-hdf5 --with-petsc4py --force > Using configuration flags: > #define INCLUDED_PETSCCONF_H > #define PETSC_ARCH "linux-gnu-real-32" > #define PETSC_ATTRIBUTEALIGNED(size) __attribute((aligned(size))) > #define PETSC_Alignx(a,b) > #define PETSC_BLASLAPACK_UNDERSCORE 1 > #define PETSC_CLANGUAGE_C 1 > #define PETSC_CXX_INLINE inline > #define PETSC_CXX_RESTRICT __restrict > #define PETSC_C_INLINE inline > #define PETSC_C_RESTRICT __restrict > #define PETSC_DEPRECATED_ENUM(why) __attribute((deprecated)) > #define PETSC_DEPRECATED_FUNCTION(why) __attribute((deprecated)) > #define PETSC_DEPRECATED_MACRO(why) _Pragma(why) > #define PETSC_DEPRECATED_TYPEDEF(why) __attribute((deprecated)) > #define PETSC_DIR "/usr/local/petsc" > #define PETSC_DIR_SEPARATOR '/' > #define PETSC_FORTRAN_CHARLEN_T size_t > #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 > #define PETSC_FUNCTION_NAME_C __func__ > #define PETSC_FUNCTION_NAME_CXX __func__ > #define 
PETSC_HAVE_ACCESS 1 > #define PETSC_HAVE_ATOLL 1 > #define PETSC_HAVE_ATTRIBUTEALIGNED 1 > #define PETSC_HAVE_BUILTIN_EXPECT 1 > #define PETSC_HAVE_BZERO 1 > #define PETSC_HAVE_C99_COMPLEX 1 > #define PETSC_HAVE_CLOCK 1 > #define PETSC_HAVE_CXX 1 > #define PETSC_HAVE_CXX_COMPLEX 1 > #define PETSC_HAVE_CXX_COMPLEX_FIX 1 > #define PETSC_HAVE_CXX_DIALECT_CXX03 1 > #define PETSC_HAVE_CXX_DIALECT_CXX11 1 > #define PETSC_HAVE_CXX_DIALECT_CXX14 1 > #define PETSC_HAVE_CXX_DIALECT_CXX17 1 > #define PETSC_HAVE_DLADDR 1 > #define PETSC_HAVE_DLCLOSE 1 > #define PETSC_HAVE_DLERROR 1 > #define PETSC_HAVE_DLFCN_H 1 > #define PETSC_HAVE_DLOPEN 1 > #define PETSC_HAVE_DLSYM 1 > #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 > #define PETSC_HAVE_DRAND48 1 > #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 > #define PETSC_HAVE_ERF 1 > #define PETSC_HAVE_FCNTL_H 1 > #define PETSC_HAVE_FENV_H 1 > #define PETSC_HAVE_FLOAT_H 1 > #define PETSC_HAVE_FORK 1 > #define PETSC_HAVE_FORTRAN 1 > #define PETSC_HAVE_FORTRAN_FLUSH 1 > #define PETSC_HAVE_FORTRAN_GET_COMMAND_ARGUMENT 1 > #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 > #define PETSC_HAVE_FORTRAN_UNDERSCORE 1 > #define PETSC_HAVE_GETCWD 1 > #define PETSC_HAVE_GETDOMAINNAME 1 > #define PETSC_HAVE_GETHOSTBYNAME 1 > #define PETSC_HAVE_GETHOSTNAME 1 > #define PETSC_HAVE_GETPAGESIZE 1 > #define PETSC_HAVE_GETRUSAGE 1 > #define PETSC_HAVE_HDF5 1 > #define PETSC_HAVE_IMMINTRIN_H 1 > #define PETSC_HAVE_INTTYPES_H 1 > #define PETSC_HAVE_ISINF 1 > #define PETSC_HAVE_ISNAN 1 > #define PETSC_HAVE_ISNORMAL 1 > #define PETSC_HAVE_LGAMMA 1 > #define PETSC_HAVE_LOG2 1 > #define PETSC_HAVE_LSEEK 1 > #define PETSC_HAVE_MALLOC_H 1 > #define PETSC_HAVE_MEMALIGN 1 > #define PETSC_HAVE_MEMMOVE 1 > #define PETSC_HAVE_MMAP 1 > #define PETSC_HAVE_MPICH_NUMVERSION 30402300 > #define PETSC_HAVE_MPIEXEC_ENVIRONMENTAL_VARIABLE MPIR_CVAR_CH3 > #define PETSC_HAVE_MPIIO 1 > #define PETSC_HAVE_MPI_COMBINER_CONTIGUOUS 1 > #define PETSC_HAVE_MPI_COMBINER_DUP 1 > #define PETSC_HAVE_MPI_COMBINER_NAMED 1 > #define PETSC_HAVE_MPI_EXSCAN 1 > #define PETSC_HAVE_MPI_F90MODULE 1 > #define PETSC_HAVE_MPI_F90MODULE_VISIBILITY 1 > #define PETSC_HAVE_MPI_FEATURE_DYNAMIC_WINDOW 1 > #define PETSC_HAVE_MPI_FINALIZED 1 > #define PETSC_HAVE_MPI_GET_ACCUMULATE 1 > #define PETSC_HAVE_MPI_GET_LIBRARY_VERSION 1 > #define PETSC_HAVE_MPI_IALLREDUCE 1 > #define PETSC_HAVE_MPI_IBARRIER 1 > #define PETSC_HAVE_MPI_INIT_THREAD 1 > #define PETSC_HAVE_MPI_INT64_T 1 > #define PETSC_HAVE_MPI_IN_PLACE 1 > #define PETSC_HAVE_MPI_LONG_DOUBLE 1 > #define PETSC_HAVE_MPI_NEIGHBORHOOD_COLLECTIVES 1 > #define PETSC_HAVE_MPI_NONBLOCKING_COLLECTIVES 1 > #define PETSC_HAVE_MPI_ONE_SIDED 1 > #define PETSC_HAVE_MPI_PROCESS_SHARED_MEMORY 1 > #define PETSC_HAVE_MPI_REDUCE_LOCAL 1 > #define PETSC_HAVE_MPI_REDUCE_SCATTER 1 > #define PETSC_HAVE_MPI_REDUCE_SCATTER_BLOCK 1 > #define PETSC_HAVE_MPI_RGET 1 > #define PETSC_HAVE_MPI_TYPE_DUP 1 > #define PETSC_HAVE_MPI_TYPE_GET_ENVELOPE 1 > #define PETSC_HAVE_MPI_WIN_CREATE 1 > #define PETSC_HAVE_NANOSLEEP 1 > #define PETSC_HAVE_NETDB_H 1 > #define PETSC_HAVE_NETINET_IN_H 1 > #define PETSC_HAVE_PACKAGES > ":blaslapack:hdf5:mathlib:mpi:pthread:regex:x11:" > #define PETSC_HAVE_PETSC4PY 1 > #define PETSC_HAVE_POPEN 1 > #define PETSC_HAVE_PTHREAD 1 > #define PETSC_HAVE_PTHREAD_BARRIER_T 1 > #define PETSC_HAVE_PTHREAD_H 1 > #define PETSC_HAVE_PWD_H 1 > #define PETSC_HAVE_RAND 1 > #define PETSC_HAVE_READLINK 1 > #define PETSC_HAVE_REALPATH 1 > #define PETSC_HAVE_REAL___FLOAT128 1 > #define PETSC_HAVE_REGEX 1 > #define 
PETSC_HAVE_RTLD_GLOBAL 1 > #define PETSC_HAVE_RTLD_LAZY 1 > #define PETSC_HAVE_RTLD_LOCAL 1 > #define PETSC_HAVE_RTLD_NOW 1 > #define PETSC_HAVE_SCHED_CPU_SET_T 1 > #define PETSC_HAVE_SETJMP_H 1 > #define PETSC_HAVE_SLEEP 1 > #define PETSC_HAVE_SNPRINTF 1 > #define PETSC_HAVE_SOCKET 1 > #define PETSC_HAVE_SO_REUSEADDR 1 > #define PETSC_HAVE_STDINT_H 1 > #define PETSC_HAVE_STRCASECMP 1 > #define PETSC_HAVE_STRINGS_H 1 > #define PETSC_HAVE_STRUCT_SIGACTION 1 > #define PETSC_HAVE_SYSINFO 1 > #define PETSC_HAVE_SYS_PARAM_H 1 > #define PETSC_HAVE_SYS_PROCFS_H 1 > #define PETSC_HAVE_SYS_RESOURCE_H 1 > #define PETSC_HAVE_SYS_SOCKET_H 1 > #define PETSC_HAVE_SYS_SYSINFO_H 1 > #define PETSC_HAVE_SYS_TIMES_H 1 > #define PETSC_HAVE_SYS_TIME_H 1 > #define PETSC_HAVE_SYS_TYPES_H 1 > #define PETSC_HAVE_SYS_UTSNAME_H 1 > #define PETSC_HAVE_SYS_WAIT_H 1 > #define PETSC_HAVE_TGAMMA 1 > #define PETSC_HAVE_TIME 1 > #define PETSC_HAVE_TIME_H 1 > #define PETSC_HAVE_UNAME 1 > #define PETSC_HAVE_UNISTD_H 1 > #define PETSC_HAVE_USLEEP 1 > #define PETSC_HAVE_VA_COPY 1 > #define PETSC_HAVE_VSNPRINTF 1 > #define PETSC_HAVE_X 1 > #define PETSC_HAVE_XMMINTRIN_H 1 > #define PETSC_HDF5_HAVE_PARALLEL 1 > #define PETSC_HDF5_HAVE_ZLIB 1 > #define PETSC_IS_COLORING_MAX USHRT_MAX > #define PETSC_IS_COLORING_VALUE_TYPE short > #define PETSC_IS_COLORING_VALUE_TYPE_F integer2 > #define PETSC_LEVEL1_DCACHE_LINESIZE 64 > #define PETSC_LIB_DIR "/usr/local/petsc/linux-gnu-real-32/lib" > #define PETSC_MAX_PATH_LEN 4096 > #define PETSC_MEMALIGN 16 > #define PETSC_MPICC_SHOW "gcc -I/usr/local/include -L/usr/local/lib > -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -lmpi" > #define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT > #define PETSC_PETSC4PY_INSTALL_PATH > "/usr/local/petsc/linux-gnu-real-32/lib" > #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA > #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 > #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 > #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 > #define PETSC_PYTHON_EXE "/usr/bin/python3" > #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) > #define PETSC_REPLACE_DIR_SEPARATOR '\\' > #define PETSC_SIGNAL_CAST > #define PETSC_SIZEOF_ENUM 4 > #define PETSC_SIZEOF_INT 4 > #define PETSC_SIZEOF_LONG 8 > #define PETSC_SIZEOF_LONG_LONG 8 > #define PETSC_SIZEOF_SHORT 2 > #define PETSC_SIZEOF_SIZE_T 8 > #define PETSC_SIZEOF_VOID_P 8 > #define PETSC_SLSUFFIX "so" > #define PETSC_UINTPTR_T uintptr_t > #define PETSC_UNUSED __attribute((unused)) > #define PETSC_USE_AVX512_KERNELS 1 > #define PETSC_USE_BACKWARD_LOOP 1 > #define PETSC_USE_CTABLE 1 > #define PETSC_USE_DEBUG 1 > #define PETSC_USE_DEBUGGER "gdb" > #define PETSC_USE_INFO 1 > #define PETSC_USE_ISATTY 1 > #define PETSC_USE_LOG 1 > #define PETSC_USE_PROC_FOR_SIZE 1 > #define PETSC_USE_REAL_DOUBLE 1 > #define PETSC_USE_SHARED_LIBRARIES 1 > #define PETSC_USE_SINGLE_LIBRARY 1 > #define PETSC_USE_SOCKET_VIEWER 1 > #define PETSC_USE_VISIBILITY_C 1 > #define PETSC_USE_VISIBILITY_CXX 1 > #define PETSC_USING_64BIT_PTR 1 > #define PETSC_USING_F2003 1 > #define PETSC_USING_F90FREEFORM 1 > #define PETSC__BSD_SOURCE 1 > #define PETSC__DEFAULT_SOURCE 1 > #define PETSC__GNU_SOURCE 1 > ----------------------------------------- > Using C compile: mpicc -o .o -c -fPIC -Wall -Wwrite-strings > -Wno-strict-aliasing -Wno-unknown-pragmas -Wno-misleading-indentation > -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -g3 -O0 > mpicc -show: gcc -I/usr/local/include -L/usr/local/lib -Wl,-rpath > -Wl,/usr/local/lib 
-Wl,--enable-new-dtags -lmpi > C compiler version: gcc (Ubuntu 11.2.0-7ubuntu2) 11.2.0 > Using C++ compile: mpicxx -o .o -c -Wall -Wwrite-strings > -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > -fvisibility=hidden -g -O0 -fPIC -std=gnu++17 -I/usr/local/petsc/include > -I/usr/local/petsc/linux-gnu-real-32/include > mpicxx -show: g++ -I/usr/local/include -L/usr/local/lib -lmpicxx > -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -lmpi > C++ compiler version: g++ (Ubuntu 11.2.0-7ubuntu2) 11.2.0 > Using Fortran compile: mpif90 -o .o -c -fPIC -Wall -ffree-line-length-0 > -Wno-unused-dummy-argument -g -O0 -I/usr/local/petsc/include > -I/usr/local/petsc/linux-gnu-real-32/include > mpif90 -show: gfortran -I/usr/local/include -I/usr/local/include > -L/usr/local/lib -lmpifort -Wl,-rpath -Wl,/usr/local/lib > -Wl,--enable-new-dtags -lmpi > Fortran compiler version: GNU Fortran (Ubuntu 11.2.0-7ubuntu2) 11.2.0 > ----------------------------------------- > Using C/C++ linker: mpicc > Using C/C++ flags: -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -Wno-misleading-indentation -Wno-stringop-overflow > -fstack-protector -fvisibility=hidden -g3 -O0 > Using Fortran linker: mpif90 > Using Fortran flags: -fPIC -Wall -ffree-line-length-0 > -Wno-unused-dummy-argument -g -O0 > ----------------------------------------- > Using system modules: > Using mpi.h: # 1 "/usr/local/include/mpi.h" 1 3 > ----------------------------------------- > Using libraries: -Wl,-rpath,/usr/local/petsc/linux-gnu-real-32/lib > -L/usr/local/petsc/linux-gnu-real-32/lib -Wl,-rpath,/usr/local/lib > -L/usr/local/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/11 > -L/usr/lib/gcc/x86_64-linux-gnu/11 -lpetsc -llapack -lblas -lhdf5_hl -lhdf5 > -lm -lX11 -lstdc++ -ldl -lmpifort -lmpi -lgfortran -lm -lgfortran -lm > -lgcc_s -lquadmath -lstdc++ -ldl > ------------------------------------------ > Using mpiexec: mpiexec > ------------------------------------------ > Using MAKE: /usr/bin/gmake > Using MAKEFLAGS: -j10 -l18.0 --no-print-directory -- > PETSC_DIR=/usr/local/petsc PETSC_ARCH=linux-gnu-real-32 > ========================================== > gmake[3]: Nothing to be done for 'libs'. > *** Building petsc4py *** > /usr/bin/bash: line 1: cd: src/binding/petsc4py: No such file or directory > Does this directory exist? $PETSC_DIR/src/binding/petsc4py It does in the PETSc 3.16.0 repository: https://gitlab.com/petsc/petsc/-/tree/main/src/binding/petsc4py Thanks, Matt > **************************ERROR************************************* > Error building petsc4py. > ******************************************************************** > gmake[2]: *** > [/usr/local/petsc/linux-gnu-real-32/lib/petsc/conf/petscrules:45: > petsc4pybuild] Error 1 > **************************ERROR************************************* > Error during compile, check linux-gnu-real-32/lib/petsc/conf/make.log > Send it and linux-gnu-real-32/lib/petsc/conf/configure.log to > petsc-maint at mcs.anl.gov > ******************************************************************** > gmake[1]: *** [makefile:40: all] Error 1 > make: *** [GNUmakefile:9: all] Error 2 > > Sorry for the massive text blob, but that was definitely unexepcted > behaviour. I know I mailed before saying the --with-petsc4py option didn't > change a thing, but now it seems to be breaking everything. I serached a > bit through make.log and didn't find anything useful. > > Quentin > > > > [image: cid:image003.jpg at 01D690CB.3B3FDC10] > > Quentin CHEVALIER ? 
IA parcours recherche > > LadHyX - Ecole polytechnique > > __________ > > > On Thu, 9 Dec 2021 at 18:02, Matthew Knepley wrote: > >> On Thu, Dec 9, 2021 at 11:40 AM Quentin Chevalier < >> quentin.chevalier at polytechnique.edu> wrote: >> >>> Hello Matthew, >>> >>> You're absolutely right ! Editor error, my apologies. Running full >>> process 1-5) in container gives : >>> >>> root at container_id: ./ex19 -da_refine 3 -pc_type mg -ksp_type fgmres >>> lid velocity = 0.0016, prandtl # = 1., grashof # = 1. >>> Number of SNES iterations = 2 >>> >>> So I guess it is passing the test. >> >> >> Yes, that is right. >> >> >>> I guess it is a site-wide >>> installation then. Is that good news or bad news when it comes to >>> adding HDF5 on top of things ? >>> >> >> It is fine either way. You should now be able to get further. petsc4py >> should be just using the shared >> libraries, so you should now be able to run an HDF5 thing from there. >> Does it work? >> >> Thanks, >> >> Matt >> >> >>> Thank you for your time, >>> >>> Quentin >>> >>> On Wed, 8 Dec 2021 at 19:12, Matthew Knepley wrote: >>> > >>> > On Wed, Dec 8, 2021 at 11:05 AM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >>> >> Sorry Matthew, I had a correct tabulation. The attached file gives the >>> >> same error. >>> > >>> > >>> > The file you attached has 6 spaces in front of ${CLINKER} rather than >>> a tab character. >>> > >>> > Thanks, >>> > >>> > Matt >>> > >>> >> >>> >> Quentin >>> >> >>> >> >>> >> >>> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >>> >> LadHyX - Ecole polytechnique >>> >> >>> >> __________ >>> >> >>> >> >>> >> >>> >> On Wed, 8 Dec 2021 at 16:51, Matthew Knepley >>> wrote: >>> >> > >>> >> > On Wed, Dec 8, 2021 at 9:39 AM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >> >>> >> >> Step 4) fails. Traceback is : makefile:2: *** missing separator. >>> Stop. >>> >> > >>> >> > >>> >> > Makefiles require a tab character at the beginning of every action >>> line. When you cut & pasted from email >>> >> > the tab got eaten. Once you put it back, the makefile will work. >>> >> > >>> >> >> >>> >> >> echo $CLINKER return an empty line. Same for PETSC_LIB. It would >>> seem >>> >> >> the docker container has no such environment variables. Or did you >>> >> >> expect me to replace these by an environment specific linker ? >>> >> > >>> >> > >>> >> > Those variables are defined in the included makefile >>> >> > >>> >> > include ${PETSC_DIR}/lib/petsc/conf/variables >>> >> > >>> >> > Thanks, >>> >> > >>> >> > Matt >>> >> > >>> >> >> >>> >> >> Quentin >>> >> >> >>> >> >> >>> >> >> >>> >> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >> >>> >> >> LadHyX - Ecole polytechnique >>> >> >> >>> >> >> __________ >>> >> >> >>> >> >> >>> >> >> >>> >> >> On Wed, 8 Dec 2021 at 15:14, Matthew Knepley >>> wrote: >>> >> >> > >>> >> >> > On Wed, Dec 8, 2021 at 9:07 AM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >> >> >>> >> >> >> I'm not sure I understand what you're saying... But it's my >>> theory >>> >> >> >> from the beginning that make check fails because PETSc examples >>> were >>> >> >> >> taken out of the docker image. >>> >> >> > >>> >> >> > >>> >> >> > It appears so. There is no 'src' directory. Here is a simple way >>> to test with an install like this. 
>>> >> >> > >>> >> >> > 1) mkdir test; cd test >>> >> >> > >>> >> >> > 2) Download the source: >>> https://gitlab.com/petsc/petsc/-/raw/main/src/snes/tutorials/ex19.c?inline=false >>> >> >> > >>> >> >> > 3) Create a simple makefile: >>> >> >> > >>> >> >> > ex19: ex19.o >>> >> >> > ${CLINKER} -o ex19 ex19.o ${PETSC_LIB} >>> >> >> > >>> >> >> > include ${PETSC_DIR}/lib/petsc/conf/variables >>> >> >> > include ${PETSC_DIR}/lib/petsc/conf/rules >>> >> >> > >>> >> >> > 4) make ex19 >>> >> >> > >>> >> >> > 5) ./ex19 -da_refine 3 -pc_type mg -ksp_type fgmres >>> >> >> > >>> >> >> > Thanks, >>> >> >> > >>> >> >> > Matt >>> >> >> > >>> >> >> >> I'm unsure what would be the correct way to discriminate between >>> >> >> >> source and partial install, I tried a find . --name "testex19" >>> from >>> >> >> >> PETSC_DIR to no avail. A ls from PETSC_DIR yields : >>> CODE_OF_CONDUCT.md >>> >> >> >> CONTRIBUTING GNUmakefile LICENSE PKG-INFO config >>> configtest.mod >>> >> >> >> configure configure.log configure.log.bkp gmakefile >>> gmakefile.test >>> >> >> >> include interfaces lib linux-gnu-complex-32 >>> linux-gnu-complex-64 >>> >> >> >> linux-gnu-real-32 linux-gnu-real-64 make.log makefile >>> petscdir.mk >>> >> >> >> setup.py >>> >> >> >> >>> >> >> >> Quentin >>> >> >> >> On Wed, 8 Dec 2021 at 14:38, Matthew Knepley >>> wrote: >>> >> >> >> > >>> >> >> >> > On Wed, Dec 8, 2021 at 8:29 AM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >> >> >> >>> >> >> >> >> @Matthew Knepley I'm confused by your comment. Judging from >>> the >>> >> >> >> >> traceback, PETSC_DIR is set and correct. You think it should >>> be >>> >> >> >> >> specified in the command line as well ? >>> >> >> >> > >>> >> >> >> > >>> >> >> >> > You are right, usually this message >>> >> >> >> > >>> >> >> >> > >> /usr/bin/bash: line 1: cd: src/snes/tutorials: No such >>> file or directory >>> >> >> >> > >>> >> >> >> > means that PETSC_DIR is undefined, and thus you cannot find >>> the tutorials. However, it could be that >>> >> >> >> > Your PETSC_DIR refers to a site-wide installation, which >>> /usr/local/petsc seems like. There the source >>> >> >> >> > is not installed and we cannot run 'make check' because only >>> the headers and libraries are there. Is this >>> >> >> >> > what is happening here? >>> >> >> >> > >>> >> >> >> > Thanks, >>> >> >> >> > >>> >> >> >> > Matt >>> >> >> >> > >>> >> >> >> >> >>> >> >> >> >> Quentin >>> >> >> >> >> >>> >> >> >> >> >>> >> >> >> >> >>> >> >> >> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >> >> >> >>> >> >> >> >> LadHyX - Ecole polytechnique >>> >> >> >> >> >>> >> >> >> >> __________ >>> >> >> >> >> >>> >> >> >> >> >>> >> >> >> >> >>> >> >> >> >> On Wed, 8 Dec 2021 at 13:23, Matthew Knepley < >>> knepley at gmail.com> wrote: >>> >> >> >> >> > >>> >> >> >> >> > On Wed, Dec 8, 2021 at 4:08 AM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >> >> >> >> >>> >> >> >> >> >> @all thanks for your time it's heartening to see a lively >>> community. >>> >> >> >> >> >> >>> >> >> >> >> >> @Barry I've restarted the container and grabbed the .log >>> file directly after the docker magic. 
I've tried a make check, it >>> unsurprisingly spews the same answer as before : >>> >> >> >> >> >> >>> >> >> >> >> >> Running check examples to verify correct installation >>> >> >> >> >> >> Using PETSC_DIR=/usr/local/petsc and >>> PETSC_ARCH=linux-gnu-real-32 >>> >> >> >> >> >> /usr/bin/bash: line 1: cd: src/snes/tutorials: No such >>> file or directory >>> >> >> >> >> >> /usr/bin/bash: line 1: cd: src/snes/tutorials: No such >>> file or directory >>> >> >> >> >> >> gmake[3]: *** No rule to make target 'testex19'. Stop. >>> >> >> >> >> >> gmake[2]: *** [makefile:155: check_build] Error 2 >>> >> >> >> >> > >>> >> >> >> >> > >>> >> >> >> >> > This happens if you run 'make check' without defining >>> PETSC_DIR in your environment, since we are including >>> >> >> >> >> > makefiles with PETSC_DIR in the path and make does not >>> allow proper error messages in that case. >>> >> >> >> >> > >>> >> >> >> >> > Thanks, >>> >> >> >> >> > >>> >> >> >> >> > Matt >>> >> >> >> >> > >>> >> >> >> >> >> >>> >> >> >> >> >> @Matthew ok will do, but I think @Lawrence has already >>> provided that answer. It's possible to change the dockerfile and recompute >>> the dolfinx image with hdf5, only it is a time-consuming process. >>> >> >> >> >> >> >>> >> >> >> >> >> Quentin >>> >> >> >> >> >> >>> >> >> >> >> >> >>> >> >> >> >> >> >>> >> >> >> >> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >> >> >> >> >>> >> >> >> >> >> LadHyX - Ecole polytechnique >>> >> >> >> >> >> >>> >> >> >> >> >> __________ >>> >> >> >> >> >> >>> >> >> >> >> >> >>> >> >> >> >> >> >>> >> >> >> >> >> On Tue, 7 Dec 2021 at 19:16, Matthew Knepley < >>> knepley at gmail.com> wrote: >>> >> >> >> >> >>> >>> >> >> >> >> >>> On Tue, Dec 7, 2021 at 9:43 AM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >> >> >> >>>> >>> >> >> >> >> >>>> @Matthew, as stated before, error output is unchanged, >>> i.e.the python >>> >> >> >> >> >>>> command below produces the same traceback : >>> >> >> >> >> >>>> >>> >> >> >> >> >>>> # python3 -c "from petsc4py import PETSc; >>> PETSc.Viewer().createHDF5('d.h5')" >>> >> >> >> >> >>>> Traceback (most recent call last): >>> >> >> >> >> >>>> File "", line 1, in >>> >> >> >> >> >>>> File "PETSc/Viewer.pyx", line 182, in >>> petsc4py.PETSc.Viewer.createHDF5 >>> >> >> >> >> >>>> petsc4py.PETSc.Error: error code 86 >>> >> >> >> >> >>>> [0] PetscViewerSetType() at >>> >> >> >> >> >>>> >>> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >>> >> >> >> >> >>>> [0] Unknown type. Check for miss-spelling or missing >>> package: >>> >> >> >> >> >>>> >>> https://petsc.org/release/install/install/#external-packages >>> >> >> >> >> >>>> [0] Unknown PetscViewer type given: hdf5 >>> >> >> >> >> >>> >>> >> >> >> >> >>> >>> >> >> >> >> >>> The reason I wanted the output was that the C output >>> shows the configure options that the PETSc library >>> >> >> >> >> >>> was built with, However, Python seems to be eating this, >>> so I cannot check. >>> >> >> >> >> >>> >>> >> >> >> >> >>> It seems like using this container is >>> counter-productive. If it was built correctly, making these changes would >>> be trivial. >>> >> >> >> >> >>> Send mail to FEniCS (I am guessing Chris Richardson >>> maintains this), and ask how they intend people to change these >>> >> >> >> >> >>> options. >>> >> >> >> >> >>> >>> >> >> >> >> >>> Thanks, >>> >> >> >> >> >>> >>> >> >> >> >> >>> Matt. >>> >> >> >> >> >>> >>> >> >> >> >> >>>> >>> >> >> >> >> >>>> @Wence that makes sense. 
I'd assumed that the original >>> PETSc had been >>> >> >> >> >> >>>> overwritten, and if the linking has gone wrong I'm >>> surprised anything >>> >> >> >> >> >>>> happens with petsc4py at all. >>> >> >> >> >> >>>> >>> >> >> >> >> >>>> Your tentative command gave : >>> >> >> >> >> >>>> >>> >> >> >> >> >>>> ERROR: Invalid requirement: >>> '/usr/local/petsc/src/binding/petsc4py' >>> >> >> >> >> >>>> Hint: It looks like a path. File >>> >> >> >> >> >>>> '/usr/local/petsc/src/binding/petsc4py' does not exist. >>> >> >> >> >> >>>> >>> >> >> >> >> >>>> So I tested that global variables PETSC_ARCH & >>> PETSC_DIR were correct >>> >> >> >> >> >>>> then ran "pip install petsc4py" to restart petsc4py >>> from scratch. This >>> >> >> >> >> >>>> gives rise to a different error : >>> >> >> >> >> >>>> # python3 -c "from petsc4py import PETSc" >>> >> >> >> >> >>>> Traceback (most recent call last): >>> >> >> >> >> >>>> File "", line 1, in >>> >> >> >> >> >>>> File >>> "/usr/local/lib/python3.9/dist-packages/petsc4py/PETSc.py", >>> >> >> >> >> >>>> line 3, in >>> >> >> >> >> >>>> PETSc = ImportPETSc(ARCH) >>> >> >> >> >> >>>> File >>> "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >>> >> >> >> >> >>>> line 29, in ImportPETSc >>> >> >> >> >> >>>> return Import('petsc4py', 'PETSc', path, arch) >>> >> >> >> >> >>>> File >>> "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >>> >> >> >> >> >>>> line 73, in Import >>> >> >> >> >> >>>> module = import_module(pkg, name, path, arch) >>> >> >> >> >> >>>> File >>> "/usr/local/lib/python3.9/dist-packages/petsc4py/lib/__init__.py", >>> >> >> >> >> >>>> line 58, in import_module >>> >> >> >> >> >>>> with f: return imp.load_module(fullname, f, fn, >>> info) >>> >> >> >> >> >>>> File "/usr/lib/python3.9/imp.py", line 242, in >>> load_module >>> >> >> >> >> >>>> return load_dynamic(name, filename, file) >>> >> >> >> >> >>>> File "/usr/lib/python3.9/imp.py", line 342, in >>> load_dynamic >>> >> >> >> >> >>>> return _load(spec) >>> >> >> >> >> >>>> ImportError: >>> /usr/local/lib/python3.9/dist-packages/petsc4py/lib/linux-gnu-real-32/ >>> PETSc.cpython-39-x86_64-linux-gnu.so: >>> >> >> >> >> >>>> undefined symbol: petscstack >>> >> >> >> >> >>>> >>> >> >> >> >> >>>> Not sure that it a step forward ; looks like petsc4py >>> is broken now. >>> >> >> >> >> >>>> >>> >> >> >> >> >>>> Quentin >>> >> >> >> >> >>>> >>> >> >> >> >> >>>> On Tue, 7 Dec 2021 at 14:58, Matthew Knepley < >>> knepley at gmail.com> wrote: >>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> > On Tue, Dec 7, 2021 at 8:26 AM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >> >> >> >>>> >> >>> >> >> >> >> >>>> >> Ok my bad, that log corresponded to a tentative >>> --download-hdf5. This >>> >> >> >> >> >>>> >> log corresponds to the commands given above and has >>> --with-hdf5 in its >>> >> >> >> >> >>>> >> options. >>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> > Okay, this configure was successful and found HDF5 >>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> >> >>> >> >> >> >> >>>> >> The whole process still results in the same error. >>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> > Now send me the complete error output with this PETSc. 
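For what it is worth, an "undefined symbol: petscstack" at import time usually means the petsc4py extension module was compiled against a different PETSc build than the libpetsc it finds at run time. A minimal sketch of rebuilding the pip copy against the container's PETSc, assuming pip builds petsc4py from its source distribution and honours these environment variables (the version pin is only illustrative and should match the installed PETSc):

export PETSC_DIR=/usr/local/petsc
export PETSC_ARCH=linux-gnu-real-32
# rebuild petsc4py from source against the PETSc build selected above
python3 -m pip install --force-reinstall --no-cache-dir 'petsc4py==3.16.*'
# quick import test afterwards, as used elsewhere in this thread
python3 -c "from petsc4py import PETSc"
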
>>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> > Thanks, >>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> > Matt >>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> >> >>> >> >> >> >> >>>> >> Quentin >>> >> >> >> >> >>>> >> >>> >> >> >> >> >>>> >> >>> >> >> >> >> >>>> >> >>> >> >> >> >> >>>> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >> >> >> >>>> >> >>> >> >> >> >> >>>> >> LadHyX - Ecole polytechnique >>> >> >> >> >> >>>> >> >>> >> >> >> >> >>>> >> __________ >>> >> >> >> >> >>>> >> >>> >> >> >> >> >>>> >> >>> >> >> >> >> >>>> >> >>> >> >> >> >> >>>> >> On Tue, 7 Dec 2021 at 13:59, Matthew Knepley < >>> knepley at gmail.com> wrote: >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> > On Tue, Dec 7, 2021 at 3:55 AM Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> Hello Matthew, >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> That would indeed make sense. >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> Full log is attached, I grepped hdf5 in there and >>> didn't find anything alarming. >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> > At the top of this log: >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> > Configure Options: --configModules=PETSc.Configure >>> --optionsModule=config.compilerOptions PETSC_ARCH=linux-gnu-complex-64 >>> --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --with-make-np=2 >>> --with-64-bit-indices=yes --with-debugging=no --with-fortran-bindings=no >>> --with-shared-libraries --download-hypre --download-mumps >>> --download-ptscotch --download-scalapack --download-suitesparse >>> --download-superlu_dist --with-scalar-type=complex >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> > So the HDF5 option is not being specified. >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> > Thanks, >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> > Matt >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> >> Cheers, >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> Quentin >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> LadHyX - Ecole polytechnique >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> __________ >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> >>> >> >> >> >> >>>> >> >> On Mon, 6 Dec 2021 at 21:39, Matthew Knepley < >>> knepley at gmail.com> wrote: >>> >> >> >> >> >>>> >> >>> >>> >> >> >> >> >>>> >> >>> On Mon, Dec 6, 2021 at 3:27 PM Quentin Chevalier >>> wrote: >>> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >>>> >> >>>> Fine. MWE is unchanged : >>> >> >> >> >> >>>> >> >>>> * Run this docker container >>> >> >> >> >> >>>> >> >>>> * Do : python3 -c "from petsc4py import PETSc; >>> PETSc.Viewer().createHDF5('dummy.h5')" >>> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >>>> >> >>>> Updated attempt at a fix : >>> >> >> >> >> >>>> >> >>>> * cd /usr/local/petsc/ >>> >> >> >> >> >>>> >> >>>> * ./configure PETSC_ARCH= linux-gnu-real-32 >>> PETSC_DIR=/usr/local/petsc --with-hdf5 --force >>> >> >> >> >> >>>> >> >>> >>> >> >> >> >> >>>> >> >>> >>> >> >> >> >> >>>> >> >>> Did it find HDF5? If not, it will shut it off. >>> You need to send us >>> >> >> >> >> >>>> >> >>> >>> >> >> >> >> >>>> >> >>> $PETSC_DIR/configure.log >>> >> >> >> >> >>>> >> >>> >>> >> >> >> >> >>>> >> >>> so we can see what happened in the configure run. 
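A quick way to check whether a given PETSC_ARCH was really built with HDF5 (paths as used in this thread) is to grep the generated configuration header and the configure log:

# prints "#define PETSC_HAVE_HDF5 1" if HDF5 support made it into the build
grep PETSC_HAVE_HDF5 /usr/local/petsc/linux-gnu-real-32/include/petscconf.h
# the configure log records whether and where HDF5 was found
grep -i hdf5 /usr/local/petsc/configure.log | head

If PETSC_HAVE_HDF5 is missing, the library simply has no hdf5 viewer registered, which is exactly what the "Unknown PetscViewer type given: hdf5" message says.
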
>>> >> >> >> >> >>>> >> >>> >>> >> >> >> >> >>>> >> >>> Thanks, >>> >> >> >> >> >>>> >> >>> >>> >> >> >> >> >>>> >> >>> Matt >>> >> >> >> >> >>>> >> >>> >>> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >>>> >> >>>> * make PETSC_DIR=/usr/local/petsc PETSC-ARCH= >>> linux-gnu-real-32 all >>> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >>>> >> >>>> Still no joy. The same error remains. >>> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >>>> >> >>>> Quentin >>> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >>>> >> >>>> >>> >> >> >> >> >>>> >> >>>> On Mon, 6 Dec 2021 at 20:04, Pierre Jolivet < >>> pierre at joliv.et> wrote: >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > On 6 Dec 2021, at 7:42 PM, Quentin Chevalier < >>> quentin.chevalier at polytechnique.edu> wrote: >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > The PETSC_DIR exactly corresponds to the >>> previous one, so I guess that rules option b) out, except if a specific >>> option is required to overwrite a previous installation of PETSc. As for >>> a), well I thought reconfiguring pretty direct, you're welcome to give me a >>> hint as to what could be wrong in the following process. >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > Steps to reproduce this behaviour are as >>> follows : >>> >> >> >> >> >>>> >> >>>> > * Run this docker container >>> >> >> >> >> >>>> >> >>>> > * Do : python3 -c "from petsc4py import >>> PETSc; PETSc.Viewer().createHDF5('dummy.h5')" >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > After you get the error Unknown PetscViewer >>> type, feel free to try : >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > * cd /usr/local/petsc/ >>> >> >> >> >> >>>> >> >>>> > * ./configure --with-hfd5 >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > It?s hdf5, not hfd5. >>> >> >> >> >> >>>> >> >>>> > It?s PETSC_ARCH, not PETSC-ARCH. >>> >> >> >> >> >>>> >> >>>> > PETSC_ARCH is missing from your configure >>> line. >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > Thanks, >>> >> >> >> >> >>>> >> >>>> > Pierre >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > * make PETSC_DIR=/usr/local/petsc >>> PETSC-ARCH=linux-gnu-real-32 all >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > Then repeat the MWE and observe absolutely no >>> behavioural change whatsoever. I'm afraid I don't know PETSc well enough to >>> be surprised by that. >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > Quentin >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > Quentin CHEVALIER ? 
IA parcours recherche >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > LadHyX - Ecole polytechnique >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > __________ >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > On Mon, 6 Dec 2021 at 19:24, Matthew Knepley < >>> knepley at gmail.com> wrote: >>> >> >> >> >> >>>> >> >>>> >> >>> >> >> >> >> >>>> >> >>>> >> On Mon, Dec 6, 2021 at 1:22 PM Quentin >>> Chevalier wrote: >>> >> >> >> >> >>>> >> >>>> >>> >>> >> >> >> >> >>>> >> >>>> >>> It failed all of the tests included in `make >>> >> >> >> >> >>>> >> >>>> >>> PETSC_DIR=/usr/local/petsc >>> PETSC-ARCH=linux-gnu-real-32 check`, with >>> >> >> >> >> >>>> >> >>>> >>> the error `/usr/bin/bash: line 1: cd: >>> src/snes/tutorials: No such file >>> >> >> >> >> >>>> >> >>>> >>> or directory` >>> >> >> >> >> >>>> >> >>>> >>> >>> >> >> >> >> >>>> >> >>>> >>> I am therefore fairly confident this a >>> "file absence" problem, and not >>> >> >> >> >> >>>> >> >>>> >>> a compilation problem. >>> >> >> >> >> >>>> >> >>>> >>> >>> >> >> >> >> >>>> >> >>>> >>> I repeat that there was no error at >>> compilation stage. The final stage >>> >> >> >> >> >>>> >> >>>> >>> did present `gmake[3]: Nothing to be done >>> for 'libs'.` but that's all. >>> >> >> >> >> >>>> >> >>>> >>> >>> >> >> >> >> >>>> >> >>>> >>> Again, running `./configure --with-hdf5` >>> followed by a `make >>> >> >> >> >> >>>> >> >>>> >>> PETSC_DIR=/usr/local/petsc >>> PETSC-ARCH=linux-gnu-real-32 all` does not >>> >> >> >> >> >>>> >> >>>> >>> change the problem. I get the same error at >>> the same position as >>> >> >> >> >> >>>> >> >>>> >>> before. >>> >> >> >> >> >>>> >> >>>> >> >>> >> >> >> >> >>>> >> >>>> >> >>> >> >> >> >> >>>> >> >>>> >> If you reconfigured and rebuilt, it is >>> impossible to get the same error, so >>> >> >> >> >> >>>> >> >>>> >> >>> >> >> >> >> >>>> >> >>>> >> a) You did not reconfigure >>> >> >> >> >> >>>> >> >>>> >> >>> >> >> >> >> >>>> >> >>>> >> b) Your new build is somewhere else on the >>> machine >>> >> >> >> >> >>>> >> >>>> >> >>> >> >> >> >> >>>> >> >>>> >> Thanks, >>> >> >> >> >> >>>> >> >>>> >> >>> >> >> >> >> >>>> >> >>>> >> Matt >>> >> >> >> >> >>>> >> >>>> >> >>> >> >> >> >> >>>> >> >>>> >>> >>> >> >> >> >> >>>> >> >>>> >>> I will comment I am running on OpenSUSE. >>> >> >> >> >> >>>> >> >>>> >>> >>> >> >> >> >> >>>> >> >>>> >>> Quentin >>> >> >> >> >> >>>> >> >>>> >>> >>> >> >> >> >> >>>> >> >>>> >>> On Mon, 6 Dec 2021 at 19:09, Matthew >>> Knepley wrote: >>> >> >> >> >> >>>> >> >>>> >>> > >>> >> >> >> >> >>>> >> >>>> >>> > On Mon, Dec 6, 2021 at 1:08 PM Quentin >>> Chevalier wrote: >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >>>> >> >>>> >>> >> Hello Matthew and thanks for your quick >>> response. >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >>>> >> >>>> >>> >> I'm afraid I did try to snoop around the >>> container and rerun PETSc's >>> >> >> >> >> >>>> >> >>>> >>> >> configure with the --with-hdf5 option, >>> to absolutely no avail. >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >>>> >> >>>> >>> >> I didn't see any errors during config or >>> make, but it failed the tests >>> >> >> >> >> >>>> >> >>>> >>> >> (which aren't included in the minimal >>> container I suppose) >>> >> >> >> >> >>>> >> >>>> >>> > >>> >> >> >> >> >>>> >> >>>> >>> > >>> >> >> >> >> >>>> >> >>>> >>> > Failed which tests? What was the error? 
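Since the failure here is a missing directory rather than a failed compile, one cheap check is whether the example sources were shipped at all, e.g.

# 'make check' needs the tutorial sources; in a stripped-down install this is absent
ls /usr/local/petsc/src/snes/tutorials

If that path does not exist the check target cannot run, but the installed headers and libraries can still be perfectly usable, which is what the manual ex19 test elsewhere in this thread demonstrates.
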
>>> >> >> >> >> >>>> >> >>>> >>> > >>> >> >> >> >> >>>> >> >>>> >>> > Thanks, >>> >> >> >> >> >>>> >> >>>> >>> > >>> >> >> >> >> >>>> >> >>>> >>> > Matt >>> >> >> >> >> >>>> >> >>>> >>> > >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >>>> >> >>>> >>> >> Quentin >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >>>> >> >>>> >>> >> Quentin CHEVALIER ? IA parcours recherche >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >>>> >> >>>> >>> >> LadHyX - Ecole polytechnique >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >>>> >> >>>> >>> >> __________ >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >> >> >> >> >>>> >> >>>> >>> >> On Mon, 6 Dec 2021 at 19:02, Matthew >>> Knepley wrote: >>> >> >> >> >> >>>> >> >>>> >>> >> > >>> >> >> >> >> >>>> >> >>>> >>> >> > On Mon, Dec 6, 2021 at 11:28 AM >>> Quentin Chevalier wrote: >>> >> >> >> >> >>>> >> >>>> >>> >> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >> Hello PETSc users, >>> >> >> >> >> >>>> >> >>>> >>> >> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >> This email is a duplicata of this >>> gitlab issue, sorry for any inconvenience caused. >>> >> >> >> >> >>>> >> >>>> >>> >> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >> I want to compute a PETSc vector in >>> real mode, than perform calculations with it in complex mode. I want as >>> much of this process to be parallel as possible. Right now, I compile PETSc >>> in real mode, compute my vector and save it to a file, then switch to >>> complex mode, read it, and move on. >>> >> >> >> >> >>>> >> >>>> >>> >> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >> This creates unexpected behaviour >>> using MPIIO, so on Lisandro Dalcinl's advice I'm moving to HDF5 format. My >>> code is as follows (taking inspiration from petsc4py doc, a bitbucket >>> example and another one, all top Google results for 'petsc hdf5') : >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >>> >> >> >> >> >>>> >> >>>> >>> >> >>> viewer = >>> PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >>> >> >> >> >> >>>> >> >>>> >>> >> >>> q.load(viewer) >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >>> q.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD) >>> >> >> >> >> >>>> >> >>>> >>> >> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >> This crashes my code. I obtain >>> traceback : >>> >> >> >> >> >>>> >> >>>> >>> >> >>> >>> >> >> >> >> >>>> >> >>>> >>> >> >>> File "/home/shared/code.py", line >>> 121, in Load >>> >> >> >> >> >>>> >> >>>> >>> >> >>> viewer = >>> PETSc.Viewer().createHDF5(file_name, 'r', COMM_WORLD) >>> >> >> >> >> >>>> >> >>>> >>> >> >>> File "PETSc/Viewer.pyx", line 182, >>> in petsc4py.PETSc.Viewer.createHDF5 >>> >> >> >> >> >>>> >> >>>> >>> >> >>> petsc4py.PETSc.Error: error code 86 >>> >> >> >> >> >>>> >> >>>> >>> >> >>> [0] PetscViewerSetType() at >>> /usr/local/petsc/src/sys/classes/viewer/interface/viewreg.c:442 >>> >> >> >> >> >>>> >> >>>> >>> >> >>> [0] Unknown type. Check for >>> miss-spelling or missing package: >>> https://petsc.org/release/install/install/#external-packages >>> >> >> >> >> >>>> >> >>>> >>> >> >>> [0] Unknown PetscViewer type given: >>> hdf5 >>> >> >> >> >> >>>> >> >>>> >>> >> > >>> >> >> >> >> >>>> >> >>>> >>> >> > This means that PETSc has not been >>> configured with HDF5 (--with-hdf5 or --download-hdf5), so the container >>> should be updated. 
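As a sketch of what updating such a build might look like, assuming the in-place tree at /usr/local/petsc with PETSC_ARCH=linux-gnu-real-32 used throughout this thread, and assuming the reconfigure script that configure normally writes is still present in the image (otherwise the original option list has to be repeated by hand with --with-hdf5 or --download-hdf5 appended):

cd /usr/local/petsc
# re-run configure with the original options for this arch, adding HDF5
./linux-gnu-real-32/lib/petsc/conf/reconfigure-linux-gnu-real-32.py --download-hdf5
make PETSC_DIR=/usr/local/petsc PETSC_ARCH=linux-gnu-real-32 all
# smoke test through petsc4py
python3 -c "from petsc4py import PETSc; PETSc.Viewer().createHDF5('dummy.h5')"

Because petsc4py only links the shared PETSc library, it should pick up the rebuilt PETSc without being reinstalled, as noted elsewhere in the thread.
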
>>> >> >> >> >> >>>> >> >>>> >>> >> > >>> >> >> >> >> >>>> >> >>>> >>> >> > THanks, >>> >> >> >> >> >>>> >> >>>> >>> >> > >>> >> >> >> >> >>>> >> >>>> >>> >> > Matt >>> >> >> >> >> >>>> >> >>>> >>> >> > >>> >> >> >> >> >>>> >> >>>> >>> >> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >> I have petsc4py 3.16 from this docker >>> container (list of dependencies include PETSc and petsc4py). >>> >> >> >> >> >>>> >> >>>> >>> >> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >> I'm pretty sure this is not intended >>> behaviour. Any insight as to how to fix this issue (I tried running >>> ./configure --with-hdf5 to no avail) or more generally to perform this >>> jiggling between real and complex would be much appreciated, >>> >> >> >> >> >>>> >> >>>> >>> >> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >> Kind regards. >>> >> >> >> >> >>>> >> >>>> >>> >> >> >>> >> >> >> >> >>>> >> >>>> >>> >> >> Quentin >>> >> >> >> >> >>>> >> >>>> >>> >> > >>> >> >> >> >> >>>> >> >>>> >>> >> > >>> >> >> >> >> >>>> >> >>>> >>> >> > >>> >> >> >> >> >>>> >> >>>> >>> >> > -- >>> >> >> >> >> >>>> >> >>>> >>> >> > What most experimenters take for >>> granted before they begin their experiments is infinitely more interesting >>> than any results to which their experiments lead. >>> >> >> >> >> >>>> >> >>>> >>> >> > -- Norbert Wiener >>> >> >> >> >> >>>> >> >>>> >>> >> > >>> >> >> >> >> >>>> >> >>>> >>> >> > https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> >> >>>> >> >>>> >>> > >>> >> >> >> >> >>>> >> >>>> >>> > >>> >> >> >> >> >>>> >> >>>> >>> > >>> >> >> >> >> >>>> >> >>>> >>> > -- >>> >> >> >> >> >>>> >> >>>> >>> > What most experimenters take for granted >>> before they begin their experiments is infinitely more interesting than any >>> results to which their experiments lead. >>> >> >> >> >> >>>> >> >>>> >>> > -- Norbert Wiener >>> >> >> >> >> >>>> >> >>>> >>> > >>> >> >> >> >> >>>> >> >>>> >>> > https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> >> >>>> >> >>>> >> >>> >> >> >> >> >>>> >> >>>> >> >>> >> >> >> >> >>>> >> >>>> >> >>> >> >> >> >> >>>> >> >>>> >> -- >>> >> >> >> >> >>>> >> >>>> >> What most experimenters take for granted >>> before they begin their experiments is infinitely more interesting than any >>> results to which their experiments lead. >>> >> >> >> >> >>>> >> >>>> >> -- Norbert Wiener >>> >> >> >> >> >>>> >> >>>> >> >>> >> >> >> >> >>>> >> >>>> >> https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>>> > >>> >> >> >> >> >>>> >> >>> >>> >> >> >> >> >>>> >> >>> >>> >> >> >> >> >>>> >> >>> >>> >> >> >> >> >>>> >> >>> -- >>> >> >> >> >> >>>> >> >>> What most experimenters take for granted before >>> they begin their experiments is infinitely more interesting than any >>> results to which their experiments lead. >>> >> >> >> >> >>>> >> >>> -- Norbert Wiener >>> >> >> >> >> >>>> >> >>> >>> >> >> >> >> >>>> >> >>> https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> > -- >>> >> >> >> >> >>>> >> > What most experimenters take for granted before >>> they begin their experiments is infinitely more interesting than any >>> results to which their experiments lead. 
>>> >> >> >> >> >>>> >> > -- Norbert Wiener >>> >> >> >> >> >>>> >> > >>> >> >> >> >> >>>> >> > https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> > -- >>> >> >> >> >> >>>> > What most experimenters take for granted before they >>> begin their experiments is infinitely more interesting than any results to >>> which their experiments lead. >>> >> >> >> >> >>>> > -- Norbert Wiener >>> >> >> >> >> >>>> > >>> >> >> >> >> >>>> > https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> >> >>> >>> >> >> >> >> >>> >>> >> >> >> >> >>> >>> >> >> >> >> >>> -- >>> >> >> >> >> >>> What most experimenters take for granted before they >>> begin their experiments is infinitely more interesting than any results to >>> which their experiments lead. >>> >> >> >> >> >>> -- Norbert Wiener >>> >> >> >> >> >>> >>> >> >> >> >> >>> https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> >> > >>> >> >> >> >> > >>> >> >> >> >> > >>> >> >> >> >> > -- >>> >> >> >> >> > What most experimenters take for granted before they begin >>> their experiments is infinitely more interesting than any results to which >>> their experiments lead. >>> >> >> >> >> > -- Norbert Wiener >>> >> >> >> >> > >>> >> >> >> >> > https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> > >>> >> >> >> > >>> >> >> >> > >>> >> >> >> > -- >>> >> >> >> > What most experimenters take for granted before they begin >>> their experiments is infinitely more interesting than any results to which >>> their experiments lead. >>> >> >> >> > -- Norbert Wiener >>> >> >> >> > >>> >> >> >> > https://www.cse.buffalo.edu/~knepley/ >>> >> >> > >>> >> >> > >>> >> >> > >>> >> >> > -- >>> >> >> > What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> >> >> > -- Norbert Wiener >>> >> >> > >>> >> >> > https://www.cse.buffalo.edu/~knepley/ >>> >> > >>> >> > >>> >> > >>> >> > -- >>> >> > What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> >> > -- Norbert Wiener >>> >> > >>> >> > https://www.cse.buffalo.edu/~knepley/ >>> > >>> > >>> > >>> > -- >>> > What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> > -- Norbert Wiener >>> > >>> > https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From karthikeyan.chockalingam at stfc.ac.uk Fri Dec 10 06:42:11 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Fri, 10 Dec 2021 12:42:11 +0000 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: Message-ID: Hi Matt, I intend to perform a scaling study ? I have a few more questions from ex56 3D, tri-quadratic hexahedra (Q1), displacement finite element formulation. i) What makes the problem non-linear? 
I believe SNES are used to solve non-linear problems. ii) 2,2,1 does it present the number of elements used in each direction? iii) What makes the problem unstructured? I believe the geometry is a cube or cuboid ? is it because it uses DMPlex? iv) Do any external FEM package with an unstructured problem domain has to use DMPlex mat type? v) What about -mat_type (not _dm_mat_type) ajicusparse ? will it work with unstructured FEM discretised domains? I tried to run two problems of ex56 with two different domain size ( - attached you find the log_view outputs of both on gpus) using _pc_type asm: ./ex56 -cells 2,2,1 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt ./ex56 -cells 10,10,5 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt Below is the SNES iteration for problem with 2,2,1 cells which converges after two non-linear iterations: 0 SNES Function norm 0.000000000000e+00 0 SNES Function norm 7.529825940191e+01 1 SNES Function norm 4.734810707002e-08 2 SNES Function norm 1.382827243108e-14 Below is the SNES iteration for problem with 10,10,5 cells? why does it first decrease and then increase to 0 SNES Function norm 1.085975028558e+01 and finally converge? 0 SNES Function norm 2.892801019593e+01 1 SNES Function norm 5.361683383932e-07 2 SNES Function norm 1.726814199132e-14 0 SNES Function norm 1.085975028558e+01 1 SNES Function norm 2.294074693590e-07 2 SNES Function norm 2.491900236077e-14 Kind regards, Karthik. From: Matthew Knepley Date: Thursday, 2 December 2021 at 10:57 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" Cc: "petsc-users at mcs.anl.gov" Subject: Re: [petsc-users] Unstructured mesh On Thu, Dec 2, 2021 at 3:33 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Hello, Are there example tutorials on unstructured mesh in ksp? Can some of them run on gpus? There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The solver can run on the GPU, but the vector/matrix FEM assembly does not. I am working on that now. Thanks, Matt Kind regards, Karthik. This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: output_221.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: output_10105.txt URL: From knepley at gmail.com Fri Dec 10 07:04:12 2021 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 10 Dec 2021 08:04:12 -0500 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: Message-ID: On Fri, Dec 10, 2021 at 7:42 AM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > Hi Matt, > > > > I intend to perform a scaling study ? I have a few more questions from ex56 > > *3D, tri-quadratic hexahedra (Q1), displacement finite element > formulation.* > > i) What makes the problem non-linear? I believe SNES > are used to solve non-linear problems. > It is a linear problem. We use SNES because it is easier to use it for everything. > ii) 2,2,1 does it present the number of elements used > in each direction? > You can use -dm_view to show the mesh information actually used in the run. > iii) What makes the problem unstructured? I believe the > geometry is a cube or cuboid ? is it because it uses DMPlex? > Yes. > iv) Do any external FEM package with an unstructured > problem domain has to use DMPlex mat type? > DMPlex is not a Mat type, but rather a DM type. I do not understand the question. For example, LibMesh uses PETSc solvers but has its own mesh types. > v) What about -mat_type (not _dm_mat_type) ajicusparse > ? will it work with unstructured FEM discretised domains? > dm_mat_type is just a way of setting the MatType that the DM creates. You can set it to whatever type you want. Since it uses MatSetValues() there may be overhead in converting to that matrix type, but it should work. > I tried to run two problems of ex56 with two different domain size ( - > attached you find the log_view outputs of both on *gpus*) using _pc_type > asm: > > > > ./ex56 -cells 2,2,1 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 > -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt > > > > > > ./ex56 -cells 10,10,5 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree > 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt > > > > Below is the SNES iteration for problem with 2,2,1 cells which converges > after two non-linear iterations: > > > > 0 SNES Function norm 0.000000000000e+00 > > 0 SNES Function norm 7.529825940191e+01 > > 1 SNES Function norm 4.734810707002e-08 > > 2 SNES Function norm 1.382827243108e-14 > Your KSP tolerance is too high, so it takes another iterate. Use -ksp_rtol 1e-10. > Below is the SNES iteration for problem with 10,10,5 cells? why does it > first decrease and then increase to *0 SNES Function norm > 1.085975028558e+01* and finally converge? > > > > 0 SNES Function norm 2.892801019593e+01 > > 1 SNES Function norm 5.361683383932e-07 > > 2 SNES Function norm 1.726814199132e-14 > > 0 SNES Function norm 1.085975028558e+01 > > 1 SNES Function norm 2.294074693590e-07 > > 2 SNES Function norm 2.491900236077e-14 > You are solving the problem twice, probably because ex56 is in its refinement loop. Thanks, Matt > Kind regards, > > Karthik. 
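Folding those suggestions into the first command above gives something along these lines (the tighter KSP tolerance plus a mesh printout; note that, as it turns out later in the thread, the example attaches the ex56_ prefix to its DM, so the viewer option is spelled -ex56_dm_view):

./ex56 -cells 2,2,1 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 \
  -ksp_type cg -ksp_monitor -ksp_rtol 1.e-10 -pc_type asm -snes_monitor \
  -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view
# the two SNES solves in the log come from the refinement loop (-max_conv_its 2)
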
> > > > *From: *Matthew Knepley > *Date: *Thursday, 2 December 2021 at 10:57 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *"petsc-users at mcs.anl.gov" > *Subject: *Re: [petsc-users] Unstructured mesh > > > > On Thu, Dec 2, 2021 at 3:33 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Hello, > > > > Are there example tutorials on unstructured mesh in ksp? Can some of them > run on gpus? > > > > There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The > solver can run on the GPU, but the vector/matrix > > FEM assembly does not. I am working on that now. > > > > Thanks, > > > > Matt > > > > Kind regards, > > Karthik. > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From karthikeyan.chockalingam at stfc.ac.uk Fri Dec 10 09:49:06 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Fri, 10 Dec 2021 15:49:06 +0000 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: Message-ID: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> Thank you for your response. I tried using the flag -dm_view to get mesh information. I was hoping it might create a output file with mesh information but it didn?t create any. What should I expect with -dm_view? Best, Karthik. From: Matthew Knepley Date: Friday, 10 December 2021 at 13:04 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" Cc: "petsc-users at mcs.anl.gov" Subject: Re: [petsc-users] Unstructured mesh On Fri, Dec 10, 2021 at 7:42 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Hi Matt, I intend to perform a scaling study ? I have a few more questions from ex56 3D, tri-quadratic hexahedra (Q1), displacement finite element formulation. i) What makes the problem non-linear? I believe SNES are used to solve non-linear problems. It is a linear problem. We use SNES because it is easier to use it for everything. ii) 2,2,1 does it present the number of elements used in each direction? You can use -dm_view to show the mesh information actually used in the run. iii) What makes the problem unstructured? I believe the geometry is a cube or cuboid ? is it because it uses DMPlex? Yes. iv) Do any external FEM package with an unstructured problem domain has to use DMPlex mat type? DMPlex is not a Mat type, but rather a DM type. 
I do not understand the question. For example, LibMesh uses PETSc solvers but has its own mesh types. v) What about -mat_type (not _dm_mat_type) ajicusparse ? will it work with unstructured FEM discretised domains? dm_mat_type is just a way of setting the MatType that the DM creates. You can set it to whatever type you want. Since it uses MatSetValues() there may be overhead in converting to that matrix type, but it should work. I tried to run two problems of ex56 with two different domain size ( - attached you find the log_view outputs of both on gpus) using _pc_type asm: ./ex56 -cells 2,2,1 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt ./ex56 -cells 10,10,5 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt Below is the SNES iteration for problem with 2,2,1 cells which converges after two non-linear iterations: 0 SNES Function norm 0.000000000000e+00 0 SNES Function norm 7.529825940191e+01 1 SNES Function norm 4.734810707002e-08 2 SNES Function norm 1.382827243108e-14 Your KSP tolerance is too high, so it takes another iterate. Use -ksp_rtol 1e-10. Below is the SNES iteration for problem with 10,10,5 cells? why does it first decrease and then increase to 0 SNES Function norm 1.085975028558e+01 and finally converge? 0 SNES Function norm 2.892801019593e+01 1 SNES Function norm 5.361683383932e-07 2 SNES Function norm 1.726814199132e-14 0 SNES Function norm 1.085975028558e+01 1 SNES Function norm 2.294074693590e-07 2 SNES Function norm 2.491900236077e-14 You are solving the problem twice, probably because ex56 is in its refinement loop. Thanks, Matt Kind regards, Karthik. From: Matthew Knepley > Date: Thursday, 2 December 2021 at 10:57 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Thu, Dec 2, 2021 at 3:33 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Hello, Are there example tutorials on unstructured mesh in ksp? Can some of them run on gpus? There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The solver can run on the GPU, but the vector/matrix FEM assembly does not. I am working on that now. Thanks, Matt Kind regards, Karthik. This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.tardieu at edf.fr Fri Dec 10 10:04:01 2021 From: nicolas.tardieu at edf.fr (TARDIEU Nicolas) Date: Fri, 10 Dec 2021 16:04:01 +0000 Subject: [petsc-users] non-manifold DMPLEX Message-ID: Dear PETSc Team, Following a previous discussion on the mailing list, I'd like to experiment with DMPLEX with a very simple non-manifold mesh as shown in the attached picture : a cube connected to a square by an edge and to an edge by a point. I have read some of the papers that Matthew et al. have written, but I must admit that I do not see how to start... I see how the define the different elements but I do not see how to specify the special relationship between the cube and the square and between the cube and the edge. Once it will have been set correctly, what I am hoping is to be able to use all the nice features of the DMPLEX object. Best regards, Nicolas Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont ?tablis ? l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme ? sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse. Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions ?galement d'en avertir imm?diatement l'exp?diteur par retour du message. Il est impossible de garantir que les communications par messagerie ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute erreur ou virus. ____________________________________________________ This message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval. If you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message. E-mail communication cannot be guaranteed to be timely secure, error or virus-free. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: NonManifold.png Type: image/png Size: 11416 bytes Desc: NonManifold.png URL: From knepley at gmail.com Fri Dec 10 10:16:57 2021 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 10 Dec 2021 11:16:57 -0500 Subject: [petsc-users] Unstructured mesh In-Reply-To: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> Message-ID: On Fri, Dec 10, 2021 at 10:49 AM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > Thank you for your response. > > I tried using the flag -dm_view to get mesh information. > > I was hoping it might create a output file with mesh information but it > didn?t create any. > > What should I expect with -dm_view? 
> -dm_view prints the Plex information to the screen (by default), but it responds to the normal viewer options. However, it looks like Mark changes the prefix on the mesh, so I think you need -ex56_dm_view Thanks, Matt > Best, > > Karthik. > > > > *From: *Matthew Knepley > *Date: *Friday, 10 December 2021 at 13:04 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *"petsc-users at mcs.anl.gov" > *Subject: *Re: [petsc-users] Unstructured mesh > > > > On Fri, Dec 10, 2021 at 7:42 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Hi Matt, > > > > I intend to perform a scaling study ? I have a few more questions from ex56 > > *3D, tri-quadratic hexahedra (Q1), displacement finite element > formulation.* > > i) What makes the problem non-linear? I believe SNES > are used to solve non-linear problems. > > It is a linear problem. We use SNES because it is easier to use it for > everything. > > ii) 2,2,1 does it present the number of elements used > in each direction? > > You can use -dm_view to show the mesh information actually used in the run. > > iii) What makes the problem unstructured? I believe the > geometry is a cube or cuboid ? is it because it uses DMPlex? > > Yes. > > iv) Do any external FEM package with an unstructured > problem domain has to use DMPlex mat type? > > DMPlex is not a Mat type, but rather a DM type. I do not understand the > question. For example, LibMesh uses PETSc solvers but has > > its own mesh types. > > v) What about -mat_type (not _dm_mat_type) ajicusparse > ? will it work with unstructured FEM discretised domains? > > dm_mat_type is just a way of setting the MatType that the DM creates. You > can set it to whatever type you want. Since it uses MatSetValues() > > there may be overhead in converting to that matrix type, but it should > work. > > > > I tried to run two problems of ex56 with two different domain size ( - > attached you find the log_view outputs of both on *gpus*) using _pc_type > asm: > > > > ./ex56 -cells 2,2,1 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 > -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt > > > > > > ./ex56 -cells 10,10,5 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree > 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt > > > > Below is the SNES iteration for problem with 2,2,1 cells which converges > after two non-linear iterations: > > > > 0 SNES Function norm 0.000000000000e+00 > > 0 SNES Function norm 7.529825940191e+01 > > 1 SNES Function norm 4.734810707002e-08 > > 2 SNES Function norm 1.382827243108e-14 > > > > Your KSP tolerance is too high, so it takes another iterate. Use -ksp_rtol > 1e-10. > > > > Below is the SNES iteration for problem with 10,10,5 cells? why does it > first decrease and then increase to *0 SNES Function norm > 1.085975028558e+01* and finally converge? > > > > 0 SNES Function norm 2.892801019593e+01 > > 1 SNES Function norm 5.361683383932e-07 > > 2 SNES Function norm 1.726814199132e-14 > > 0 SNES Function norm 1.085975028558e+01 > > 1 SNES Function norm 2.294074693590e-07 > > 2 SNES Function norm 2.491900236077e-14 > > > > You are solving the problem twice, probably because ex56 is in its > refinement loop. > > > > Thanks, > > > > Matt > > > > Kind regards, > > Karthik. 
> > > > *From: *Matthew Knepley > *Date: *Thursday, 2 December 2021 at 10:57 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *"petsc-users at mcs.anl.gov" > *Subject: *Re: [petsc-users] Unstructured mesh > > > > On Thu, Dec 2, 2021 at 3:33 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Hello, > > > > Are there example tutorials on unstructured mesh in ksp? Can some of them > run on gpus? > > > > There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The > solver can run on the GPU, but the vector/matrix > > FEM assembly does not. I am working on that now. > > > > Thanks, > > > > Matt > > > > Kind regards, > > Karthik. > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From karthikeyan.chockalingam at stfc.ac.uk Fri Dec 10 12:11:20 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Fri, 10 Dec 2021 18:11:20 +0000 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> Message-ID: <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> Thank you that worked. I have attached the output of the run ./ex56 -cells 2,2,1 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view Below is the mesh information I get before the solve. Not sure how to interpret them ? is ?18? say number of elements in certain direction? How does ?18? come about ? given cells 2,2,1 as input? Sorry, bit lost what markers and other labels represent. Mesh in 3 dimensions: 0-cells: 18 1-cells: 33 2-cells: 20 3-cells: 4 Labels: marker: 1 strata with value/size (1 (48)) Face Sets: 6 strata with value/size (6 (2), 5 (2), 3 (2), 4 (2), 1 (4), 2 (4)) depth: 4 strata with value/size (0 (18), 1 (33), 2 (20), 3 (4)) boundary: 1 strata with value/size (1 (66)) celltype: 4 strata with value/size (7 (4), 0 (18), 4 (20), 1 (33)) I am more puzzled as the mesh information changes after the solve? 
Mesh in 3 dimensions: 0-cells: 75 1-cells: 170 2-cells: 128 3-cells: 32 Labels: celltype: 4 strata with value/size (0 (75), 1 (170), 4 (128), 7 (32)) depth: 4 strata with value/size (0 (75), 1 (170), 2 (128), 3 (32)) marker: 1 strata with value/size (1 (240)) Face Sets: 6 strata with value/size (6 (18), 5 (18), 3 (18), 4 (18), 1 (36), 2 (36)) boundary: 1 strata with value/size (1 (258)) Best, Karthik. From: Matthew Knepley Date: Friday, 10 December 2021 at 16:17 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" Cc: "petsc-users at mcs.anl.gov" Subject: Re: [petsc-users] Unstructured mesh On Fri, Dec 10, 2021 at 10:49 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank you for your response. I tried using the flag -dm_view to get mesh information. I was hoping it might create a output file with mesh information but it didn?t create any. What should I expect with -dm_view? -dm_view prints the Plex information to the screen (by default), but it responds to the normal viewer options. However, it looks like Mark changes the prefix on the mesh, so I think you need -ex56_dm_view Thanks, Matt Best, Karthik. From: Matthew Knepley > Date: Friday, 10 December 2021 at 13:04 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Fri, Dec 10, 2021 at 7:42 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Hi Matt, I intend to perform a scaling study ? I have a few more questions from ex56 3D, tri-quadratic hexahedra (Q1), displacement finite element formulation. i) What makes the problem non-linear? I believe SNES are used to solve non-linear problems. It is a linear problem. We use SNES because it is easier to use it for everything. ii) 2,2,1 does it present the number of elements used in each direction? You can use -dm_view to show the mesh information actually used in the run. iii) What makes the problem unstructured? I believe the geometry is a cube or cuboid ? is it because it uses DMPlex? Yes. iv) Do any external FEM package with an unstructured problem domain has to use DMPlex mat type? DMPlex is not a Mat type, but rather a DM type. I do not understand the question. For example, LibMesh uses PETSc solvers but has its own mesh types. v) What about -mat_type (not _dm_mat_type) ajicusparse ? will it work with unstructured FEM discretised domains? dm_mat_type is just a way of setting the MatType that the DM creates. You can set it to whatever type you want. Since it uses MatSetValues() there may be overhead in converting to that matrix type, but it should work. I tried to run two problems of ex56 with two different domain size ( - attached you find the log_view outputs of both on gpus) using _pc_type asm: ./ex56 -cells 2,2,1 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt ./ex56 -cells 10,10,5 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt Below is the SNES iteration for problem with 2,2,1 cells which converges after two non-linear iterations: 0 SNES Function norm 0.000000000000e+00 0 SNES Function norm 7.529825940191e+01 1 SNES Function norm 4.734810707002e-08 2 SNES Function norm 1.382827243108e-14 Your KSP tolerance is too high, so it takes another iterate. Use -ksp_rtol 1e-10. Below is the SNES iteration for problem with 10,10,5 cells? 
why does it first decrease and then increase to 0 SNES Function norm 1.085975028558e+01 and finally converge? 0 SNES Function norm 2.892801019593e+01 1 SNES Function norm 5.361683383932e-07 2 SNES Function norm 1.726814199132e-14 0 SNES Function norm 1.085975028558e+01 1 SNES Function norm 2.294074693590e-07 2 SNES Function norm 2.491900236077e-14 You are solving the problem twice, probably because ex56 is in its refinement loop. Thanks, Matt Kind regards, Karthik. From: Matthew Knepley > Date: Thursday, 2 December 2021 at 10:57 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Thu, Dec 2, 2021 at 3:33 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Hello, Are there example tutorials on unstructured mesh in ksp? Can some of them run on gpus? There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The solver can run on the GPU, but the vector/matrix FEM assembly does not. I am working on that now. Thanks, Matt Kind regards, Karthik. This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: mesh_out.txt URL: From kuang-chung.wang at intel.com Fri Dec 10 18:10:05 2021 From: kuang-chung.wang at intel.com (Wang, Kuang-chung) Date: Sat, 11 Dec 2021 00:10:05 +0000 Subject: [petsc-users] Orthogonality of eigenvectors in SLEPC In-Reply-To: References: Message-ID: 1. I was able to use MatIsHermitian function successfully since my matrix type is seqaij. Just hoped that it can return MatNorm (H -H^+) with this function. But it wasn't hard for me to implement that with the function listed in the previous email. 2. I have a related question. In user manual https://slepc.upv.es/documentation/slepc.pdf 2.1, it says that Ax=\lambda B x problem is usually reformulated to B^-1 Ax =\lambda x. If A matrix is Hermitian, B is diagonal but Bii and Bjj can be different. a) Will solving "Ax=\lambda B x" directly with slepc guarantees users receiving orthogonal eigenvectors? Namely, does xi^T*B*xj=delta_ij hold true? 
b) if we reformulate, (B^-1A) x = \lambda x will yield (B^-1 A) to be non-hermitian and therefore doesn't give orthogonal eigenvectors ( pointed out by Jose). Xi^T*xj != delta_ij. What about xi^T*B*xj=delta_ij, will this be guaranteed(since this is the same problem as "a" )? Currently, my test is that xi^T*B*xj=delta_ij is no longer true. To help with visibility: [cid:image002.jpg at 01D7EDE0.63827430] Although a and b are the same problem formulated differently, but the orthogonality isn't ensured in the case b while in case a is ensured(?) . If so, does it mean that we should be encouraged to use case a ? Best, Kuang From: Zhang, Hong Sent: Thursday, December 2, 2021 2:18 PM To: Wang, Kuang-chung ; Jose E. Roman Cc: petsc-users at mcs.anl.gov; Obradovic, Borna ; Cea, Stephen M Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC Kuang, PETSc supports MatIsHermitian() for SeqAIJ, IS and SeqSBAIJ matrix types. What is your matrix type? We should be able to add this support to other mat types. Hong ________________________________ From: petsc-users > on behalf of Wang, Kuang-chung > Sent: Thursday, December 2, 2021 2:06 PM To: Jose E. Roman > Cc: petsc-users at mcs.anl.gov >; Obradovic, Borna >; Cea, Stephen M > Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC Thanks Jose for your prompt reply. I did find my matrix highly non-hermitian. By forcing the solver to be hermtian, the orthogonality was restored. But I do need to root cause why my matrix is non-hermitian in the first place. Along the way, I highly recommend MatIsHermitian() function or combining functions like MatHermitianTranspose () MatAXPY MatNorm to determine the hermiticity to safeguard our program. Best, Kuang -----Original Message----- From: Jose E. Roman > Sent: Wednesday, November 24, 2021 6:20 AM To: Wang, Kuang-chung > Cc: petsc-users at mcs.anl.gov; Obradovic, Borna >; Cea, Stephen M > Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC In Hermitian eigenproblems orthogonality of eigenvectors is guaranteed/enforced. But you are solving the problem as non-Hermitian. If your matrix is Hermitian, make sure you solve it as a HEP, and make sure that your matrix is numerically Hermitian. If your matrix is non-Hermitian, then you cannot expect the eigenvectors to be orthogonal. What you can do in this case is get an orthogonal basis of the computed eigenspace, see https://slepc.upv.es/documentation/current/docs/manualpages/EPS/EPSGetInvariantSubspace.html By the way, version 3.7 is more than 5 years old, it is better if you can upgrade to a more recent version. Jose > El 24 nov 2021, a las 7:15, Wang, Kuang-chung > escribi?: > > Dear Jose : > I came across this thread describing issue using krylovschur and finding eigenvectors non-orthogonal. > https://lists.mcs.anl.gov/pipermail/petsc-users/2014-October/023360.ht > ml > > I furthermore have tested by reducing the tolerance as highlighted below from 1e-12 to 1e-16 with no luck. > Could you please suggest options/sources to try out ? > Thanks a lot for sharing your knowledge! 
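To make the MatHermitianTranspose / MatAXPY / MatNorm recipe mentioned above concrete, here is a rough, untested sketch; A stands for the user's already-assembled matrix:

#include <petscmat.h>

/* Sketch: quantify how far an assembled matrix A is from Hermitian by
   forming A^H - A explicitly and taking its Frobenius norm.            */
static PetscErrorCode CheckHermiticity(Mat A)
{
  Mat            AH;
  PetscReal      nrm;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatHermitianTranspose(A,MAT_INITIAL_MATRIX,&AH);CHKERRQ(ierr);  /* AH = A^H     */
  ierr = MatAXPY(AH,-1.0,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);     /* AH = A^H - A */
  ierr = MatNorm(AH,NORM_FROBENIUS,&nrm);CHKERRQ(ierr);
  ierr = PetscPrintf(PetscObjectComm((PetscObject)A),"||A^H - A||_F = %g\n",(double)nrm);CHKERRQ(ierr);
  ierr = MatDestroy(&AH);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

A norm at rounding level means the matrix is numerically Hermitian and the problem can safely be declared Hermitian to the solver; anything larger explains the loss of orthogonality discussed in this thread.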
> > Sincere, > Kuang-Chung Wang > > ======================================================= > Kuang-Chung Wang > Computational and Modeling Technology > Intel Corporation > Hillsboro OR 97124 > ======================================================= > > Here are more info: > * slepc/3.7.4 > * output message from by doing EPSView(eps,PETSC_NULL): > EPS Object: 1 MPI processes > type: krylovschur > Krylov-Schur: 50% of basis vectors kept after restart > Krylov-Schur: using the locking variant > problem type: non-hermitian eigenvalue problem > selected portion of the spectrum: closest to target: 20.1161 (in magnitude) > number of eigenvalues (nev): 40 > number of column vectors (ncv): 81 > maximum dimension of projected problem (mpd): 81 > maximum number of iterations: 1000 > tolerance: 1e-12 > convergence test: relative to the eigenvalue BV Object: 1 MPI > processes > type: svec > 82 columns of global length 2988 > vector orthogonalization method: classical Gram-Schmidt > orthogonalization refinement: always > block orthogonalization method: Gram-Schmidt > doing matmult as a single matrix-matrix product DS Object: 1 MPI > processes > type: nhep > ST Object: 1 MPI processes > type: sinvert > shift: 20.1161 > number of matrices: 1 > KSP Object: (st_) 1 MPI processes > type: preonly > maximum iterations=1000, initial guess is zero > tolerances: relative=1.12005e-09, absolute=1e-50, divergence=10000. > left preconditioning > using NONE norm type for convergence test > PC Object: (st_) 1 MPI processes > type: lu > LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: nd > factor fill ratio given 0., needed 0. > Factored matrix follows: > Mat Object: 1 MPI processes > type: seqaij > rows=2988, cols=2988 > package used to perform factorization: mumps > total: nonzeros=614160, allocated nonzeros=614160 > total number of mallocs used during MatSetValues calls =0 > MUMPS run parameters: > SYM (matrix type): 0 > PAR (host participation): 1 > ICNTL(1) (output for error): 6 > ICNTL(2) (output of diagnostic msg): 0 > ICNTL(3) (output for global info): 0 > ICNTL(4) (level of printing): 0 > ICNTL(5) (input mat struct): 0 > ICNTL(6) (matrix prescaling): 7 > ICNTL(7) (sequential matrix ordering):7 > ICNTL(8) (scaling strategy): 77 > ICNTL(10) (max num of refinements): 0 > ICNTL(11) (error analysis): 0 > ICNTL(12) (efficiency control): 1 > ICNTL(13) (efficiency control): 0 > ICNTL(14) (percentage of estimated workspace increase): 20 > ICNTL(18) (input mat struct): 0 > ICNTL(19) (Schur complement info): 0 > ICNTL(20) (rhs sparse pattern): 0 > ICNTL(21) (solution struct): 0 > ICNTL(22) (in-core/out-of-core facility): 0 > ICNTL(23) (max size of memory can be allocated locally):0 > ICNTL(24) (detection of null pivot rows): 0 > ICNTL(25) (computation of a null space basis): 0 > ICNTL(26) (Schur options for rhs or solution): 0 > ICNTL(27) (experimental parameter): -24 > ICNTL(28) (use parallel or sequential ordering): 1 > ICNTL(29) (parallel ordering): 0 > ICNTL(30) (user-specified set of entries in inv(A)): 0 > ICNTL(31) (factors is discarded in the solve phase): 0 > ICNTL(33) (compute determinant): 0 > CNTL(1) (relative pivoting threshold): 0.01 > CNTL(2) (stopping criterion of refinement): 1.49012e-08 > CNTL(3) (absolute pivoting threshold): 0. > CNTL(4) (value of static pivoting): -1. > CNTL(5) (fixation for null pivots): 0. 
> RINFO(1) (local estimated flops for the elimination after analysis): > [0] 8.15668e+07 > RINFO(2) (local estimated flops for the assembly after factorization): > [0] 892584. > RINFO(3) (local estimated flops for the elimination after factorization): > [0] 8.15668e+07 > INFO(15) (estimated size of (in MB) MUMPS internal data for running numerical factorization): > [0] 16 > INFO(16) (size of (in MB) MUMPS internal data used during numerical factorization): > [0] 16 > INFO(23) (num of pivots eliminated on this processor after factorization): > [0] 2988 > RINFOG(1) (global estimated flops for the elimination after analysis): 8.15668e+07 > RINFOG(2) (global estimated flops for the assembly after factorization): 892584. > RINFOG(3) (global estimated flops for the elimination after factorization): 8.15668e+07 > (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): (0.,0.)*(2^0) > INFOG(3) (estimated real workspace for factors on all processors after analysis): 614160 > INFOG(4) (estimated integer workspace for factors on all processors after analysis): 31971 > INFOG(5) (estimated maximum front size in the complete tree): 246 > INFOG(6) (number of nodes in the complete tree): 197 > INFOG(7) (ordering option effectively use after analysis): 2 > INFOG(8) (structural symmetry in percent of the permuted matrix after analysis): 100 > INFOG(9) (total real/complex workspace to store the matrix factors after factorization): 614160 > INFOG(10) (total integer space store the matrix factors after factorization): 31971 > INFOG(11) (order of largest frontal matrix after factorization): 246 > INFOG(12) (number of off-diagonal pivots): 0 > INFOG(13) (number of delayed pivots after factorization): 0 > INFOG(14) (number of memory compress after factorization): 0 > INFOG(15) (number of steps of iterative refinement after solution): 0 > INFOG(16) (estimated size (in MB) of all MUMPS internal data for factorization after analysis: value on the most memory consuming processor): 16 > INFOG(17) (estimated size of all MUMPS internal data for factorization after analysis: sum over all processors): 16 > INFOG(18) (size of all MUMPS internal data allocated during factorization: value on the most memory consuming processor): 16 > INFOG(19) (size of all MUMPS internal data allocated during factorization: sum over all processors): 16 > INFOG(20) (estimated number of entries in the factors): 614160 > INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 14 > INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 14 > INFOG(23) (after analysis: value of ICNTL(6) effectively used): 0 > INFOG(24) (after analysis: value of ICNTL(12) effectively used): 1 > INFOG(25) (after factorization: number of pivots modified by static pivoting): 0 > INFOG(28) (after factorization: number of null pivots encountered): 0 > INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 614160 > INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 13, 13 > INFOG(32) (after analysis: type of analysis done): 1 > INFOG(33) (value used for ICNTL(8)): 7 > INFOG(34) (exponent of the determinant if determinant is requested): 0 > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=2988, cols=2988 > total: nonzeros=151488, allocated nonzeros=151488 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 996 
nodes, limit used is 5 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 17516 bytes Desc: image002.jpg URL: From knepley at gmail.com Fri Dec 10 20:26:51 2021 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 10 Dec 2021 21:26:51 -0500 Subject: [petsc-users] Unstructured mesh In-Reply-To: <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> Message-ID: On Fri, Dec 10, 2021 at 1:11 PM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > Thank you that worked. I have attached the output of the run > > > > ./ex56 -cells 2,2,1 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 > -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view > > > > Below is the mesh information I get before the solve. > > Not sure how to interpret them ? is ?18? say number of elements in certain > direction? > 0-cells are vertices, 1-cells are edges, etc, so this has 4 cubes. That is why it has 18 vertices, 9 on top and 9 on the bottom. A bunch of stuff is labeled on this mesh, but you don't have to worry about it I think. > How does ?18? come about ? given cells 2,2,1 as input? > > Sorry, bit lost what markers and other labels represent. > > > > Mesh in 3 dimensions: > > 0-cells: 18 > > 1-cells: 33 > > 2-cells: 20 > > 3-cells: 4 > > Labels: > > marker: 1 strata with value/size (1 (48)) > > Face Sets: 6 strata with value/size (6 (2), 5 (2), 3 (2), 4 (2), 1 (4), > 2 (4)) > > depth: 4 strata with value/size (0 (18), 1 (33), 2 (20), 3 (4)) > > boundary: 1 strata with value/size (1 (66)) > > celltype: 4 strata with value/size (7 (4), 0 (18), 4 (20), 1 (33)) > > > > I am more puzzled as the mesh information changes after the solve? > Mark refined it once regularly, so each hexahedron is split into 8, giving 32 cubes. Thanks, Matt > Mesh in 3 dimensions: > > 0-cells: 75 > > 1-cells: 170 > > 2-cells: 128 > > 3-cells: 32 > > Labels: > > celltype: 4 strata with value/size (0 (75), 1 (170), 4 (128), 7 (32)) > > depth: 4 strata with value/size (0 (75), 1 (170), 2 (128), 3 (32)) > > marker: 1 strata with value/size (1 (240)) > > Face Sets: 6 strata with value/size (6 (18), 5 (18), 3 (18), 4 (18), 1 > (36), 2 (36)) > > boundary: 1 strata with value/size (1 (258)) > > > > Best, > > Karthik. > > > > > > > > > > *From: *Matthew Knepley > *Date: *Friday, 10 December 2021 at 16:17 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *"petsc-users at mcs.anl.gov" > *Subject: *Re: [petsc-users] Unstructured mesh > > > > On Fri, Dec 10, 2021 at 10:49 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank you for your response. > > I tried using the flag -dm_view to get mesh information. > > I was hoping it might create a output file with mesh information but it > didn?t create any. > > What should I expect with -dm_view? > > > > -dm_view prints the Plex information to the screen (by default), but it > responds to the normal viewer options. > > However, it looks like Mark changes the prefix on the mesh, so I think you > need > > > > -ex56_dm_view > > > > Thanks, > > > > Matt > > > > Best, > > Karthik. 
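For reference, both -ex56_dm_view printouts can be reproduced by counting points on the structured box: -cells 2,2,1 is a 2 x 2 x 1 grid of hexahedra, and one regular refinement turns it into a 4 x 4 x 2 grid.

  original 2 x 2 x 1 grid:
    3-cells (hexes):    2*2*1                    =   4
    2-cells (faces):    3*2*1 + 2*3*1 + 2*2*2    =  20
    1-cells (edges):    2*3*2 + 3*2*2 + 3*3*1    =  33
    0-cells (vertices): 3*3*2                    =  18

  after one regular refinement (4 x 4 x 2):
    3-cells:            4*4*2                    =  32
    2-cells:            5*4*2 + 4*5*2 + 4*4*3    = 128
    1-cells:            4*5*3 + 5*4*3 + 5*5*2    = 170
    0-cells:            5*5*3                    =  75

which matches the two dm_view blocks quoted above.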
> > > > *From: *Matthew Knepley > *Date: *Friday, 10 December 2021 at 13:04 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *"petsc-users at mcs.anl.gov" > *Subject: *Re: [petsc-users] Unstructured mesh > > > > On Fri, Dec 10, 2021 at 7:42 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Hi Matt, > > > > I intend to perform a scaling study ? I have a few more questions from ex56 > > *3D, tri-quadratic hexahedra (Q1), displacement finite element > formulation.* > > i) What makes the problem non-linear? I believe SNES > are used to solve non-linear problems. > > It is a linear problem. We use SNES because it is easier to use it for > everything. > > ii) 2,2,1 does it present the number of elements used > in each direction? > > You can use -dm_view to show the mesh information actually used in the run. > > iii) What makes the problem unstructured? I believe the > geometry is a cube or cuboid ? is it because it uses DMPlex? > > Yes. > > iv) Do any external FEM package with an unstructured > problem domain has to use DMPlex mat type? > > DMPlex is not a Mat type, but rather a DM type. I do not understand the > question. For example, LibMesh uses PETSc solvers but has > > its own mesh types. > > v) What about -mat_type (not _dm_mat_type) ajicusparse > ? will it work with unstructured FEM discretised domains? > > dm_mat_type is just a way of setting the MatType that the DM creates. You > can set it to whatever type you want. Since it uses MatSetValues() > > there may be overhead in converting to that matrix type, but it should > work. > > > > I tried to run two problems of ex56 with two different domain size ( - > attached you find the log_view outputs of both on *gpus*) using _pc_type > asm: > > > > ./ex56 -cells 2,2,1 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 > -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt > > > > > > ./ex56 -cells 10,10,5 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree > 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt > > > > Below is the SNES iteration for problem with 2,2,1 cells which converges > after two non-linear iterations: > > > > 0 SNES Function norm 0.000000000000e+00 > > 0 SNES Function norm 7.529825940191e+01 > > 1 SNES Function norm 4.734810707002e-08 > > 2 SNES Function norm 1.382827243108e-14 > > > > Your KSP tolerance is too high, so it takes another iterate. Use -ksp_rtol > 1e-10. > > > > Below is the SNES iteration for problem with 10,10,5 cells? why does it > first decrease and then increase to *0 SNES Function norm > 1.085975028558e+01* and finally converge? > > > > 0 SNES Function norm 2.892801019593e+01 > > 1 SNES Function norm 5.361683383932e-07 > > 2 SNES Function norm 1.726814199132e-14 > > 0 SNES Function norm 1.085975028558e+01 > > 1 SNES Function norm 2.294074693590e-07 > > 2 SNES Function norm 2.491900236077e-14 > > > > You are solving the problem twice, probably because ex56 is in its > refinement loop. > > > > Thanks, > > > > Matt > > > > Kind regards, > > Karthik. 
> > > > *From: *Matthew Knepley > *Date: *Thursday, 2 December 2021 at 10:57 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *"petsc-users at mcs.anl.gov" > *Subject: *Re: [petsc-users] Unstructured mesh > > > > On Thu, Dec 2, 2021 at 3:33 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Hello, > > > > Are there example tutorials on unstructured mesh in ksp? Can some of them > run on gpus? > > > > There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The > solver can run on the GPU, but the vector/matrix > > FEM assembly does not. I am working on that now. > > > > Thanks, > > > > Matt > > > > Kind regards, > > Karthik. > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Sat Dec 11 01:43:36 2021 From: jroman at dsic.upv.es (Jose E. Roman) Date: Sat, 11 Dec 2021 08:43:36 +0100 Subject: [petsc-users] Orthogonality of eigenvectors in SLEPC In-Reply-To: References: Message-ID: Case a is the simplest one to use and enforces B-orthogonality of eigenvectors. Case b can also be used if you have a good reason to do so, and in that case you can recover symmetry (and B-orthogonality) by explicitly setting the inner product matrix as illustrated in ex47.c https://slepc.upv.es/documentation/current/src/eps/tutorials/ex47.c.html Jose > El 11 dic 2021, a las 1:10, Wang, Kuang-chung escribi?: > > ? I was able to use MatIsHermitian function successfully since my matrix type is seqaij. Just hoped that it can return MatNorm (H -H^+) with this function. But it wasn?t hard for me to implement that with the function listed in the previous email. > ? I have a related question. In user manual https://slepc.upv.es/documentation/slepc.pdf 2.1, it says that Ax=\lambda B x problem is usually reformulated to B^-1 Ax =\lambda x. 
If A matrix is Hermitian, B is diagonal but Bii and Bjj can be different. > a) Will solving ?Ax=\lambda B x? directly with slepc guarantees users receiving orthogonal eigenvectors? Namely, does xi^T*B*xj=delta_ij hold true? > b) if we reformulate, (B^-1A) x = \lambda x will yield (B^-1 A) to be non-hermitian and therefore doesn?t give orthogonal eigenvectors ( pointed out by Jose). Xi^T*xj != delta_ij. What about xi^T*B*xj=delta_ij, will this be guaranteed(since this is the same problem as ?a? )? Currently, my test is that xi^T*B*xj=delta_ij is no longer true. > To help with visibility: > > > Although a and b are the same problem formulated differently, but the orthogonality isn?t ensured in the case b while in case a is ensured(?) . > If so, does it mean that we should be encouraged to use case a ? > > Best, > Kuang > > > From: Zhang, Hong > Sent: Thursday, December 2, 2021 2:18 PM > To: Wang, Kuang-chung ; Jose E. Roman > Cc: petsc-users at mcs.anl.gov; Obradovic, Borna ; Cea, Stephen M > Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC > > Kuang, > PETSc supports MatIsHermitian() for SeqAIJ, IS and SeqSBAIJ matrix types. What is your matrix type? > We should be able to add this support to other mat types. > Hong > From: petsc-users on behalf of Wang, Kuang-chung > Sent: Thursday, December 2, 2021 2:06 PM > To: Jose E. Roman > Cc: petsc-users at mcs.anl.gov ; Obradovic, Borna ; Cea, Stephen M > Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC > > Thanks Jose for your prompt reply. > I did find my matrix highly non-hermitian. By forcing the solver to be hermtian, the orthogonality was restored. > But I do need to root cause why my matrix is non-hermitian in the first place. > Along the way, I highly recommend MatIsHermitian() function or combining functions like MatHermitianTranspose () MatAXPY MatNorm to determine the hermiticity to safeguard our program. > > Best, > Kuang > > -----Original Message----- > From: Jose E. Roman > Sent: Wednesday, November 24, 2021 6:20 AM > To: Wang, Kuang-chung > Cc: petsc-users at mcs.anl.gov; Obradovic, Borna ; Cea, Stephen M > Subject: Re: [petsc-users] Orthogonality of eigenvectors in SLEPC > > In Hermitian eigenproblems orthogonality of eigenvectors is guaranteed/enforced. But you are solving the problem as non-Hermitian. > > If your matrix is Hermitian, make sure you solve it as a HEP, and make sure that your matrix is numerically Hermitian. > > If your matrix is non-Hermitian, then you cannot expect the eigenvectors to be orthogonal. What you can do in this case is get an orthogonal basis of the computed eigenspace, seehttps://slepc.upv.es/documentation/current/docs/manualpages/EPS/EPSGetInvariantSubspace.html > > > By the way, version 3.7 is more than 5 years old, it is better if you can upgrade to a more recent version. > > Jose > > > > > El 24 nov 2021, a las 7:15, Wang, Kuang-chung escribi?: > > > > Dear Jose : > > I came across this thread describing issue using krylovschur and finding eigenvectors non-orthogonal. > > https://lists.mcs.anl.gov/pipermail/petsc-users/2014-October/023360.ht > > ml > > > > I furthermore have tested by reducing the tolerance as highlighted below from 1e-12 to 1e-16 with no luck. > > Could you please suggest options/sources to try out ? > > Thanks a lot for sharing your knowledge! 
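In SLEPc terms, case (a) amounts to the following minimal sketch, assuming A and B are the user's assembled matrices, with A numerically Hermitian and B Hermitian positive definite (for an indefinite B the corresponding problem type would be EPS_GHIEP). Declaring the problem EPS_GHEP and keeping B explicit is what enforces the B-orthogonality xi^H*B*xj = delta_ij of the returned eigenvectors:

#include <slepceps.h>

/* Sketch of "case a": solve A x = lambda B x directly as a generalized
   Hermitian eigenproblem; B is never inverted by hand.                  */
EPS            eps;
PetscErrorCode ierr;

ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr);
ierr = EPSSetOperators(eps,A,B);CHKERRQ(ierr);         /* generalized problem, B kept explicit         */
ierr = EPSSetProblemType(eps,EPS_GHEP);CHKERRQ(ierr);  /* B-inner product => B-orthogonal eigenvectors */
ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);
ierr = EPSSolve(eps);CHKERRQ(ierr);
/* ... EPSGetEigenpair() as usual, then EPSDestroy(&eps) ... */

For case (b), where a shell operator applies B^-1 A, the ex47.c example mentioned above shows how to hand the B-inner product to the solver so that the same orthogonality is recovered.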
> > > > Sincere, > > Kuang-Chung Wang > > > > ======================================================= > > Kuang-Chung Wang > > Computational and Modeling Technology > > Intel Corporation > > Hillsboro OR 97124 > > ======================================================= > > > > Here are more info: > > ? slepc/3.7.4 > > ? output message from by doing EPSView(eps,PETSC_NULL): > > EPS Object: 1 MPI processes > > type: krylovschur > > Krylov-Schur: 50% of basis vectors kept after restart > > Krylov-Schur: using the locking variant > > problem type: non-hermitian eigenvalue problem > > selected portion of the spectrum: closest to target: 20.1161 (in magnitude) > > number of eigenvalues (nev): 40 > > number of column vectors (ncv): 81 > > maximum dimension of projected problem (mpd): 81 > > maximum number of iterations: 1000 > > tolerance: 1e-12 > > convergence test: relative to the eigenvalue BV Object: 1 MPI > > processes > > type: svec > > 82 columns of global length 2988 > > vector orthogonalization method: classical Gram-Schmidt > > orthogonalization refinement: always > > block orthogonalization method: Gram-Schmidt > > doing matmult as a single matrix-matrix product DS Object: 1 MPI > > processes > > type: nhep > > ST Object: 1 MPI processes > > type: sinvert > > shift: 20.1161 > > number of matrices: 1 > > KSP Object: (st_) 1 MPI processes > > type: preonly > > maximum iterations=1000, initial guess is zero > > tolerances: relative=1.12005e-09, absolute=1e-50, divergence=10000. > > left preconditioning > > using NONE norm type for convergence test > > PC Object: (st_) 1 MPI processes > > type: lu > > LU: out-of-place factorization > > tolerance for zero pivot 2.22045e-14 > > matrix ordering: nd > > factor fill ratio given 0., needed 0. > > Factored matrix follows: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=2988, cols=2988 > > package used to perform factorization: mumps > > total: nonzeros=614160, allocated nonzeros=614160 > > total number of mallocs used during MatSetValues calls =0 > > MUMPS run parameters: > > SYM (matrix type): 0 > > PAR (host participation): 1 > > ICNTL(1) (output for error): 6 > > ICNTL(2) (output of diagnostic msg): 0 > > ICNTL(3) (output for global info): 0 > > ICNTL(4) (level of printing): 0 > > ICNTL(5) (input mat struct): 0 > > ICNTL(6) (matrix prescaling): 7 > > ICNTL(7) (sequential matrix ordering):7 > > ICNTL(8) (scaling strategy): 77 > > ICNTL(10) (max num of refinements): 0 > > ICNTL(11) (error analysis): 0 > > ICNTL(12) (efficiency control): 1 > > ICNTL(13) (efficiency control): 0 > > ICNTL(14) (percentage of estimated workspace increase): 20 > > ICNTL(18) (input mat struct): 0 > > ICNTL(19) (Schur complement info): 0 > > ICNTL(20) (rhs sparse pattern): 0 > > ICNTL(21) (solution struct): 0 > > ICNTL(22) (in-core/out-of-core facility): 0 > > ICNTL(23) (max size of memory can be allocated locally):0 > > ICNTL(24) (detection of null pivot rows): 0 > > ICNTL(25) (computation of a null space basis): 0 > > ICNTL(26) (Schur options for rhs or solution): 0 > > ICNTL(27) (experimental parameter): -24 > > ICNTL(28) (use parallel or sequential ordering): 1 > > ICNTL(29) (parallel ordering): 0 > > ICNTL(30) (user-specified set of entries in inv(A)): 0 > > ICNTL(31) (factors is discarded in the solve phase): 0 > > ICNTL(33) (compute determinant): 0 > > CNTL(1) (relative pivoting threshold): 0.01 > > CNTL(2) (stopping criterion of refinement): 1.49012e-08 > > CNTL(3) (absolute pivoting threshold): 0. > > CNTL(4) (value of static pivoting): -1. 
> > CNTL(5) (fixation for null pivots): 0. > > RINFO(1) (local estimated flops for the elimination after analysis): > > [0] 8.15668e+07 > > RINFO(2) (local estimated flops for the assembly after factorization): > > [0] 892584. > > RINFO(3) (local estimated flops for the elimination after factorization): > > [0] 8.15668e+07 > > INFO(15) (estimated size of (in MB) MUMPS internal data for running numerical factorization): > > [0] 16 > > INFO(16) (size of (in MB) MUMPS internal data used during numerical factorization): > > [0] 16 > > INFO(23) (num of pivots eliminated on this processor after factorization): > > [0] 2988 > > RINFOG(1) (global estimated flops for the elimination after analysis): 8.15668e+07 > > RINFOG(2) (global estimated flops for the assembly after factorization): 892584. > > RINFOG(3) (global estimated flops for the elimination after factorization): 8.15668e+07 > > (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): (0.,0.)*(2^0) > > INFOG(3) (estimated real workspace for factors on all processors after analysis): 614160 > > INFOG(4) (estimated integer workspace for factors on all processors after analysis): 31971 > > INFOG(5) (estimated maximum front size in the complete tree): 246 > > INFOG(6) (number of nodes in the complete tree): 197 > > INFOG(7) (ordering option effectively use after analysis): 2 > > INFOG(8) (structural symmetry in percent of the permuted matrix after analysis): 100 > > INFOG(9) (total real/complex workspace to store the matrix factors after factorization): 614160 > > INFOG(10) (total integer space store the matrix factors after factorization): 31971 > > INFOG(11) (order of largest frontal matrix after factorization): 246 > > INFOG(12) (number of off-diagonal pivots): 0 > > INFOG(13) (number of delayed pivots after factorization): 0 > > INFOG(14) (number of memory compress after factorization): 0 > > INFOG(15) (number of steps of iterative refinement after solution): 0 > > INFOG(16) (estimated size (in MB) of all MUMPS internal data for factorization after analysis: value on the most memory consuming processor): 16 > > INFOG(17) (estimated size of all MUMPS internal data for factorization after analysis: sum over all processors): 16 > > INFOG(18) (size of all MUMPS internal data allocated during factorization: value on the most memory consuming processor): 16 > > INFOG(19) (size of all MUMPS internal data allocated during factorization: sum over all processors): 16 > > INFOG(20) (estimated number of entries in the factors): 614160 > > INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 14 > > INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 14 > > INFOG(23) (after analysis: value of ICNTL(6) effectively used): 0 > > INFOG(24) (after analysis: value of ICNTL(12) effectively used): 1 > > INFOG(25) (after factorization: number of pivots modified by static pivoting): 0 > > INFOG(28) (after factorization: number of null pivots encountered): 0 > > INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 614160 > > INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 13, 13 > > INFOG(32) (after analysis: type of analysis done): 1 > > INFOG(33) (value used for ICNTL(8)): 7 > > INFOG(34) (exponent of the determinant if determinant is requested): 0 > > linear system matrix = precond matrix: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=2988, cols=2988 > > 
total: nonzeros=151488, allocated nonzeros=151488 > > total number of mallocs used during MatSetValues calls =0 > > using I-node routines: found 996 nodes, limit used is 5 > From tangqi at msu.edu Sat Dec 11 12:58:37 2021 From: tangqi at msu.edu (Tang, Qi) Date: Sat, 11 Dec 2021 18:58:37 +0000 Subject: [petsc-users] Finite difference approximation of Jacobian In-Reply-To: <231abd15aab544f9850826cb437366f7@lanl.gov> References: <231abd15aab544f9850826cb437366f7@lanl.gov> Message-ID: Hi, Does anyone have comment on finite difference coloring with DMStag? We are using DMStag and TS to evolve some nonlinear equations implicitly. It would be helpful to have the coloring Jacobian option with that. Thanks, Qi On Oct 15, 2021, at 3:07 PM, Jorti, Zakariae via petsc-users > wrote: Hello, Does the Jacobian approximation using coloring and finite differencing of the function evaluation work in DMStag? Thank you. Best regards, Zakariae -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Dec 11 15:28:33 2021 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 11 Dec 2021 16:28:33 -0500 Subject: [petsc-users] Finite difference approximation of Jacobian In-Reply-To: References: <231abd15aab544f9850826cb437366f7@lanl.gov> Message-ID: On Sat, Dec 11, 2021 at 1:58 PM Tang, Qi wrote: > Hi, > Does anyone have comment on finite difference coloring with DMStag? We are > using DMStag and TS to evolve some nonlinear equations implicitly. It would > be helpful to have the coloring Jacobian option with that. > Since DMStag produces the Jacobian connectivity, you can use -snes_fd_color_use_mat. It has many options. Here is an example of us using that: https://gitlab.com/petsc/petsc/-/blob/main/src/snes/tutorials/ex19.c#L898 Thanks, Matt > Thanks, > Qi > > > On Oct 15, 2021, at 3:07 PM, Jorti, Zakariae via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Hello, > > Does the Jacobian approximation using coloring and finite differencing of > the function evaluation work in DMStag? > Thank you. > Best regards, > > Zakariae > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Sun Dec 12 05:10:29 2021 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Sun, 12 Dec 2021 12:10:29 +0100 Subject: [petsc-users] non-manifold DMPLEX In-Reply-To: References: Message-ID: Here you have the following "points": - 1 3-cell (the cube volume) - 7 2-cells (the 6 faces of the cube plus the extra one) - 16 1-cells (the 12 edges of the cube, plus 3 extra ones from the extra face, plus the extra edge) - 11 0-cells (the 8 vertices of the cube, pus 2 extra ones from the extra face, plus the extra vertex) You could encode your mesh as here, by directly specifying relationships between these points in the Hasse diagram: https://petsc.org/release/docs/manual/dmplex/#representing-unstructured-grids Then, maybe the special relation is captured because you've defined the "cone" or "support" for each "point", which tells you about the local topology everywhere. E.g. to take the simpler case, three of the faces have the yellow edge in their "cone", or equivalently the yellow edge has those three faces in its "support". Am Fr., 10. Dez. 
2021 um 17:04 Uhr schrieb TARDIEU Nicolas via petsc-users < petsc-users at mcs.anl.gov>: > Dear PETSc Team, > > Following a previous discussion on the mailing list, I'd like to > experiment with DMPLEX with a very simple non-manifold mesh as shown in the > attached picture : a cube connected to a square by an edge and to an edge > by a point. > I have read some of the papers that Matthew et al. have written, but I > must admit that I do not see how to start... > I see how the define the different elements but I do not see how to > specify the special relationship between the cube and the square and > between the cube and the edge. > Once it will have been set correctly, what I am hoping is to be able to > use all the nice features of the DMPLEX object. > > Best regards, > Nicolas > > > Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont > ?tablis ? l'intention exclusive des destinataires et les informations qui y > figurent sont strictement confidentielles. Toute utilisation de ce Message > non conforme ? sa destination, toute diffusion ou toute publication totale > ou partielle, est interdite sauf autorisation expresse. > > Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de > le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou > partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de > votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace > sur quelque support que ce soit. Nous vous remercions ?galement d'en > avertir imm?diatement l'exp?diteur par retour du message. > > Il est impossible de garantir que les communications par messagerie > ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute > erreur ou virus. > ____________________________________________________ > > This message and any attachments (the 'Message') are intended solely for > the addressees. The information contained in this Message is confidential. > Any use of information contained in this Message not in accord with its > purpose, any dissemination or disclosure, either whole or partial, is > prohibited except formal approval. > > If you are not the addressee, you may not copy, forward, disclose or use > any part of it. If you have received this message in error, please delete > it and all copies from your system and notify the sender immediately by > return message. > > E-mail communication cannot be guaranteed to be timely secure, error or > virus-free. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Dec 12 05:17:41 2021 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 12 Dec 2021 06:17:41 -0500 Subject: [petsc-users] non-manifold DMPLEX In-Reply-To: References: Message-ID: On Sun, Dec 12, 2021 at 6:11 AM Patrick Sanan wrote: > Here you have the following "points": > > - 1 3-cell (the cube volume) > - 7 2-cells (the 6 faces of the cube plus the extra one) > - 16 1-cells (the 12 edges of the cube, plus 3 extra ones from the extra > face, plus the extra edge) > - 11 0-cells (the 8 vertices of the cube, pus 2 extra ones from the extra > face, plus the extra vertex) > > You could encode your mesh as here, by directly specifying relationships > between these points in the Hasse diagram: > > https://petsc.org/release/docs/manual/dmplex/#representing-unstructured-grids > > Then, maybe the special relation is captured because you've defined the > "cone" or "support" for each "point", which tells you about the local > topology everywhere. 
E.g. to take the simpler case, three of the faces have > the yellow edge in their "cone", or equivalently the yellow edge has those > three faces in its "support". > This is correct. I can help you make this if you want. I think if you assign cell types, you can even get Plex to automatically interpolate. Note that with this kind of mesh, algorithms which assume a uniform cell dimension will break, but I am guessing you would not be interested in those anyway. Thanks, Matt > Am Fr., 10. Dez. 2021 um 17:04 Uhr schrieb TARDIEU Nicolas via petsc-users > : > >> Dear PETSc Team, >> >> Following a previous discussion on the mailing list, I'd like to >> experiment with DMPLEX with a very simple non-manifold mesh as shown in the >> attached picture : a cube connected to a square by an edge and to an edge >> by a point. >> I have read some of the papers that Matthew et al. have written, but I >> must admit that I do not see how to start... >> I see how the define the different elements but I do not see how to >> specify the special relationship between the cube and the square and >> between the cube and the edge. >> Once it will have been set correctly, what I am hoping is to be able to >> use all the nice features of the DMPLEX object. >> >> Best regards, >> Nicolas >> >> >> Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont >> ?tablis ? l'intention exclusive des destinataires et les informations qui y >> figurent sont strictement confidentielles. Toute utilisation de ce Message >> non conforme ? sa destination, toute diffusion ou toute publication totale >> ou partielle, est interdite sauf autorisation expresse. >> >> Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de >> le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou >> partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de >> votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace >> sur quelque support que ce soit. Nous vous remercions ?galement d'en >> avertir imm?diatement l'exp?diteur par retour du message. >> >> Il est impossible de garantir que les communications par messagerie >> ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute >> erreur ou virus. >> ____________________________________________________ >> >> This message and any attachments (the 'Message') are intended solely for >> the addressees. The information contained in this Message is confidential. >> Any use of information contained in this Message not in accord with its >> purpose, any dissemination or disclosure, either whole or partial, is >> prohibited except formal approval. >> >> If you are not the addressee, you may not copy, forward, disclose or use >> any part of it. If you have received this message in error, please delete >> it and all copies from your system and notify the sender immediately by >> return message. >> >> E-mail communication cannot be guaranteed to be timely secure, error or >> virus-free. >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
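To make this concrete, below is a skeletal, untested sketch of the cells-plus-vertices construction for the picture described above, following the manual section linked earlier and the suggestion to assign cell types so that DMPlexInterpolate can build the intermediate edges and faces. The point numbering (0 = hexahedron, 1 = extra quadrilateral, 2 = extra segment, 3..13 = the 11 vertices) and the particular vertex numbers in each cone are illustrative choices, not anything prescribed by PETSc, and the call order for the cell types may need adjusting:

#include <petscdmplex.h>

static PetscErrorCode BuildNonManifoldMesh(MPI_Comm comm, DM *mesh)
{
  DM             dm, idm;
  PetscInt       v;
  const PetscInt coneHex[8]  = {3,4,5,6,7,8,9,10}; /* must follow Plex's hexahedron vertex ordering */
  const PetscInt coneQuad[4] = {4,5,11,12};        /* shares vertices 4,5: the common edge          */
  const PetscInt coneSeg[2]  = {6,13};             /* shares vertex 6: the common corner            */
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = DMPlexCreate(comm,&dm);CHKERRQ(ierr);
  ierr = DMSetDimension(dm,3);CHKERRQ(ierr);
  ierr = DMPlexSetChart(dm,0,14);CHKERRQ(ierr);    /* 3 cells + 11 vertices = points 0..13 */
  ierr = DMPlexSetConeSize(dm,0,8);CHKERRQ(ierr);
  ierr = DMPlexSetConeSize(dm,1,4);CHKERRQ(ierr);
  ierr = DMPlexSetConeSize(dm,2,2);CHKERRQ(ierr);
  ierr = DMSetUp(dm);CHKERRQ(ierr);
  ierr = DMPlexSetCone(dm,0,coneHex);CHKERRQ(ierr);
  ierr = DMPlexSetCone(dm,1,coneQuad);CHKERRQ(ierr);
  ierr = DMPlexSetCone(dm,2,coneSeg);CHKERRQ(ierr);
  ierr = DMPlexSymmetrize(dm);CHKERRQ(ierr);
  ierr = DMPlexStratify(dm);CHKERRQ(ierr);
  ierr = DMPlexSetCellType(dm,0,DM_POLYTOPE_HEXAHEDRON);CHKERRQ(ierr);
  ierr = DMPlexSetCellType(dm,1,DM_POLYTOPE_QUADRILATERAL);CHKERRQ(ierr);
  ierr = DMPlexSetCellType(dm,2,DM_POLYTOPE_SEGMENT);CHKERRQ(ierr);
  for (v = 3; v < 14; ++v) {ierr = DMPlexSetCellType(dm,v,DM_POLYTOPE_POINT);CHKERRQ(ierr);}
  ierr = DMPlexInterpolate(dm,&idm);CHKERRQ(ierr);  /* should create the 16 edges and 7 faces */
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  *mesh = idm;
  PetscFunctionReturn(0);
}

Coordinates still have to be attached (e.g. with DMSetCoordinatesLocal) before anything geometric can be done, so please treat this purely as a starting point.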
URL: From karthikeyan.chockalingam at stfc.ac.uk Sun Dec 12 14:18:49 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Sun, 12 Dec 2021 20:18:49 +0000 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> Message-ID: <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> Thank for your response that was helpful. I have a couple of questions: (i) How can I control the level of refinement? I tried to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement from 8 giving 32 cubes. (ii) What does -cell 2,2,1 correspond to? How can I determine the total number of dofs? So that I can perform a scaling study by changing the input of the flag -cells. Kind regards, Karthik. From: Matthew Knepley Date: Saturday, 11 December 2021 at 02:27 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" Cc: "petsc-users at mcs.anl.gov" Subject: Re: [petsc-users] Unstructured mesh On Fri, Dec 10, 2021 at 1:11 PM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank you that worked. I have attached the output of the run ./ex56 -cells 2,2,1 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view Below is the mesh information I get before the solve. Not sure how to interpret them ? is ?18? say number of elements in certain direction? 0-cells are vertices, 1-cells are edges, etc, so this has 4 cubes. That is why it has 18 vertices, 9 on top and 9 on the bottom. A bunch of stuff is labeled on this mesh, but you don't have to worry about it I think. How does ?18? come about ? given cells 2,2,1 as input? Sorry, bit lost what markers and other labels represent. Mesh in 3 dimensions: 0-cells: 18 1-cells: 33 2-cells: 20 3-cells: 4 Labels: marker: 1 strata with value/size (1 (48)) Face Sets: 6 strata with value/size (6 (2), 5 (2), 3 (2), 4 (2), 1 (4), 2 (4)) depth: 4 strata with value/size (0 (18), 1 (33), 2 (20), 3 (4)) boundary: 1 strata with value/size (1 (66)) celltype: 4 strata with value/size (7 (4), 0 (18), 4 (20), 1 (33)) I am more puzzled as the mesh information changes after the solve? Mark refined it once regularly, so each hexahedron is split into 8, giving 32 cubes. Thanks, Matt Mesh in 3 dimensions: 0-cells: 75 1-cells: 170 2-cells: 128 3-cells: 32 Labels: celltype: 4 strata with value/size (0 (75), 1 (170), 4 (128), 7 (32)) depth: 4 strata with value/size (0 (75), 1 (170), 2 (128), 3 (32)) marker: 1 strata with value/size (1 (240)) Face Sets: 6 strata with value/size (6 (18), 5 (18), 3 (18), 4 (18), 1 (36), 2 (36)) boundary: 1 strata with value/size (1 (258)) Best, Karthik. From: Matthew Knepley > Date: Friday, 10 December 2021 at 16:17 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Fri, Dec 10, 2021 at 10:49 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank you for your response. I tried using the flag -dm_view to get mesh information. I was hoping it might create a output file with mesh information but it didn?t create any. What should I expect with -dm_view? -dm_view prints the Plex information to the screen (by default), but it responds to the normal viewer options. However, it looks like Mark changes the prefix on the mesh, so I think you need -ex56_dm_view Thanks, Matt Best, Karthik. 
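On question (ii): with -petscspace_degree 1 the unknowns in ex56 are just the three displacement components at each vertex, so (before any Dirichlet rows are eliminated) the system size is 3 times the number of 0-cells reported by -ex56_dm_view. For the run quoted above that gives

  -cells 2,2,1, unrefined:        3 * (3*3*2) = 3 * 18 =  54
  after one regular refinement:   3 * (5*5*3) = 3 * 75 = 225

and, more generally, r refinements of -cells nx,ny,nz lead to roughly 3*(2^r*nx+1)*(2^r*ny+1)*(2^r*nz+1) unknowns, which is the number to use when sizing the meshes for a scaling study.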
From: Matthew Knepley > Date: Friday, 10 December 2021 at 13:04 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Fri, Dec 10, 2021 at 7:42 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Hi Matt, I intend to perform a scaling study ? I have a few more questions from ex56 3D, tri-quadratic hexahedra (Q1), displacement finite element formulation. i) What makes the problem non-linear? I believe SNES are used to solve non-linear problems. It is a linear problem. We use SNES because it is easier to use it for everything. ii) 2,2,1 does it present the number of elements used in each direction? You can use -dm_view to show the mesh information actually used in the run. iii) What makes the problem unstructured? I believe the geometry is a cube or cuboid ? is it because it uses DMPlex? Yes. iv) Do any external FEM package with an unstructured problem domain has to use DMPlex mat type? DMPlex is not a Mat type, but rather a DM type. I do not understand the question. For example, LibMesh uses PETSc solvers but has its own mesh types. v) What about -mat_type (not _dm_mat_type) ajicusparse ? will it work with unstructured FEM discretised domains? dm_mat_type is just a way of setting the MatType that the DM creates. You can set it to whatever type you want. Since it uses MatSetValues() there may be overhead in converting to that matrix type, but it should work. I tried to run two problems of ex56 with two different domain size ( - attached you find the log_view outputs of both on gpus) using _pc_type asm: ./ex56 -cells 2,2,1 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt ./ex56 -cells 10,10,5 -max_conv_its 2 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 > output_221.txt Below is the SNES iteration for problem with 2,2,1 cells which converges after two non-linear iterations: 0 SNES Function norm 0.000000000000e+00 0 SNES Function norm 7.529825940191e+01 1 SNES Function norm 4.734810707002e-08 2 SNES Function norm 1.382827243108e-14 Your KSP tolerance is too high, so it takes another iterate. Use -ksp_rtol 1e-10. Below is the SNES iteration for problem with 10,10,5 cells? why does it first decrease and then increase to 0 SNES Function norm 1.085975028558e+01 and finally converge? 0 SNES Function norm 2.892801019593e+01 1 SNES Function norm 5.361683383932e-07 2 SNES Function norm 1.726814199132e-14 0 SNES Function norm 1.085975028558e+01 1 SNES Function norm 2.294074693590e-07 2 SNES Function norm 2.491900236077e-14 You are solving the problem twice, probably because ex56 is in its refinement loop. Thanks, Matt Kind regards, Karthik. From: Matthew Knepley > Date: Thursday, 2 December 2021 at 10:57 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Thu, Dec 2, 2021 at 3:33 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Hello, Are there example tutorials on unstructured mesh in ksp? Can some of them run on gpus? There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The solver can run on the GPU, but the vector/matrix FEM assembly does not. I am working on that now. Thanks, Matt Kind regards, Karthik. This email and any attachments are intended solely for the use of the named recipients. 
If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.tardieu at edf.fr Sun Dec 12 15:36:09 2021 From: nicolas.tardieu at edf.fr (TARDIEU Nicolas) Date: Sun, 12 Dec 2021 21:36:09 +0000 Subject: [petsc-users] non-manifold DMPLEX In-Reply-To: References: Message-ID: Dear Patrick and Matthew, Thank you very much for your answers. I am gonna try to set up such a test by assigning cell types. Shall I open a MR in order to contribute this example ? Regards, Nicolas ________________________________ De : knepley at gmail.com Envoy? : dimanche 12 d?cembre 2021 12:17 ? : Patrick Sanan Cc : TARDIEU Nicolas ; petsc-users at mcs.anl.gov Objet : Re: [petsc-users] non-manifold DMPLEX On Sun, Dec 12, 2021 at 6:11 AM Patrick Sanan > wrote: Here you have the following "points": - 1 3-cell (the cube volume) - 7 2-cells (the 6 faces of the cube plus the extra one) - 16 1-cells (the 12 edges of the cube, plus 3 extra ones from the extra face, plus the extra edge) - 11 0-cells (the 8 vertices of the cube, pus 2 extra ones from the extra face, plus the extra vertex) You could encode your mesh as here, by directly specifying relationships between these points in the Hasse diagram: https://petsc.org/release/docs/manual/dmplex/#representing-unstructured-grids Then, maybe the special relation is captured because you've defined the "cone" or "support" for each "point", which tells you about the local topology everywhere. E.g. to take the simpler case, three of the faces have the yellow edge in their "cone", or equivalently the yellow edge has those three faces in its "support". This is correct. I can help you make this if you want. I think if you assign cell types, you can even get Plex to automatically interpolate. Note that with this kind of mesh, algorithms which assume a uniform cell dimension will break, but I am guessing you would not be interested in those anyway. Thanks, Matt Am Fr., 10. Dez. 
On Fri, 10 Dec 2021 at 17:04, TARDIEU Nicolas via petsc-users wrote:

Dear PETSc Team,

Following a previous discussion on the mailing list, I'd like to experiment with DMPLEX with a very simple non-manifold mesh as shown in the attached picture: a cube connected to a square by an edge and to an edge by a point.
I have read some of the papers that Matthew et al. have written, but I must admit that I do not see how to start...
I see how to define the different elements, but I do not see how to specify the special relationship between the cube and the square and between the cube and the edge.
Once it has been set correctly, what I am hoping is to be able to use all the nice features of the DMPLEX object.

Best regards,
Nicolas

From mfadams at lbl.gov Sun Dec 12 17:00:06 2021
From: mfadams at lbl.gov (Mark Adams)
Date: Sun, 12 Dec 2021 18:00:06 -0500
Subject: [petsc-users] Unstructured mesh
In-Reply-To: <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk>
References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk>
Message-ID:

On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote:

> Thanks for your response, that was helpful. I have a couple of questions:
>
> (i) How can I control the level of refinement? I tried to pass the flag "-ex56_dm_refine 0" but that didn't stop the refinement from 8 giving 32 cubes.

I answered this question recently, but ex56 clobbers ex56_dm_refine in the convergence loop. I have an MR that prints a warning if you provide an ex56_dm_refine.

* snes/ex56 runs a convergence study and confusingly sets the options manually, thus erasing your -ex56_dm_refine.

* To refine, use -max_conv_its N <3>; this sets the number of steps of refinement, that is, the length of the convergence study.

* You can adjust where it starts from with -cells i,j,k <1,1,1>. You do want to set this if you have multiple MPI processes so that the size of this mesh is the number of processes. That way it starts with one cell per process and refines from there.

> (ii) What does -cell 2,2,1 correspond to?
At first I thought there may be an error with the tolerance setting, so I output the tolerances : *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = 10000 Norm of error 760.491 Iterations 0* Which are exactly the default values that I always used. However, for the same tolerance settings, the SNES solver converges successfully if I decrease the number of degrees of freedom in my system... I wish to know if anyone has experienced the same type of problems or has an idea about what could possibly cause the problem... Thank you so much in advance. I appreciate any advice that you provide. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sun Dec 12 18:11:34 2021 From: mfadams at lbl.gov (Mark Adams) Date: Sun, 12 Dec 2021 19:11:34 -0500 Subject: [petsc-users] SNES always ends at iteration 0 In-Reply-To: References: Message-ID: The three "tol" values should be finite. It sounds like you set them to 0. Don't do that and use the defaults to start. The behavior with zero tolerances is not defined. You can use -snes_monitor to print out the iterations. On Sun, Dec 12, 2021 at 6:22 PM celestechevali at gmail.com < celestechevali at gmail.com> wrote: > Hello, > > I encountered a strange problem concerning the convergence of SNES. > > In my recent test runs I found that SNES always stops at iteration 0... > > At first I thought there may be an error with the tolerance setting, so I > output the tolerances : > > > > *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = > 10000 Norm of error 760.491 Iterations 0* > > Which are exactly the default values that I always used. However, for the > same tolerance settings, the SNES solver converges successfully if I > decrease the number of degrees of freedom in my system... > > I wish to know if anyone has experienced the same type of problems or has > an idea about what could possibly cause the problem... > > Thank you so much in advance. > > I appreciate any advice that you provide. > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From celestechevali at gmail.com Sun Dec 12 18:47:46 2021 From: celestechevali at gmail.com (celestechevali at gmail.com) Date: Mon, 13 Dec 2021 01:47:46 +0100 Subject: [petsc-users] SNES always ends at iteration 0 In-Reply-To: References: Message-ID: Thank you so much for your reply ! In fact I didn't know how to set tolerances, so I proceeded without specifying the tolerances, hoping that this could lead to the implementation of default values... I just added *SNESSetFromOptions(snes); *However, it doesn't make any difference... But it's true that the "tol" values are somehow set to zero... *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = 10000 0 SNES Function norm 7.604910424038e+02* Is it possible that it has something to do with my makefile ? Since I didn't figure out the PETSc makefile format (which seems to be different from standard C makefile format), I named my source code as *ex1.c* to make use of the default settings for PETSc example programs... And in my makefile I wrote : *include ${PETSC_DIR}/lib/petsc/conf/variablesinclude ${PETSC_DIR}/lib/petsc/conf/testex1: ex1.o* Is it possible the "tol" values are set to 0 by the default setting used for example programs ? Thank you so much for your help. PS: I just tried the same code with less degrees of freedom and this time it worked... But for a large system it didn't... 
*atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = 10000 0 SNES Function norm 1.164809703659e+00 1 SNES Function norm 1.311388740736e-01 2 SNES Function norm 7.232579319557e-02 3 SNES Function norm 4.984271911548e-02 4 SNES Function norm 3.224387373805e-02 5 SNES Function norm 6.898280568053e-03 6 SNES Function norm 6.297558001575e-03 7 SNES Function norm 5.358028396052e-03 8 SNES Function norm 4.591005105466e-03 9 SNES Function norm 4.063981130201e-03 10 SNES Function norm 3.715929394609e-03 11 SNES Function norm 3.428330101253e-03 12 SNES Function norm 3.177113603032e-03 13 SNES Function norm 2.958574186594e-03 14 SNES Function norm 2.769227811865e-03 15 SNES Function norm 2.605947870584e-03 16 SNES Function norm 2.465934405221e-03 17 SNES Function norm 2.346761136962e-03 18 SNES Function norm 2.246362261451e-03 19 SNES Function norm 2.163102452591e-03 20 SNES Function norm 2.095849101382e-03 21 SNES Function norm 2.043740325461e-03 22 SNES Function norm 2.005106316761e-03 23 SNES Function norm 1.975748994170e-03 24 SNES Function norm 1.949413335428e-03 25 SNES Function norm 1.920795414593e-03 26 SNES Function norm 1.886883259141e-03 27 SNES Function norm 1.846374653045e-03 28 SNES Function norm 1.799050087038e-03 29 SNES Function norm 1.745284156916e-03 30 SNES Function norm 1.685885151987e-03 31 SNES Function norm 1.621850994665e-03 32 SNES Function norm 1.554258940064e-03 33 SNES Function norm 1.484213253375e-03 34 SNES Function norm 1.412768267404e-03 35 SNES Function norm 1.340893218332e-03 36 SNES Function norm 1.269412489589e-03 37 SNES Function norm 1.199029202116e-03 38 SNES Function norm 1.130300263372e-03 39 SNES Function norm 1.063694395854e-03 40 SNES Function norm 9.995826338243e-04 41 SNES Function norm 9.383610129089e-04 42 SNES Function norm 8.807543352645e-04 43 SNES Function norm 8.288695938590e-04 44 SNES Function norm 7.898873173876e-04 45 SNES Function norm 7.752509690373e-04 46 SNES Function norm 7.625724154377e-04 47 SNES Function norm 7.503152403370e-04 48 SNES Function norm 7.364744378378e-04 49 SNES Function norm 7.202926541551e-04 50 SNES Function norm 7.015245603442e-04 * Mark Adams ?2021?12?13??? 01:11??? > The three "tol" values should be finite. It sounds like you set them to 0. > Don't do that and use the defaults to start. > The behavior with zero tolerances is not defined. > You can use -snes_monitor to print out the iterations. > > On Sun, Dec 12, 2021 at 6:22 PM celestechevali at gmail.com < > celestechevali at gmail.com> wrote: > >> Hello, >> >> I encountered a strange problem concerning the convergence of SNES. >> >> In my recent test runs I found that SNES always stops at iteration 0... >> >> At first I thought there may be an error with the tolerance setting, so I >> output the tolerances : >> >> >> >> *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = >> 10000 Norm of error 760.491 Iterations 0* >> >> Which are exactly the default values that I always used. However, for the >> same tolerance settings, the SNES solver converges successfully if I >> decrease the number of degrees of freedom in my system... >> >> I wish to know if anyone has experienced the same type of problems or has >> an idea about what could possibly cause the problem... >> >> Thank you so much in advance. >> >> I appreciate any advice that you provide. >> >> >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Sun Dec 12 20:12:36 2021 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 12 Dec 2021 21:12:36 -0500 Subject: [petsc-users] non-manifold DMPLEX In-Reply-To: References: Message-ID: On Sun, Dec 12, 2021 at 4:36 PM TARDIEU Nicolas wrote: > Dear Patrick and Matthew, > > Thank you very much for your answers. I am gonna try to set up such a test > by assigning cell types. > Shall I open a MR in order to contribute this example ? > Yes, that would be great. Thanks, Matt > Regards, > Nicolas > > ------------------------------ > *De :* knepley at gmail.com > *Envoy? :* dimanche 12 d?cembre 2021 12:17 > *? :* Patrick Sanan > *Cc :* TARDIEU Nicolas ; petsc-users at mcs.anl.gov < > petsc-users at mcs.anl.gov> > *Objet :* Re: [petsc-users] non-manifold DMPLEX > > On Sun, Dec 12, 2021 at 6:11 AM Patrick Sanan > wrote: > > Here you have the following "points": > > - 1 3-cell (the cube volume) > - 7 2-cells (the 6 faces of the cube plus the extra one) > - 16 1-cells (the 12 edges of the cube, plus 3 extra ones from the extra > face, plus the extra edge) > - 11 0-cells (the 8 vertices of the cube, pus 2 extra ones from the extra > face, plus the extra vertex) > > You could encode your mesh as here, by directly specifying relationships > between these points in the Hasse diagram: > > https://petsc.org/release/docs/manual/dmplex/#representing-unstructured-grids > > > Then, maybe the special relation is captured because you've defined the > "cone" or "support" for each "point", which tells you about the local > topology everywhere. E.g. to take the simpler case, three of the faces have > the yellow edge in their "cone", or equivalently the yellow edge has those > three faces in its "support". > > > This is correct. I can help you make this if you want. I think if you > assign cell types, you can even get Plex to automatically interpolate. > > Note that with this kind of mesh, algorithms which assume a uniform cell > dimension will break, but I am guessing you would not > be interested in those anyway. > > Thanks, > > Matt > > > Am Fr., 10. Dez. 2021 um 17:04 Uhr schrieb TARDIEU Nicolas via petsc-users > : > > Dear PETSc Team, > > Following a previous discussion on the mailing list, I'd like to > experiment with DMPLEX with a very simple non-manifold mesh as shown in the > attached picture : a cube connected to a square by an edge and to an edge > by a point. > I have read some of the papers that Matthew et al. have written, but I > must admit that I do not see how to start... > I see how the define the different elements but I do not see how to > specify the special relationship between the cube and the square and > between the cube and the edge. > Once it will have been set correctly, what I am hoping is to be able to > use all the nice features of the DMPLEX object. > > Best regards, > Nicolas > > > Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont > ?tablis ? l'intention exclusive des destinataires et les informations qui y > figurent sont strictement confidentielles. Toute utilisation de ce Message > non conforme ? sa destination, toute diffusion ou toute publication totale > ou partielle, est interdite sauf autorisation expresse. > > Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de > le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou > partie. 
Si vous avez re?u ce Message par erreur, merci de le supprimer de > votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace > sur quelque support que ce soit. Nous vous remercions ?galement d'en > avertir imm?diatement l'exp?diteur par retour du message. > > Il est impossible de garantir que les communications par messagerie > ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute > erreur ou virus. > ____________________________________________________ > > This message and any attachments (the 'Message') are intended solely for > the addressees. The information contained in this Message is confidential. > Any use of information contained in this Message not in accord with its > purpose, any dissemination or disclosure, either whole or partial, is > prohibited except formal approval. > > If you are not the addressee, you may not copy, forward, disclose or use > any part of it. If you have received this message in error, please delete > it and all copies from your system and notify the sender immediately by > return message. > > E-mail communication cannot be guaranteed to be timely secure, error or > virus-free. > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont > ?tablis ? l'intention exclusive des destinataires et les informations qui y > figurent sont strictement confidentielles. Toute utilisation de ce Message > non conforme ? sa destination, toute diffusion ou toute publication totale > ou partielle, est interdite sauf autorisation expresse. > > Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de > le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou > partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de > votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace > sur quelque support que ce soit. Nous vous remercions ?galement d'en > avertir imm?diatement l'exp?diteur par retour du message. > > Il est impossible de garantir que les communications par messagerie > ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute > erreur ou virus. > ____________________________________________________ > > This message and any attachments (the 'Message') are intended solely for > the addressees. The information contained in this Message is confidential. > Any use of information contained in this Message not in accord with its > purpose, any dissemination or disclosure, either whole or partial, is > prohibited except formal approval. > > If you are not the addressee, you may not copy, forward, disclose or use > any part of it. If you have received this message in error, please delete > it and all copies from your system and notify the sender immediately by > return message. > > E-mail communication cannot be guaranteed to be timely secure, error or > virus-free. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Sun Dec 12 20:31:31 2021 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 12 Dec 2021 21:31:31 -0500 Subject: [petsc-users] SNES always ends at iteration 0 In-Reply-To: References: Message-ID: On Sun, Dec 12, 2021 at 7:48 PM celestechevali at gmail.com < celestechevali at gmail.com> wrote: > Thank you so much for your reply ! > > In fact I didn't know how to set tolerances, so I proceeded without > specifying the tolerances, hoping that this could lead to the > implementation of default values... > > I just added *SNESSetFromOptions(snes); *However, it doesn't make any > difference... But it's true that the "tol" values are somehow set to > zero... > 1) If you call *SNESSetFromOptions(), you can set the tolerances using* -snes_rtol 1e-8 or similar. 2) These are not the default values, so you are definitely changing them. I suspect you are calling SNESetTolerances(). Do not call it unless you want to set them. 3) Give -snes_converged_reason, so we can see why the iteration stopped. Thanks, Matt > > *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = > 10000 0 SNES Function norm 7.604910424038e+02* > > Is it possible that it has something to do with my makefile ? > > Since I didn't figure out the PETSc makefile format (which seems to be > different from standard C makefile format), I named my source code as > *ex1.c* to make use of the default settings for PETSc example programs... > > And in my makefile I wrote : > > > > > *include ${PETSC_DIR}/lib/petsc/conf/variablesinclude > ${PETSC_DIR}/lib/petsc/conf/testex1: ex1.o* > > Is it possible the "tol" values are set to 0 by the default setting used > for example programs ? > > Thank you so much for your help. > > PS: I just tried the same code with less degrees of freedom and this time > it worked... But for a large system it didn't... 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = > 10000 0 SNES Function norm 1.164809703659e+00 1 SNES Function norm > 1.311388740736e-01 2 SNES Function norm 7.232579319557e-02 3 SNES > Function norm 4.984271911548e-02 4 SNES Function norm 3.224387373805e-02 > 5 SNES Function norm 6.898280568053e-03 6 SNES Function norm > 6.297558001575e-03 7 SNES Function norm 5.358028396052e-03 8 SNES > Function norm 4.591005105466e-03 9 SNES Function norm 4.063981130201e-03 > 10 SNES Function norm 3.715929394609e-03 11 SNES Function norm > 3.428330101253e-03 12 SNES Function norm 3.177113603032e-03 13 SNES > Function norm 2.958574186594e-03 14 SNES Function norm 2.769227811865e-03 > 15 SNES Function norm 2.605947870584e-03 16 SNES Function norm > 2.465934405221e-03 17 SNES Function norm 2.346761136962e-03 18 SNES > Function norm 2.246362261451e-03 19 SNES Function norm 2.163102452591e-03 > 20 SNES Function norm 2.095849101382e-03 21 SNES Function norm > 2.043740325461e-03 22 SNES Function norm 2.005106316761e-03 23 SNES > Function norm 1.975748994170e-03 24 SNES Function norm 1.949413335428e-03 > 25 SNES Function norm 1.920795414593e-03 26 SNES Function norm > 1.886883259141e-03 27 SNES Function norm 1.846374653045e-03 28 SNES > Function norm 1.799050087038e-03 29 SNES Function norm 1.745284156916e-03 > 30 SNES Function norm 1.685885151987e-03 31 SNES Function norm > 1.621850994665e-03 32 SNES Function norm 1.554258940064e-03 33 SNES > Function norm 1.484213253375e-03 34 SNES Function norm 1.412768267404e-03 > 35 SNES Function norm 1.340893218332e-03 36 SNES Function norm > 1.269412489589e-03 37 SNES Function norm 1.199029202116e-03 38 SNES > Function norm 1.130300263372e-03 39 SNES Function norm 1.063694395854e-03 > 40 SNES Function norm 9.995826338243e-04 41 SNES Function norm > 9.383610129089e-04 42 SNES Function norm 8.807543352645e-04 43 SNES > Function norm 8.288695938590e-04 44 SNES Function norm 7.898873173876e-04 > 45 SNES Function norm 7.752509690373e-04 46 SNES Function norm > 7.625724154377e-04 47 SNES Function norm 7.503152403370e-04 48 SNES > Function norm 7.364744378378e-04 49 SNES Function norm 7.202926541551e-04 > 50 SNES Function norm 7.015245603442e-04 * > > Mark Adams ?2021?12?13??? 01:11??? > >> The three "tol" values should be finite. It sounds like you set them to 0. >> Don't do that and use the defaults to start. >> The behavior with zero tolerances is not defined. >> You can use -snes_monitor to print out the iterations. >> >> On Sun, Dec 12, 2021 at 6:22 PM celestechevali at gmail.com < >> celestechevali at gmail.com> wrote: >> >>> Hello, >>> >>> I encountered a strange problem concerning the convergence of SNES. >>> >>> In my recent test runs I found that SNES always stops at iteration 0... >>> >>> At first I thought there may be an error with the tolerance setting, so >>> I output the tolerances : >>> >>> >>> >>> *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = >>> 10000 Norm of error 760.491 Iterations 0* >>> >>> Which are exactly the default values that I always used. However, for >>> the same tolerance settings, the SNES solver converges successfully if I >>> decrease the number of degrees of freedom in my system... >>> >>> I wish to know if anyone has experienced the same type of problems or >>> has an idea about what could possibly cause the problem... >>> >>> Thank you so much in advance. 
>>> >>> I appreciate any advice that you provide. >>> >>> >>> >>> >>> >>> >>> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sun Dec 12 20:53:00 2021 From: mfadams at lbl.gov (Mark Adams) Date: Sun, 12 Dec 2021 21:53:00 -0500 Subject: [petsc-users] SNES always ends at iteration 0 In-Reply-To: References: Message-ID: On Sun, Dec 12, 2021 at 7:47 PM celestechevali at gmail.com < celestechevali at gmail.com> wrote: > Thank you so much for your reply ! > > In fact I didn't know how to set tolerances, so I proceeded without > specifying the tolerances, hoping that this could lead to the > implementation of default values... > > I just added *SNESSetFromOptions(snes); *However, it doesn't make any > difference... But it's true that the "tol" values are somehow set to > zero... > > *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = > 10000* > I'm not sure where this output comes from. This does not look like PETSc "view" output. The default values are small (eg, 1e-8). maxit=50 and maxf=10000 look like the defaults. Maybe you are printing floats incorrectly. Your output below is converging due to maxit=50. rtol==0 would never converge because the relative residual can essentially never be 0. You can also use -ksp_monitor to view the linear solver iterations and -ksp_converged_reason to have the solver print the reason that it "converged". -snes_converged_reason makes the SNES print why it decided to stop iterating. These parameters should give you more information about the case where you see no output (unless the code is hung). Mark > > > * 0 SNES Function norm 7.604910424038e+02* > > Is it possible that it has something to do with my makefile ? > > Since I didn't figure out the PETSc makefile format (which seems to be > different from standard C makefile format), I named my source code as > *ex1.c* to make use of the default settings for PETSc example programs... > > And in my makefile I wrote : > > > > > *include ${PETSC_DIR}/lib/petsc/conf/variablesinclude > ${PETSC_DIR}/lib/petsc/conf/testex1: ex1.o* > > Is it possible the "tol" values are set to 0 by the default setting used > for example programs ? > > Thank you so much for your help. > > PS: I just tried the same code with less degrees of freedom and this time > it worked... But for a large system it didn't... 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = > 10000 0 SNES Function norm 1.164809703659e+00 1 SNES Function norm > 1.311388740736e-01 2 SNES Function norm 7.232579319557e-02 3 SNES > Function norm 4.984271911548e-02 4 SNES Function norm 3.224387373805e-02 > 5 SNES Function norm 6.898280568053e-03 6 SNES Function norm > 6.297558001575e-03 7 SNES Function norm 5.358028396052e-03 8 SNES > Function norm 4.591005105466e-03 9 SNES Function norm 4.063981130201e-03 > 10 SNES Function norm 3.715929394609e-03 11 SNES Function norm > 3.428330101253e-03 12 SNES Function norm 3.177113603032e-03 13 SNES > Function norm 2.958574186594e-03 14 SNES Function norm 2.769227811865e-03 > 15 SNES Function norm 2.605947870584e-03 16 SNES Function norm > 2.465934405221e-03 17 SNES Function norm 2.346761136962e-03 18 SNES > Function norm 2.246362261451e-03 19 SNES Function norm 2.163102452591e-03 > 20 SNES Function norm 2.095849101382e-03 21 SNES Function norm > 2.043740325461e-03 22 SNES Function norm 2.005106316761e-03 23 SNES > Function norm 1.975748994170e-03 24 SNES Function norm 1.949413335428e-03 > 25 SNES Function norm 1.920795414593e-03 26 SNES Function norm > 1.886883259141e-03 27 SNES Function norm 1.846374653045e-03 28 SNES > Function norm 1.799050087038e-03 29 SNES Function norm 1.745284156916e-03 > 30 SNES Function norm 1.685885151987e-03 31 SNES Function norm > 1.621850994665e-03 32 SNES Function norm 1.554258940064e-03 33 SNES > Function norm 1.484213253375e-03 34 SNES Function norm 1.412768267404e-03 > 35 SNES Function norm 1.340893218332e-03 36 SNES Function norm > 1.269412489589e-03 37 SNES Function norm 1.199029202116e-03 38 SNES > Function norm 1.130300263372e-03 39 SNES Function norm 1.063694395854e-03 > 40 SNES Function norm 9.995826338243e-04 41 SNES Function norm > 9.383610129089e-04 42 SNES Function norm 8.807543352645e-04 43 SNES > Function norm 8.288695938590e-04 44 SNES Function norm 7.898873173876e-04 > 45 SNES Function norm 7.752509690373e-04 46 SNES Function norm > 7.625724154377e-04 47 SNES Function norm 7.503152403370e-04 48 SNES > Function norm 7.364744378378e-04 49 SNES Function norm 7.202926541551e-04 > 50 SNES Function norm 7.015245603442e-04 * > > Mark Adams ?2021?12?13??? 01:11??? > >> The three "tol" values should be finite. It sounds like you set them to 0. >> Don't do that and use the defaults to start. >> The behavior with zero tolerances is not defined. >> You can use -snes_monitor to print out the iterations. >> >> On Sun, Dec 12, 2021 at 6:22 PM celestechevali at gmail.com < >> celestechevali at gmail.com> wrote: >> >>> Hello, >>> >>> I encountered a strange problem concerning the convergence of SNES. >>> >>> In my recent test runs I found that SNES always stops at iteration 0... >>> >>> At first I thought there may be an error with the tolerance setting, so >>> I output the tolerances : >>> >>> >>> >>> *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = >>> 10000 Norm of error 760.491 Iterations 0* >>> >>> Which are exactly the default values that I always used. However, for >>> the same tolerance settings, the SNES solver converges successfully if I >>> decrease the number of degrees of freedom in my system... >>> >>> I wish to know if anyone has experienced the same type of problems or >>> has an idea about what could possibly cause the problem... >>> >>> Thank you so much in advance. 
>>> >>> I appreciate any advice that you provide. >>> >>> >>> >>> >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Mon Dec 13 01:34:58 2021 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Mon, 13 Dec 2021 08:34:58 +0100 Subject: [petsc-users] non-manifold DMPLEX In-Reply-To: References: Message-ID: This would be a particularly useful example for testing the PETSc library, as most applications are set on manifolds. There?s a recently-reformatted chapter in the developers docs on how the testing works - this explains the /* TEST */ blocks you see. If parts of these docs are confusing, that?s very useful feedback for us! https://petsc.org/release/developers/testing Matthew Knepley schrieb am Mo. 13. Dez. 2021 um 03:12: > On Sun, Dec 12, 2021 at 4:36 PM TARDIEU Nicolas > wrote: > >> Dear Patrick and Matthew, >> >> Thank you very much for your answers. I am gonna try to set up such a >> test by assigning cell types. >> Shall I open a MR in order to contribute this example ? >> > > Yes, that would be great. > > Thanks, > > Matt > > >> Regards, >> Nicolas >> >> ------------------------------ >> *De :* knepley at gmail.com >> *Envoy? :* dimanche 12 d?cembre 2021 12:17 >> *? :* Patrick Sanan >> *Cc :* TARDIEU Nicolas ; petsc-users at mcs.anl.gov >> >> *Objet :* Re: [petsc-users] non-manifold DMPLEX >> >> On Sun, Dec 12, 2021 at 6:11 AM Patrick Sanan >> wrote: >> >> Here you have the following "points": >> >> - 1 3-cell (the cube volume) >> - 7 2-cells (the 6 faces of the cube plus the extra one) >> - 16 1-cells (the 12 edges of the cube, plus 3 extra ones from the extra >> face, plus the extra edge) >> - 11 0-cells (the 8 vertices of the cube, pus 2 extra ones from the extra >> face, plus the extra vertex) >> >> You could encode your mesh as here, by directly specifying relationships >> between these points in the Hasse diagram: >> >> https://petsc.org/release/docs/manual/dmplex/#representing-unstructured-grids >> >> >> Then, maybe the special relation is captured because you've defined the >> "cone" or "support" for each "point", which tells you about the local >> topology everywhere. E.g. to take the simpler case, three of the faces have >> the yellow edge in their "cone", or equivalently the yellow edge has those >> three faces in its "support". >> >> >> This is correct. I can help you make this if you want. I think if you >> assign cell types, you can even get Plex to automatically interpolate. >> >> Note that with this kind of mesh, algorithms which assume a uniform cell >> dimension will break, but I am guessing you would not >> be interested in those anyway. >> >> Thanks, >> >> Matt >> >> >> Am Fr., 10. Dez. 2021 um 17:04 Uhr schrieb TARDIEU Nicolas via >> petsc-users : >> >> Dear PETSc Team, >> >> Following a previous discussion on the mailing list, I'd like to >> experiment with DMPLEX with a very simple non-manifold mesh as shown in the >> attached picture : a cube connected to a square by an edge and to an edge >> by a point. >> I have read some of the papers that Matthew et al. have written, but I >> must admit that I do not see how to start... >> I see how the define the different elements but I do not see how to >> specify the special relationship between the cube and the square and >> between the cube and the edge. >> Once it will have been set correctly, what I am hoping is to be able to >> use all the nice features of the DMPLEX object. 
>> Best regards,
>> Nicolas
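On Patrick's pointer above to https://petsc.org/release/developers/testing: the test harness picks a contributed example up through a comment block at the bottom of the source file. A rough illustration (the suffix and arguments here are made up, not taken from an existing test) looks like:

/*TEST

  test:
    suffix: nonmanifold_0
    nsize: 1
    args: -dm_view

TEST*/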
If you have received this message in error, please delete >> it and all copies from your system and notify the sender immediately by >> return message. >> >> E-mail communication cannot be guaranteed to be timely secure, error or >> virus-free. >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karthikeyan.chockalingam at stfc.ac.uk Mon Dec 13 06:15:35 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Mon, 13 Dec 2021 12:15:35 +0000 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> Message-ID: <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> Thank you. I was able to confirm both the below options produced the same mesh ./ex56 -cells 2,2,1 -max_conv_its 2 ./ex56 -cells 4,4,2 -max_conv_its 1 But I didn?t get how is -cells i,j,k <1,1,1> is related to the number of MPI processes. (i) Say I start with -cells 1,1,1 -max_conv its 7; that would eventually leave all refinement on level 7 running on 1 MPI process? (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended to run on 4 MPI processes? I am running ex56 on gpu; I am looking at KSPSolve (or any other event) but no gpu flops are recorded in the -log_view? For your reference I used the below flags: ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view Kind regards, Karthik. From: Mark Adams Date: Sunday, 12 December 2021 at 23:00 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" Cc: Matthew Knepley , "petsc-users at mcs.anl.gov" Subject: Re: [petsc-users] Unstructured mesh On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank for your response that was helpful. I have a couple of questions: (i) How can I control the level of refinement? I tried to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement from 8 giving 32 cubes. I answered this question recently but ex56 clobbers ex56_dm_refine in the convergence loop. I have an MR that prints a warning if you provide a ex56_dm_refine. * snes/ex56 runs a convergence study and confusingly sets the options manually, thus erasing your -ex56_dm_refine. * To refine, use -max_conv_its N <3>, this sets the number of steps of refinement. That is, the length of the convergence study * You can adjust where it starts from with -cells i,j,k <1,1,1> You do want to set this if you have multiple MPI processes so that the size of this mesh is the number of processes. That way it starts with one cell per process and refines from there. (ii) What does -cell 2,2,1 correspond to? The initial mesh or mesh_0. The convergence test uniformly refines this mesh. So if you want to refine this twice you could use -cells 8,8,4 How can I determine the total number of dofs? Unfortunately, that is not printed but you can calculate from the initial cell grid, the order of the element and the refinement in each iteration of the convergence tests. So that I can perform a scaling study by changing the input of the flag -cells. 
You can and the convergence test gives you data for a strong speedup study in one run. Each solve is put in its own "stage" of the output and you want to look at KSPSolve lines in the log_view output. This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 13 07:16:37 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 13 Dec 2021 08:16:37 -0500 Subject: [petsc-users] Unstructured mesh In-Reply-To: <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> Message-ID: On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > Thank you. I was able to confirm both the below options produced the same > mesh > > > > ./ex56 -cells 2,2,1 -max_conv_its 2 > > ./ex56 -cells 4,4,2 -max_conv_its 1 > Good > But I didn?t get how is -cells i,j,k <1,1,1> is related to the number of > MPI processes. > It is not. The number of processes is specified independently using 'mpiexec -n

<number of processes>' or when using the test system NP=<number of processes>
. > (i) Say I start with -cells 1,1,1 -max_conv its 7; that would > eventually leave all refinement on level 7 running on 1 MPI process? > > (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended > to run on 4 MPI processes? > No, those options do not influence the number of processes. > I am running ex56 on gpu; I am looking at KSPSolve (or any other event) > but no gpu flops are recorded in the -log_view? > I do not think you are running on the GPU then. Mark can comment, but we usually specify GPU execution using the Vec and Mat types through -dm_vec_type and -dm_mat_type. Thanks, Matt > For your reference I used the below flags: > > ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 > -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view > > > > Kind regards, > > Karthik. > > > > > > *From: *Mark Adams > *Date: *Sunday, 12 December 2021 at 23:00 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > > > > > On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank for your response that was helpful. I have a couple of questions: > > > > (i) How can I control the level of refinement? I tried > to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement > from 8 giving 32 cubes. > > > > I answered this question recently but ex56 clobbers ex56_dm_refine in the > convergence loop. I have an MR that prints a warning if you provide a > ex56_dm_refine. > > > > * snes/ex56 runs a convergence study and confusingly sets the options > manually, thus erasing your -ex56_dm_refine. > > > > * To refine, use -max_conv_its N <3>, this sets the number of steps of > refinement. That is, the length of the convergence study > > > > * You can adjust where it starts from with -cells i,j,k <1,1,1> > > You do want to set this if you have multiple MPI processes so that the > size of this mesh is the number of processes. That way it starts with one > cell per process and refines from there. > > > > (ii) What does -cell 2,2,1 correspond to? > > > > The initial mesh or mesh_0. The convergence test uniformly refines this > mesh. So if you want to refine this twice you could use -cells 8,8,4 > > > > How can I determine the total number of dofs? > > Unfortunately, that is not printed but you can calculate from the initial > cell grid, the order of the element and the refinement in each iteration of > the convergence tests. > > > > So that I can perform a scaling study by changing the input of the flag > -cells. > > > > > > You can and the convergence test gives you data for a strong speedup study > in one run. Each solve is put in its own "stage" of the output and you want > to look at KSPSolve lines in the log_view output. > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. 
UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From karthikeyan.chockalingam at stfc.ac.uk Mon Dec 13 07:35:11 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Mon, 13 Dec 2021 13:35:11 +0000 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> Message-ID: <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> Thanks Matt. Couple of weeks back you mentioned ?There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The solver can run on the GPU, but the vector/matrix FEM assembly does not. I am working on that now.? I am able to run other examples in ksp/tutorials on gpus. I complied ex56 in snes/tutorials no differently. The only difference being I didn?t specify _dm_vec_type and _dm_vec_type (as you mentioned they are not assembled on gpus anyways plus I am working on an unstructured grid thought _dm is not right type for this problem). I was hoping to see gpu flops recorded for KSPSolve, which I didn?t. Okay, I will wait for Mark to comment. Kind regards, Karthik. From: Matthew Knepley Date: Monday, 13 December 2021 at 13:17 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" Cc: Mark Adams , "petsc-users at mcs.anl.gov" Subject: Re: [petsc-users] Unstructured mesh On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank you. I was able to confirm both the below options produced the same mesh ./ex56 -cells 2,2,1 -max_conv_its 2 ./ex56 -cells 4,4,2 -max_conv_its 1 Good But I didn?t get how is -cells i,j,k <1,1,1> is related to the number of MPI processes. It is not. The number of processes is specified independently using 'mpiexec -n

<number of processes>' or when using the test system NP=<number of processes>
. (i) Say I start with -cells 1,1,1 -max_conv its 7; that would eventually leave all refinement on level 7 running on 1 MPI process? (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended to run on 4 MPI processes? No, those options do not influence the number of processes. I am running ex56 on gpu; I am looking at KSPSolve (or any other event) but no gpu flops are recorded in the -log_view? I do not think you are running on the GPU then. Mark can comment, but we usually specify GPU execution using the Vec and Mat types through -dm_vec_type and -dm_mat_type. Thanks, Matt For your reference I used the below flags: ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view Kind regards, Karthik. From: Mark Adams > Date: Sunday, 12 December 2021 at 23:00 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Matthew Knepley >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank for your response that was helpful. I have a couple of questions: (i) How can I control the level of refinement? I tried to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement from 8 giving 32 cubes. I answered this question recently but ex56 clobbers ex56_dm_refine in the convergence loop. I have an MR that prints a warning if you provide a ex56_dm_refine. * snes/ex56 runs a convergence study and confusingly sets the options manually, thus erasing your -ex56_dm_refine. * To refine, use -max_conv_its N <3>, this sets the number of steps of refinement. That is, the length of the convergence study * You can adjust where it starts from with -cells i,j,k <1,1,1> You do want to set this if you have multiple MPI processes so that the size of this mesh is the number of processes. That way it starts with one cell per process and refines from there. (ii) What does -cell 2,2,1 correspond to? The initial mesh or mesh_0. The convergence test uniformly refines this mesh. So if you want to refine this twice you could use -cells 8,8,4 How can I determine the total number of dofs? Unfortunately, that is not printed but you can calculate from the initial cell grid, the order of the element and the refinement in each iteration of the convergence tests. So that I can perform a scaling study by changing the input of the flag -cells. You can and the convergence test gives you data for a strong speedup study in one run. Each solve is put in its own "stage" of the output and you want to look at KSPSolve lines in the log_view output. This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 13 07:43:25 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 13 Dec 2021 08:43:25 -0500 Subject: [petsc-users] Unstructured mesh In-Reply-To: <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> Message-ID: On Mon, Dec 13, 2021 at 8:35 AM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > Thanks Matt. Couple of weeks back you mentioned > > ?There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. > The solver can run on the GPU, but the vector/matrix FEM assembly does not. > I am working on that now.? > > > > I am able to run other examples in ksp/tutorials on gpus. > How do you do this, meaning how do you tell another example to run on a GPU? Thanks, Matt > I complied ex56 in snes/tutorials no differently. The only difference > being I didn?t specify _dm_vec_type and _dm_vec_type (as you mentioned they > are not assembled on gpus anyways plus I am working on an unstructured grid > thought _dm is not right type for this problem). I was hoping to see gpu > flops recorded for KSPSolve, which I didn?t. > > > > Okay, I will wait for Mark to comment. > > > > Kind regards, > > Karthik. > > > > *From: *Matthew Knepley > *Date: *Monday, 13 December 2021 at 13:17 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Mark Adams , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank you. I was able to confirm both the below options produced the same > mesh > > > > ./ex56 -cells 2,2,1 -max_conv_its 2 > > ./ex56 -cells 4,4,2 -max_conv_its 1 > > Good > > But I didn?t get how is -cells i,j,k <1,1,1> is related to the number of > MPI processes. > > It is not. The number of processes is specified independently using > 'mpiexec -n

<number of processes>' or when using the test system NP=<number of processes>
. > > (i) Say I start with -cells 1,1,1 -max_conv its 7; that would > eventually leave all refinement on level 7 running on 1 MPI process? > > (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended > to run on 4 MPI processes? > > No, those options do not influence the number of processes. > > > > I am running ex56 on gpu; I am looking at KSPSolve (or any other event) > but no gpu flops are recorded in the -log_view? > > > > I do not think you are running on the GPU then. Mark can comment, but we > usually specify GPU execution using the Vec and Mat types > > through -dm_vec_type and -dm_mat_type. > > > > Thanks, > > > > Matt > > > > For your reference I used the below flags: > > ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 > -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view > > > > Kind regards, > > Karthik. > > > > > > *From: *Mark Adams > *Date: *Sunday, 12 December 2021 at 23:00 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > > > > > On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank for your response that was helpful. I have a couple of questions: > > > > (i) How can I control the level of refinement? I tried > to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement > from 8 giving 32 cubes. > > > > I answered this question recently but ex56 clobbers ex56_dm_refine in the > convergence loop. I have an MR that prints a warning if you provide a > ex56_dm_refine. > > > > * snes/ex56 runs a convergence study and confusingly sets the options > manually, thus erasing your -ex56_dm_refine. > > > > * To refine, use -max_conv_its N <3>, this sets the number of steps of > refinement. That is, the length of the convergence study > > > > * You can adjust where it starts from with -cells i,j,k <1,1,1> > > You do want to set this if you have multiple MPI processes so that the > size of this mesh is the number of processes. That way it starts with one > cell per process and refines from there. > > > > (ii) What does -cell 2,2,1 correspond to? > > > > The initial mesh or mesh_0. The convergence test uniformly refines this > mesh. So if you want to refine this twice you could use -cells 8,8,4 > > > > How can I determine the total number of dofs? > > Unfortunately, that is not printed but you can calculate from the initial > cell grid, the order of the element and the refinement in each iteration of > the convergence tests. > > > > So that I can perform a scaling study by changing the input of the flag > -cells. > > > > > > You can and the convergence test gives you data for a strong speedup study > in one run. Each solve is put in its own "stage" of the output and you want > to look at KSPSolve lines in the log_view output. > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. 
UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Mon Dec 13 07:57:37 2021 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 13 Dec 2021 08:57:37 -0500 Subject: [petsc-users] Unstructured mesh In-Reply-To: <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> Message-ID: On Mon, Dec 13, 2021 at 8:35 AM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > Thanks Matt. Couple of weeks back you mentioned > > ?There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. > The solver can run on the GPU, but the vector/matrix FEM assembly does not. > I am working on that now.? > > > > I am able to run other examples in ksp/tutorials on gpus. I complied ex56 > in snes/tutorials no differently. The only difference being I didn?t > specify _dm_vec_type and _dm_vec_type (as you mentioned they are not > assembled on gpus anyways plus I am working on an unstructured grid thought > _dm is not right type for this problem). I was hoping to see gpu flops > recorded for KSPSolve, which I didn?t. > > > > Okay, I will wait for Mark to comment. > This (DM) example works like any other, with a prefix, as far as GPU: -ex56_dm_vec_type cuda and -ex56_dm_mat_type cusparse, or aijkokkos/kokkos, etc. Run with -options_left to verify that these are used. > > > Kind regards, > > Karthik. > > > > *From: *Matthew Knepley > *Date: *Monday, 13 December 2021 at 13:17 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Mark Adams , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank you. I was able to confirm both the below options produced the same > mesh > > > > ./ex56 -cells 2,2,1 -max_conv_its 2 > > ./ex56 -cells 4,4,2 -max_conv_its 1 > > Good > > But I didn?t get how is -cells i,j,k <1,1,1> is related to the number of > MPI processes. > > It is not. The number of processes is specified independently using > 'mpiexec -n
<np>' or when using the test system NP=<np>
. > > (i) Say I start with -cells 1,1,1 -max_conv its 7; that would > eventually leave all refinement on level 7 running on 1 MPI process? > > (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended > to run on 4 MPI processes? > > No, those options do not influence the number of processes. > > > > I am running ex56 on gpu; I am looking at KSPSolve (or any other event) > but no gpu flops are recorded in the -log_view? > > > > I do not think you are running on the GPU then. Mark can comment, but we > usually specify GPU execution using the Vec and Mat types > > through -dm_vec_type and -dm_mat_type. > > > > Thanks, > > > > Matt > > > > For your reference I used the below flags: > > ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 > -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view > > > > Kind regards, > > Karthik. > > > > > > *From: *Mark Adams > *Date: *Sunday, 12 December 2021 at 23:00 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > > > > > On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank for your response that was helpful. I have a couple of questions: > > > > (i) How can I control the level of refinement? I tried > to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement > from 8 giving 32 cubes. > > > > I answered this question recently but ex56 clobbers ex56_dm_refine in the > convergence loop. I have an MR that prints a warning if you provide a > ex56_dm_refine. > > > > * snes/ex56 runs a convergence study and confusingly sets the options > manually, thus erasing your -ex56_dm_refine. > > > > * To refine, use -max_conv_its N <3>, this sets the number of steps of > refinement. That is, the length of the convergence study > > > > * You can adjust where it starts from with -cells i,j,k <1,1,1> > > You do want to set this if you have multiple MPI processes so that the > size of this mesh is the number of processes. That way it starts with one > cell per process and refines from there. > > > > (ii) What does -cell 2,2,1 correspond to? > > > > The initial mesh or mesh_0. The convergence test uniformly refines this > mesh. So if you want to refine this twice you could use -cells 8,8,4 > > > > How can I determine the total number of dofs? > > Unfortunately, that is not printed but you can calculate from the initial > cell grid, the order of the element and the refinement in each iteration of > the convergence tests. > > > > So that I can perform a scaling study by changing the input of the flag > -cells. > > > > > > You can and the convergence test gives you data for a strong speedup study > in one run. Each solve is put in its own "stage" of the output and you want > to look at KSPSolve lines in the log_view output. > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. 
UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Mon Dec 13 08:02:44 2021 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 13 Dec 2021 09:02:44 -0500 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> Message-ID: > It is not. The number of processes is specified independently using > 'mpiexec -n
<np>' or when using the test system NP=<np>
. > >> (i) Say I start with -cells 1,1,1 -max_conv its 7; that would >> eventually leave all refinement on level 7 running on 1 MPI process? >> > I don't understand > (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended >> to run on 4 MPI processes? >> > Yes! > No, those options do not influence the number of processes. > Correct, but you are not answering the question that he asked! The coupling of the two is not enforced in the code, but to get good load balancing you want i*j*k == NP. (there is more flexibility than this, but this is the place to start.) > > >> I am running ex56 on gpu; I am looking at KSPSolve (or any other event) >> but no gpu flops are recorded in the -log_view? >> > > I do not think you are running on the GPU then. Mark can comment, but we > usually specify GPU execution using the Vec and Mat types > through -dm_vec_type and -dm_mat_type. > This test has a ex56_ prefix. Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From karthikeyan.chockalingam at stfc.ac.uk Mon Dec 13 08:40:44 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Mon, 13 Dec 2021 14:40:44 +0000 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> Message-ID: @Mark Adams Yes, it worked with -ex56_dm_mat_type mpiaijcusparse else it crashes with the error message [0]PETSC ERROR: Unknown Mat type given: cusparse @Matthew Knepley Usually PETSc -log_view reports the GPU flops. Alternatively if are using an external package such as hypre, where gpu flops are not recorded by petsc, profiling using Nvidia?s nsight captures them. So one could tell if the problem is running on gpus or not. Kind regards, Karthik. From: Mark Adams Date: Monday, 13 December 2021 at 13:58 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" Cc: Matthew Knepley , "petsc-users at mcs.anl.gov" Subject: Re: [petsc-users] Unstructured mesh On Mon, Dec 13, 2021 at 8:35 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thanks Matt. Couple of weeks back you mentioned ?There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The solver can run on the GPU, but the vector/matrix FEM assembly does not. I am working on that now.? I am able to run other examples in ksp/tutorials on gpus. I complied ex56 in snes/tutorials no differently. The only difference being I didn?t specify _dm_vec_type and _dm_vec_type (as you mentioned they are not assembled on gpus anyways plus I am working on an unstructured grid thought _dm is not right type for this problem). I was hoping to see gpu flops recorded for KSPSolve, which I didn?t. Okay, I will wait for Mark to comment. This (DM) example works like any other, with a prefix, as far as GPU: -ex56_dm_vec_type cuda and -ex56_dm_mat_type cusparse, or aijkokkos/kokkos, etc. Run with -options_left to verify that these are used. Kind regards, Karthik. From: Matthew Knepley > Date: Monday, 13 December 2021 at 13:17 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Mark Adams >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank you. 
I was able to confirm both the below options produced the same mesh ./ex56 -cells 2,2,1 -max_conv_its 2 ./ex56 -cells 4,4,2 -max_conv_its 1 Good But I didn't get how -cells i,j,k <1,1,1> is related to the number of MPI processes. It is not. The number of processes is specified independently using 'mpiexec -n
<np>' or when using the test system NP=<np>
. (i) Say I start with -cells 1,1,1 -max_conv its 7; that would eventually leave all refinement on level 7 running on 1 MPI process? (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended to run on 4 MPI processes? No, those options do not influence the number of processes. I am running ex56 on gpu; I am looking at KSPSolve (or any other event) but no gpu flops are recorded in the -log_view? I do not think you are running on the GPU then. Mark can comment, but we usually specify GPU execution using the Vec and Mat types through -dm_vec_type and -dm_mat_type. Thanks, Matt For your reference I used the below flags: ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view Kind regards, Karthik. From: Mark Adams > Date: Sunday, 12 December 2021 at 23:00 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Matthew Knepley >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank for your response that was helpful. I have a couple of questions: (i) How can I control the level of refinement? I tried to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement from 8 giving 32 cubes. I answered this question recently but ex56 clobbers ex56_dm_refine in the convergence loop. I have an MR that prints a warning if you provide a ex56_dm_refine. * snes/ex56 runs a convergence study and confusingly sets the options manually, thus erasing your -ex56_dm_refine. * To refine, use -max_conv_its N <3>, this sets the number of steps of refinement. That is, the length of the convergence study * You can adjust where it starts from with -cells i,j,k <1,1,1> You do want to set this if you have multiple MPI processes so that the size of this mesh is the number of processes. That way it starts with one cell per process and refines from there. (ii) What does -cell 2,2,1 correspond to? The initial mesh or mesh_0. The convergence test uniformly refines this mesh. So if you want to refine this twice you could use -cells 8,8,4 How can I determine the total number of dofs? Unfortunately, that is not printed but you can calculate from the initial cell grid, the order of the element and the refinement in each iteration of the convergence tests. So that I can perform a scaling study by changing the input of the flag -cells. You can and the convergence test gives you data for a strong speedup study in one run. Each solve is put in its own "stage" of the output and you want to look at KSPSolve lines in the log_view output. This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 13 08:43:56 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 13 Dec 2021 09:43:56 -0500 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> Message-ID: On Mon, Dec 13, 2021 at 9:40 AM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > *@Mark Adams *Yes, it worked with *-ex56_dm_mat_type > mpiaijcusparse* else it crashes with the error message > > [0]PETSC ERROR: Unknown Mat type given: cusparse > > > > > > *@Matthew Knepley *Usually PETSc -log_view reports > the GPU flops. Alternatively if are using an external package such as > hypre, where gpu flops are not recorded by petsc, profiling using Nvidia?s > nsight captures them. So one could tell if the problem is running on gpus > or not. > Yes, that is how we measure GPU flops. I was asking how you tell the example to run on the GPU. I suggested using -ex56_dm_mat_type. You said that you were not using this but still running on a GPU. I did not see how this could be possible, so I was asking about that. Thanks, Matt > Kind regards, > > Karthik. > > > > *From: *Mark Adams > *Date: *Monday, 13 December 2021 at 13:58 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > > > > > On Mon, Dec 13, 2021 at 8:35 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thanks Matt. Couple of weeks back you mentioned > > ?There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. > The solver can run on the GPU, but the vector/matrix FEM assembly does not. > I am working on that now.? > > > > I am able to run other examples in ksp/tutorials on gpus. I complied ex56 > in snes/tutorials no differently. The only difference being I didn?t > specify _dm_vec_type and _dm_vec_type (as you mentioned they are not > assembled on gpus anyways plus I am working on an unstructured grid thought > _dm is not right type for this problem). I was hoping to see gpu flops > recorded for KSPSolve, which I didn?t. > > > > Okay, I will wait for Mark to comment. > > > > This (DM) example works like any other, with a prefix, as far as GPU: > -ex56_dm_vec_type cuda and -ex56_dm_mat_type cusparse, or aijkokkos/kokkos, > etc. > > Run with -options_left to verify that these are used. > > > > > > Kind regards, > > Karthik. > > > > *From: *Matthew Knepley > *Date: *Monday, 13 December 2021 at 13:17 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Mark Adams , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank you. 
I was able to confirm both the below options produced the same > mesh > > > > ./ex56 -cells 2,2,1 -max_conv_its 2 > > ./ex56 -cells 4,4,2 -max_conv_its 1 > > Good > > But I didn't get how -cells i,j,k <1,1,1> is related to the number of > MPI processes. > > It is not. The number of processes is specified independently using > 'mpiexec -n
<np>' or when using the test system NP=<np>
. > > (i) Say I start with -cells 1,1,1 -max_conv its 7; that would > eventually leave all refinement on level 7 running on 1 MPI process? > > (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended > to run on 4 MPI processes? > > No, those options do not influence the number of processes. > > > > I am running ex56 on gpu; I am looking at KSPSolve (or any other event) > but no gpu flops are recorded in the -log_view? > > > > I do not think you are running on the GPU then. Mark can comment, but we > usually specify GPU execution using the Vec and Mat types > > through -dm_vec_type and -dm_mat_type. > > > > Thanks, > > > > Matt > > > > For your reference I used the below flags: > > ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 > -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view > > > > Kind regards, > > Karthik. > > > > > > *From: *Mark Adams > *Date: *Sunday, 12 December 2021 at 23:00 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > > > > > On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank for your response that was helpful. I have a couple of questions: > > > > (i) How can I control the level of refinement? I tried > to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement > from 8 giving 32 cubes. > > > > I answered this question recently but ex56 clobbers ex56_dm_refine in the > convergence loop. I have an MR that prints a warning if you provide a > ex56_dm_refine. > > > > * snes/ex56 runs a convergence study and confusingly sets the options > manually, thus erasing your -ex56_dm_refine. > > > > * To refine, use -max_conv_its N <3>, this sets the number of steps of > refinement. That is, the length of the convergence study > > > > * You can adjust where it starts from with -cells i,j,k <1,1,1> > > You do want to set this if you have multiple MPI processes so that the > size of this mesh is the number of processes. That way it starts with one > cell per process and refines from there. > > > > (ii) What does -cell 2,2,1 correspond to? > > > > The initial mesh or mesh_0. The convergence test uniformly refines this > mesh. So if you want to refine this twice you could use -cells 8,8,4 > > > > How can I determine the total number of dofs? > > Unfortunately, that is not printed but you can calculate from the initial > cell grid, the order of the element and the refinement in each iteration of > the convergence tests. > > > > So that I can perform a scaling study by changing the input of the flag > -cells. > > > > > > You can and the convergence test gives you data for a strong speedup study > in one run. Each solve is put in its own "stage" of the output and you want > to look at KSPSolve lines in the log_view output. > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. 
UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From karthikeyan.chockalingam at stfc.ac.uk Mon Dec 13 08:53:45 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Mon, 13 Dec 2021 14:53:45 +0000 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> Message-ID: Sorry, looks like I have not only misunderstood your question but also your recommendation to run using -ex56_dm_mat_type. I didn?t realize one needs to add the prefix -ex56. Kind regards, Karthik. From: Matthew Knepley Date: Monday, 13 December 2021 at 14:44 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" Cc: Mark Adams , "petsc-users at mcs.anl.gov" Subject: Re: [petsc-users] Unstructured mesh On Mon, Dec 13, 2021 at 9:40 AM Karthikeyan Chockalingam - STFC UKRI > wrote: @Mark Adams Yes, it worked with -ex56_dm_mat_type mpiaijcusparse else it crashes with the error message [0]PETSC ERROR: Unknown Mat type given: cusparse @Matthew Knepley Usually PETSc -log_view reports the GPU flops. Alternatively if are using an external package such as hypre, where gpu flops are not recorded by petsc, profiling using Nvidia?s nsight captures them. So one could tell if the problem is running on gpus or not. Yes, that is how we measure GPU flops. I was asking how you tell the example to run on the GPU. I suggested using -ex56_dm_mat_type. You said that you were not using this but still running on a GPU. I did not see how this could be possible, so I was asking about that. Thanks, Matt Kind regards, Karthik. From: Mark Adams > Date: Monday, 13 December 2021 at 13:58 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Matthew Knepley >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Mon, Dec 13, 2021 at 8:35 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thanks Matt. Couple of weeks back you mentioned ?There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The solver can run on the GPU, but the vector/matrix FEM assembly does not. I am working on that now.? I am able to run other examples in ksp/tutorials on gpus. I complied ex56 in snes/tutorials no differently. The only difference being I didn?t specify _dm_vec_type and _dm_vec_type (as you mentioned they are not assembled on gpus anyways plus I am working on an unstructured grid thought _dm is not right type for this problem). 
I was hoping to see gpu flops recorded for KSPSolve, which I didn?t. Okay, I will wait for Mark to comment. This (DM) example works like any other, with a prefix, as far as GPU: -ex56_dm_vec_type cuda and -ex56_dm_mat_type cusparse, or aijkokkos/kokkos, etc. Run with -options_left to verify that these are used. Kind regards, Karthik. From: Matthew Knepley > Date: Monday, 13 December 2021 at 13:17 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Mark Adams >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank you. I was able to confirm both the below options produced the same mesh ./ex56 -cells 2,2,1 -max_conv_its 2 ./ex56 -cells 4,4,2 -max_conv_its 1 Good But I didn?t get how is -cells i,j,k <1,1,1> is related to the number of MPI processes. It is not. The number of processes is specified independently using 'mpiexec -n
<np>' or when using the test system NP=<np>
. (i) Say I start with -cells 1,1,1 -max_conv its 7; that would eventually leave all refinement on level 7 running on 1 MPI process? (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended to run on 4 MPI processes? No, those options do not influence the number of processes. I am running ex56 on gpu; I am looking at KSPSolve (or any other event) but no gpu flops are recorded in the -log_view? I do not think you are running on the GPU then. Mark can comment, but we usually specify GPU execution using the Vec and Mat types through -dm_vec_type and -dm_mat_type. Thanks, Matt For your reference I used the below flags: ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view Kind regards, Karthik. From: Mark Adams > Date: Sunday, 12 December 2021 at 23:00 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Matthew Knepley >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank for your response that was helpful. I have a couple of questions: (i) How can I control the level of refinement? I tried to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement from 8 giving 32 cubes. I answered this question recently but ex56 clobbers ex56_dm_refine in the convergence loop. I have an MR that prints a warning if you provide a ex56_dm_refine. * snes/ex56 runs a convergence study and confusingly sets the options manually, thus erasing your -ex56_dm_refine. * To refine, use -max_conv_its N <3>, this sets the number of steps of refinement. That is, the length of the convergence study * You can adjust where it starts from with -cells i,j,k <1,1,1> You do want to set this if you have multiple MPI processes so that the size of this mesh is the number of processes. That way it starts with one cell per process and refines from there. (ii) What does -cell 2,2,1 correspond to? The initial mesh or mesh_0. The convergence test uniformly refines this mesh. So if you want to refine this twice you could use -cells 8,8,4 How can I determine the total number of dofs? Unfortunately, that is not printed but you can calculate from the initial cell grid, the order of the element and the refinement in each iteration of the convergence tests. So that I can perform a scaling study by changing the input of the flag -cells. You can and the convergence test gives you data for a strong speedup study in one run. Each solve is put in its own "stage" of the output and you want to look at KSPSolve lines in the log_view output. This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From karthikeyan.chockalingam at stfc.ac.uk Mon Dec 13 10:57:45 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Mon, 13 Dec 2021 16:57:45 +0000 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> Message-ID: <44E3FC40-727F-4ECB-8DCA-DCBA99ED74B1@stfc.ac.uk> I tried to run the problem using -pc_type hypre but it errored out: ./ex56 -cells 4,4,2 -max_conv_its 1 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type hypre -pc_hypre_type boomeramg -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view -ex56_dm_vec_type cuda -ex56_dm_mat_type hypre -options_left [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Petsc has generated inconsistent data [0]PETSC ERROR: Blocksize of layout 1 must match that of mapping 3 (or the latter must be 1) [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.16.1-353-g887dddf386 GIT Date: 2021-11-19 20:24:41 +0000 [0]PETSC ERROR: ./ex56 on a arch-linux2-c-opt named sqg2b13.bullx by kxc07-lxm25 Mon Dec 13 16:50:02 2021 [0]PETSC ERROR: Configure options --with-debugging=0 --with-blaslapack-dir=/lustre/scafellpike/local/apps/intel/intel_cs/2018.0.128/mkl --with-cuda=1 --with-cuda-arch=70 --download-hypre=yes --download-hypre-configure-arguments="--with-cuda=yes --enable-gpu-profiling=yes --enable-cusparse=yes --enable-cublas=yes --enable-curand=yes --enable-unified-memory=yes HYPRE_CUDA_SM=70" --with-shared-libraries=1 --known-mpi-shared-libraries=1 --with-cc=mpicc --with-cxx=mpicxx -with-fc=mpif90 [0]PETSC ERROR: #1 PetscLayoutSetISLocalToGlobalMapping() at /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/vec/is/utils/pmap.c:371 [0]PETSC ERROR: #2 MatSetLocalToGlobalMapping() at /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/mat/interface/matrix.c:2089 [0]PETSC ERROR: #3 DMCreateMatrix_Plex() at /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/dm/impls/plex/plex.c:2460 [0]PETSC ERROR: #4 DMCreateMatrix() at /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/dm/interface/dm.c:1445 [0]PETSC ERROR: #5 main() at ex56.c:439 [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -alpha .01 [0]PETSC ERROR: -cells 4,4,2 [0]PETSC ERROR: -ex56_dm_mat_type hypre [0]PETSC ERROR: -ex56_dm_vec_type cuda [0]PETSC ERROR: -ex56_dm_view [0]PETSC ERROR: -ksp_monitor [0]PETSC ERROR: -ksp_rtol 1.e-8 [0]PETSC ERROR: -ksp_type cg [0]PETSC ERROR: -log_view [0]PETSC ERROR: -lx 1. 
[0]PETSC ERROR: -max_conv_its 1 [0]PETSC ERROR: -options_left [0]PETSC ERROR: -pc_hypre_type boomeramg [0]PETSC ERROR: -pc_type hypre [0]PETSC ERROR: -petscspace_degree 1 [0]PETSC ERROR: -snes_monitor [0]PETSC ERROR: -snes_rtol 1.e-10 [0]PETSC ERROR: -use_gpu_aware_mpi 0 [0]PETSC ERROR: -use_mat_nearnullspace true [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 77. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- From: Mark Adams Date: Monday, 13 December 2021 at 13:58 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" Cc: Matthew Knepley , "petsc-users at mcs.anl.gov" Subject: Re: [petsc-users] Unstructured mesh On Mon, Dec 13, 2021 at 8:35 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thanks Matt. Couple of weeks back you mentioned ?There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The solver can run on the GPU, but the vector/matrix FEM assembly does not. I am working on that now.? I am able to run other examples in ksp/tutorials on gpus. I complied ex56 in snes/tutorials no differently. The only difference being I didn?t specify _dm_vec_type and _dm_vec_type (as you mentioned they are not assembled on gpus anyways plus I am working on an unstructured grid thought _dm is not right type for this problem). I was hoping to see gpu flops recorded for KSPSolve, which I didn?t. Okay, I will wait for Mark to comment. This (DM) example works like any other, with a prefix, as far as GPU: -ex56_dm_vec_type cuda and -ex56_dm_mat_type cusparse, or aijkokkos/kokkos, etc. Run with -options_left to verify that these are used. Kind regards, Karthik. From: Matthew Knepley > Date: Monday, 13 December 2021 at 13:17 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Mark Adams >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank you. I was able to confirm both the below options produced the same mesh ./ex56 -cells 2,2,1 -max_conv_its 2 ./ex56 -cells 4,4,2 -max_conv_its 1 Good But I didn?t get how is -cells i,j,k <1,1,1> is related to the number of MPI processes. It is not. The number of processes is specified independently using 'mpiexec -n
<np>' or when using the test system NP=<np>
. (i) Say I start with -cells 1,1,1 -max_conv its 7; that would eventually leave all refinement on level 7 running on 1 MPI process? (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended to run on 4 MPI processes? No, those options do not influence the number of processes. I am running ex56 on gpu; I am looking at KSPSolve (or any other event) but no gpu flops are recorded in the -log_view? I do not think you are running on the GPU then. Mark can comment, but we usually specify GPU execution using the Vec and Mat types through -dm_vec_type and -dm_mat_type. Thanks, Matt For your reference I used the below flags: ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view Kind regards, Karthik. From: Mark Adams > Date: Sunday, 12 December 2021 at 23:00 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Matthew Knepley >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank for your response that was helpful. I have a couple of questions: (i) How can I control the level of refinement? I tried to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement from 8 giving 32 cubes. I answered this question recently but ex56 clobbers ex56_dm_refine in the convergence loop. I have an MR that prints a warning if you provide a ex56_dm_refine. * snes/ex56 runs a convergence study and confusingly sets the options manually, thus erasing your -ex56_dm_refine. * To refine, use -max_conv_its N <3>, this sets the number of steps of refinement. That is, the length of the convergence study * You can adjust where it starts from with -cells i,j,k <1,1,1> You do want to set this if you have multiple MPI processes so that the size of this mesh is the number of processes. That way it starts with one cell per process and refines from there. (ii) What does -cell 2,2,1 correspond to? The initial mesh or mesh_0. The convergence test uniformly refines this mesh. So if you want to refine this twice you could use -cells 8,8,4 How can I determine the total number of dofs? Unfortunately, that is not printed but you can calculate from the initial cell grid, the order of the element and the refinement in each iteration of the convergence tests. So that I can perform a scaling study by changing the input of the flag -cells. You can and the convergence test gives you data for a strong speedup study in one run. Each solve is put in its own "stage" of the output and you want to look at KSPSolve lines in the log_view output. This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zjorti at lanl.gov Mon Dec 13 12:03:53 2021 From: zjorti at lanl.gov (Jorti, Zakariae) Date: Mon, 13 Dec 2021 18:03:53 +0000 Subject: [petsc-users] [EXTERNAL] Re: Finite difference approximation of Jacobian In-Reply-To: References: <231abd15aab544f9850826cb437366f7@lanl.gov> , Message-ID: Hi Matt, Thanks for the reply. I tested the following flags. - With -snes_fd_color and -snes_fd_color_use_mat I got a segmentation violation. I was able to get the call stack as I am using a debugger: MatCreateSubmatrix_MPIAIJ_All MatGetSeqNonzeroStructure_MPIAIJ MatGetSeqNonzeroStructure MatColoringApply_SL MatColoringApply SNESComputeJacobianDefaultColor - With -snes_fd_color -snes_fd_color_use_mat -mat_coloring_type greedy -mat_coloring_weight_type lf , I get this error message: [0]PETSC ERROR: Object is in wrong state [0]PETSC ERROR: Not for unassembled matrix [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.15.0, unknown [0]PETSC ERROR: /global/u1/z/zjorti/Gitlab_repos/runaway_develop/runaway/cutcell/MFDsolver/QuasiStatic_Normalized/main on a arch-linux-c-debug named nid00223 by zjorti Mon Dec 13 09:14:12 2021 [0]PETSC ERROR: Configure options CC=cc CXX=CC FC=ftn COPTFLAGS=-g CXXOPTFLAGS=-g FOPTFLAGS=-g --download-superlu_dist --download-scalapack --download-mumps --download-hypre --with-debugging=1 --with-cxx-dialect=C++11 [0]PETSC ERROR: #1 MatIncreaseOverlap() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/mat/interface/matrix.c:7302 [0]PETSC ERROR: #2 MatColoringGetDegrees() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/mat/color/utils/weights.c:59 [0]PETSC ERROR: #3 MatColoringCreateLargestFirstWeights() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/mat/color/utils/weights.c:131 [0]PETSC ERROR: #4 MatColoringCreateWeights() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/mat/color/utils/weights.c:347 [0]PETSC ERROR: #5 MatColoringApply_Greedy() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/mat/color/impls/greedy/greedy.c:563 [0]PETSC ERROR: #6 MatColoringApply() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/mat/color/interface/matcoloring.c:355 [0]PETSC ERROR: #7 SNESComputeJacobianDefaultColor() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/snes/interface/snesj2.c:86 [0]PETSC ERROR: #8 SNESComputeJacobian() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/snes/interface/snes.c:2713 [0]PETSC ERROR: #9 SNESSolve_NEWTONLS() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/snes/impls/ls/ls.c:222 [0]PETSC ERROR: #10 SNESSolve() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/snes/interface/snes.c:4653 [0]PETSC ERROR: #11 TSStep_ARKIMEX() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/ts/impls/arkimex/arkimex.c:845 [0]PETSC ERROR: #12 TSStep() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/ts/interface/ts.c:3777 [0]PETSC ERROR: #13 TSSolve() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/ts/interface/ts.c:4174 - With -snes_fd_color -snes_fd_color_use_mat -mat_coloring_type greedy, there is this error: [8]PETSC ERROR: Null argument, when expecting valid pointer 
[8]PETSC ERROR: Null Object: Parameter # 1 [8]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [8]PETSC ERROR: Petsc Release Version 3.15.0, unknown [8]PETSC ERROR: /global/u1/z/zjorti/Gitlab_repos/runaway_develop/runaway/cutcell/MFDsolver/QuasiStatic_Normalized/main on a arch-linux-c-debug named nid00223 by zjorti Mon Dec 13 09:27:11 2021 [8]PETSC ERROR: Configure options CC=cc CXX=CC FC=ftn COPTFLAGS=-g CXXOPTFLAGS=-g FOPTFLAGS=-g --download-superlu_dist --download-scalapack --download-mumps --download-hypre --with-debugging=1 --with-cxx-dialect=C++11 [8]PETSC ERROR: #1 VecGetLocalSize() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/vec/vec/interface/vector.c:688 [8]PETSC ERROR: #2 GreedyColoringLocalDistanceTwo_Private() at /global/project/projectdirs/m3016/Master_PETSc/Debug_PETSc/petsc/src/mat/color/impls/greedy/greedy.c:258 Besides, I found this thread "Questions about residual function can't be reduced" (https://www.mail-archive.com/search?l=petsc-users at mcs.anl.gov&q=subject:%22%5C%5Bpetsc%5C-users%5C%5D+questions%22&o=newest&f=1) where you are saying that the user needs to preallocate the Jacobian matrix correctly. Is it true only for user provided Jacobian matrix, or also for finite difference approximation with coloring? In the latter case, how can we do that? Many thanks. Zakariae ________________________________ From: petsc-users on behalf of Matthew Knepley Sent: Saturday, December 11, 2021 2:28:33 PM To: Tang, Qi Cc: petsc-users at mcs.anl.gov Subject: [EXTERNAL] Re: [petsc-users] Finite difference approximation of Jacobian On Sat, Dec 11, 2021 at 1:58 PM Tang, Qi > wrote: Hi, Does anyone have comment on finite difference coloring with DMStag? We are using DMStag and TS to evolve some nonlinear equations implicitly. It would be helpful to have the coloring Jacobian option with that. Since DMStag produces the Jacobian connectivity, you can use -snes_fd_color_use_mat. It has many options. Here is an example of us using that: https://gitlab.com/petsc/petsc/-/blob/main/src/snes/tutorials/ex19.c#L898 Thanks, Matt Thanks, Qi On Oct 15, 2021, at 3:07 PM, Jorti, Zakariae via petsc-users > wrote: Hello, Does the Jacobian approximation using coloring and finite differencing of the function evaluation work in DMStag? Thank you. Best regards, Zakariae -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Mon Dec 13 12:16:33 2021 From: dave.mayhem23 at gmail.com (Dave May) Date: Mon, 13 Dec 2021 19:16:33 +0100 Subject: [petsc-users] Finite difference approximation of Jacobian In-Reply-To: References: <231abd15aab544f9850826cb437366f7@lanl.gov> Message-ID: On Sat 11. Dec 2021 at 22:28, Matthew Knepley wrote: > On Sat, Dec 11, 2021 at 1:58 PM Tang, Qi wrote: > >> Hi, >> Does anyone have comment on finite difference coloring with DMStag? We >> are using DMStag and TS to evolve some nonlinear equations implicitly. It >> would be helpful to have the coloring Jacobian option with that. >> > > Since DMStag produces the Jacobian connectivity, > This is incorrect. The DMCreateMatrix implementation for DMSTAG only sets the number of nonzeros (very inaccurately). It does not insert any zero values and thus the nonzero structure is actually not defined. 
That is why coloring doesn?t work. Thanks, Dave you can use -snes_fd_color_use_mat. It has many options. Here is an example > of us using that: > > > https://gitlab.com/petsc/petsc/-/blob/main/src/snes/tutorials/ex19.c#L898 > > Thanks, > > Matt > > >> Thanks, >> Qi >> >> >> On Oct 15, 2021, at 3:07 PM, Jorti, Zakariae via petsc-users < >> petsc-users at mcs.anl.gov> wrote: >> >> Hello, >> >> Does the Jacobian approximation using coloring and finite differencing >> of the function evaluation work in DMStag? >> Thank you. >> Best regards, >> >> Zakariae >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 13 12:29:24 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 13 Dec 2021 13:29:24 -0500 Subject: [petsc-users] Finite difference approximation of Jacobian In-Reply-To: References: <231abd15aab544f9850826cb437366f7@lanl.gov> Message-ID: On Mon, Dec 13, 2021 at 1:16 PM Dave May wrote: > > > On Sat 11. Dec 2021 at 22:28, Matthew Knepley wrote: > >> On Sat, Dec 11, 2021 at 1:58 PM Tang, Qi wrote: >> >>> Hi, >>> Does anyone have comment on finite difference coloring with DMStag? We >>> are using DMStag and TS to evolve some nonlinear equations implicitly. It >>> would be helpful to have the coloring Jacobian option with that. >>> >> >> Since DMStag produces the Jacobian connectivity, >> > > This is incorrect. > The DMCreateMatrix implementation for DMSTAG only sets the number of > nonzeros (very inaccurately). It does not insert any zero values and thus > the nonzero structure is actually not defined. > That is why coloring doesn?t work. > Ah, thanks Dave. Okay, we should fix that.It is perfectly possible to compute the nonzero pattern from the DMStag information. Paging Patrick :) Thanks, Matt > Thanks, > Dave > > > you can use -snes_fd_color_use_mat. It has many options. Here is an >> example of us using that: >> >> >> https://gitlab.com/petsc/petsc/-/blob/main/src/snes/tutorials/ex19.c#L898 >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> Qi >>> >>> >>> On Oct 15, 2021, at 3:07 PM, Jorti, Zakariae via petsc-users < >>> petsc-users at mcs.anl.gov> wrote: >>> >>> Hello, >>> >>> Does the Jacobian approximation using coloring and finite differencing >>> of the function evaluation work in DMStag? >>> Thank you. >>> Best regards, >>> >>> Zakariae >>> >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dave.mayhem23 at gmail.com Mon Dec 13 12:52:12 2021 From: dave.mayhem23 at gmail.com (Dave May) Date: Mon, 13 Dec 2021 19:52:12 +0100 Subject: [petsc-users] Finite difference approximation of Jacobian In-Reply-To: References: <231abd15aab544f9850826cb437366f7@lanl.gov> Message-ID: On Mon, 13 Dec 2021 at 19:29, Matthew Knepley wrote: > On Mon, Dec 13, 2021 at 1:16 PM Dave May wrote: > >> >> >> On Sat 11. Dec 2021 at 22:28, Matthew Knepley wrote: >> >>> On Sat, Dec 11, 2021 at 1:58 PM Tang, Qi wrote: >>> >>>> Hi, >>>> Does anyone have comment on finite difference coloring with DMStag? We >>>> are using DMStag and TS to evolve some nonlinear equations implicitly. It >>>> would be helpful to have the coloring Jacobian option with that. >>>> >>> >>> Since DMStag produces the Jacobian connectivity, >>> >> >> This is incorrect. >> The DMCreateMatrix implementation for DMSTAG only sets the number of >> nonzeros (very inaccurately). It does not insert any zero values and thus >> the nonzero structure is actually not defined. >> That is why coloring doesn?t work. >> > > Ah, thanks Dave. > > Okay, we should fix that.It is perfectly possible to compute the nonzero > pattern from the DMStag information. > Agreed. The API for DMSTAG is complete enough to enable one to loop over the cells, and for all quantities defined on the cell (centre, face, vertex), insert values into the appropriate slot in the matrix. Combined with MATPREALLOCATOR, I believe a compact and readable code should be possible to write for the preallocation (cf DMDA). I think the only caveat with the approach of using all quantities defined on the cell is It may slightly over allocate depending on how the user wishes to impose the boundary condition, or slightly over allocate for says Stokes where there is no pressure-pressure coupling term. Thanks, Dave > Paging Patrick :) > > Thanks, > > Matt > > >> Thanks, >> Dave >> >> >> you can use -snes_fd_color_use_mat. It has many options. Here is an >>> example of us using that: >>> >>> >>> https://gitlab.com/petsc/petsc/-/blob/main/src/snes/tutorials/ex19.c#L898 >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thanks, >>>> Qi >>>> >>>> >>>> On Oct 15, 2021, at 3:07 PM, Jorti, Zakariae via petsc-users < >>>> petsc-users at mcs.anl.gov> wrote: >>>> >>>> Hello, >>>> >>>> Does the Jacobian approximation using coloring and finite differencing >>>> of the function evaluation work in DMStag? >>>> Thank you. >>>> Best regards, >>>> >>>> Zakariae >>>> >>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tangqi at msu.edu Mon Dec 13 12:55:54 2021 From: tangqi at msu.edu (Tang, Qi) Date: Mon, 13 Dec 2021 18:55:54 +0000 Subject: [petsc-users] Finite difference approximation of Jacobian In-Reply-To: References: <231abd15aab544f9850826cb437366f7@lanl.gov> Message-ID: <08A7E758-BF14-4DA6-AB7D-797DA41D3D45@msu.edu> Matt and Dave, Thanks, this is consistent with what we found. 
If Patrick or someone can add some basic coloring option with DMStag, that would be very useful for our project. Qi On Dec 13, 2021, at 11:52 AM, Dave May > wrote: On Mon, 13 Dec 2021 at 19:29, Matthew Knepley > wrote: On Mon, Dec 13, 2021 at 1:16 PM Dave May > wrote: On Sat 11. Dec 2021 at 22:28, Matthew Knepley > wrote: On Sat, Dec 11, 2021 at 1:58 PM Tang, Qi > wrote: Hi, Does anyone have comment on finite difference coloring with DMStag? We are using DMStag and TS to evolve some nonlinear equations implicitly. It would be helpful to have the coloring Jacobian option with that. Since DMStag produces the Jacobian connectivity, This is incorrect. The DMCreateMatrix implementation for DMSTAG only sets the number of nonzeros (very inaccurately). It does not insert any zero values and thus the nonzero structure is actually not defined. That is why coloring doesn?t work. Ah, thanks Dave. Okay, we should fix that.It is perfectly possible to compute the nonzero pattern from the DMStag information. Agreed. The API for DMSTAG is complete enough to enable one to loop over the cells, and for all quantities defined on the cell (centre, face, vertex), insert values into the appropriate slot in the matrix. Combined with MATPREALLOCATOR, I believe a compact and readable code should be possible to write for the preallocation (cf DMDA). I think the only caveat with the approach of using all quantities defined on the cell is It may slightly over allocate depending on how the user wishes to impose the boundary condition, or slightly over allocate for says Stokes where there is no pressure-pressure coupling term. Thanks, Dave Paging Patrick :) Thanks, Matt Thanks, Dave you can use -snes_fd_color_use_mat. It has many options. Here is an example of us using that: https://gitlab.com/petsc/petsc/-/blob/main/src/snes/tutorials/ex19.c#L898 Thanks, Matt Thanks, Qi On Oct 15, 2021, at 3:07 PM, Jorti, Zakariae via petsc-users > wrote: Hello, Does the Jacobian approximation using coloring and finite differencing of the function evaluation work in DMStag? Thank you. Best regards, Zakariae -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Mon Dec 13 13:05:21 2021 From: dave.mayhem23 at gmail.com (Dave May) Date: Mon, 13 Dec 2021 20:05:21 +0100 Subject: [petsc-users] Finite difference approximation of Jacobian In-Reply-To: <08A7E758-BF14-4DA6-AB7D-797DA41D3D45@msu.edu> References: <231abd15aab544f9850826cb437366f7@lanl.gov> <08A7E758-BF14-4DA6-AB7D-797DA41D3D45@msu.edu> Message-ID: On Mon, 13 Dec 2021 at 19:55, Tang, Qi wrote: > Matt and Dave, > > Thanks, this is consistent with what we found. If Patrick or someone can > add some basic coloring option with DMStag, that would be very useful for > our project. > > Colouring only requires the non-zero structure of the matrix. So actually colouring is supported. The only thing missing for you is that the matrix returned from DMCreateMatrix for DMSTAG does not have a defined non-zero structure. Once that is set / defined, colouring will just work. 
Qi > > > > On Dec 13, 2021, at 11:52 AM, Dave May wrote: > > > > On Mon, 13 Dec 2021 at 19:29, Matthew Knepley wrote: > >> On Mon, Dec 13, 2021 at 1:16 PM Dave May wrote: >> >>> >>> >>> On Sat 11. Dec 2021 at 22:28, Matthew Knepley wrote: >>> >>>> On Sat, Dec 11, 2021 at 1:58 PM Tang, Qi wrote: >>>> >>>>> Hi, >>>>> Does anyone have comment on finite difference coloring with DMStag? We >>>>> are using DMStag and TS to evolve some nonlinear equations implicitly. It >>>>> would be helpful to have the coloring Jacobian option with that. >>>>> >>>> >>>> Since DMStag produces the Jacobian connectivity, >>>> >>> >>> This is incorrect. >>> The DMCreateMatrix implementation for DMSTAG only sets the number of >>> nonzeros (very inaccurately). It does not insert any zero values and thus >>> the nonzero structure is actually not defined. >>> That is why coloring doesn?t work. >>> >> >> Ah, thanks Dave. >> >> Okay, we should fix that.It is perfectly possible to compute the nonzero >> pattern from the DMStag information. >> > > Agreed. The API for DMSTAG is complete enough to enable one to > loop over the cells, and for all quantities defined on the cell (centre, > face, vertex), > insert values into the appropriate slot in the matrix. > Combined with MATPREALLOCATOR, I believe a compact and readable > code should be possible to write for the preallocation (cf DMDA). > > I think the only caveat with the approach of using all quantities defined > on the cell is > It may slightly over allocate depending on how the user wishes to impose > the boundary condition, > or slightly over allocate for says Stokes where there is no > pressure-pressure coupling term. > > Thanks, > Dave > > >> Paging Patrick :) >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> Dave >>> >>> >>> you can use -snes_fd_color_use_mat. It has many options. Here is an >>>> example of us using that: >>>> >>>> >>>> https://gitlab.com/petsc/petsc/-/blob/main/src/snes/tutorials/ex19.c#L898 >>>> >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Thanks, >>>>> Qi >>>>> >>>>> >>>>> On Oct 15, 2021, at 3:07 PM, Jorti, Zakariae via petsc-users < >>>>> petsc-users at mcs.anl.gov> wrote: >>>>> >>>>> Hello, >>>>> >>>>> Does the Jacobian approximation using coloring and finite >>>>> differencing of the function evaluation work in DMStag? >>>>> Thank you. >>>>> Best regards, >>>>> >>>>> Zakariae >>>>> >>>>> >>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 13 13:13:16 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 13 Dec 2021 14:13:16 -0500 Subject: [petsc-users] Finite difference approximation of Jacobian In-Reply-To: References: <231abd15aab544f9850826cb437366f7@lanl.gov> Message-ID: On Mon, Dec 13, 2021 at 1:52 PM Dave May wrote: > On Mon, 13 Dec 2021 at 19:29, Matthew Knepley wrote: > >> On Mon, Dec 13, 2021 at 1:16 PM Dave May wrote: >> >>> >>> >>> On Sat 11. 
Dec 2021 at 22:28, Matthew Knepley wrote: >>> >>>> On Sat, Dec 11, 2021 at 1:58 PM Tang, Qi wrote: >>>> >>>>> Hi, >>>>> Does anyone have comment on finite difference coloring with DMStag? We >>>>> are using DMStag and TS to evolve some nonlinear equations implicitly. It >>>>> would be helpful to have the coloring Jacobian option with that. >>>>> >>>> >>>> Since DMStag produces the Jacobian connectivity, >>>> >>> >>> This is incorrect. >>> The DMCreateMatrix implementation for DMSTAG only sets the number of >>> nonzeros (very inaccurately). It does not insert any zero values and thus >>> the nonzero structure is actually not defined. >>> That is why coloring doesn?t work. >>> >> >> Ah, thanks Dave. >> >> Okay, we should fix that.It is perfectly possible to compute the nonzero >> pattern from the DMStag information. >> > > Agreed. The API for DMSTAG is complete enough to enable one to > loop over the cells, and for all quantities defined on the cell (centre, > face, vertex), > insert values into the appropriate slot in the matrix. > Combined with MATPREALLOCATOR, I believe a compact and readable > code should be possible to write for the preallocation (cf DMDA). > > I think the only caveat with the approach of using all quantities defined > on the cell is > It may slightly over allocate depending on how the user wishes to impose > the boundary condition, > or slightly over allocate for says Stokes where there is no > pressure-pressure coupling term. > Yes, and would not handle higher order stencils.I think the overallocating is livable for the first imeplementation. Thanks, Matt > Thanks, > Dave > > >> Paging Patrick :) >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> Dave >>> >>> >>> you can use -snes_fd_color_use_mat. It has many options. Here is an >>>> example of us using that: >>>> >>>> >>>> https://gitlab.com/petsc/petsc/-/blob/main/src/snes/tutorials/ex19.c#L898 >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Thanks, >>>>> Qi >>>>> >>>>> >>>>> On Oct 15, 2021, at 3:07 PM, Jorti, Zakariae via petsc-users < >>>>> petsc-users at mcs.anl.gov> wrote: >>>>> >>>>> Hello, >>>>> >>>>> Does the Jacobian approximation using coloring and finite >>>>> differencing of the function evaluation work in DMStag? >>>>> Thank you. >>>>> Best regards, >>>>> >>>>> Zakariae >>>>> >>>>> >>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
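For what it is worth, the stencil width that any such pattern (including a higher order one) has to fit inside is fixed when the DMStag itself is created, so that is where a wider coupling would be requested. A hypothetical 2-D creation call, with every numeric argument chosen purely for illustration, might read:

  DM dm;
  ierr = DMStagCreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                        64, 64, PETSC_DECIDE, PETSC_DECIDE,
                        0, 1, 1,               /* dof per vertex, face, element */
                        DMSTAG_STENCIL_BOX, 2, /* box stencil of width 2        */
                        NULL, NULL, &dm);CHKERRQ(ierr);
  ierr = DMSetFromOptions(dm);CHKERRQ(ierr);
  ierr = DMSetUp(dm);CHKERRQ(ierr);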
URL: From dave.mayhem23 at gmail.com Mon Dec 13 13:17:32 2021 From: dave.mayhem23 at gmail.com (Dave May) Date: Mon, 13 Dec 2021 20:17:32 +0100 Subject: [petsc-users] Finite difference approximation of Jacobian In-Reply-To: References: <231abd15aab544f9850826cb437366f7@lanl.gov> Message-ID: On Mon, 13 Dec 2021 at 20:13, Matthew Knepley wrote: > On Mon, Dec 13, 2021 at 1:52 PM Dave May wrote: > >> On Mon, 13 Dec 2021 at 19:29, Matthew Knepley wrote: >> >>> On Mon, Dec 13, 2021 at 1:16 PM Dave May >>> wrote: >>> >>>> >>>> >>>> On Sat 11. Dec 2021 at 22:28, Matthew Knepley >>>> wrote: >>>> >>>>> On Sat, Dec 11, 2021 at 1:58 PM Tang, Qi wrote: >>>>> >>>>>> Hi, >>>>>> Does anyone have comment on finite difference coloring with DMStag? >>>>>> We are using DMStag and TS to evolve some nonlinear equations implicitly. >>>>>> It would be helpful to have the coloring Jacobian option with that. >>>>>> >>>>> >>>>> Since DMStag produces the Jacobian connectivity, >>>>> >>>> >>>> This is incorrect. >>>> The DMCreateMatrix implementation for DMSTAG only sets the number of >>>> nonzeros (very inaccurately). It does not insert any zero values and thus >>>> the nonzero structure is actually not defined. >>>> That is why coloring doesn?t work. >>>> >>> >>> Ah, thanks Dave. >>> >>> Okay, we should fix that.It is perfectly possible to compute the nonzero >>> pattern from the DMStag information. >>> >> >> Agreed. The API for DMSTAG is complete enough to enable one to >> loop over the cells, and for all quantities defined on the cell (centre, >> face, vertex), >> insert values into the appropriate slot in the matrix. >> Combined with MATPREALLOCATOR, I believe a compact and readable >> code should be possible to write for the preallocation (cf DMDA). >> >> I think the only caveat with the approach of using all quantities defined >> on the cell is >> It may slightly over allocate depending on how the user wishes to impose >> the boundary condition, >> or slightly over allocate for says Stokes where there is no >> pressure-pressure coupling term. >> > > Yes, and would not handle higher order stencils.I think the > overallocating is livable for the first imeplementation. > > Sure, but neither does DMDA. The user always has to know what they are doing and set the stencil width accordingly. I actually had this point listed in my initial email (and the stencil growth issue when using FD for nonlinear problems), however I deleted it as all the same issue exist in DMDA and no one complains (at least not loudly) :D > Thanks, > > Matt > > >> Thanks, >> Dave >> >> >>> Paging Patrick :) >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thanks, >>>> Dave >>>> >>>> >>>> you can use -snes_fd_color_use_mat. It has many options. Here is an >>>>> example of us using that: >>>>> >>>>> >>>>> https://gitlab.com/petsc/petsc/-/blob/main/src/snes/tutorials/ex19.c#L898 >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thanks, >>>>>> Qi >>>>>> >>>>>> >>>>>> On Oct 15, 2021, at 3:07 PM, Jorti, Zakariae via petsc-users < >>>>>> petsc-users at mcs.anl.gov> wrote: >>>>>> >>>>>> Hello, >>>>>> >>>>>> Does the Jacobian approximation using coloring and finite >>>>>> differencing of the function evaluation work in DMStag? >>>>>> Thank you. >>>>>> Best regards, >>>>>> >>>>>> Zakariae >>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. 
>>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tangqi at msu.edu Mon Dec 13 13:26:18 2021 From: tangqi at msu.edu (Tang, Qi) Date: Mon, 13 Dec 2021 19:26:18 +0000 Subject: [petsc-users] Finite difference approximation of Jacobian In-Reply-To: References: <231abd15aab544f9850826cb437366f7@lanl.gov> Message-ID: <6B7C0CDC-DD47-43DC-BE63-8B77C2DE6F76@msu.edu> ?overallocating? is exactly what we can live on at the moment, as long as it is easier to work with coloring on dmstag. So it sounds like if we can provide a preallocated matrix with a proper stencil through DMCreateMatrix, then it should work with dmstag and coloring already. Most APIs are already there. Qi On Dec 13, 2021, at 12:13 PM, Matthew Knepley > wrote: Yes, and would not handle higher order stencils.I think the overallocating is livable for the first imeplementation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Dec 13 13:51:05 2021 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 13 Dec 2021 14:51:05 -0500 Subject: [petsc-users] Tips on integrating MPI ksp petsc into my application? In-Reply-To: References: <2030978811.184065.1638849869029.ref@mail.yahoo.com> <2030978811.184065.1638849869029@mail.yahoo.com> <2025431869.573432.1638932751081@mail.yahoo.com> <1421478133.591975.1638936268044@mail.yahoo.com> Message-ID: Sorry, I didn't notice these emails for a long time. PETSc does provide a "simple" mechanism to redistribute your matrix that does not require you to explicitly do the redistribution. You must create a MPIAIJ matrix over all the MPI ranks, but simply provide all the rows on the first rank and zero rows on the rest of the ranks (you can use MatCreateMPIAIJWithArrays ) then use -ksp_type preonly -pc_type redistribute You control the parallel KSP and preconditioner by using for example -redistribute_ksp_type gmres -redistribute_pc_type bjacobi Barry The PC type of redistribute manages distributing the matrix and vectors across all the ranks for you. As the PETSc documentation notes this is not a recommended use of PETSc for large numbers of ranks, due to Amdahl's law; for truly good parallel performance you must build the matrix in parallel. > On Dec 8, 2021, at 12:32 AM, Junchao Zhang wrote: > > > > On Tue, Dec 7, 2021 at 10:04 PM Faraz Hussain > wrote: > The matrix in memory is in IJV (Spooles ) or CSR3 ( Pardiso ). The application was written to use a variety of different direct solvers but Spooles and Pardiso are what I am most familiar with. > I assume the CSR3 has the a, i, j arrays used in petsc's MATAIJ. > You can create a MPIAIJ matrix A with MatCreateMPIAIJWithArrays , with only rank 0 providing data (i.e., other ranks just have m=n=0, i=j=a=NULL) > Then you call MatGetSubMatrix (A,isrow,iscol,reuse,&B) to redistribute the imbalanced A to a balanced matrix B. 
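To make the redistribute option Barry describes above concrete, here is a small self-contained sketch (not from the thread): rank 0 assembles a toy 1-D Laplacian in CSR form and supplies every row, the other ranks contribute none, and PCREDISTRIBUTE spreads the actual solve. The toy matrix, N = 100, and all variable names are purely illustrative.

/* run e.g.:
     mpiexec -n 4 ./redistribute_sketch -ksp_type preonly -pc_type redistribute \
       -redistribute_ksp_type gmres -redistribute_pc_type bjacobi              */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, b;
  KSP            ksp;
  PetscMPIInt    rank;
  PetscInt       N = 100, i, nz = 0, *ia = NULL, *ja = NULL, empty[1] = {0};
  PetscScalar    *va = NULL;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  if (rank == 0) { /* the whole matrix lives on rank 0, as in the scenario above */
    ierr = PetscMalloc3(N + 1, &ia, 3 * N, &ja, 3 * N, &va);CHKERRQ(ierr);
    ia[0] = 0;
    for (i = 0; i < N; ++i) {
      if (i > 0)     {ja[nz] = i - 1; va[nz++] = -1.0;}
      ja[nz] = i; va[nz++] = 2.0;
      if (i < N - 1) {ja[nz] = i + 1; va[nz++] = -1.0;}
      ia[i + 1] = nz;
    }
    ierr = MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD, N, N, N, N, ia, ja, va, &A);CHKERRQ(ierr);
  } else {         /* every other rank provides zero rows */
    ierr = MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD, 0, 0, N, N, empty, NULL, NULL, &A);CHKERRQ(ierr);
  }

  ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);   /* -pc_type redistribute is picked up here */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  if (rank == 0) {ierr = PetscFree3(ia, ja, va);CHKERRQ(ierr);}
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

As Barry notes, building everything on one rank caps parallel performance, so this is a convenience rather than a scalable assembly path.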
> You can use PetscLayoutCreate() and friends to create a row map and a column map (as if they are B's) and use them to get ranges of rows/cols each rank wants to own, and then build the isrow, iscol with ISCreateStride() > > My approach is kind of verbose. I would let Jed and Matt comment whether there are better ones. > > > > > > On Tuesday, December 7, 2021, 10:33:24 PM EST, Junchao Zhang > wrote: > > > > > > > > On Tue, Dec 7, 2021 at 9:06 PM Faraz Hussain via petsc-users > wrote: > > Thanks, I took a look at ex10.c in ksp/tutorials . It seems to do as you wrote, "it efficiently gets the matrix from the file spread out over all the ranks.". > > > > However, in my application I only want rank 0 to read and assemble the matrix. I do not want other ranks trying to get the matrix data. The reason is the matrix is already in memory when my application is ready to call the petsc solver. > What is the data structure of your matrix in memory? > > > > > > > So if I am running with multiple ranks, I don't want all ranks assembling the matrix. This would require a total re-write of my application which is not possible . I realize this may sounds confusing. If so, I'll see if I can create an example that shows the issue. > > > > > > > > > > > > On Tuesday, December 7, 2021, 10:13:17 AM EST, Barry Smith > wrote: > > > > > > > > > > > > > > If you use MatLoad() it never has the entire matrix on a single rank at the same time; it efficiently gets the matrix from the file spread out over all the ranks. > > > >> On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users > wrote: > >> > >> I am studying the examples but it seems all ranks read the full matrix. Is there an MPI example where only rank 0 reads the matrix? > >> > >> I don't want all ranks to read my input matrix and consume a lot of memory allocating data for the arrays. > >> > >> I have worked with Intel's cluster sparse solver and their documentation states: > >> > >> " Most of the input parameters must be set on the master MPI process only, and ignored on other processes. Other MPI processes get all required data from the master MPI process using the MPI communicator, comm. " > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Mon Dec 13 13:53:55 2021 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 13 Dec 2021 14:53:55 -0500 Subject: [petsc-users] Unstructured mesh In-Reply-To: <44E3FC40-727F-4ECB-8DCA-DCBA99ED74B1@stfc.ac.uk> References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> <44E3FC40-727F-4ECB-8DCA-DCBA99ED74B1@stfc.ac.uk> Message-ID: Try adding -mat_block_size 3 On Mon, Dec 13, 2021 at 11:57 AM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > I tried to run the problem using -pc_type hypre but it errored out: > > > > ./ex56 -cells 4,4,2 -max_conv_its 1 -lx 1. 
-alpha .01 -petscspace_degree > 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type hypre -pc_hypre_type boomeramg > -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view > -log_view -ex56_dm_vec_type cuda *-ex56_dm_mat_type hypre* -options_left > > > > > > > > *[0]PETSC ERROR: --------------------- Error Message > --------------------------------------------------------------* > > [0]PETSC ERROR: Petsc has generated inconsistent data > > [0]PETSC ERROR: Blocksize of layout 1 must match that of mapping 3 (or the > latter must be 1) > > [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. > > [0]PETSC ERROR: Petsc Development GIT revision: v3.16.1-353-g887dddf386 GIT > Date: 2021-11-19 20:24:41 +0000 > > [0]PETSC ERROR: ./ex56 on a arch-linux2-c-opt named sqg2b13.bullx by > kxc07-lxm25 Mon Dec 13 16:50:02 2021 > > [0]PETSC ERROR: Configure options --with-debugging=0 > --with-blaslapack-dir=/lustre/scafellpike/local/apps/intel/intel_cs/2018.0.128/mkl > --with-cuda=1 --with-cuda-arch=70 --download-hypre=yes > --download-hypre-configure-arguments="--with-cuda=yes > --enable-gpu-profiling=yes --enable-cusparse=yes --enable-cublas=yes > --enable-curand=yes --enable-unified-memory=yes HYPRE_CUDA_SM=70" > --with-shared-libraries=1 --known-mpi-shared-libraries=1 --with-cc=mpicc > --with-cxx=mpicxx -with-fc=mpif90 > > [0]PETSC ERROR: #1 PetscLayoutSetISLocalToGlobalMapping() at > /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/vec/is/utils/pmap.c:371 > > [0]PETSC ERROR: #2 MatSetLocalToGlobalMapping() at > /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/mat/interface/matrix.c:2089 > > [0]PETSC ERROR: #3 DMCreateMatrix_Plex() at > /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/dm/impls/plex/plex.c:2460 > > [0]PETSC ERROR: #4 DMCreateMatrix() at > /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/dm/interface/dm.c:1445 > > [0]PETSC ERROR: #5 main() at ex56.c:439 > > [0]PETSC ERROR: PETSc Option Table entries: > > [0]PETSC ERROR: -alpha .01 > > [0]PETSC ERROR: -cells 4,4,2 > > [0]PETSC ERROR: -ex56_dm_mat_type hypre > > [0]PETSC ERROR: -ex56_dm_vec_type cuda > > [0]PETSC ERROR: -ex56_dm_view > > [0]PETSC ERROR: -ksp_monitor > > [0]PETSC ERROR: -ksp_rtol 1.e-8 > > [0]PETSC ERROR: -ksp_type cg > > [0]PETSC ERROR: -log_view > > [0]PETSC ERROR: -lx 1. > > [0]PETSC ERROR: -max_conv_its 1 > > [0]PETSC ERROR: -options_left > > [0]PETSC ERROR: -pc_hypre_type boomeramg > > [0]PETSC ERROR: -pc_type hypre > > [0]PETSC ERROR: -petscspace_degree 1 > > [0]PETSC ERROR: -snes_monitor > > [0]PETSC ERROR: -snes_rtol 1.e-10 > > [0]PETSC ERROR: -use_gpu_aware_mpi 0 > > [0]PETSC ERROR: -use_mat_nearnullspace true > > *[0]PETSC ERROR: ----------------End of Error Message -------send entire > error message to petsc-maint at mcs.anl.gov----------* > > -------------------------------------------------------------------------- > > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD > > with errorcode 77. > > > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > > You may or may not see output from other processes, depending on > > exactly when Open MPI kills them. 
> > -------------------------------------------------------------------------- > > > > > > *From: *Mark Adams > *Date: *Monday, 13 December 2021 at 13:58 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > > > > > On Mon, Dec 13, 2021 at 8:35 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thanks Matt. Couple of weeks back you mentioned > > ?There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. > The solver can run on the GPU, but the vector/matrix FEM assembly does not. > I am working on that now.? > > > > I am able to run other examples in ksp/tutorials on gpus. I complied ex56 > in snes/tutorials no differently. The only difference being I didn?t > specify _dm_vec_type and _dm_vec_type (as you mentioned they are not > assembled on gpus anyways plus I am working on an unstructured grid thought > _dm is not right type for this problem). I was hoping to see gpu flops > recorded for KSPSolve, which I didn?t. > > > > Okay, I will wait for Mark to comment. > > > > This (DM) example works like any other, with a prefix, as far as GPU: > -ex56_dm_vec_type cuda and -ex56_dm_mat_type cusparse, or aijkokkos/kokkos, > etc. > > Run with -options_left to verify that these are used. > > > > > > Kind regards, > > Karthik. > > > > *From: *Matthew Knepley > *Date: *Monday, 13 December 2021 at 13:17 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Mark Adams , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank you. I was able to confirm both the below options produced the same > mesh > > > > ./ex56 -cells 2,2,1 -max_conv_its 2 > > ./ex56 -cells 4,4,2 -max_conv_its 1 > > Good > > But I didn?t get how is -cells i,j,k <1,1,1> is related to the number of > MPI processes. > > It is not. The number of processes is specified independently using > 'mpiexec -n
<number of processes>' or when using the test system NP=<number of processes>
. > > (i) Say I start with -cells 1,1,1 -max_conv its 7; that would > eventually leave all refinement on level 7 running on 1 MPI process? > > (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended > to run on 4 MPI processes? > > No, those options do not influence the number of processes. > > > > I am running ex56 on gpu; I am looking at KSPSolve (or any other event) > but no gpu flops are recorded in the -log_view? > > > > I do not think you are running on the GPU then. Mark can comment, but we > usually specify GPU execution using the Vec and Mat types > > through -dm_vec_type and -dm_mat_type. > > > > Thanks, > > > > Matt > > > > For your reference I used the below flags: > > ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 > -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view > > > > Kind regards, > > Karthik. > > > > > > *From: *Mark Adams > *Date: *Sunday, 12 December 2021 at 23:00 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > > > > > On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank for your response that was helpful. I have a couple of questions: > > > > (i) How can I control the level of refinement? I tried > to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement > from 8 giving 32 cubes. > > > > I answered this question recently but ex56 clobbers ex56_dm_refine in the > convergence loop. I have an MR that prints a warning if you provide a > ex56_dm_refine. > > > > * snes/ex56 runs a convergence study and confusingly sets the options > manually, thus erasing your -ex56_dm_refine. > > > > * To refine, use -max_conv_its N <3>, this sets the number of steps of > refinement. That is, the length of the convergence study > > > > * You can adjust where it starts from with -cells i,j,k <1,1,1> > > You do want to set this if you have multiple MPI processes so that the > size of this mesh is the number of processes. That way it starts with one > cell per process and refines from there. > > > > (ii) What does -cell 2,2,1 correspond to? > > > > The initial mesh or mesh_0. The convergence test uniformly refines this > mesh. So if you want to refine this twice you could use -cells 8,8,4 > > > > How can I determine the total number of dofs? > > Unfortunately, that is not printed but you can calculate from the initial > cell grid, the order of the element and the refinement in each iteration of > the convergence tests. > > > > So that I can perform a scaling study by changing the input of the flag > -cells. > > > > > > You can and the convergence test gives you data for a strong speedup study > in one run. Each solve is put in its own "stage" of the output and you want > to look at KSPSolve lines in the log_view output. > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. 
UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karthikeyan.chockalingam at stfc.ac.uk Tue Dec 14 01:42:07 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Tue, 14 Dec 2021 07:42:07 +0000 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> <44E3FC40-727F-4ECB-8DCA-DCBA99ED74B1@stfc.ac.uk> Message-ID: <65FBFEA6-FD79-4FF2-9E1E-7E4FF81147D8@stfc.ac.uk> I tried adding the -mat_block_size 3 but I still get the same error message. Thanks, Karthik. From: Mark Adams Date: Monday, 13 December 2021 at 19:54 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" Cc: Matthew Knepley , "petsc-users at mcs.anl.gov" Subject: Re: [petsc-users] Unstructured mesh Try adding -mat_block_size 3 On Mon, Dec 13, 2021 at 11:57 AM Karthikeyan Chockalingam - STFC UKRI > wrote: I tried to run the problem using -pc_type hypre but it errored out: ./ex56 -cells 4,4,2 -max_conv_its 1 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type hypre -pc_hypre_type boomeramg -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view -ex56_dm_vec_type cuda -ex56_dm_mat_type hypre -options_left [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Petsc has generated inconsistent data [0]PETSC ERROR: Blocksize of layout 1 must match that of mapping 3 (or the latter must be 1) [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.16.1-353-g887dddf386 GIT Date: 2021-11-19 20:24:41 +0000 [0]PETSC ERROR: ./ex56 on a arch-linux2-c-opt named sqg2b13.bullx by kxc07-lxm25 Mon Dec 13 16:50:02 2021 [0]PETSC ERROR: Configure options --with-debugging=0 --with-blaslapack-dir=/lustre/scafellpike/local/apps/intel/intel_cs/2018.0.128/mkl --with-cuda=1 --with-cuda-arch=70 --download-hypre=yes --download-hypre-configure-arguments="--with-cuda=yes --enable-gpu-profiling=yes --enable-cusparse=yes --enable-cublas=yes --enable-curand=yes --enable-unified-memory=yes HYPRE_CUDA_SM=70" --with-shared-libraries=1 --known-mpi-shared-libraries=1 --with-cc=mpicc --with-cxx=mpicxx -with-fc=mpif90 [0]PETSC ERROR: #1 PetscLayoutSetISLocalToGlobalMapping() at /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/vec/is/utils/pmap.c:371 [0]PETSC ERROR: #2 MatSetLocalToGlobalMapping() at /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/mat/interface/matrix.c:2089 [0]PETSC ERROR: #3 DMCreateMatrix_Plex() at /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/dm/impls/plex/plex.c:2460 [0]PETSC ERROR: #4 DMCreateMatrix() at /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/dm/interface/dm.c:1445 [0]PETSC ERROR: #5 main() at ex56.c:439 [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -alpha .01 [0]PETSC ERROR: -cells 4,4,2 [0]PETSC ERROR: -ex56_dm_mat_type hypre [0]PETSC ERROR: -ex56_dm_vec_type cuda [0]PETSC ERROR: -ex56_dm_view [0]PETSC ERROR: -ksp_monitor [0]PETSC ERROR: -ksp_rtol 1.e-8 [0]PETSC ERROR: -ksp_type cg [0]PETSC ERROR: -log_view [0]PETSC ERROR: -lx 1. [0]PETSC ERROR: -max_conv_its 1 [0]PETSC ERROR: -options_left [0]PETSC ERROR: -pc_hypre_type boomeramg [0]PETSC ERROR: -pc_type hypre [0]PETSC ERROR: -petscspace_degree 1 [0]PETSC ERROR: -snes_monitor [0]PETSC ERROR: -snes_rtol 1.e-10 [0]PETSC ERROR: -use_gpu_aware_mpi 0 [0]PETSC ERROR: -use_mat_nearnullspace true [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 77. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- From: Mark Adams > Date: Monday, 13 December 2021 at 13:58 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Matthew Knepley >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Mon, Dec 13, 2021 at 8:35 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thanks Matt. Couple of weeks back you mentioned ?There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The solver can run on the GPU, but the vector/matrix FEM assembly does not. I am working on that now.? I am able to run other examples in ksp/tutorials on gpus. I complied ex56 in snes/tutorials no differently. The only difference being I didn?t specify _dm_vec_type and _dm_vec_type (as you mentioned they are not assembled on gpus anyways plus I am working on an unstructured grid thought _dm is not right type for this problem). I was hoping to see gpu flops recorded for KSPSolve, which I didn?t. Okay, I will wait for Mark to comment. 
This (DM) example works like any other, with a prefix, as far as GPU: -ex56_dm_vec_type cuda and -ex56_dm_mat_type cusparse, or aijkokkos/kokkos, etc. Run with -options_left to verify that these are used. Kind regards, Karthik. From: Matthew Knepley > Date: Monday, 13 December 2021 at 13:17 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Mark Adams >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank you. I was able to confirm both the below options produced the same mesh ./ex56 -cells 2,2,1 -max_conv_its 2 ./ex56 -cells 4,4,2 -max_conv_its 1 Good But I didn?t get how is -cells i,j,k <1,1,1> is related to the number of MPI processes. It is not. The number of processes is specified independently using 'mpiexec -n
<number of processes>' or when using the test system NP=<number of processes>
. (i) Say I start with -cells 1,1,1 -max_conv its 7; that would eventually leave all refinement on level 7 running on 1 MPI process? (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended to run on 4 MPI processes? No, those options do not influence the number of processes. I am running ex56 on gpu; I am looking at KSPSolve (or any other event) but no gpu flops are recorded in the -log_view? I do not think you are running on the GPU then. Mark can comment, but we usually specify GPU execution using the Vec and Mat types through -dm_vec_type and -dm_mat_type. Thanks, Matt For your reference I used the below flags: ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view Kind regards, Karthik. From: Mark Adams > Date: Sunday, 12 December 2021 at 23:00 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Matthew Knepley >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank for your response that was helpful. I have a couple of questions: (i) How can I control the level of refinement? I tried to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement from 8 giving 32 cubes. I answered this question recently but ex56 clobbers ex56_dm_refine in the convergence loop. I have an MR that prints a warning if you provide a ex56_dm_refine. * snes/ex56 runs a convergence study and confusingly sets the options manually, thus erasing your -ex56_dm_refine. * To refine, use -max_conv_its N <3>, this sets the number of steps of refinement. That is, the length of the convergence study * You can adjust where it starts from with -cells i,j,k <1,1,1> You do want to set this if you have multiple MPI processes so that the size of this mesh is the number of processes. That way it starts with one cell per process and refines from there. (ii) What does -cell 2,2,1 correspond to? The initial mesh or mesh_0. The convergence test uniformly refines this mesh. So if you want to refine this twice you could use -cells 8,8,4 How can I determine the total number of dofs? Unfortunately, that is not printed but you can calculate from the initial cell grid, the order of the element and the refinement in each iteration of the convergence tests. So that I can perform a scaling study by changing the input of the flag -cells. You can and the convergence test gives you data for a strong speedup study in one run. Each solve is put in its own "stage" of the output and you want to look at KSPSolve lines in the log_view output. This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tangqi at msu.edu Tue Dec 14 11:34:12 2021 From: tangqi at msu.edu (Tang, Qi) Date: Tue, 14 Dec 2021 17:34:12 +0000 Subject: [petsc-users] Finite difference approximation of Jacobian In-Reply-To: <6B7C0CDC-DD47-43DC-BE63-8B77C2DE6F76@msu.edu> References: <231abd15aab544f9850826cb437366f7@lanl.gov> <6B7C0CDC-DD47-43DC-BE63-8B77C2DE6F76@msu.edu> Message-ID: <7E1ECF14-D585-4C6D-B51D-CA0885FC5B23@msu.edu> Dear all, Will someone be able to help with this coloring request on dmstag in the next few weeks? If not, we will try to fix that on our own. We really need this capability for both debugging as well as performance comparison vs analytical Jacobian/preconditioning we implemented. Thanks. Qi LANL On Dec 13, 2021, at 12:26 PM, Tang, Qi > wrote: ?overallocating? is exactly what we can live on at the moment, as long as it is easier to work with coloring on dmstag. So it sounds like if we can provide a preallocated matrix with a proper stencil through DMCreateMatrix, then it should work with dmstag and coloring already. Most APIs are already there. Qi On Dec 13, 2021, at 12:13 PM, Matthew Knepley > wrote: Yes, and would not handle higher order stencils.I think the overallocating is livable for the first imeplementation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Tue Dec 14 12:57:56 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 14 Dec 2021 13:57:56 -0500 Subject: [petsc-users] Unstructured mesh In-Reply-To: <65FBFEA6-FD79-4FF2-9E1E-7E4FF81147D8@stfc.ac.uk> References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> <44E3FC40-727F-4ECB-8DCA-DCBA99ED74B1@stfc.ac.uk> <65FBFEA6-FD79-4FF2-9E1E-7E4FF81147D8@stfc.ac.uk> Message-ID: I was able to get hypre to work on ex56 (snes) with -ex56_dm_mat_type aijcusparse -ex56_dm_vec_type cuda (not hypre matrix). This should copy the cusparse matrix to a hypre matrix before the solve. So not optimal but the actual solve should be the same. Hypre is not yet supported on this example so you might not want to spend too much time on it. In particular, we do not have an example that uses DMPlex and hypre on a GPU. src/ksp/ksp/tutotials/ex56 is old and does not use DMPLex, but it is missing a call like ksp ex4 for hypre like this: #if defined(PETSC_HAVE_HYPRE) ierr = MatHYPRESetPreallocation(A,5,NULL,5,NULL);CHKERRQ(ierr); #endif If you add that it might work, but again this is all pretty fragile at this point. Mark Mark On Tue, Dec 14, 2021 at 2:42 AM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > I tried adding the -mat_block_size 3 but I still get the same error > message. > > > > Thanks, > > Karthik. 
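One aside on the guarded MatHYPRESetPreallocation() call quoted above: each MatXXXSetPreallocation() routine is a no-op unless the matrix already has the matching type, so a common pattern (sketched here with A and its sizes coming from the surrounding code, and 5 nonzeros per row as a placeholder) is to list the variants back to back after MatSetFromOptions():

  ierr = MatSetFromOptions(A);CHKERRQ(ierr);   /* honours -mat_type aij, hypre, ... */
  ierr = MatSeqAIJSetPreallocation(A, 5, NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A, 5, NULL, 5, NULL);CHKERRQ(ierr);
#if defined(PETSC_HAVE_HYPRE)
  ierr = MatHYPRESetPreallocation(A, 5, NULL, 5, NULL);CHKERRQ(ierr); /* only acts on MATHYPRE */
#endif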
> > > > *From: *Mark Adams > *Date: *Monday, 13 December 2021 at 19:54 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > Try adding -mat_block_size 3 > > > > > > On Mon, Dec 13, 2021 at 11:57 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > I tried to run the problem using -pc_type hypre but it errored out: > > > > ./ex56 -cells 4,4,2 -max_conv_its 1 -lx 1. -alpha .01 -petscspace_degree > 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type hypre -pc_hypre_type boomeramg > -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view > -log_view -ex56_dm_vec_type cuda *-ex56_dm_mat_type hypre* -options_left > > > > > > > > *[0]PETSC ERROR: --------------------- Error Message > --------------------------------------------------------------* > > [0]PETSC ERROR: Petsc has generated inconsistent data > > [0]PETSC ERROR: Blocksize of layout 1 must match that of mapping 3 (or the > latter must be 1) > > [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. > > [0]PETSC ERROR: Petsc Development GIT revision: v3.16.1-353-g887dddf386 GIT > Date: 2021-11-19 20:24:41 +0000 > > [0]PETSC ERROR: ./ex56 on a arch-linux2-c-opt named sqg2b13.bullx by > kxc07-lxm25 Mon Dec 13 16:50:02 2021 > > [0]PETSC ERROR: Configure options --with-debugging=0 > --with-blaslapack-dir=/lustre/scafellpike/local/apps/intel/intel_cs/2018.0.128/mkl > --with-cuda=1 --with-cuda-arch=70 --download-hypre=yes > --download-hypre-configure-arguments="--with-cuda=yes > --enable-gpu-profiling=yes --enable-cusparse=yes --enable-cublas=yes > --enable-curand=yes --enable-unified-memory=yes HYPRE_CUDA_SM=70" > --with-shared-libraries=1 --known-mpi-shared-libraries=1 --with-cc=mpicc > --with-cxx=mpicxx -with-fc=mpif90 > > [0]PETSC ERROR: #1 PetscLayoutSetISLocalToGlobalMapping() at > /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/vec/is/utils/pmap.c:371 > > [0]PETSC ERROR: #2 MatSetLocalToGlobalMapping() at > /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/mat/interface/matrix.c:2089 > > [0]PETSC ERROR: #3 DMCreateMatrix_Plex() at > /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/dm/impls/plex/plex.c:2460 > > [0]PETSC ERROR: #4 DMCreateMatrix() at > /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/dm/interface/dm.c:1445 > > [0]PETSC ERROR: #5 main() at ex56.c:439 > > [0]PETSC ERROR: PETSc Option Table entries: > > [0]PETSC ERROR: -alpha .01 > > [0]PETSC ERROR: -cells 4,4,2 > > [0]PETSC ERROR: -ex56_dm_mat_type hypre > > [0]PETSC ERROR: -ex56_dm_vec_type cuda > > [0]PETSC ERROR: -ex56_dm_view > > [0]PETSC ERROR: -ksp_monitor > > [0]PETSC ERROR: -ksp_rtol 1.e-8 > > [0]PETSC ERROR: -ksp_type cg > > [0]PETSC ERROR: -log_view > > [0]PETSC ERROR: -lx 1. 
> > [0]PETSC ERROR: -max_conv_its 1 > > [0]PETSC ERROR: -options_left > > [0]PETSC ERROR: -pc_hypre_type boomeramg > > [0]PETSC ERROR: -pc_type hypre > > [0]PETSC ERROR: -petscspace_degree 1 > > [0]PETSC ERROR: -snes_monitor > > [0]PETSC ERROR: -snes_rtol 1.e-10 > > [0]PETSC ERROR: -use_gpu_aware_mpi 0 > > [0]PETSC ERROR: -use_mat_nearnullspace true > > *[0]PETSC ERROR: ----------------End of Error Message -------send entire > error message to petsc-maint at mcs.anl.gov----------* > > -------------------------------------------------------------------------- > > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD > > with errorcode 77. > > > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > > You may or may not see output from other processes, depending on > > exactly when Open MPI kills them. > > -------------------------------------------------------------------------- > > > > > > *From: *Mark Adams > *Date: *Monday, 13 December 2021 at 13:58 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > > > > > On Mon, Dec 13, 2021 at 8:35 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thanks Matt. Couple of weeks back you mentioned > > ?There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. > The solver can run on the GPU, but the vector/matrix FEM assembly does not. > I am working on that now.? > > > > I am able to run other examples in ksp/tutorials on gpus. I complied ex56 > in snes/tutorials no differently. The only difference being I didn?t > specify _dm_vec_type and _dm_vec_type (as you mentioned they are not > assembled on gpus anyways plus I am working on an unstructured grid thought > _dm is not right type for this problem). I was hoping to see gpu flops > recorded for KSPSolve, which I didn?t. > > > > Okay, I will wait for Mark to comment. > > > > This (DM) example works like any other, with a prefix, as far as GPU: > -ex56_dm_vec_type cuda and -ex56_dm_mat_type cusparse, or aijkokkos/kokkos, > etc. > > Run with -options_left to verify that these are used. > > > > > > Kind regards, > > Karthik. > > > > *From: *Matthew Knepley > *Date: *Monday, 13 December 2021 at 13:17 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Mark Adams , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank you. I was able to confirm both the below options produced the same > mesh > > > > ./ex56 -cells 2,2,1 -max_conv_its 2 > > ./ex56 -cells 4,4,2 -max_conv_its 1 > > Good > > But I didn?t get how is -cells i,j,k <1,1,1> is related to the number of > MPI processes. > > It is not. The number of processes is specified independently using > 'mpiexec -n
<number of processes>' or when using the test system NP=<number of processes>
. > > (i) Say I start with -cells 1,1,1 -max_conv its 7; that would > eventually leave all refinement on level 7 running on 1 MPI process? > > (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended > to run on 4 MPI processes? > > No, those options do not influence the number of processes. > > > > I am running ex56 on gpu; I am looking at KSPSolve (or any other event) > but no gpu flops are recorded in the -log_view? > > > > I do not think you are running on the GPU then. Mark can comment, but we > usually specify GPU execution using the Vec and Mat types > > through -dm_vec_type and -dm_mat_type. > > > > Thanks, > > > > Matt > > > > For your reference I used the below flags: > > ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 > -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view > > > > Kind regards, > > Karthik. > > > > > > *From: *Mark Adams > *Date: *Sunday, 12 December 2021 at 23:00 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > > > > > On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank for your response that was helpful. I have a couple of questions: > > > > (i) How can I control the level of refinement? I tried > to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement > from 8 giving 32 cubes. > > > > I answered this question recently but ex56 clobbers ex56_dm_refine in the > convergence loop. I have an MR that prints a warning if you provide a > ex56_dm_refine. > > > > * snes/ex56 runs a convergence study and confusingly sets the options > manually, thus erasing your -ex56_dm_refine. > > > > * To refine, use -max_conv_its N <3>, this sets the number of steps of > refinement. That is, the length of the convergence study > > > > * You can adjust where it starts from with -cells i,j,k <1,1,1> > > You do want to set this if you have multiple MPI processes so that the > size of this mesh is the number of processes. That way it starts with one > cell per process and refines from there. > > > > (ii) What does -cell 2,2,1 correspond to? > > > > The initial mesh or mesh_0. The convergence test uniformly refines this > mesh. So if you want to refine this twice you could use -cells 8,8,4 > > > > How can I determine the total number of dofs? > > Unfortunately, that is not printed but you can calculate from the initial > cell grid, the order of the element and the refinement in each iteration of > the convergence tests. > > > > So that I can perform a scaling study by changing the input of the flag > -cells. > > > > > > You can and the convergence test gives you data for a strong speedup study > in one run. Each solve is put in its own "stage" of the output and you want > to look at KSPSolve lines in the log_view output. > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. 
UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rohany at alumni.cmu.edu Tue Dec 14 12:46:42 2021 From: rohany at alumni.cmu.edu (Rohan Yadav) Date: Tue, 14 Dec 2021 13:46:42 -0500 Subject: [petsc-users] Help initializing matrix to a constant Message-ID: Hi, I'm having trouble setting all entries of a matrix to a constant value, similar to the `VecSet` method on vectors. I have a dense matrix that I want to initialize all entries to 1. The only related method I see on the `Mat` interface is `MatZeroEntries`, which sets all entries to 0. The obvious first attempt is to use the `MatSetValue` function to set all entries to the constant. ``` Mat C; MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, k, j, NULL, &C); for (int kk = 0; kk < k; kk++) { for (int jj = 0; jj < j; jj++) { MatSetValue(C, kk, jj, 1, INSERT_VALUES); } } MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY); MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY); ``` However, when run with a relatively large matrix C (5GB) and a rank-per-core on my 40-core machine this code OOMs and crashes. It does not OOM with only 1 and 10 rank, leading me to believe that this API call is somehow causing the entire matrix to be replicated on each rank. Despite looking through the documentation, I could not find another API call that would allow me to set all the values in the matrix to a constant. What should I do here? Thanks, Rohan -------------- next part -------------- An HTML attachment was scrubbed... URL: From aduarteg at utexas.edu Tue Dec 14 14:18:56 2021 From: aduarteg at utexas.edu (Alfredo J Duarte Gomez) Date: Tue, 14 Dec 2021 14:18:56 -0600 Subject: [petsc-users] PETSC TS Extrapolation Message-ID: Hello PETSC team, I have some questions about the extrapolation routines used in the TS solvers. I wish to use an extrapolation as the initial guess for the solution at the next time step, which should have an order equivalent to the order of my TS solver. So far, I have been using the TS THETA with the extrapolate flag, but it is also my understanding that the TS BDF extrapolates by default. What values are used in these extrapolation routines? Is it the derivative plus k solutions (where k is the order of the extrapolation) or is it k+1 previous solutions? I am also unsure about what happens at the boundaries when using DAEs. Are the values at the boundaries extrapolated as well? Or is there no extrapolation at the boundary points? Thank you, -Alfredo -- Alfredo Duarte Graduate Research Assistant The University of Texas at Austin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mfadams at lbl.gov Tue Dec 14 14:21:52 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 14 Dec 2021 15:21:52 -0500 Subject: [petsc-users] Help initializing matrix to a constant In-Reply-To: References: Message-ID: This should work on one process if you have enough memory in parallel you are having every processor set the whole matrix. If this code runs it should fail in MatAssemblyEnd because you are inserting into the same place (ADD_VALUES would give you a matrix with np in each entry). See this for the function and examples of doing what you want (just do your local rows): https://petsc.org/release/docs/manualpages/Mat/MatGetOwnershipRange.html#MatGetOwnershipRange Mark On Tue, Dec 14, 2021 at 2:05 PM Rohan Yadav wrote: > Hi, > > I'm having trouble setting all entries of a matrix to a constant value, > similar to the `VecSet` method on vectors. I have a dense matrix that I > want to initialize all entries to 1. The only related method I see on the > `Mat` interface is `MatZeroEntries`, which sets all entries to 0. The > obvious first attempt is to use the `MatSetValue` function to set all > entries to the constant. > > ``` > Mat C; > > MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, k, j, > NULL, &C); > > for (int kk = 0; kk < k; kk++) { > > for (int jj = 0; jj < j; jj++) { > > MatSetValue(C, kk, jj, 1, INSERT_VALUES); > > } > > } > > MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY); > > MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY); > ``` > > However, when run with a relatively large matrix C (5GB) and a > rank-per-core on my 40-core machine this code OOMs and crashes. It does not > OOM with only 1 and 10 rank, leading me to believe that this API call is > somehow causing the entire matrix to be replicated on each rank. > > Despite looking through the documentation, I could not find another API > call that would allow me to set all the values in the matrix to a constant. > What should I do here? > > Thanks, > > Rohan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Tue Dec 14 14:27:07 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 14 Dec 2021 14:27:07 -0600 Subject: [petsc-users] Help initializing matrix to a constant In-Reply-To: References: Message-ID: >From https://petsc.org/release/src/ksp/ksp/tutorials/ex77.c.html 114: MatDenseGetArrayWrite (B,&x);115: for (i=0; i(B,&x); --Junchao Zhang On Tue, Dec 14, 2021 at 1:05 PM Rohan Yadav wrote: > Hi, > > I'm having trouble setting all entries of a matrix to a constant value, > similar to the `VecSet` method on vectors. I have a dense matrix that I > want to initialize all entries to 1. The only related method I see on the > `Mat` interface is `MatZeroEntries`, which sets all entries to 0. The > obvious first attempt is to use the `MatSetValue` function to set all > entries to the constant. > > ``` > Mat C; > > MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, k, j, > NULL, &C); > > for (int kk = 0; kk < k; kk++) { > > for (int jj = 0; jj < j; jj++) { > > MatSetValue(C, kk, jj, 1, INSERT_VALUES); > > } > > } > > MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY); > > MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY); > ``` > > However, when run with a relatively large matrix C (5GB) and a > rank-per-core on my 40-core machine this code OOMs and crashes. It does not > OOM with only 1 and 10 rank, leading me to believe that this API call is > somehow causing the entire matrix to be replicated on each rank. 
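The ex77 excerpt quoted above lost its loop body when the HTML message was flattened to text, so here is a self-contained sketch in the same spirit rather than the verbatim ex77 source. Each rank touches only its own rows, which is what keeps the memory per rank bounded; k and j are the global row and column counts from the original post, and error handling follows the usual ierr/CHKERRQ pattern.

  Mat         C;
  PetscInt    m, lda, row, col;
  PetscScalar *a;

  ierr = MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, k, j, NULL, &C);CHKERRQ(ierr);
  ierr = MatGetLocalSize(C, &m, NULL);CHKERRQ(ierr); /* m = rows owned by this rank           */
  ierr = MatDenseGetLDA(C, &lda);CHKERRQ(ierr);      /* leading dimension of the local array  */
  ierr = MatDenseGetArrayWrite(C, &a);CHKERRQ(ierr); /* local block only: m rows by j columns */
  for (col = 0; col < j; ++col) {
    for (row = 0; row < m; ++row) {
      a[col * lda + row] = 1.0;                      /* column-major local storage            */
    }
  }
  ierr = MatDenseRestoreArrayWrite(C, &a);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

The MatSetValue loop from the first message would also work if the row loop were limited to the range returned by MatGetOwnershipRange(C, &rstart, &rend), which is the other route suggested above.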
> > Despite looking through the documentation, I could not find another API > call that would allow me to set all the values in the matrix to a constant. > What should I do here? > > Thanks, > > Rohan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rohany at alumni.cmu.edu Tue Dec 14 14:28:58 2021 From: rohany at alumni.cmu.edu (Rohan Yadav) Date: Tue, 14 Dec 2021 15:28:58 -0500 Subject: [petsc-users] Help initializing matrix to a constant In-Reply-To: References: Message-ID: Thanks Mark, I will try that -- that seems to be what I want. Junchao, that excerpt seems like it runs into the same problem as above right? If every rank tries to get the whole matrix then the process will surely OOM. Rohan On Tue, Dec 14, 2021 at 3:27 PM Junchao Zhang wrote: > From https://petsc.org/release/src/ksp/ksp/tutorials/ex77.c.html > > 114: MatDenseGetArrayWrite (B,&x);115: for (i=0; i(B,&x); > > --Junchao Zhang > > > On Tue, Dec 14, 2021 at 1:05 PM Rohan Yadav wrote: > >> Hi, >> >> I'm having trouble setting all entries of a matrix to a constant value, >> similar to the `VecSet` method on vectors. I have a dense matrix that I >> want to initialize all entries to 1. The only related method I see on the >> `Mat` interface is `MatZeroEntries`, which sets all entries to 0. The >> obvious first attempt is to use the `MatSetValue` function to set all >> entries to the constant. >> >> ``` >> Mat C; >> >> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, k, j, >> NULL, &C); >> >> for (int kk = 0; kk < k; kk++) { >> >> for (int jj = 0; jj < j; jj++) { >> >> MatSetValue(C, kk, jj, 1, INSERT_VALUES); >> >> } >> >> } >> >> MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY); >> >> MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY); >> ``` >> >> However, when run with a relatively large matrix C (5GB) and a >> rank-per-core on my 40-core machine this code OOMs and crashes. It does not >> OOM with only 1 and 10 rank, leading me to believe that this API call is >> somehow causing the entire matrix to be replicated on each rank. >> >> Despite looking through the documentation, I could not find another API >> call that would allow me to set all the values in the matrix to a constant. >> What should I do here? >> >> Thanks, >> >> Rohan >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Tue Dec 14 14:37:10 2021 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Tue, 14 Dec 2021 15:37:10 -0500 Subject: [petsc-users] PCs and MATIS Message-ID: Hi, We want to use an Additive Schwarz preconditioner (like PCASM) combined with overlapping meshes *and* specific boundary conditions to local (MATIS) matrices. At first sight, MATIS is only supported for BDDC and FETI-DP and is not working with PCASM. Do we have to write a new PC from scratch to combine the use of mesh overlap, MATIS and customized local matrices? ...or is there any working example we should look at to start from? :) Thanks a lot! Eric -- Eric Chamberland, ing., M. Ing Professionnel de recherche GIREF/Universit? Laval (418) 656-2131 poste 41 22 42 From mfadams at lbl.gov Tue Dec 14 14:42:30 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 14 Dec 2021 15:42:30 -0500 Subject: [petsc-users] Help initializing matrix to a constant In-Reply-To: References: Message-ID: "m" is the local number of rows, which you can get from MatGetLocalSize. 
N is your "j"; On Tue, Dec 14, 2021 at 3:29 PM Rohan Yadav wrote: > Thanks Mark, I will try that -- that seems to be what I want. > > Junchao, that excerpt seems like it runs into the same problem as above > right? If every rank tries to get the whole matrix then the process will > surely OOM. > > Rohan > > On Tue, Dec 14, 2021 at 3:27 PM Junchao Zhang > wrote: > >> From https://petsc.org/release/src/ksp/ksp/tutorials/ex77.c.html >> >> 114: MatDenseGetArrayWrite (B,&x);115: for (i=0; i(B,&x); >> >> --Junchao Zhang >> >> >> On Tue, Dec 14, 2021 at 1:05 PM Rohan Yadav >> wrote: >> >>> Hi, >>> >>> I'm having trouble setting all entries of a matrix to a constant value, >>> similar to the `VecSet` method on vectors. I have a dense matrix that I >>> want to initialize all entries to 1. The only related method I see on the >>> `Mat` interface is `MatZeroEntries`, which sets all entries to 0. The >>> obvious first attempt is to use the `MatSetValue` function to set all >>> entries to the constant. >>> >>> ``` >>> Mat C; >>> >>> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, k, j, >>> NULL, &C); >>> >>> for (int kk = 0; kk < k; kk++) { >>> >>> for (int jj = 0; jj < j; jj++) { >>> >>> MatSetValue(C, kk, jj, 1, INSERT_VALUES); >>> >>> } >>> >>> } >>> >>> MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY); >>> >>> MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY); >>> ``` >>> >>> However, when run with a relatively large matrix C (5GB) and a >>> rank-per-core on my 40-core machine this code OOMs and crashes. It does not >>> OOM with only 1 and 10 rank, leading me to believe that this API call is >>> somehow causing the entire matrix to be replicated on each rank. >>> >>> Despite looking through the documentation, I could not find another API >>> call that would allow me to set all the values in the matrix to a constant. >>> What should I do here? >>> >>> Thanks, >>> >>> Rohan >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Tue Dec 14 14:55:07 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 14 Dec 2021 14:55:07 -0600 Subject: [petsc-users] Help initializing matrix to a constant In-Reply-To: References: Message-ID: MatDenseGetArrayWrite () only returns the pointer to the local array, so it does not need any extra memory. 2130: *PetscErrorCode MatDenseGetArray_SeqDense(Mat A,PetscScalar **array)* 2131: {2132: Mat_SeqDense *mat = (Mat_SeqDense*)A->data; 2135: if (mat->matinuse) SETERRQ (PETSC_COMM_SELF ,PETSC_ERR_ORDER,"Need to call MatDenseRestoreSubMatrix () first");2136: *array = mat->v;2137: return(0);2138: } --Junchao Zhang On Tue, Dec 14, 2021 at 2:29 PM Rohan Yadav wrote: > Thanks Mark, I will try that -- that seems to be what I want. > > Junchao, that excerpt seems like it runs into the same problem as above > right? If every rank tries to get the whole matrix then the process will > surely OOM. > > Rohan > > On Tue, Dec 14, 2021 at 3:27 PM Junchao Zhang > wrote: > >> From https://petsc.org/release/src/ksp/ksp/tutorials/ex77.c.html >> >> 114: MatDenseGetArrayWrite (B,&x);115: for (i=0; i(B,&x); >> >> --Junchao Zhang >> >> >> On Tue, Dec 14, 2021 at 1:05 PM Rohan Yadav >> wrote: >> >>> Hi, >>> >>> I'm having trouble setting all entries of a matrix to a constant value, >>> similar to the `VecSet` method on vectors. I have a dense matrix that I >>> want to initialize all entries to 1. The only related method I see on the >>> `Mat` interface is `MatZeroEntries`, which sets all entries to 0. 
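(Also for illustration: Mark's earlier suggestion of inserting values only for the rows each rank owns might look like the sketch below. Again the constant 1.0 is a placeholder and k, j are the global sizes from the original snippet; calling MatSetValue per entry is simple but slower than writing the dense array directly.)

```
PetscInt       rstart,rend,row,col;
PetscErrorCode ierr;

ierr = MatGetOwnershipRange(C,&rstart,&rend);CHKERRQ(ierr);  /* rows [rstart,rend) live on this rank */
for (row = rstart; row < rend; row++) {
  for (col = 0; col < j; col++) {
    ierr = MatSetValue(C,row,col,1.0,INSERT_VALUES);CHKERRQ(ierr);
  }
}
ierr = MatAssemblyBegin(C,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(C,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
```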
The >>> obvious first attempt is to use the `MatSetValue` function to set all >>> entries to the constant. >>> >>> ``` >>> Mat C; >>> >>> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, k, j, >>> NULL, &C); >>> >>> for (int kk = 0; kk < k; kk++) { >>> >>> for (int jj = 0; jj < j; jj++) { >>> >>> MatSetValue(C, kk, jj, 1, INSERT_VALUES); >>> >>> } >>> >>> } >>> >>> MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY); >>> >>> MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY); >>> ``` >>> >>> However, when run with a relatively large matrix C (5GB) and a >>> rank-per-core on my 40-core machine this code OOMs and crashes. It does not >>> OOM with only 1 and 10 rank, leading me to believe that this API call is >>> somehow causing the entire matrix to be replicated on each rank. >>> >>> Despite looking through the documentation, I could not find another API >>> call that would allow me to set all the values in the matrix to a constant. >>> What should I do here? >>> >>> Thanks, >>> >>> Rohan >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at joliv.et Tue Dec 14 14:58:44 2021 From: pierre at joliv.et (Pierre Jolivet) Date: Tue, 14 Dec 2021 21:58:44 +0100 Subject: [petsc-users] PCs and MATIS In-Reply-To: References: Message-ID: <31BA04FD-18EF-4E16-92C0-3EA53EF81F1F@joliv.et> Hello Eric, Why do you want to use a MATIS? This solution that Barry suggested to me some time ago is perfectly functioning: https://lists.mcs.anl.gov/pipermail/petsc-dev/2020-January/025491.html That?s what I do for optimized (restricted) additive Schwarz. You can see also how to use this ?trick? from Barry here: https://gitlab.com/petsc/petsc/-/commit/ba409789b0c94205af794ad85bdac232504f2b3f#2c7d367ac831f3b0c5fb767c0eb16c1ea7ae7fe0_821_821 (it?s in a WIP branch, and for the fine-level PCASM smoother used in PCHPDDM, but at least it?s actual code, in case you do not follow what Barry and I are saying in the old thread). Thanks, Pierre > On 14 Dec 2021, at 9:37 PM, Eric Chamberland wrote: > > Hi, > > We want to use an Additive Schwarz preconditioner (like PCASM) combined with overlapping meshes *and* specific boundary conditions to local (MATIS) matrices. > > At first sight, MATIS is only supported for BDDC and FETI-DP and is not working with PCASM. > > Do we have to write a new PC from scratch to combine the use of mesh overlap, MATIS and customized local matrices? > > ...or is there any working example we should look at to start from? :) > > Thanks a lot! > > Eric > > -- > Eric Chamberland, ing., M. Ing > Professionnel de recherche > GIREF/Universit? Laval > (418) 656-2131 poste 41 22 42 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Dec 14 15:00:32 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 14 Dec 2021 16:00:32 -0500 Subject: [petsc-users] PCs and MATIS In-Reply-To: References: Message-ID: Do the "overlapping meshes" "match" in the overlap regions or are you connecting completely different meshes discretizations by boundary conditions along the edges of all the sub meshes? In other words, will your global linear system be defined by your overlapping meshes? If they match but you want to use more general boundary conditions for the subproblems than PCASM supports by default you might be able to use PCSetModifySubMatrices() to allow you to modify the sub matrices before they are used in the preconditioned; you can for example modify the entries along the boundary of the domain to represent Robin's conditions. 
Or you can put whatever you want into the entire submatrix if modifying them is too tedious. But if the meshes don't match then you don't really need a new PC you need to even define what your nonlinear system is and you have a very big project to write a PDE solver for non-matching overlapping grids using PETSc. Barry > On Dec 14, 2021, at 3:37 PM, Eric Chamberland wrote: > > Hi, > > We want to use an Additive Schwarz preconditioner (like PCASM) combined with overlapping meshes *and* specific boundary conditions to local (MATIS) matrices. > > At first sight, MATIS is only supported for BDDC and FETI-DP and is not working with PCASM. > > Do we have to write a new PC from scratch to combine the use of mesh overlap, MATIS and customized local matrices? > > ...or is there any working example we should look at to start from? :) > > Thanks a lot! > > Eric > > -- > Eric Chamberland, ing., M. Ing > Professionnel de recherche > GIREF/Universit? Laval > (418) 656-2131 poste 41 22 42 > From rohany at alumni.cmu.edu Tue Dec 14 15:08:16 2021 From: rohany at alumni.cmu.edu (Rohan Yadav) Date: Tue, 14 Dec 2021 16:08:16 -0500 Subject: [petsc-users] Help initializing matrix to a constant In-Reply-To: References: Message-ID: <88F004BA-B170-4865-B43D-B71C23302832@alumni.cmu.edu> I see, thanks all! Rohan Yadav > On Dec 14, 2021, at 3:55 PM, Junchao Zhang wrote: > > ? > MatDenseGetArrayWrite() only returns the pointer to the local array, so it does not need any extra memory. > 2130: PetscErrorCode MatDenseGetArray_SeqDense(Mat A,PetscScalar **array) > 2131: { > 2132: Mat_SeqDense *mat = (Mat_SeqDense*)A->data; > > 2135: if (mat->matinuse) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ORDER,"Need to call MatDenseRestoreSubMatrix() first"); > 2136: *array = mat->v; > 2137: return(0); > 2138: } > > --Junchao Zhang > > >> On Tue, Dec 14, 2021 at 2:29 PM Rohan Yadav wrote: >> Thanks Mark, I will try that -- that seems to be what I want. >> >> Junchao, that excerpt seems like it runs into the same problem as above right? If every rank tries to get the whole matrix then the process will surely OOM. >> >> Rohan >> >>> On Tue, Dec 14, 2021 at 3:27 PM Junchao Zhang wrote: >>> From https://petsc.org/release/src/ksp/ksp/tutorials/ex77.c.html >>> 114: MatDenseGetArrayWrite(B,&x); >>> 115: for (i=0; i>> 116: MatDenseRestoreArrayWrite(B,&x); >>> --Junchao Zhang >>> >>> >>>> On Tue, Dec 14, 2021 at 1:05 PM Rohan Yadav wrote: >>>> Hi, >>>> >>>> I'm having trouble setting all entries of a matrix to a constant value, similar to the `VecSet` method on vectors. I have a dense matrix that I want to initialize all entries to 1. The only related method I see on the `Mat` interface is `MatZeroEntries`, which sets all entries to 0. The obvious first attempt is to use the `MatSetValue` function to set all entries to the constant. >>>> >>>> ``` >>>> Mat C; >>>> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, k, j, NULL, &C); >>>> for (int kk = 0; kk < k; kk++) { >>>> for (int jj = 0; jj < j; jj++) { >>>> MatSetValue(C, kk, jj, 1, INSERT_VALUES); >>>> } >>>> } >>>> MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY); >>>> MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY); >>>> ``` >>>> >>>> However, when run with a relatively large matrix C (5GB) and a rank-per-core on my 40-core machine this code OOMs and crashes. It does not OOM with only 1 and 10 rank, leading me to believe that this API call is somehow causing the entire matrix to be replicated on each rank. 
>>>> >>>> Despite looking through the documentation, I could not find another API call that would allow me to set all the values in the matrix to a constant. What should I do here? >>>> >>>> Thanks, >>>> >>>> Rohan -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Tue Dec 14 21:30:24 2021 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Tue, 14 Dec 2021 22:30:24 -0500 Subject: [petsc-users] PCs and MATIS In-Reply-To: <31BA04FD-18EF-4E16-92C0-3EA53EF81F1F@joliv.et> References: <31BA04FD-18EF-4E16-92C0-3EA53EF81F1F@joliv.et> Message-ID: <155dde15-dbd8-5453-a21c-17a77e4a552d@giref.ulaval.ca> Hi Pierre, ok, that's a nice trick! Thanks for this great hint! :) Eric On 2021-12-14 3:58 p.m., Pierre Jolivet wrote: > Hello Eric, > Why do you want to use a MATIS? > This solution that Barry suggested to me some time ago is perfectly > functioning: > https://lists.mcs.anl.gov/pipermail/petsc-dev/2020-January/025491.html > > That?s what I do for optimized (restricted) additive Schwarz. > You can see also how to use this ?trick? from Barry here: > https://gitlab.com/petsc/petsc/-/commit/ba409789b0c94205af794ad85bdac232504f2b3f#2c7d367ac831f3b0c5fb767c0eb16c1ea7ae7fe0_821_821 > ?(it?s > in a WIP branch, and for the fine-level PCASM smoother used in > PCHPDDM, but at least it?s actual code, in case you do not follow what > Barry and I are saying in the old thread). > > Thanks, > Pierre > >> On 14 Dec 2021, at 9:37 PM, Eric Chamberland >> > > wrote: >> >> Hi, >> >> We want to use an Additive Schwarz preconditioner (like PCASM) >> combined with overlapping meshes *and* specific boundary conditions >> to local (MATIS) matrices. >> >> At first sight, MATIS is only supported for BDDC and FETI-DP and is >> not working with PCASM. >> >> Do we have to write a new PC from scratch to combine the use of mesh >> overlap, MATIS and customized local matrices? >> >> ...or is there any working example we should look at to start from? :) >> >> Thanks a lot! >> >> Eric >> >> -- >> Eric Chamberland, ing., M. Ing >> Professionnel de recherche >> GIREF/Universit? Laval >> (418) 656-2131 poste 41 22 42 >> > -- Eric Chamberland, ing., M. Ing Professionnel de recherche GIREF/Universit? Laval (418) 656-2131 poste 41 22 42 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Tue Dec 14 21:42:33 2021 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Tue, 14 Dec 2021 22:42:33 -0500 Subject: [petsc-users] PCs and MATIS In-Reply-To: References: Message-ID: Hi Barry, yes the overlapping meshes match.? They will be generated by PETSc DMPlexDistributeOverlap but transposed in our in-house code and we are using Matthew's branch (https://gitlab.com/petsc/petsc/-/merge_requests/4547) to have it work all right. On 2021-12-14 4:00 p.m., Barry Smith wrote: > Do the "overlapping meshes" "match" in the overlap regions or are you connecting completely different meshes discretizations by boundary conditions along the edges of all the sub meshes? In other words, will your global linear system be defined by your overlapping meshes? 
> > If they match but you want to use more general boundary conditions for the subproblems than PCASM supports by default you might be able to use PCSetModifySubMatrices() to allow you to modify the sub matrices before they are used in the preconditioned; you can for example modify the entries along the boundary of the domain to represent Robin's conditions. Or you can put whatever you want into the entire submatrix if modifying them is too tedious. Ok, that is exactly what we spoted! But since we will need the unassembled (MATIS) matrices, we have hit the same problem Pierre got: the call to MatCreateSubMatrices is not directly avoidable, so we will have to use the same trick you gave him some time ago... ;) > > But if the meshes don't match then you don't really need a new PC you need to even define what your nonlinear system is and you have a very big project to write a PDE solver for non-matching overlapping grids using PETSc. ok, no, that was not our goal right now... Thanks again Pierre and Barry for your fast answers! :) Eric > Barry > > >> On Dec 14, 2021, at 3:37 PM, Eric Chamberland wrote: >> >> Hi, >> >> We want to use an Additive Schwarz preconditioner (like PCASM) combined with overlapping meshes *and* specific boundary conditions to local (MATIS) matrices. >> >> At first sight, MATIS is only supported for BDDC and FETI-DP and is not working with PCASM. >> >> Do we have to write a new PC from scratch to combine the use of mesh overlap, MATIS and customized local matrices? >> >> ...or is there any working example we should look at to start from? :) >> >> Thanks a lot! >> >> Eric >> >> -- >> Eric Chamberland, ing., M. Ing >> Professionnel de recherche >> GIREF/Universit? Laval >> (418) 656-2131 poste 41 22 42 >> -- Eric Chamberland, ing., M. Ing Professionnel de recherche GIREF/Universit? Laval (418) 656-2131 poste 41 22 42 From stefano.zampini at gmail.com Tue Dec 14 21:49:16 2021 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 15 Dec 2021 06:49:16 +0300 Subject: [petsc-users] PCs and MATIS In-Reply-To: References: Message-ID: Eric What Pierre and Barry suggested is OK. If you want to take a look at how to use MATIS with overlapped meshes, see https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/plex/plexhpddm.c This code assembles the local Neumann problem in the overlapped mesh as needed by there GenEO preconditioner. > On Dec 15, 2021, at 6:42 AM, Eric Chamberland wrote: > > Hi Barry, > > yes the overlapping meshes match. They will be generated by PETSc DMPlexDistributeOverlap but transposed in our in-house code and we are using Matthew's branch (https://gitlab.com/petsc/petsc/-/merge_requests/4547 ) to have it work all right. > > > On 2021-12-14 4:00 p.m., Barry Smith wrote: >> Do the "overlapping meshes" "match" in the overlap regions or are you connecting completely different meshes discretizations by boundary conditions along the edges of all the sub meshes? In other words, will your global linear system be defined by your overlapping meshes? >> >> If they match but you want to use more general boundary conditions for the subproblems than PCASM supports by default you might be able to use PCSetModifySubMatrices() to allow you to modify the sub matrices before they are used in the preconditioned; you can for example modify the entries along the boundary of the domain to represent Robin's conditions. Or you can put whatever you want into the entire submatrix if modifying them is too tedious. > > Ok, that is exactly what we spoted! 
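(A bare-bones sketch of the PCSetModifySubMatrices() hook mentioned above, with an entirely hypothetical callback body; what actually goes into each submat[i] depends on the discretization, and the exact callback signature should be checked against the man page for the PETSc version in use.)

```
static PetscErrorCode ModifySubMatrices(PC pc,PetscInt nsub,const IS row[],const IS col[],Mat submat[],void *ctx)
{
  PetscInt       i;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  for (i = 0; i < nsub; i++) {
    /* e.g. MatSetValues(submat[i],...) on the overlap boundary rows to impose a Robin-type condition */
    ierr = MatAssemblyBegin(submat[i],MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(submat[i],MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

/* after KSPGetPC(ksp,&pc) and PCSetType(pc,PCASM): */
ierr = PCSetModifySubMatrices(pc,ModifySubMatrices,NULL);CHKERRQ(ierr);
```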
> > But since we will need the unassembled (MATIS) matrices, we have hit the same problem Pierre got: the call to MatCreateSubMatrices is not directly avoidable, so we will have to use the same trick you gave him some time ago... ;) > > >> >> But if the meshes don't match then you don't really need a new PC you need to even define what your nonlinear system is and you have a very big project to write a PDE solver for non-matching overlapping grids using PETSc. > > ok, no, that was not our goal right now... > > Thanks again Pierre and Barry for your fast answers! :) > > Eric > >> Barry >> >> >>> On Dec 14, 2021, at 3:37 PM, Eric Chamberland wrote: >>> >>> Hi, >>> >>> We want to use an Additive Schwarz preconditioner (like PCASM) combined with overlapping meshes *and* specific boundary conditions to local (MATIS) matrices. >>> >>> At first sight, MATIS is only supported for BDDC and FETI-DP and is not working with PCASM. >>> >>> Do we have to write a new PC from scratch to combine the use of mesh overlap, MATIS and customized local matrices? >>> >>> ...or is there any working example we should look at to start from? :) >>> >>> Thanks a lot! >>> >>> Eric >>> >>> -- >>> Eric Chamberland, ing., M. Ing >>> Professionnel de recherche >>> GIREF/Universit? Laval >>> (418) 656-2131 poste 41 22 42 >>> > -- > Eric Chamberland, ing., M. Ing > Professionnel de recherche > GIREF/Universit? Laval > (418) 656-2131 poste 41 22 42 -------------- next part -------------- An HTML attachment was scrubbed... URL: From celestechevali at gmail.com Wed Dec 15 10:28:52 2021 From: celestechevali at gmail.com (celestechevali at gmail.com) Date: Wed, 15 Dec 2021 17:28:52 +0100 Subject: [petsc-users] SNES always ends at iteration 0 In-Reply-To: References: Message-ID: Many thanks for your reply ! I used -snes_converged_reason and found that the ksp linear solver diverged during the 1st iteration... The KSP iterations reached 10000 and stopped automatically. After checking the tolerance values with -snes_view, I found that the tolerances are in fact correctly set to default... However I previously used "printf" instead of standard PETSc "view" to output the tolerance values... And this led to incorrect printing of tolerances... Sorry about the confusion... Thank you so much for helping me debug the code. Mark Adams ?2021?12?13??? 03:53??? > > > On Sun, Dec 12, 2021 at 7:47 PM celestechevali at gmail.com < > celestechevali at gmail.com> wrote: > >> Thank you so much for your reply ! >> >> In fact I didn't know how to set tolerances, so I proceeded without >> specifying the tolerances, hoping that this could lead to the >> implementation of default values... >> >> I just added *SNESSetFromOptions(snes); *However, it doesn't make any >> difference... But it's true that the "tol" values are somehow set to >> zero... >> >> *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = >> 10000* >> > > I'm not sure where this output comes from. This does not look like PETSc > "view" output. The default values are small (eg, 1e-8). > maxit=50 and maxf=10000 look like the defaults. Maybe you are printing > floats incorrectly. > > Your output below is converging due to maxit=50. > rtol==0 would never converge because the relative residual can essentially > never be 0. > You can also use -ksp_monitor to view the linear solver iterations and > -ksp_converged_reason to have the solver print the reason that it > "converged". > -snes_converged_reason makes the SNES print why it decided to stop > iterating. 
> These parameters should give you more information about the case where you > see no output (unless the code is hung). > > Mark > > >> >> >> * 0 SNES Function norm 7.604910424038e+02* >> >> Is it possible that it has something to do with my makefile ? >> >> Since I didn't figure out the PETSc makefile format (which seems to be >> different from standard C makefile format), I named my source code as >> *ex1.c* to make use of the default settings for PETSc example programs... >> >> And in my makefile I wrote : >> >> >> >> >> *include ${PETSC_DIR}/lib/petsc/conf/variablesinclude >> ${PETSC_DIR}/lib/petsc/conf/testex1: ex1.o* >> >> Is it possible the "tol" values are set to 0 by the default setting used >> for example programs ? >> >> Thank you so much for your help. >> >> PS: I just tried the same code with less degrees of freedom and this time >> it worked... But for a large system it didn't... >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = >> 10000 0 SNES Function norm 1.164809703659e+00 1 SNES Function norm >> 1.311388740736e-01 2 SNES Function norm 7.232579319557e-02 3 SNES >> Function norm 4.984271911548e-02 4 SNES Function norm 3.224387373805e-02 >> 5 SNES Function norm 6.898280568053e-03 6 SNES Function norm >> 6.297558001575e-03 7 SNES Function norm 5.358028396052e-03 8 SNES >> Function norm 4.591005105466e-03 9 SNES Function norm 4.063981130201e-03 >> 10 SNES Function norm 3.715929394609e-03 11 SNES Function norm >> 3.428330101253e-03 12 SNES Function norm 3.177113603032e-03 13 SNES >> Function norm 2.958574186594e-03 14 SNES Function norm 2.769227811865e-03 >> 15 SNES Function norm 2.605947870584e-03 16 SNES Function norm >> 2.465934405221e-03 17 SNES Function norm 2.346761136962e-03 18 SNES >> Function norm 2.246362261451e-03 19 SNES Function norm 2.163102452591e-03 >> 20 SNES Function norm 2.095849101382e-03 21 SNES Function norm >> 2.043740325461e-03 22 SNES Function norm 2.005106316761e-03 23 SNES >> Function norm 1.975748994170e-03 24 SNES Function norm 1.949413335428e-03 >> 25 SNES Function norm 1.920795414593e-03 26 SNES Function norm >> 1.886883259141e-03 27 SNES Function norm 1.846374653045e-03 28 SNES >> Function norm 1.799050087038e-03 29 SNES Function norm 1.745284156916e-03 >> 30 SNES Function norm 1.685885151987e-03 31 SNES Function norm >> 1.621850994665e-03 32 SNES Function norm 1.554258940064e-03 33 SNES >> Function norm 1.484213253375e-03 34 SNES Function norm 1.412768267404e-03 >> 35 SNES Function norm 1.340893218332e-03 36 SNES Function norm >> 1.269412489589e-03 37 SNES Function norm 1.199029202116e-03 38 SNES >> Function norm 1.130300263372e-03 39 SNES Function norm 1.063694395854e-03 >> 40 SNES Function norm 9.995826338243e-04 41 SNES Function norm >> 9.383610129089e-04 42 SNES Function norm 8.807543352645e-04 43 SNES >> Function norm 8.288695938590e-04 44 SNES Function norm 7.898873173876e-04 >> 45 SNES Function norm 7.752509690373e-04 46 SNES Function norm >> 7.625724154377e-04 47 SNES Function norm 7.503152403370e-04 48 SNES >> Function norm 7.364744378378e-04 49 SNES Function norm 7.202926541551e-04 >> 50 SNES Function norm 7.015245603442e-04 * >> >> Mark Adams ?2021?12?13??? 01:11??? >> >>> The three "tol" values should be finite. It sounds like you set them to >>> 0. >>> Don't do that and use the defaults to start. >>> The behavior with zero tolerances is not defined. 
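(On the printing issue that comes up in this thread: querying the tolerances through the API and printing them with PETSc format handling avoids the printf formatting trap. A small sketch, assuming an already configured SNES named snes; -snes_view prints the same information without any code.)

```
PetscReal      atol,rtol,stol;
PetscInt       maxit,maxf;
PetscErrorCode ierr;

ierr = SNESGetTolerances(snes,&atol,&rtol,&stol,&maxit,&maxf);CHKERRQ(ierr);
/* PetscReal must be cast to double for %g; %D is the PetscInt format */
ierr = PetscPrintf(PETSC_COMM_WORLD,"atol = %g, rtol = %g, stol = %g, maxit = %D, maxf = %D\n",
                   (double)atol,(double)rtol,(double)stol,maxit,maxf);CHKERRQ(ierr);
```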
>>> You can use -snes_monitor to print out the iterations. >>> >>> On Sun, Dec 12, 2021 at 6:22 PM celestechevali at gmail.com < >>> celestechevali at gmail.com> wrote: >>> >>>> Hello, >>>> >>>> I encountered a strange problem concerning the convergence of SNES. >>>> >>>> In my recent test runs I found that SNES always stops at iteration 0... >>>> >>>> At first I thought there may be an error with the tolerance setting, so >>>> I output the tolerances : >>>> >>>> >>>> >>>> *atol = 0.000000, rtol = 0.000000, stol = 0.000000, maxit = 50, maxf = >>>> 10000 Norm of error 760.491 Iterations 0* >>>> >>>> Which are exactly the default values that I always used. However, for >>>> the same tolerance settings, the SNES solver converges successfully if I >>>> decrease the number of degrees of freedom in my system... >>>> >>>> I wish to know if anyone has experienced the same type of problems or >>>> has an idea about what could possibly cause the problem... >>>> >>>> Thank you so much in advance. >>>> >>>> I appreciate any advice that you provide. >>>> >>>> >>>> >>>> >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongzhang at anl.gov Wed Dec 15 12:44:58 2021 From: hongzhang at anl.gov (Zhang, Hong) Date: Wed, 15 Dec 2021 18:44:58 +0000 Subject: [petsc-users] PETSC TS Extrapolation In-Reply-To: References: Message-ID: <5D7C595D-4758-45F2-88BE-15209E8A5FD3@anl.gov> On Dec 14, 2021, at 2:18 PM, Alfredo J Duarte Gomez > wrote: Hello PETSC team, I have some questions about the extrapolation routines used in the TS solvers. I wish to use an extrapolation as the initial guess for the solution at the next time step, which should have an order equivalent to the order of my TS solver. So far, I have been using the TS THETA with the extrapolate flag, but it is also my understanding that the TS BDF extrapolates by default. What values are used in these extrapolation routines? Is it the derivative plus k solutions (where k is the order of the extrapolation) or is it k+1 previous solutions? Looking at the function TSBDF_Extrapolate(), I think it is the latter. I am also unsure about what happens at the boundaries when using DAEs. Are the values at the boundaries extrapolated as well? Or is there no extrapolation at the boundary points? All the values in the solution vector will be extrapolated. So the boundary values are extrapolated as soon as they are included in the vector. Hong (Mr.) Thank you, -Alfredo -- Alfredo Duarte Graduate Research Assistant The University of Texas at Austin -------------- next part -------------- An HTML attachment was scrubbed... URL: From karthikeyan.chockalingam at stfc.ac.uk Wed Dec 15 14:21:00 2021 From: karthikeyan.chockalingam at stfc.ac.uk (Karthikeyan Chockalingam - STFC UKRI) Date: Wed, 15 Dec 2021 20:21:00 +0000 Subject: [petsc-users] Unstructured mesh In-Reply-To: References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> <44E3FC40-727F-4ECB-8DCA-DCBA99ED74B1@stfc.ac.uk> <65FBFEA6-FD79-4FF2-9E1E-7E4FF81147D8@stfc.ac.uk> Message-ID: <398DB50A-FB8E-4996-9B89-B69CA36D1D3D@stfc.ac.uk> Thank you for your detailed response. Yes, using aijcusparse worked (as you mentioned it is not optimal). 
My objective was to run an unstructured mesh (which I believe would be of type DMPlex), for varying size using different preconditioners by comparing their performance on cpus and gpus. I understand that would not be possible using hypre. Leaving out hypre - I need some recommendation to move onto other unstructured (DMPlex) example problems, where I change the size of the domain via command line input. Kind regards, Karthik. From: Mark Adams Date: Tuesday, 14 December 2021 at 18:58 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" Cc: Matthew Knepley , "petsc-users at mcs.anl.gov" Subject: Re: [petsc-users] Unstructured mesh I was able to get hypre to work on ex56 (snes) with -ex56_dm_mat_type aijcusparse -ex56_dm_vec_type cuda (not hypre matrix). This should copy the cusparse matrix to a hypre matrix before the solve. So not optimal but the actual solve should be the same. Hypre is not yet supported on this example so you might not want to spend too much time on it. In particular, we do not have an example that uses DMPlex and hypre on a GPU. src/ksp/ksp/tutotials/ex56 is old and does not use DMPLex, but it is missing a call like ksp ex4 for hypre like this: #if defined(PETSC_HAVE_HYPRE) ierr = MatHYPRESetPreallocation(A,5,NULL,5,NULL);CHKERRQ(ierr); #endif If you add that it might work, but again this is all pretty fragile at this point. Mark Mark On Tue, Dec 14, 2021 at 2:42 AM Karthikeyan Chockalingam - STFC UKRI > wrote: I tried adding the -mat_block_size 3 but I still get the same error message. Thanks, Karthik. From: Mark Adams > Date: Monday, 13 December 2021 at 19:54 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Matthew Knepley >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh Try adding -mat_block_size 3 On Mon, Dec 13, 2021 at 11:57 AM Karthikeyan Chockalingam - STFC UKRI > wrote: I tried to run the problem using -pc_type hypre but it errored out: ./ex56 -cells 4,4,2 -max_conv_its 1 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type hypre -pc_hypre_type boomeramg -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view -ex56_dm_vec_type cuda -ex56_dm_mat_type hypre -options_left [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Petsc has generated inconsistent data [0]PETSC ERROR: Blocksize of layout 1 must match that of mapping 3 (or the latter must be 1) [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.16.1-353-g887dddf386 GIT Date: 2021-11-19 20:24:41 +0000 [0]PETSC ERROR: ./ex56 on a arch-linux2-c-opt named sqg2b13.bullx by kxc07-lxm25 Mon Dec 13 16:50:02 2021 [0]PETSC ERROR: Configure options --with-debugging=0 --with-blaslapack-dir=/lustre/scafellpike/local/apps/intel/intel_cs/2018.0.128/mkl --with-cuda=1 --with-cuda-arch=70 --download-hypre=yes --download-hypre-configure-arguments="--with-cuda=yes --enable-gpu-profiling=yes --enable-cusparse=yes --enable-cublas=yes --enable-curand=yes --enable-unified-memory=yes HYPRE_CUDA_SM=70" --with-shared-libraries=1 --known-mpi-shared-libraries=1 --with-cc=mpicc --with-cxx=mpicxx -with-fc=mpif90 [0]PETSC ERROR: #1 PetscLayoutSetISLocalToGlobalMapping() at /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/vec/is/utils/pmap.c:371 [0]PETSC ERROR: #2 MatSetLocalToGlobalMapping() at /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/mat/interface/matrix.c:2089 [0]PETSC ERROR: #3 DMCreateMatrix_Plex() at /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/dm/impls/plex/plex.c:2460 [0]PETSC ERROR: #4 DMCreateMatrix() at /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/dm/interface/dm.c:1445 [0]PETSC ERROR: #5 main() at ex56.c:439 [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -alpha .01 [0]PETSC ERROR: -cells 4,4,2 [0]PETSC ERROR: -ex56_dm_mat_type hypre [0]PETSC ERROR: -ex56_dm_vec_type cuda [0]PETSC ERROR: -ex56_dm_view [0]PETSC ERROR: -ksp_monitor [0]PETSC ERROR: -ksp_rtol 1.e-8 [0]PETSC ERROR: -ksp_type cg [0]PETSC ERROR: -log_view [0]PETSC ERROR: -lx 1. [0]PETSC ERROR: -max_conv_its 1 [0]PETSC ERROR: -options_left [0]PETSC ERROR: -pc_hypre_type boomeramg [0]PETSC ERROR: -pc_type hypre [0]PETSC ERROR: -petscspace_degree 1 [0]PETSC ERROR: -snes_monitor [0]PETSC ERROR: -snes_rtol 1.e-10 [0]PETSC ERROR: -use_gpu_aware_mpi 0 [0]PETSC ERROR: -use_mat_nearnullspace true [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 77. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- From: Mark Adams > Date: Monday, 13 December 2021 at 13:58 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Matthew Knepley >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Mon, Dec 13, 2021 at 8:35 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thanks Matt. Couple of weeks back you mentioned ?There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. The solver can run on the GPU, but the vector/matrix FEM assembly does not. I am working on that now.? I am able to run other examples in ksp/tutorials on gpus. I complied ex56 in snes/tutorials no differently. The only difference being I didn?t specify _dm_vec_type and _dm_vec_type (as you mentioned they are not assembled on gpus anyways plus I am working on an unstructured grid thought _dm is not right type for this problem). I was hoping to see gpu flops recorded for KSPSolve, which I didn?t. Okay, I will wait for Mark to comment. 
This (DM) example works like any other, with a prefix, as far as GPU: -ex56_dm_vec_type cuda and -ex56_dm_mat_type cusparse, or aijkokkos/kokkos, etc. Run with -options_left to verify that these are used. Kind regards, Karthik. From: Matthew Knepley > Date: Monday, 13 December 2021 at 13:17 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Mark Adams >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank you. I was able to confirm both the below options produced the same mesh ./ex56 -cells 2,2,1 -max_conv_its 2 ./ex56 -cells 4,4,2 -max_conv_its 1 Good But I didn?t get how is -cells i,j,k <1,1,1> is related to the number of MPI processes. It is not. The number of processes is specified independently using 'mpiexec -n

<number of processes>' or when using the test system NP=<number of processes>

. (i) Say I start with -cells 1,1,1 -max_conv its 7; that would eventually leave all refinement on level 7 running on 1 MPI process? (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended to run on 4 MPI processes? No, those options do not influence the number of processes. I am running ex56 on gpu; I am looking at KSPSolve (or any other event) but no gpu flops are recorded in the -log_view? I do not think you are running on the GPU then. Mark can comment, but we usually specify GPU execution using the Vec and Mat types through -dm_vec_type and -dm_mat_type. Thanks, Matt For your reference I used the below flags: ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view Kind regards, Karthik. From: Mark Adams > Date: Sunday, 12 December 2021 at 23:00 To: "Chockalingam, Karthikeyan (STFC,DL,HC)" > Cc: Matthew Knepley >, "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] Unstructured mesh On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI > wrote: Thank for your response that was helpful. I have a couple of questions: (i) How can I control the level of refinement? I tried to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement from 8 giving 32 cubes. I answered this question recently but ex56 clobbers ex56_dm_refine in the convergence loop. I have an MR that prints a warning if you provide a ex56_dm_refine. * snes/ex56 runs a convergence study and confusingly sets the options manually, thus erasing your -ex56_dm_refine. * To refine, use -max_conv_its N <3>, this sets the number of steps of refinement. That is, the length of the convergence study * You can adjust where it starts from with -cells i,j,k <1,1,1> You do want to set this if you have multiple MPI processes so that the size of this mesh is the number of processes. That way it starts with one cell per process and refines from there. (ii) What does -cell 2,2,1 correspond to? The initial mesh or mesh_0. The convergence test uniformly refines this mesh. So if you want to refine this twice you could use -cells 8,8,4 How can I determine the total number of dofs? Unfortunately, that is not printed but you can calculate from the initial cell grid, the order of the element and the refinement in each iteration of the convergence tests. So that I can perform a scaling study by changing the input of the flag -cells. You can and the convergence test gives you data for a strong speedup study in one run. Each solve is put in its own "stage" of the output and you want to look at KSPSolve lines in the log_view output. This email and any attachments are intended solely for the use of the named recipients. If you are not the intended recipient you must not use, disclose, copy or distribute this email or any of its attachments and should notify the sender immediately and delete this email from your system. UK Research and Innovation (UKRI) has taken every reasonable precaution to minimise risk of this email or any attachments containing viruses or malware but the recipient should carry out its own virus and malware checks before opening the attachments. UKRI does not accept any liability for any losses or damages which the recipient may sustain due to presence of any viruses. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From yuanxi at advancesoft.jp Wed Dec 15 23:47:09 2021 From: yuanxi at advancesoft.jp (=?UTF-8?B?6KKB54WV?=) Date: Thu, 16 Dec 2021 14:47:09 +0900 Subject: [petsc-users] non-manifold DMPLEX In-Reply-To: References: Message-ID: Dear Nicolas : Sorry for cutting in! I'd like to indicate that I have also encountered the same problem. Pls refer to the follows https://www.mail-archive.com/petsc-users at mcs.anl.gov/msg42462.html I have read those kinds of mesh successfully but my program cracked when doing DMPlexInterpolate. It would be great that you could solve this problem. Yuan 2021?12?13?(?) 6:36 TARDIEU Nicolas via petsc-users : > Dear Patrick and Matthew, > > Thank you very much for your answers. I am gonna try to set up such a test > by assigning cell types. > Shall I open a MR in order to contribute this example ? > > Regards, > Nicolas > > ------------------------------ > *De :* knepley at gmail.com > *Envoy? :* dimanche 12 d?cembre 2021 12:17 > *? :* Patrick Sanan > *Cc :* TARDIEU Nicolas ; petsc-users at mcs.anl.gov < > petsc-users at mcs.anl.gov> > *Objet :* Re: [petsc-users] non-manifold DMPLEX > > On Sun, Dec 12, 2021 at 6:11 AM Patrick Sanan > wrote: > > Here you have the following "points": > > - 1 3-cell (the cube volume) > - 7 2-cells (the 6 faces of the cube plus the extra one) > - 16 1-cells (the 12 edges of the cube, plus 3 extra ones from the extra > face, plus the extra edge) > - 11 0-cells (the 8 vertices of the cube, pus 2 extra ones from the extra > face, plus the extra vertex) > > You could encode your mesh as here, by directly specifying relationships > between these points in the Hasse diagram: > > https://petsc.org/release/docs/manual/dmplex/#representing-unstructured-grids > > > Then, maybe the special relation is captured because you've defined the > "cone" or "support" for each "point", which tells you about the local > topology everywhere. E.g. to take the simpler case, three of the faces have > the yellow edge in their "cone", or equivalently the yellow edge has those > three faces in its "support". > > > This is correct. I can help you make this if you want. I think if you > assign cell types, you can even get Plex to automatically interpolate. > > Note that with this kind of mesh, algorithms which assume a uniform cell > dimension will break, but I am guessing you would not > be interested in those anyway. > > Thanks, > > Matt > > > Am Fr., 10. Dez. 2021 um 17:04 Uhr schrieb TARDIEU Nicolas via petsc-users > : > > Dear PETSc Team, > > Following a previous discussion on the mailing list, I'd like to > experiment with DMPLEX with a very simple non-manifold mesh as shown in the > attached picture : a cube connected to a square by an edge and to an edge > by a point. > I have read some of the papers that Matthew et al. have written, but I > must admit that I do not see how to start... > I see how the define the different elements but I do not see how to > specify the special relationship between the cube and the square and > between the cube and the edge. > Once it will have been set correctly, what I am hoping is to be able to > use all the nice features of the DMPLEX object. > > Best regards, > Nicolas > > > Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont > ?tablis ? l'intention exclusive des destinataires et les informations qui y > figurent sont strictement confidentielles. 
Toute utilisation de ce Message > non conforme ? sa destination, toute diffusion ou toute publication totale > ou partielle, est interdite sauf autorisation expresse. > > Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de > le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou > partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de > votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace > sur quelque support que ce soit. Nous vous remercions ?galement d'en > avertir imm?diatement l'exp?diteur par retour du message. > > Il est impossible de garantir que les communications par messagerie > ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute > erreur ou virus. > ____________________________________________________ > > This message and any attachments (the 'Message') are intended solely for > the addressees. The information contained in this Message is confidential. > Any use of information contained in this Message not in accord with its > purpose, any dissemination or disclosure, either whole or partial, is > prohibited except formal approval. > > If you are not the addressee, you may not copy, forward, disclose or use > any part of it. If you have received this message in error, please delete > it and all copies from your system and notify the sender immediately by > return message. > > E-mail communication cannot be guaranteed to be timely secure, error or > virus-free. > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont > ?tablis ? l'intention exclusive des destinataires et les informations qui y > figurent sont strictement confidentielles. Toute utilisation de ce Message > non conforme ? sa destination, toute diffusion ou toute publication totale > ou partielle, est interdite sauf autorisation expresse. > > Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de > le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou > partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de > votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace > sur quelque support que ce soit. Nous vous remercions ?galement d'en > avertir imm?diatement l'exp?diteur par retour du message. > > Il est impossible de garantir que les communications par messagerie > ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute > erreur ou virus. > ____________________________________________________ > > This message and any attachments (the 'Message') are intended solely for > the addressees. The information contained in this Message is confidential. > Any use of information contained in this Message not in accord with its > purpose, any dissemination or disclosure, either whole or partial, is > prohibited except formal approval. > > If you are not the addressee, you may not copy, forward, disclose or use > any part of it. If you have received this message in error, please delete > it and all copies from your system and notify the sender immediately by > return message. > > E-mail communication cannot be guaranteed to be timely secure, error or > virus-free. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicolas.tardieu at edf.fr Thu Dec 16 02:09:53 2021 From: nicolas.tardieu at edf.fr (TARDIEU Nicolas) Date: Thu, 16 Dec 2021 08:09:53 +0000 Subject: [petsc-users] non-manifold DMPLEX In-Reply-To: References: Message-ID: Dear Yuan, I had read with great interest your posts on the mailing-list. I am also a structural mechanics practitioner and I share your need to manage non-maniforld meshes. I will let you know if we manage to do stuff with this kind of meshes in DMPLEX. Regards, Nicolas ________________________________ De : yuanxi at advancesoft.jp Envoy? : jeudi 16 d?cembre 2021 06:47 ? : TARDIEU Nicolas Cc : knepley at gmail.com ; Patrick Sanan ; petsc-users at mcs.anl.gov Objet : Re: [petsc-users] non-manifold DMPLEX Dear Nicolas : Sorry for cutting in! I'd like to indicate that I have also encountered the same problem. Pls refer to the follows https://www.mail-archive.com/petsc-users at mcs.anl.gov/msg42462.html I have read those kinds of mesh successfully but my program cracked when doing DMPlexInterpolate. It would be great that you could solve this problem. Yuan 2021?12?13?(?) 6:36 TARDIEU Nicolas via petsc-users >: Dear Patrick and Matthew, Thank you very much for your answers. I am gonna try to set up such a test by assigning cell types. Shall I open a MR in order to contribute this example ? Regards, Nicolas ________________________________ De : knepley at gmail.com > Envoy? : dimanche 12 d?cembre 2021 12:17 ? : Patrick Sanan > Cc : TARDIEU Nicolas >; petsc-users at mcs.anl.gov > Objet : Re: [petsc-users] non-manifold DMPLEX On Sun, Dec 12, 2021 at 6:11 AM Patrick Sanan > wrote: Here you have the following "points": - 1 3-cell (the cube volume) - 7 2-cells (the 6 faces of the cube plus the extra one) - 16 1-cells (the 12 edges of the cube, plus 3 extra ones from the extra face, plus the extra edge) - 11 0-cells (the 8 vertices of the cube, pus 2 extra ones from the extra face, plus the extra vertex) You could encode your mesh as here, by directly specifying relationships between these points in the Hasse diagram: https://petsc.org/release/docs/manual/dmplex/#representing-unstructured-grids Then, maybe the special relation is captured because you've defined the "cone" or "support" for each "point", which tells you about the local topology everywhere. E.g. to take the simpler case, three of the faces have the yellow edge in their "cone", or equivalently the yellow edge has those three faces in its "support". This is correct. I can help you make this if you want. I think if you assign cell types, you can even get Plex to automatically interpolate. Note that with this kind of mesh, algorithms which assume a uniform cell dimension will break, but I am guessing you would not be interested in those anyway. Thanks, Matt Am Fr., 10. Dez. 2021 um 17:04 Uhr schrieb TARDIEU Nicolas via petsc-users >: Dear PETSc Team, Following a previous discussion on the mailing list, I'd like to experiment with DMPLEX with a very simple non-manifold mesh as shown in the attached picture : a cube connected to a square by an edge and to an edge by a point. I have read some of the papers that Matthew et al. have written, but I must admit that I do not see how to start... I see how the define the different elements but I do not see how to specify the special relationship between the cube and the square and between the cube and the edge. Once it will have been set correctly, what I am hoping is to be able to use all the nice features of the DMPLEX object. 
Best regards, Nicolas Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont ?tablis ? l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme ? sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse. Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions ?galement d'en avertir imm?diatement l'exp?diteur par retour du message. Il est impossible de garantir que les communications par messagerie ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute erreur ou virus. ____________________________________________________ This message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval. If you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message. E-mail communication cannot be guaranteed to be timely secure, error or virus-free. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont ?tablis ? l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme ? sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse. Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions ?galement d'en avertir imm?diatement l'exp?diteur par retour du message. Il est impossible de garantir que les communications par messagerie ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute erreur ou virus. ____________________________________________________ This message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval. If you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message. E-mail communication cannot be guaranteed to be timely secure, error or virus-free. 
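(To make the point-by-point encoding discussed above concrete, here is a deliberately tiny, hedged sketch that builds a non-manifold topology directly through the Hasse-diagram API: one triangle with a dangling segment sharing a vertex, uninterpolated and without coordinates. The mesh is invented for illustration and is much smaller than the cube-plus-square-plus-edge case in the thread; whether DMPlexInterpolate then handles it is exactly the open question raised here.)

```
/* Points: 0 = triangle (2-cell), 1 = segment (1-cell), 2..5 = vertices.
   The segment hangs off vertex 4 of the triangle, so the mesh is non-manifold. */
DM             dm;
PetscErrorCode ierr;
PetscInt       v;
const PetscInt triCone[3] = {2,3,4}, segCone[2] = {4,5};

ierr = DMCreate(PETSC_COMM_WORLD,&dm);CHKERRQ(ierr);
ierr = DMSetType(dm,DMPLEX);CHKERRQ(ierr);
ierr = DMSetDimension(dm,2);CHKERRQ(ierr);
ierr = DMPlexSetChart(dm,0,6);CHKERRQ(ierr);
ierr = DMPlexSetConeSize(dm,0,3);CHKERRQ(ierr);
ierr = DMPlexSetConeSize(dm,1,2);CHKERRQ(ierr);
ierr = DMSetUp(dm);CHKERRQ(ierr);                       /* allocates cone storage          */
ierr = DMPlexSetCone(dm,0,triCone);CHKERRQ(ierr);
ierr = DMPlexSetCone(dm,1,segCone);CHKERRQ(ierr);
ierr = DMPlexSymmetrize(dm);CHKERRQ(ierr);              /* builds supports from the cones  */
ierr = DMPlexStratify(dm);CHKERRQ(ierr);
/* cell types are what Matt refers to for automatic interpolation */
ierr = DMPlexSetCellType(dm,0,DM_POLYTOPE_TRIANGLE);CHKERRQ(ierr);
ierr = DMPlexSetCellType(dm,1,DM_POLYTOPE_SEGMENT);CHKERRQ(ierr);
for (v = 2; v < 6; v++) {ierr = DMPlexSetCellType(dm,v,DM_POLYTOPE_POINT);CHKERRQ(ierr);}
```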
Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont ?tablis ? l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme ? sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse. Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions ?galement d'en avertir imm?diatement l'exp?diteur par retour du message. Il est impossible de garantir que les communications par messagerie ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute erreur ou virus. ____________________________________________________ This message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval. If you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message. E-mail communication cannot be guaranteed to be timely secure, error or virus-free. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yuanxi at advancesoft.jp Thu Dec 16 03:12:24 2021 From: yuanxi at advancesoft.jp (=?UTF-8?B?6KKB54WV?=) Date: Thu, 16 Dec 2021 18:12:24 +0900 Subject: [petsc-users] non-manifold DMPLEX In-Reply-To: References: Message-ID: Great! 2021?12?16?(?) 17:14 TARDIEU Nicolas : > Dear Yuan, > > I had read with great interest your posts on the mailing-list. I am also a > structural mechanics practitioner and I share your need to manage > non-maniforld meshes. > I will let you know if we manage to do stuff with this kind of meshes in > DMPLEX. > > Regards, > Nicolas > > ------------------------------ > *De :* yuanxi at advancesoft.jp > *Envoy? :* jeudi 16 d?cembre 2021 06:47 > *? :* TARDIEU Nicolas > *Cc :* knepley at gmail.com ; Patrick Sanan < > patrick.sanan at gmail.com>; petsc-users at mcs.anl.gov > > *Objet :* Re: [petsc-users] non-manifold DMPLEX > > Dear Nicolas : > > Sorry for cutting in! > > I'd like to indicate that I have also encountered the same problem. Pls > refer to the follows > > https://www.mail-archive.com/petsc-users at mcs.anl.gov/msg42462.html > > > I have read those kinds of mesh successfully but my program cracked when > doing DMPlexInterpolate. It would be great that you could solve this > problem. > > Yuan > > > 2021?12?13?(?) 6:36 TARDIEU Nicolas via petsc-users < > petsc-users at mcs.anl.gov>: > > Dear Patrick and Matthew, > > Thank you very much for your answers. I am gonna try to set up such a test > by assigning cell types. > Shall I open a MR in order to contribute this example ? > > Regards, > Nicolas > > ------------------------------ > *De :* knepley at gmail.com > *Envoy? :* dimanche 12 d?cembre 2021 12:17 > *? 
:* Patrick Sanan > *Cc :* TARDIEU Nicolas ; petsc-users at mcs.anl.gov < > petsc-users at mcs.anl.gov> > *Objet :* Re: [petsc-users] non-manifold DMPLEX > > On Sun, Dec 12, 2021 at 6:11 AM Patrick Sanan > wrote: > > Here you have the following "points": > > - 1 3-cell (the cube volume) > - 7 2-cells (the 6 faces of the cube plus the extra one) > - 16 1-cells (the 12 edges of the cube, plus 3 extra ones from the extra > face, plus the extra edge) > - 11 0-cells (the 8 vertices of the cube, pus 2 extra ones from the extra > face, plus the extra vertex) > > You could encode your mesh as here, by directly specifying relationships > between these points in the Hasse diagram: > > https://petsc.org/release/docs/manual/dmplex/#representing-unstructured-grids > > > Then, maybe the special relation is captured because you've defined the > "cone" or "support" for each "point", which tells you about the local > topology everywhere. E.g. to take the simpler case, three of the faces have > the yellow edge in their "cone", or equivalently the yellow edge has those > three faces in its "support". > > > This is correct. I can help you make this if you want. I think if you > assign cell types, you can even get Plex to automatically interpolate. > > Note that with this kind of mesh, algorithms which assume a uniform cell > dimension will break, but I am guessing you would not > be interested in those anyway. > > Thanks, > > Matt > > > Am Fr., 10. Dez. 2021 um 17:04 Uhr schrieb TARDIEU Nicolas via petsc-users > : > > Dear PETSc Team, > > Following a previous discussion on the mailing list, I'd like to > experiment with DMPLEX with a very simple non-manifold mesh as shown in the > attached picture : a cube connected to a square by an edge and to an edge > by a point. > I have read some of the papers that Matthew et al. have written, but I > must admit that I do not see how to start... > I see how the define the different elements but I do not see how to > specify the special relationship between the cube and the square and > between the cube and the edge. > Once it will have been set correctly, what I am hoping is to be able to > use all the nice features of the DMPLEX object. > > Best regards, > Nicolas > > > Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont > ?tablis ? l'intention exclusive des destinataires et les informations qui y > figurent sont strictement confidentielles. Toute utilisation de ce Message > non conforme ? sa destination, toute diffusion ou toute publication totale > ou partielle, est interdite sauf autorisation expresse. > > Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de > le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou > partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de > votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace > sur quelque support que ce soit. Nous vous remercions ?galement d'en > avertir imm?diatement l'exp?diteur par retour du message. > > Il est impossible de garantir que les communications par messagerie > ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute > erreur ou virus. > ____________________________________________________ > > This message and any attachments (the 'Message') are intended solely for > the addressees. The information contained in this Message is confidential. 
> Any use of information contained in this Message not in accord with its > purpose, any dissemination or disclosure, either whole or partial, is > prohibited except formal approval. > > If you are not the addressee, you may not copy, forward, disclose or use > any part of it. If you have received this message in error, please delete > it and all copies from your system and notify the sender immediately by > return message. > > E-mail communication cannot be guaranteed to be timely secure, error or > virus-free. > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont > ?tablis ? l'intention exclusive des destinataires et les informations qui y > figurent sont strictement confidentielles. Toute utilisation de ce Message > non conforme ? sa destination, toute diffusion ou toute publication totale > ou partielle, est interdite sauf autorisation expresse. > > Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de > le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou > partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de > votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace > sur quelque support que ce soit. Nous vous remercions ?galement d'en > avertir imm?diatement l'exp?diteur par retour du message. > > Il est impossible de garantir que les communications par messagerie > ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute > erreur ou virus. > ____________________________________________________ > > This message and any attachments (the 'Message') are intended solely for > the addressees. The information contained in this Message is confidential. > Any use of information contained in this Message not in accord with its > purpose, any dissemination or disclosure, either whole or partial, is > prohibited except formal approval. > > If you are not the addressee, you may not copy, forward, disclose or use > any part of it. If you have received this message in error, please delete > it and all copies from your system and notify the sender immediately by > return message. > > E-mail communication cannot be guaranteed to be timely secure, error or > virus-free. > > > Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont > ?tablis ? l'intention exclusive des destinataires et les informations qui y > figurent sont strictement confidentielles. Toute utilisation de ce Message > non conforme ? sa destination, toute diffusion ou toute publication totale > ou partielle, est interdite sauf autorisation expresse. > > Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de > le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou > partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de > votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace > sur quelque support que ce soit. Nous vous remercions ?galement d'en > avertir imm?diatement l'exp?diteur par retour du message. > > Il est impossible de garantir que les communications par messagerie > ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute > erreur ou virus. 
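To make the point-by-point Hasse-diagram description above concrete, here is a minimal sketch (my own reduced analogue of the cube example, not code from the thread: a single triangle with one dangling edge, using an arbitrary point numbering) of handing such a mixed-dimension DAG to DMPlex directly via DMPlexSetChart/DMPlexSetConeSize/DMPlexSetCone, as in the manual section Patrick links. If one instead starts from an uninterpolated (cells-and-vertices-only) description, assigning cell types with DMPlexSetCellType is what lets DMPlexInterpolate fill in the intermediate edges and faces, per Matt's comment.

#include <petscdmplex.h>

int main(int argc, char **argv)
{
  DM             dm;
  PetscInt       p;
  PetscErrorCode ierr;
  /* Point numbering (my own choice):
       0        triangle (2-cell)
       1,2,3    triangle vertices, 4 the dangling vertex
       5,6,7    triangle edges,    8 the dangling edge from vertex 1 to vertex 4 */
  const PetscInt coneSizes[9] = {3, 0, 0, 0, 0, 2, 2, 2, 2};
  const PetscInt coneTri[3]   = {5, 6, 7};
  const PetscInt coneE5[2] = {1, 2}, coneE6[2] = {2, 3}, coneE7[2] = {3, 1}, coneE8[2] = {1, 4};

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = DMPlexCreate(PETSC_COMM_SELF, &dm);CHKERRQ(ierr);
  ierr = DMSetDimension(dm, 2);CHKERRQ(ierr);
  ierr = DMPlexSetChart(dm, 0, 9);CHKERRQ(ierr);
  for (p = 0; p < 9; ++p) {ierr = DMPlexSetConeSize(dm, p, coneSizes[p]);CHKERRQ(ierr);}
  ierr = DMSetUp(dm);CHKERRQ(ierr);            /* allocate cone storage */
  ierr = DMPlexSetCone(dm, 0, coneTri);CHKERRQ(ierr);
  ierr = DMPlexSetCone(dm, 5, coneE5);CHKERRQ(ierr);
  ierr = DMPlexSetCone(dm, 6, coneE6);CHKERRQ(ierr);
  ierr = DMPlexSetCone(dm, 7, coneE7);CHKERRQ(ierr);
  ierr = DMPlexSetCone(dm, 8, coneE8);CHKERRQ(ierr);
  ierr = DMPlexSymmetrize(dm);CHKERRQ(ierr);   /* build supports from the cones */
  ierr = DMPlexStratify(dm);CHKERRQ(ierr);     /* compute the depth strata (vertices, edges, cells) */
  ierr = DMViewFromOptions(dm, NULL, "-dm_view");CHKERRQ(ierr);
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

The same pattern scales up to the cube-plus-square-plus-edge mesh in the attached picture; only the chart size, the cone sizes, and the cone lists change.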
> ____________________________________________________ > > This message and any attachments (the 'Message') are intended solely for > the addressees. The information contained in this Message is confidential. > Any use of information contained in this Message not in accord with its > purpose, any dissemination or disclosure, either whole or partial, is > prohibited except formal approval. > > If you are not the addressee, you may not copy, forward, disclose or use > any part of it. If you have received this message in error, please delete > it and all copies from your system and notify the sender immediately by > return message. > > E-mail communication cannot be guaranteed to be timely secure, error or > virus-free. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Thu Dec 16 06:40:53 2021 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 16 Dec 2021 07:40:53 -0500 Subject: [petsc-users] Unstructured mesh In-Reply-To: <398DB50A-FB8E-4996-9B89-B69CA36D1D3D@stfc.ac.uk> References: <1B43C320-E775-41C6-9288-28C03EB05EA4@stfc.ac.uk> <604C0744-8B06-4D15-B47A-92FDF2343C56@stfc.ac.uk> <8AE7CEC3-3BED-4417-8260-796D8B58455D@stfc.ac.uk> <7DBCFA64-7A52-4A8A-A3E1-D3F034E4F330@stfc.ac.uk> <4DECDA33-0C19-4C9F-962E-6DA977EA9A14@stfc.ac.uk> <44E3FC40-727F-4ECB-8DCA-DCBA99ED74B1@stfc.ac.uk> <65FBFEA6-FD79-4FF2-9E1E-7E4FF81147D8@stfc.ac.uk> <398DB50A-FB8E-4996-9B89-B69CA36D1D3D@stfc.ac.uk> Message-ID: Just a note, I think you can get meaningful hypre data by not timing the first solve and perhaps by simply calling KSPSetup before KSPSolve to get the conversion to a hypre matrix out of the way and not timed in KSPSolve. ksp/ksp/tutorials/ex56.c does do this but I see snes does not. You could add KSPSetup before KSPSolve and you should see that the time reported for setup + solve == original solve. Mark On Wed, Dec 15, 2021 at 3:31 PM Karthikeyan Chockalingam - STFC UKRI < karthikeyan.chockalingam at stfc.ac.uk> wrote: > Thank you for your detailed response. Yes, using aijcusparse worked (as > you mentioned it is not optimal). > > > > My objective was to run an unstructured mesh (which I believe would be of > type DMPlex), for varying size using different preconditioners by comparing > their performance on cpus and gpus. I understand that would not be possible > using hypre. Leaving out hypre - I need some recommendation to move onto > other unstructured (DMPlex) example problems, where I change the size of > the domain via command line input. > > > > Kind regards, > > Karthik. > > > > > > > > > > > > *From: *Mark Adams > *Date: *Tuesday, 14 December 2021 at 18:58 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > I was able to get hypre to work on ex56 (snes) with -ex56_dm_mat_type > aijcusparse -ex56_dm_vec_type cuda (not hypre matrix). > > This should copy the cusparse matrix to a hypre matrix before the solve. > So not optimal but the actual solve should be the same. > > Hypre is not yet supported on this example so you might not want to spend > too much time on it. > > In particular, we do not have an example that uses DMPlex and hypre on a > GPU. 
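For reference, a minimal sketch of what Mark suggests above (my own fragment, assuming an already assembled Mat A and Vecs b, x): call KSPSetUp() explicitly so that one-time setup costs, including any copy of an AIJ matrix into a hypre matrix, are not charged to KSPSolve(), and push the solve into its own log stage so -log_view reports it separately.

#include <petscksp.h>

/* Hypothetical helper: solve A x = b with the setup cost timed outside the solve stage. */
PetscErrorCode SolveWithSeparateSetupTiming(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PetscLogStage  stage;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PetscObjectComm((PetscObject)A), &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);                       /* setup (e.g. matrix conversion) happens here */
  ierr = PetscLogStageRegister("KSPSolve only", &stage);CHKERRQ(ierr);
  ierr = PetscLogStagePush(stage);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);                 /* -log_view reports this stage separately */
  ierr = PetscLogStagePop();CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With this split, the setup time plus the stage's KSPSolve time should match the original all-in-one KSPSolve time, which is the comparison Mark describes.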
> > > > src/ksp/ksp/tutotials/ex56 is old and does not use DMPLex, but it is > missing a call like ksp ex4 for hypre like this: > > #if defined(PETSC_HAVE_HYPRE) > ierr = MatHYPRESetPreallocation(A,5,NULL,5,NULL);CHKERRQ(ierr); > #endif > > If you add that it might work, but again this is all pretty fragile at > this point. > > > > Mark > > > > Mark > > > > On Tue, Dec 14, 2021 at 2:42 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > I tried adding the -mat_block_size 3 but I still get the same error > message. > > > > Thanks, > > Karthik. > > > > *From: *Mark Adams > *Date: *Monday, 13 December 2021 at 19:54 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > Try adding -mat_block_size 3 > > > > > > On Mon, Dec 13, 2021 at 11:57 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > I tried to run the problem using -pc_type hypre but it errored out: > > > > ./ex56 -cells 4,4,2 -max_conv_its 1 -lx 1. -alpha .01 -petscspace_degree > 1 -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type hypre -pc_hypre_type boomeramg > -snes_monitor -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view > -log_view -ex56_dm_vec_type cuda *-ex56_dm_mat_type hypre* -options_left > > > > > > > > *[0]PETSC ERROR: --------------------- Error Message > --------------------------------------------------------------* > > [0]PETSC ERROR: Petsc has generated inconsistent data > > [0]PETSC ERROR: Blocksize of layout 1 must match that of mapping 3 (or the > latter must be 1) > > [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. 
> > [0]PETSC ERROR: Petsc Development GIT revision: v3.16.1-353-g887dddf386 GIT > Date: 2021-11-19 20:24:41 +0000 > > [0]PETSC ERROR: ./ex56 on a arch-linux2-c-opt named sqg2b13.bullx by > kxc07-lxm25 Mon Dec 13 16:50:02 2021 > > [0]PETSC ERROR: Configure options --with-debugging=0 > --with-blaslapack-dir=/lustre/scafellpike/local/apps/intel/intel_cs/2018.0.128/mkl > --with-cuda=1 --with-cuda-arch=70 --download-hypre=yes > --download-hypre-configure-arguments="--with-cuda=yes > --enable-gpu-profiling=yes --enable-cusparse=yes --enable-cublas=yes > --enable-curand=yes --enable-unified-memory=yes HYPRE_CUDA_SM=70" > --with-shared-libraries=1 --known-mpi-shared-libraries=1 --with-cc=mpicc > --with-cxx=mpicxx -with-fc=mpif90 > > [0]PETSC ERROR: #1 PetscLayoutSetISLocalToGlobalMapping() at > /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/vec/is/utils/pmap.c:371 > > [0]PETSC ERROR: #2 MatSetLocalToGlobalMapping() at > /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/mat/interface/matrix.c:2089 > > [0]PETSC ERROR: #3 DMCreateMatrix_Plex() at > /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/dm/impls/plex/plex.c:2460 > > [0]PETSC ERROR: #4 DMCreateMatrix() at > /lustre/scafellpike/local/HT04048/lxm25/kxc07-lxm25/petsc-main/petsc/src/dm/interface/dm.c:1445 > > [0]PETSC ERROR: #5 main() at ex56.c:439 > > [0]PETSC ERROR: PETSc Option Table entries: > > [0]PETSC ERROR: -alpha .01 > > [0]PETSC ERROR: -cells 4,4,2 > > [0]PETSC ERROR: -ex56_dm_mat_type hypre > > [0]PETSC ERROR: -ex56_dm_vec_type cuda > > [0]PETSC ERROR: -ex56_dm_view > > [0]PETSC ERROR: -ksp_monitor > > [0]PETSC ERROR: -ksp_rtol 1.e-8 > > [0]PETSC ERROR: -ksp_type cg > > [0]PETSC ERROR: -log_view > > [0]PETSC ERROR: -lx 1. > > [0]PETSC ERROR: -max_conv_its 1 > > [0]PETSC ERROR: -options_left > > [0]PETSC ERROR: -pc_hypre_type boomeramg > > [0]PETSC ERROR: -pc_type hypre > > [0]PETSC ERROR: -petscspace_degree 1 > > [0]PETSC ERROR: -snes_monitor > > [0]PETSC ERROR: -snes_rtol 1.e-10 > > [0]PETSC ERROR: -use_gpu_aware_mpi 0 > > [0]PETSC ERROR: -use_mat_nearnullspace true > > *[0]PETSC ERROR: ----------------End of Error Message -------send entire > error message to petsc-maint at mcs.anl.gov----------* > > -------------------------------------------------------------------------- > > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD > > with errorcode 77. > > > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > > You may or may not see output from other processes, depending on > > exactly when Open MPI kills them. > > -------------------------------------------------------------------------- > > > > > > *From: *Mark Adams > *Date: *Monday, 13 December 2021 at 13:58 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > > > > > On Mon, Dec 13, 2021 at 8:35 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thanks Matt. Couple of weeks back you mentioned > > ?There are many unstructured grid examples, e.g. SNES ex13, ex17, ex56. > The solver can run on the GPU, but the vector/matrix FEM assembly does not. > I am working on that now.? > > > > I am able to run other examples in ksp/tutorials on gpus. I complied ex56 > in snes/tutorials no differently. 
The only difference being I didn?t > specify _dm_vec_type and _dm_vec_type (as you mentioned they are not > assembled on gpus anyways plus I am working on an unstructured grid thought > _dm is not right type for this problem). I was hoping to see gpu flops > recorded for KSPSolve, which I didn?t. > > > > Okay, I will wait for Mark to comment. > > > > This (DM) example works like any other, with a prefix, as far as GPU: > -ex56_dm_vec_type cuda and -ex56_dm_mat_type cusparse, or aijkokkos/kokkos, > etc. > > Run with -options_left to verify that these are used. > > > > > > Kind regards, > > Karthik. > > > > *From: *Matthew Knepley > *Date: *Monday, 13 December 2021 at 13:17 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Mark Adams , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > On Mon, Dec 13, 2021 at 7:15 AM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank you. I was able to confirm both the below options produced the same > mesh > > > > ./ex56 -cells 2,2,1 -max_conv_its 2 > > ./ex56 -cells 4,4,2 -max_conv_its 1 > > Good > > But I didn?t get how is -cells i,j,k <1,1,1> is related to the number of > MPI processes. > > It is not. The number of processes is specified independently using > 'mpiexec -n
' or when using the test system NP=
. > > (i) Say I start with -cells 1,1,1 -max_conv its 7; that would > eventually leave all refinement on level 7 running on 1 MPI process? > > (ii) Say I start with -cells 2,2,1 -max_conv its n; is it recommended > to run on 4 MPI processes? > > No, those options do not influence the number of processes. > > > > I am running ex56 on gpu; I am looking at KSPSolve (or any other event) > but no gpu flops are recorded in the -log_view? > > > > I do not think you are running on the GPU then. Mark can comment, but we > usually specify GPU execution using the Vec and Mat types > > through -dm_vec_type and -dm_mat_type. > > > > Thanks, > > > > Matt > > > > For your reference I used the below flags: > > ./ex56 -cells 1,1,1 -max_conv_its 3 -lx 1. -alpha .01 -petscspace_degree 1 > -ksp_type cg -ksp_monitor -ksp_rtol 1.e-8 -pc_type asm -snes_monitor > -use_mat_nearnullspace true -snes_rtol 1.e-10 -ex56_dm_view -log_view > > > > Kind regards, > > Karthik. > > > > > > *From: *Mark Adams > *Date: *Sunday, 12 December 2021 at 23:00 > *To: *"Chockalingam, Karthikeyan (STFC,DL,HC)" < > karthikeyan.chockalingam at stfc.ac.uk> > *Cc: *Matthew Knepley , "petsc-users at mcs.anl.gov" < > petsc-users at mcs.anl.gov> > *Subject: *Re: [petsc-users] Unstructured mesh > > > > > > > > On Sun, Dec 12, 2021 at 3:19 PM Karthikeyan Chockalingam - STFC UKRI < > karthikeyan.chockalingam at stfc.ac.uk> wrote: > > Thank for your response that was helpful. I have a couple of questions: > > > > (i) How can I control the level of refinement? I tried > to pass the flag ?-ex56_dm_refine 0? but that didn?t stop the refinement > from 8 giving 32 cubes. > > > > I answered this question recently but ex56 clobbers ex56_dm_refine in the > convergence loop. I have an MR that prints a warning if you provide a > ex56_dm_refine. > > > > * snes/ex56 runs a convergence study and confusingly sets the options > manually, thus erasing your -ex56_dm_refine. > > > > * To refine, use -max_conv_its N <3>, this sets the number of steps of > refinement. That is, the length of the convergence study > > > > * You can adjust where it starts from with -cells i,j,k <1,1,1> > > You do want to set this if you have multiple MPI processes so that the > size of this mesh is the number of processes. That way it starts with one > cell per process and refines from there. > > > > (ii) What does -cell 2,2,1 correspond to? > > > > The initial mesh or mesh_0. The convergence test uniformly refines this > mesh. So if you want to refine this twice you could use -cells 8,8,4 > > > > How can I determine the total number of dofs? > > Unfortunately, that is not printed but you can calculate from the initial > cell grid, the order of the element and the refinement in each iteration of > the convergence tests. > > > > So that I can perform a scaling study by changing the input of the flag > -cells. > > > > > > You can and the convergence test gives you data for a strong speedup study > in one run. Each solve is put in its own "stage" of the output and you want > to look at KSPSolve lines in the log_view output. > > This email and any attachments are intended solely for the use of the > named recipients. If you are not the intended recipient you must not use, > disclose, copy or distribute this email or any of its attachments and > should notify the sender immediately and delete this email from your > system. 
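As a rough aid for the scaling study discussed above, a small helper to estimate the global dof count (an assumption on my part: snes ex56 with -petscspace_degree 1 is Q1 three-dimensional elasticity with three displacement dofs per vertex, -cells nx,ny,nz is the initial box grid, and each convergence iteration refines it uniformly once more):

#include <petscsys.h>

/* Rough global dof estimate under the assumptions stated above. */
static PetscInt EstimateDofs(PetscInt nx, PetscInt ny, PetscInt nz, PetscInt nrefine)
{
  PetscInt f = 1;
  PetscInt i;
  for (i = 0; i < nrefine; ++i) f *= 2;  /* each uniform refinement doubles the cells per direction */
  return 3*(nx*f + 1)*(ny*f + 1)*(nz*f + 1);
}

For example, -cells 2,2,1 after two refinements gives 3*(9*9*5) = 1215 dofs.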
UK Research and Innovation (UKRI) has taken every reasonable > precaution to minimise risk of this email or any attachments containing > viruses or malware but the recipient should carry out its own virus and > malware checks before opening the attachments. UKRI does not accept any > liability for any losses or damages which the recipient may sustain due to > presence of any viruses. > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Thu Dec 16 08:30:58 2021 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Thu, 16 Dec 2021 15:30:58 +0100 Subject: [petsc-users] Finite difference approximation of Jacobian In-Reply-To: <7E1ECF14-D585-4C6D-B51D-CA0885FC5B23@msu.edu> References: <231abd15aab544f9850826cb437366f7@lanl.gov> <6B7C0CDC-DD47-43DC-BE63-8B77C2DE6F76@msu.edu> <7E1ECF14-D585-4C6D-B51D-CA0885FC5B23@msu.edu> Message-ID: I will work on adding this - long overdue work from me! Am Di., 14. Dez. 2021 um 18:34 Uhr schrieb Tang, Qi : > Dear all, > > Will someone be able to help with this coloring request on dmstag in the > next few weeks? If not, we will try to fix that on our own. We really need > this capability for both debugging as well as performance comparison vs > analytical Jacobian/preconditioning we implemented. Thanks. > > Qi > LANL > > > > On Dec 13, 2021, at 12:26 PM, Tang, Qi wrote: > > ?overallocating? is exactly what we can live on at the moment, as long as > it is easier to work with coloring on dmstag. > > So it sounds like if we can provide a preallocated matrix with a proper > stencil through DMCreateMatrix, then it should work with dmstag and > coloring already. Most APIs are already there. > > Qi > > > > On Dec 13, 2021, at 12:13 PM, Matthew Knepley wrote: > > Yes, and would not handle higher order stencils.I think the > overallocating is livable for the first imeplementation. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marco.cisternino at optimad.it Thu Dec 16 10:09:45 2021 From: marco.cisternino at optimad.it (Marco Cisternino) Date: Thu, 16 Dec 2021 16:09:45 +0000 Subject: [petsc-users] Nullspaces In-Reply-To: References: Message-ID: Hello Matthew, as promised I prepared a minimal (112960 rows. I'm not able to produce anything smaller than this and triggering the issue) example of the behavior I was talking about some days ago. What I did is to produce matrix, right hand side and initial solution of the linear system. As I told you before, this linear system is the discretization of the pressure equation of a predictor-corrector method for NS equations in the framework of finite volume method. This case has homogeneous Neumann boundary conditions. Computational domain has two independent and separated sub-domains. I discretize the weak formulation and I divide every row of the linear system by the volume of the relative cell. The underlying mesh is not uniform, therefore cells have different volumes. The issue I'm going to explain does not show up if the mesh is uniform, same volume for all the cells. 
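A minimal sketch (not Marco's code) of the sub-domain-by-sub-domain construction he describes next, with hypothetical index sets is0/is1 assumed to hold the locally owned global row indices of the two disconnected sub-domains: two constant-per-sub-domain vectors with disjoint supports, normalized and handed to MatNullSpaceCreate, then checked with MatNullSpaceTest.

#include <petscmat.h>

/* Hypothetical helper: build and test a null space of per-sub-domain constants. */
PetscErrorCode BuildTwoSubdomainNullSpace(Mat A, IS is0, IS is1, MatNullSpace *nullspace)
{
  Vec            vecs[2];
  PetscBool      isNull;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreateVecs(A, &vecs[0], NULL);CHKERRQ(ierr);
  ierr = VecDuplicate(vecs[0], &vecs[1]);CHKERRQ(ierr);
  ierr = VecSet(vecs[0], 0.0);CHKERRQ(ierr);
  ierr = VecSet(vecs[1], 0.0);CHKERRQ(ierr);
  /* Constant on one sub-domain, zero on the other */
  ierr = VecISSet(vecs[0], is0, 1.0);CHKERRQ(ierr);
  ierr = VecISSet(vecs[1], is1, 1.0);CHKERRQ(ierr);
  /* Disjoint supports, so normalizing each vector already gives an orthonormal set,
     which MatNullSpaceCreate requires */
  ierr = VecNormalize(vecs[0], NULL);CHKERRQ(ierr);
  ierr = VecNormalize(vecs[1], NULL);CHKERRQ(ierr);
  ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)A), PETSC_FALSE, 2, vecs, nullspace);CHKERRQ(ierr);
  ierr = MatNullSpaceTest(*nullspace, A, &isNull);CHKERRQ(ierr);
  ierr = PetscPrintf(PetscObjectComm((PetscObject)A), "null space valid? %s\n", isNull ? "yes" : "no");CHKERRQ(ierr);
  ierr = VecDestroy(&vecs[0]);CHKERRQ(ierr);
  ierr = VecDestroy(&vecs[1]);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}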
I usually build the null space sub-domain by sub-domain with MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, nConstants, constants, &nullspace); Where nConstants = 2 and constants contains two normalized arrays with constant values on degrees of freedom relative to the associated sub-domain and zeros elsewhere. However, as a test I tried the constant over the whole domain using 2 alternatives that should produce the same null space: 1. MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace); 2. Vec* nsp; VecDuplicateVecs(solution, 1, &nsp); VecSet(nsp[0],1.0); VecNormalize(nsp[0], nullptr); MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, nsp, &nullspace); Once I created the null space I test it using: MatNullSpaceTest(nullspace, m_A, &isNullSpaceValid); The case 1 pass the test while case 2 don't. I have a small code for matrix loading, null spaces creation and testing. Unfortunately I cannot implement a small code able to produce that linear system. As attachment you can find an archive containing the matrix, the initial solution (used to manually build the null space) and the rhs (not used in the test code) in binary format. You can also find the testing code in the same archive. I used petsc 3.12(gcc+openMPI) and petsc 3.15.2(intelOneAPI) same results. If the attachment is not delivered, I can share a link to it. Thanks for any help. Marco Cisternino Marco Cisternino, PhD marco.cisternino at optimad.it ______________________ Optimad Engineering Srl Via Bligny 5, Torino, Italia. +3901119719782 www.optimad.it From: Marco Cisternino Sent: marted? 7 dicembre 2021 19:36 To: Matthew Knepley Cc: petsc-users Subject: Re: [petsc-users] Nullspaces I will, as soon as possible... Scarica Outlook per Android ________________________________ From: Matthew Knepley > Sent: Tuesday, December 7, 2021 7:25:43 PM To: Marco Cisternino > Cc: petsc-users > Subject: Re: [petsc-users] Nullspaces On Tue, Dec 7, 2021 at 11:19 AM Marco Cisternino > wrote: Good morning, I'm still struggling with the Poisson equation with Neumann BCs. I discretize the equation by finite volume method and I divide every line of the linear system by the volume of the cell. I could avoid this division, but I'm trying to understand. My mesh is not uniform, i.e. cells have different volumes (it is an octree mesh). Moreover, in my computational domain there are 2 separated sub-domains. I build the null space and then I use MatNullSpaceTest to check it. If I do this: MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace); It works This produces the normalized constant vector. If I do this: Vec nsp; VecDuplicate(m_rhs, &nsp); VecSet(nsp,1.0); VecNormalize(nsp, nullptr); MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace); It does not work This is also the normalized constant vector. So you are saying that these two vectors give different results with MatNullSpaceTest()? Something must be wrong in the code. Can you send a minimal example of this? I will go through and debug it. Thanks, Matt Probably, I have wrong expectations, but should not it be the same? Thanks Marco Cisternino, PhD marco.cisternino at optimad.it ______________________ Optimad Engineering Srl Via Bligny 5, Torino, Italia. +3901119719782 www.optimad.it -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
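For readers without the attached archive, a stand-alone tester along these lines (not Marco's actual code; the file name matrix.bin is a placeholder) would load the binary matrix and compare the two constructions with MatNullSpaceTest. Which of the two passes on this particular matrix is exactly the open question in the thread.

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            v;
  MatNullSpace   ns1, ns2;
  PetscViewer    viewer;
  PetscBool      ok1, ok2;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.bin", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatLoad(A, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  /* Case 1: implicit constant null vector */
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &ns1);CHKERRQ(ierr);
  ierr = MatNullSpaceTest(ns1, A, &ok1);CHKERRQ(ierr);

  /* Case 2: explicit normalized constant vector */
  ierr = MatCreateVecs(A, &v, NULL);CHKERRQ(ierr);
  ierr = VecSet(v, 1.0);CHKERRQ(ierr);
  ierr = VecNormalize(v, NULL);CHKERRQ(ierr);
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &v, &ns2);CHKERRQ(ierr);
  ierr = MatNullSpaceTest(ns2, A, &ok2);CHKERRQ(ierr);

  ierr = PetscPrintf(PETSC_COMM_WORLD, "implicit constant: %s, explicit constant: %s\n",
                     ok1 ? "PASS" : "FAIL", ok2 ? "PASS" : "FAIL");CHKERRQ(ierr);

  ierr = MatNullSpaceDestroy(&ns1);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&ns2);CHKERRQ(ierr);
  ierr = VecDestroy(&v);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}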
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nullspaceTest.tar.gz Type: application/x-gzip Size: 4457490 bytes Desc: nullspaceTest.tar.gz URL: From halverson at Princeton.EDU Fri Dec 17 12:36:40 2021 From: halverson at Princeton.EDU (Jonathan D. Halverson) Date: Fri, 17 Dec 2021 18:36:40 +0000 Subject: [petsc-users] NVIDIA HPC SDK and complex data type Message-ID: Hello, We are unable to build PETSc using the NVIDIA HPC SDK and --with-scalar-type=complex. Below is our procedure: $ module load nvhpc/21.11 $ module load openmpi/nvhpc-21.11/4.1.2/64 $ git clone -b release https://gitlab.com/petsc/petsc.git petsc; cd petsc $ ./configure --with-debugging=1 --with-scalar-type=complex PETSC_ARCH=openmpi-power $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power all $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power check "make check" fails with a segmentation fault when running ex19. The fortran test ex5f passes. The procedure above fails on x86_64 and POWER both running RHEL8. It also fails using nvhpc 20.7. The procedure above works for "real" instead of "complex". A "hello world" MPI code using a complex data type works with our nvhpc modules. The procedure above works successfully when GCC and an Open MPI library built using GCC is used. The only trouble is the combination of PETSc with nvhpc and complex. Any known issues? The build log for the procedure above is here: https://tigress-web.princeton.edu/~jdh4/petsc_nvhpc_complex_17dec2021.log Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Fri Dec 17 19:58:31 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Fri, 17 Dec 2021 19:58:31 -0600 Subject: [petsc-users] NVIDIA HPC SDK and complex data type In-Reply-To: References: Message-ID: Hi, Jon, I could reproduce the error exactly. I will have a look. Thanks for reporting. --Junchao Zhang On Fri, Dec 17, 2021 at 2:56 PM Jonathan D. Halverson < halverson at princeton.edu> wrote: > Hello, > > We are unable to build PETSc using the NVIDIA HPC SDK and > --with-scalar-type=complex. Below is our procedure: > > $ module load nvhpc/21.11 > > $ module load openmpi/nvhpc-21.11/4.1.2/64 > $ git clone -b release https://gitlab.com/petsc/petsc.git petsc; cd petsc > > $ ./configure --with-debugging=1 --with-scalar-type=complex > PETSC_ARCH=openmpi-power > > $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power all > > $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power check > > "make check" fails with a segmentation fault when running ex19. The > fortran test ex5f passes. > > The procedure above fails on x86_64 and POWER both running RHEL8. It also > fails using nvhpc 20.7. > > The procedure above works for "real" instead of "complex". > > A "hello world" MPI code using a complex data type works with our nvhpc > modules. > > The procedure above works successfully when GCC and an Open MPI library > built using GCC is used. > > The only trouble is the combination of PETSc with nvhpc and complex. Any > known issues? > > The build log for the procedure above is here: > https://tigress-web.princeton.edu/~jdh4/petsc_nvhpc_complex_17dec2021.log > > Jon > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tabrezali at gmail.com Fri Dec 17 18:16:26 2021 From: tabrezali at gmail.com (Tabrez Ali) Date: Fri, 17 Dec 2021 19:16:26 -0500 Subject: [petsc-users] --with-mpi=0 Message-ID: Hi, I am trying to compile Fortran code with PETSc 3.16 built without MPI, i.e., --with-mpi=0, and am getting the following error: call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr) 1 Error: Symbol ?mpi_comm_world? at (1) has no IMPLICIT type There are no issues with PETSc 3.14 or prior versions. Any ideas as to what could be wrong? I do see the following note (below) in https://petsc.org/main/docs/changes/315/ but I am not sure if it's related: *Add configure option --with-mpi-f90module-visibility [default=``1``]. With 0, mpi.mod will not be visible in use code (via petscsys.mod) - so mpi_f08 can now be used* Regards, Tabrez -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Dec 18 09:37:54 2021 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 18 Dec 2021 09:37:54 -0600 Subject: [petsc-users] --with-mpi=0 In-Reply-To: References: Message-ID: On Sat, Dec 18, 2021 at 9:26 AM Tabrez Ali wrote: > Hi, > > I am trying to compile Fortran code with PETSc 3.16 built without MPI, > i.e., --with-mpi=0, and am getting the following error: > > call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr) > 1 > Error: Symbol ?mpi_comm_world? at (1) has no IMPLICIT type > Hi Tabrez, The definition of MPI_COMM_WORLD is in mpif.h. Are you #including that? Thanks, Matt > There are no issues with PETSc 3.14 or prior versions. Any ideas as to > what could be wrong? > > I do see the following note (below) in > https://petsc.org/main/docs/changes/315/ but I am not sure if it's > related: > > *Add configure option --with-mpi-f90module-visibility [default=``1``]. > With 0, mpi.mod will not be visible in use code (via petscsys.mod) - > so mpi_f08 can now be used* > > Regards, > > Tabrez > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Sat Dec 18 09:42:09 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 18 Dec 2021 09:42:09 -0600 (CST) Subject: [petsc-users] --with-mpi=0 In-Reply-To: References: Message-ID: <1bd6f58c-d4f-d697-bce6-155e92d5d439@mcs.anl.gov> Do you get this error with a petsc example that calls MPI_Comm_rank()? Say src/vec/vec/tutorials/ex8f.F90 Satish [balay at pj01 tutorials]$ make ex8f gfortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O0 -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O0 -I/home/balay/petsc/include -I/home/balay/petsc/arch-linux-c-debug/include ex8f.F90 -Wl,-rpath,/home/balay/petsc/arch-linux-c-debug/lib -L/home/balay/petsc/arch-linux-c-debug/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/9 -L/usr/lib/gcc/x86_64-redhat-linux/9 -lpetsc -llapack -lblas -lpthread -lm -lX11 -lstdc++ -ldl -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl -o ex8f [balay at pj01 tutorials]$ ./ex8f Vec Object: 1 MPI processes type: seq 4. 
[balay at pj01 tutorials]$ nm -Ao ex8f |grep mpi_comm_rank ex8f: U petsc_mpi_comm_rank_ [balay at pj01 tutorials]$ On Fri, 17 Dec 2021, Tabrez Ali wrote: > Hi, > > I am trying to compile Fortran code with PETSc 3.16 built without MPI, > i.e., --with-mpi=0, and am getting the following error: > > call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr) > 1 > Error: Symbol ?mpi_comm_world? at (1) has no IMPLICIT type > > There are no issues with PETSc 3.14 or prior versions. Any ideas as to what > could be wrong? > > I do see the following note (below) in > https://petsc.org/main/docs/changes/315/ but I am not sure if it's related: > > *Add configure option --with-mpi-f90module-visibility [default=``1``]. > With 0, mpi.mod will not be visible in use code (via petscsys.mod) - > so mpi_f08 can now be used* > > Regards, > > Tabrez > From balay at mcs.anl.gov Sat Dec 18 10:04:07 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 18 Dec 2021 10:04:07 -0600 (CST) Subject: [petsc-users] --with-mpi=0 In-Reply-To: References: Message-ID: <9ad81d3-bad0-8467-bd17-5934e471b1@mcs.anl.gov> Ah - you need 'use petscmpi' For example: ksp/ksp/tutorials/ex44f.F90 Satish On Sat, 18 Dec 2021, Matthew Knepley wrote: > On Sat, Dec 18, 2021 at 9:26 AM Tabrez Ali wrote: > > > Hi, > > > > I am trying to compile Fortran code with PETSc 3.16 built without MPI, > > i.e., --with-mpi=0, and am getting the following error: > > > > call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr) > > 1 > > Error: Symbol ?mpi_comm_world? at (1) has no IMPLICIT type > > > > Hi Tabrez, > > The definition of MPI_COMM_WORLD is in mpif.h. Are you #including that? > > Thanks, > > Matt > > > > There are no issues with PETSc 3.14 or prior versions. Any ideas as to > > what could be wrong? > > > > I do see the following note (below) in > > https://petsc.org/main/docs/changes/315/ but I am not sure if it's > > related: > > > > *Add configure option --with-mpi-f90module-visibility [default=``1``]. 
> > With 0, mpi.mod will not be visible in use code (via petscsys.mod) - > > so mpi_f08 can now be used* > > > > Regards, > > > > Tabrez > > > > > From tabrezali at gmail.com Sat Dec 18 11:57:59 2021 From: tabrezali at gmail.com (Tabrez Ali) Date: Sat, 18 Dec 2021 12:57:59 -0500 Subject: [petsc-users] --with-mpi=0 In-Reply-To: <1bd6f58c-d4f-d697-bce6-155e92d5d439@mcs.anl.gov> References: <1bd6f58c-d4f-d697-bce6-155e92d5d439@mcs.anl.gov> Message-ID: Satish, If you replace PETSC_COMM_WORLD with MPI_COMM_WORLD and compile ex8f.F90 using 3.14 then it will work but if you compile it using 3.15 or 3.16 then it fails, e.g., stali at i5:~$ cd /tmp/petsc-3.14.6/src/vec/vec/tutorials/ stali at i5:/tmp/petsc-3.14.6/src/vec/vec/tutorials$ grep MPI_COMM_WORLD ex8f.F90 call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr) call VecCreate(MPI_COMM_WORLD,x,ierr);CHKERRA(ierr) stali at i5:/tmp/petsc-3.14.6/src/vec/vec/tutorials$ make ex8f PETSC_DIR=/tmp/petsc-3.14.6 gfortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -I/tmp/petsc-3.14.6/include -I/tmp/petsc-3.14.6/arch-linux2-c-debug/include ex8f.F90 -Wl,-rpath,/tmp/petsc-3.14.6/arch-linux2-c-debug/lib -L/tmp/petsc-3.14.6/arch-linux2-c-debug/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/6 -L/usr/lib/gcc/x86_64-linux-gnu/6 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -lpetsc -llapack -lblas -lpthread -lm -lstdc++ -ldl -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl -o ex8f stali at i5:/tmp/petsc-3.14.6/src/vec/vec/tutorials$ rm ex8f stali at i5:/tmp/petsc-3.14.6/src/vec/vec/tutorials$ make ex8f PETSC_DIR=/tmp/petsc-3.16.1 gfortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O0 -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O0 -I/tmp/petsc-3.16.1/include -I/tmp/petsc-3.16.1/arch-linux2-c-debug/include ex8f.F90 -Wl,-rpath,/tmp/petsc-3.16.1/arch-linux2-c-debug/lib -L/tmp/petsc-3.16.1/arch-linux2-c-debug/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/6 -L/usr/lib/gcc/x86_64-linux-gnu/6 -lpetsc -llapack -lblas -lpthread -lm -lstdc++ -ldl -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl -o ex8f ex8f.F90:29:41: call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr) 1 Error: Symbol ?mpi_comm_world? at (1) has no IMPLICIT type /tmp/petsc-3.16.1/lib/petsc/conf/test:24: recipe for target 'ex8f' failed Regards, Tabrez On Sat, Dec 18, 2021 at 10:42 AM Satish Balay wrote: > Do you get this error with a petsc example that calls MPI_Comm_rank()? > > Say src/vec/vec/tutorials/ex8f.F90 > > Satish > > [balay at pj01 tutorials]$ make ex8f > gfortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g > -O0 -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O0 > -I/home/balay/petsc/include -I/home/balay/petsc/arch-linux-c-debug/include > ex8f.F90 -Wl,-rpath,/home/balay/petsc/arch-linux-c-debug/lib > -L/home/balay/petsc/arch-linux-c-debug/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/9 > -L/usr/lib/gcc/x86_64-redhat-linux/9 -lpetsc -llapack -lblas -lpthread -lm > -lX11 -lstdc++ -ldl -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath > -lstdc++ -ldl -o ex8f > [balay at pj01 tutorials]$ ./ex8f > Vec Object: 1 MPI processes > type: seq > 4. 
> [balay at pj01 tutorials]$ nm -Ao ex8f |grep mpi_comm_rank > ex8f: U petsc_mpi_comm_rank_ > [balay at pj01 tutorials]$ > > > > On Fri, 17 Dec 2021, Tabrez Ali wrote: > > > Hi, > > > > I am trying to compile Fortran code with PETSc 3.16 built without MPI, > > i.e., --with-mpi=0, and am getting the following error: > > > > call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr) > > 1 > > Error: Symbol ?mpi_comm_world? at (1) has no IMPLICIT type > > > > There are no issues with PETSc 3.14 or prior versions. Any ideas as to > what > > could be wrong? > > > > I do see the following note (below) in > > https://petsc.org/main/docs/changes/315/ but I am not sure if it's > related: > > > > *Add configure option --with-mpi-f90module-visibility [default=``1``]. > > With 0, mpi.mod will not be visible in use code (via petscsys.mod) - > > so mpi_f08 can now be used* > > > > Regards, > > > > Tabrez > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tabrezali at gmail.com Sat Dec 18 12:00:02 2021 From: tabrezali at gmail.com (Tabrez Ali) Date: Sat, 18 Dec 2021 13:00:02 -0500 Subject: [petsc-users] --with-mpi=0 In-Reply-To: <9ad81d3-bad0-8467-bd17-5934e471b1@mcs.anl.gov> References: <9ad81d3-bad0-8467-bd17-5934e471b1@mcs.anl.gov> Message-ID: Thanks! On Sat, Dec 18, 2021 at 11:04 AM Satish Balay wrote: > Ah - you need 'use petscmpi' > > For example: ksp/ksp/tutorials/ex44f.F90 > > Satish > > On Sat, 18 Dec 2021, Matthew Knepley wrote: > > > On Sat, Dec 18, 2021 at 9:26 AM Tabrez Ali wrote: > > > > > Hi, > > > > > > I am trying to compile Fortran code with PETSc 3.16 built without MPI, > > > i.e., --with-mpi=0, and am getting the following error: > > > > > > call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr) > > > 1 > > > Error: Symbol ?mpi_comm_world? at (1) has no IMPLICIT type > > > > > > > Hi Tabrez, > > > > The definition of MPI_COMM_WORLD is in mpif.h. Are you #including that? > > > > Thanks, > > > > Matt > > > > > > > There are no issues with PETSc 3.14 or prior versions. Any ideas as to > > > what could be wrong? > > > > > > I do see the following note (below) in > > > https://petsc.org/main/docs/changes/315/ but I am not sure if it's > > > related: > > > > > > *Add configure option --with-mpi-f90module-visibility [default=``1``]. > > > With 0, mpi.mod will not be visible in use code (via petscsys.mod) - > > > so mpi_f08 can now be used* > > > > > > Regards, > > > > > > Tabrez > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Sat Dec 18 14:49:15 2021 From: bsmith at petsc.dev (Barry Smith) Date: Sat, 18 Dec 2021 15:49:15 -0500 Subject: [petsc-users] --with-mpi=0 In-Reply-To: <9ad81d3-bad0-8467-bd17-5934e471b1@mcs.anl.gov> References: <9ad81d3-bad0-8467-bd17-5934e471b1@mcs.anl.gov> Message-ID: <971878B0-F13A-49EC-BB93-9C05EFE7CFFA@petsc.dev> Urg, this is pretty horrible. petscmpi? Why can't MPIUNI provide this as "standard" MPI modules? > On Dec 18, 2021, at 11:04 AM, Satish Balay via petsc-users wrote: > > Ah - you need 'use petscmpi' > > For example: ksp/ksp/tutorials/ex44f.F90 > > Satish > > On Sat, 18 Dec 2021, Matthew Knepley wrote: > >> On Sat, Dec 18, 2021 at 9:26 AM Tabrez Ali wrote: >> >>> Hi, >>> >>> I am trying to compile Fortran code with PETSc 3.16 built without MPI, >>> i.e., --with-mpi=0, and am getting the following error: >>> >>> call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr) >>> 1 >>> Error: Symbol ?mpi_comm_world? 
at (1) has no IMPLICIT type >>> >> >> Hi Tabrez, >> >> The definition of MPI_COMM_WORLD is in mpif.h. Are you #including that? >> >> Thanks, >> >> Matt >> >> >>> There are no issues with PETSc 3.14 or prior versions. Any ideas as to >>> what could be wrong? >>> >>> I do see the following note (below) in >>> https://petsc.org/main/docs/changes/315/ but I am not sure if it's >>> related: >>> >>> *Add configure option --with-mpi-f90module-visibility [default=``1``]. >>> With 0, mpi.mod will not be visible in use code (via petscsys.mod) - >>> so mpi_f08 can now be used* >>> >>> Regards, >>> >>> Tabrez >>> >> >> >> From balay at mcs.anl.gov Sat Dec 18 15:36:31 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 18 Dec 2021 15:36:31 -0600 (CST) Subject: [petsc-users] --with-mpi=0 In-Reply-To: <971878B0-F13A-49EC-BB93-9C05EFE7CFFA@petsc.dev> References: <9ad81d3-bad0-8467-bd17-5934e471b1@mcs.anl.gov> <971878B0-F13A-49EC-BB93-9C05EFE7CFFA@petsc.dev> Message-ID: Sure - that's the trade-off. At some point I think we decided not to map mpiuni.mod to mpi.mod [to avoid potential clash with other mpi/mpiuni type impls]. If I remember correctly - we could limit on the C side - by not installing mpi.h in std location [its at include/petsc/mpiuni/mpi.h] And since mpiuni's mpi.mod is to be installed in prefix/include - I think this different name was chosen. And now that we are supporting 'use mpi or mpi_f08' - its 'use mpiuni or mpi or mpi_f08' or use --with-mpi-f90module-visibility=1 For now petscmpi maps to mpi or mpiuni. Perhaps its possible to change our installer to have mpiuni's mod file at prefix/include/petsc/mpiuni/mpi.mod - [to enable reusing mpi.mod name ] [at some point we had an option to skip fortran mpi_comm_rank() etc - as they would clash with similar uni impls. But I see that option no longer exists?] Satish On Sat, 18 Dec 2021, Barry Smith wrote: > > Urg, this is pretty horrible. petscmpi? Why can't MPIUNI provide this as "standard" MPI modules? > > > > > On Dec 18, 2021, at 11:04 AM, Satish Balay via petsc-users wrote: > > > > Ah - you need 'use petscmpi' > > > > For example: ksp/ksp/tutorials/ex44f.F90 > > > > Satish > > > > On Sat, 18 Dec 2021, Matthew Knepley wrote: > > > >> On Sat, Dec 18, 2021 at 9:26 AM Tabrez Ali wrote: > >> > >>> Hi, > >>> > >>> I am trying to compile Fortran code with PETSc 3.16 built without MPI, > >>> i.e., --with-mpi=0, and am getting the following error: > >>> > >>> call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr) > >>> 1 > >>> Error: Symbol ?mpi_comm_world? at (1) has no IMPLICIT type > >>> > >> > >> Hi Tabrez, > >> > >> The definition of MPI_COMM_WORLD is in mpif.h. Are you #including that? > >> > >> Thanks, > >> > >> Matt > >> > >> > >>> There are no issues with PETSc 3.14 or prior versions. Any ideas as to > >>> what could be wrong? > >>> > >>> I do see the following note (below) in > >>> https://petsc.org/main/docs/changes/315/ but I am not sure if it's > >>> related: > >>> > >>> *Add configure option --with-mpi-f90module-visibility [default=``1``]. 
> >>> With 0, mpi.mod will not be visible in use code (via petscsys.mod) - > >>> so mpi_f08 can now be used* > >>> > >>> Regards, > >>> > >>> Tabrez > >>> > >> > >> > >> > From bsmith at petsc.dev Sat Dec 18 15:46:27 2021 From: bsmith at petsc.dev (Barry Smith) Date: Sat, 18 Dec 2021 16:46:27 -0500 Subject: [petsc-users] --with-mpi=0 In-Reply-To: References: <9ad81d3-bad0-8467-bd17-5934e471b1@mcs.anl.gov> <971878B0-F13A-49EC-BB93-9C05EFE7CFFA@petsc.dev> Message-ID: <5A070A54-B759-47C7-B0B4-BAD743CB4A0C@petsc.dev> It seems like this might be a good strategy, presumably the directory is already made to hold mpi.h for MPI uni Perhaps its possible to change our installer to have mpiuni's mod file at prefix/include/petsc/mpiuni/mpi.mod - [to enable reusing mpi.mod name ] > On Dec 18, 2021, at 4:36 PM, Satish Balay wrote: > > Sure - that's the trade-off. > > At some point I think we decided not to map mpiuni.mod to mpi.mod [to avoid potential clash with other mpi/mpiuni type impls]. > > If I remember correctly - we could limit on the C side - by not installing mpi.h in std location [its at include/petsc/mpiuni/mpi.h] > > And since mpiuni's mpi.mod is to be installed in prefix/include - I think this different name was chosen. > > And now that we are supporting 'use mpi or mpi_f08' - its 'use mpiuni or mpi or mpi_f08' or use --with-mpi-f90module-visibility=1 > > For now petscmpi maps to mpi or mpiuni. > > Perhaps its possible to change our installer to have mpiuni's mod file at prefix/include/petsc/mpiuni/mpi.mod - [to enable reusing mpi.mod name ] > > [at some point we had an option to skip fortran mpi_comm_rank() etc - as they would clash with similar uni impls. But I see that option no longer exists?] > > Satish > > On Sat, 18 Dec 2021, Barry Smith wrote: > >> >> Urg, this is pretty horrible. petscmpi? Why can't MPIUNI provide this as "standard" MPI modules? >> >> >> >>> On Dec 18, 2021, at 11:04 AM, Satish Balay via petsc-users wrote: >>> >>> Ah - you need 'use petscmpi' >>> >>> For example: ksp/ksp/tutorials/ex44f.F90 >>> >>> Satish >>> >>> On Sat, 18 Dec 2021, Matthew Knepley wrote: >>> >>>> On Sat, Dec 18, 2021 at 9:26 AM Tabrez Ali wrote: >>>> >>>>> Hi, >>>>> >>>>> I am trying to compile Fortran code with PETSc 3.16 built without MPI, >>>>> i.e., --with-mpi=0, and am getting the following error: >>>>> >>>>> call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr) >>>>> 1 >>>>> Error: Symbol ?mpi_comm_world? at (1) has no IMPLICIT type >>>>> >>>> >>>> Hi Tabrez, >>>> >>>> The definition of MPI_COMM_WORLD is in mpif.h. Are you #including that? >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> There are no issues with PETSc 3.14 or prior versions. Any ideas as to >>>>> what could be wrong? >>>>> >>>>> I do see the following note (below) in >>>>> https://petsc.org/main/docs/changes/315/ but I am not sure if it's >>>>> related: >>>>> >>>>> *Add configure option --with-mpi-f90module-visibility [default=``1``]. >>>>> With 0, mpi.mod will not be visible in use code (via petscsys.mod) - >>>>> so mpi_f08 can now be used* >>>>> >>>>> Regards, >>>>> >>>>> Tabrez >>>>> >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Sat Dec 18 15:57:21 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 18 Dec 2021 15:57:21 -0600 (CST) Subject: [petsc-users] --with-mpi=0 In-Reply-To: <5A070A54-B759-47C7-B0B4-BAD743CB4A0C@petsc.dev> References: <9ad81d3-bad0-8467-bd17-5934e471b1@mcs.anl.gov> <971878B0-F13A-49EC-BB93-9C05EFE7CFFA@petsc.dev> <5A070A54-B759-47C7-B0B4-BAD743CB4A0C@petsc.dev> Message-ID: On Sat, 18 Dec 2021, Barry Smith wrote: > > It seems like this might be a good strategy, presumably the directory is already made to hold mpi.h for MPI uni > > > Perhaps its possible to change our installer to have mpiuni's mod file at prefix/include/petsc/mpiuni/mpi.mod - [to enable reusing mpi.mod name ] >>> module petscmpi #include #include "petsc/finclude/petscsys.h" #if defined(PETSC_HAVE_MPIUNI) use mpiuni #else #if defined(PETSC_HAVE_MPI_F90MODULE) use mpi #else #include "mpif.h" #endif #endif <<< There is also this use-case where there is no usable mpi.mod - that petscmpi is currently handling... Not sure how to deal with that.. Satish From balay at mcs.anl.gov Sat Dec 18 17:03:29 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 18 Dec 2021 17:03:29 -0600 (CST) Subject: [petsc-users] --with-mpi=0 In-Reply-To: References: <9ad81d3-bad0-8467-bd17-5934e471b1@mcs.anl.gov> <971878B0-F13A-49EC-BB93-9C05EFE7CFFA@petsc.dev> <5A070A54-B759-47C7-B0B4-BAD743CB4A0C@petsc.dev> Message-ID: <2196b580-7ef7-4b9f-6d6e-419ca6e8bd24@mcs.anl.gov> Also we have: include/petscsys.h:# include src/sys/mpiuni/f90-mod/mpiunimod.F90:#include And avoid -Iprefix/include/petsc/mpiuni/ So I'm not sure if adding this in can cause grief (as it would be required for mpi.mod at this location). I have changes for mpiuni.mod -> mpi.mod at: https://gitlab.com/petsc/petsc/-/merge_requests/4662 [they are a bit hakey] Satish On Sat, 18 Dec 2021, Satish Balay via petsc-users wrote: > On Sat, 18 Dec 2021, Barry Smith wrote: > > > > > It seems like this might be a good strategy, presumably the directory is already made to hold mpi.h for MPI uni > > > > > Perhaps its possible to change our installer to have mpiuni's mod file at prefix/include/petsc/mpiuni/mpi.mod - [to enable reusing mpi.mod name ] > > >>> > module petscmpi > #include > #include "petsc/finclude/petscsys.h" > #if defined(PETSC_HAVE_MPIUNI) > use mpiuni > #else > #if defined(PETSC_HAVE_MPI_F90MODULE) > use mpi > #else > #include "mpif.h" > #endif > #endif > <<< > > There is also this use-case where there is no usable mpi.mod - that petscmpi is currently > handling... Not sure how to deal with that.. > > Satish > From bsmith at petsc.dev Sat Dec 18 17:08:15 2021 From: bsmith at petsc.dev (Barry Smith) Date: Sat, 18 Dec 2021 18:08:15 -0500 Subject: [petsc-users] --with-mpi=0 In-Reply-To: <2196b580-7ef7-4b9f-6d6e-419ca6e8bd24@mcs.anl.gov> References: <9ad81d3-bad0-8467-bd17-5934e471b1@mcs.anl.gov> <971878B0-F13A-49EC-BB93-9C05EFE7CFFA@petsc.dev> <5A070A54-B759-47C7-B0B4-BAD743CB4A0C@petsc.dev> <2196b580-7ef7-4b9f-6d6e-419ca6e8bd24@mcs.anl.gov> Message-ID: <6412A209-51C1-443A-A7B5-B8BF412F067B@petsc.dev> Satish, Yes, you are probably right; there may be no way to organize things to handle all the possibilities well. Probably I over-reacted and it is best to leave things as if. 
Barry > On Dec 18, 2021, at 6:03 PM, Satish Balay wrote: > > Also we have: > > include/petscsys.h:# include > src/sys/mpiuni/f90-mod/mpiunimod.F90:#include > > And avoid -Iprefix/include/petsc/mpiuni/ > > So I'm not sure if adding this in can cause grief (as it would be required for mpi.mod at this location). > > I have changes for mpiuni.mod -> mpi.mod at: > > https://gitlab.com/petsc/petsc/-/merge_requests/4662 > > [they are a bit hakey] > > Satish > > > > On Sat, 18 Dec 2021, Satish Balay via petsc-users wrote: > >> On Sat, 18 Dec 2021, Barry Smith wrote: >> >>> >>> It seems like this might be a good strategy, presumably the directory is already made to hold mpi.h for MPI uni >>> >>>> Perhaps its possible to change our installer to have mpiuni's mod file at prefix/include/petsc/mpiuni/mpi.mod - [to enable reusing mpi.mod name ] >> >>>>> >> module petscmpi >> #include >> #include "petsc/finclude/petscsys.h" >> #if defined(PETSC_HAVE_MPIUNI) >> use mpiuni >> #else >> #if defined(PETSC_HAVE_MPI_F90MODULE) >> use mpi >> #else >> #include "mpif.h" >> #endif >> #endif >> <<< >> >> There is also this use-case where there is no usable mpi.mod - that petscmpi is currently >> handling... Not sure how to deal with that.. >> >> Satish >> > From junchao.zhang at gmail.com Sat Dec 18 19:02:47 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Sat, 18 Dec 2021 19:02:47 -0600 Subject: [petsc-users] NVIDIA HPC SDK and complex data type In-Reply-To: References: Message-ID: I found it is a NVIDIA C/C++ compiler bug. I can reproduce it with #include #include #include typedef double _Complex PetscScalar; typedef struct { int row; PetscScalar *valaddr; } MatEntry2; int main(int arc, char** argv) { int i=2; MatEntry2 *Jentry2 = (MatEntry2*)malloc(64*sizeof(MatEntry2)); PetscScalar a=1, b=1; printf("sizeof(MatEntry2)=%lu\n",sizeof(MatEntry2)); Jentry2[2].valaddr = (PetscScalar*)malloc(16*sizeof(PetscScalar)); *(Jentry2[i].valaddr) = a*b; // Segfault free(Jentry2[2].valaddr); free(Jentry2); return 0; } $ nvc -O0 -o test test.c $ ./test sizeof(MatEntry2)=16 Segmentation fault (core dumped) If I change *(Jentry2[i].valaddr) = a*b; to PetscScalar *p = Jentry2[2].valaddr; *p = a*b; Then the code works fine. Using -O0 to -O2 will also avoid this error for this simple test, but not for PETSc. In PETSc, I could apply the above silly trick, but I am not sure it is worth it. We should instead report it to NVIDIA. Looking at the assembly code for the segfault line, we can find the problem movslq 52(%rsp), %rcx movq 40(%rsp), %rax movq 8(%rax,%rcx,8), %rax // Here %rax = &Jentry2, %rcx = i; The instruction wrongly calculates Jentry2[2].valaddr as (%rax + %rcx*8)+8, which should instead be (%rax + %rcx*16)+8 vmovsd %xmm1, 8(%rax) vmovsd %xmm0, (%rax) --Junchao Zhang On Fri, Dec 17, 2021 at 7:58 PM Junchao Zhang wrote: > Hi, Jon, > I could reproduce the error exactly. I will have a look. > Thanks for reporting. > --Junchao Zhang > > > On Fri, Dec 17, 2021 at 2:56 PM Jonathan D. Halverson < > halverson at princeton.edu> wrote: > >> Hello, >> >> We are unable to build PETSc using the NVIDIA HPC SDK and >> --with-scalar-type=complex. 
Below is our procedure: >> >> $ module load nvhpc/21.11 >> >> $ module load openmpi/nvhpc-21.11/4.1.2/64 >> $ git clone -b release https://gitlab.com/petsc/petsc.git petsc; cd petsc >> >> $ ./configure --with-debugging=1 --with-scalar-type=complex >> PETSC_ARCH=openmpi-power >> >> $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power all >> >> $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power check >> >> "make check" fails with a segmentation fault when running ex19. The >> fortran test ex5f passes. >> >> The procedure above fails on x86_64 and POWER both running RHEL8. It also >> fails using nvhpc 20.7. >> >> The procedure above works for "real" instead of "complex". >> >> A "hello world" MPI code using a complex data type works with our nvhpc >> modules. >> >> The procedure above works successfully when GCC and an Open MPI library >> built using GCC is used. >> >> The only trouble is the combination of PETSc with nvhpc and complex. Any >> known issues? >> >> The build log for the procedure above is here: >> https://tigress-web.princeton.edu/~jdh4/petsc_nvhpc_complex_17dec2021.log >> >> Jon >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Dec 18 19:22:33 2021 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 18 Dec 2021 19:22:33 -0600 Subject: [petsc-users] NVIDIA HPC SDK and complex data type In-Reply-To: References: Message-ID: On Sat, Dec 18, 2021 at 7:03 PM Junchao Zhang wrote: > I found it is a NVIDIA C/C++ compiler bug. I can reproduce it with > Great find! Matt > #include > #include > #include > > typedef double _Complex PetscScalar; > typedef struct { > int row; > PetscScalar *valaddr; > } MatEntry2; > > int main(int arc, char** argv) > { > int i=2; > MatEntry2 *Jentry2 = (MatEntry2*)malloc(64*sizeof(MatEntry2)); > PetscScalar a=1, b=1; > > printf("sizeof(MatEntry2)=%lu\n",sizeof(MatEntry2)); > Jentry2[2].valaddr = (PetscScalar*)malloc(16*sizeof(PetscScalar)); > *(Jentry2[i].valaddr) = a*b; // Segfault > > free(Jentry2[2].valaddr); > free(Jentry2); > return 0; > } > > $ nvc -O0 -o test test.c > $ ./test > sizeof(MatEntry2)=16 > Segmentation fault (core dumped) > > If I change *(Jentry2[i].valaddr) = a*b; to > > PetscScalar *p = Jentry2[2].valaddr; > *p = a*b; > > Then the code works fine. Using -O0 to -O2 will also avoid this error for > this simple test, but not for PETSc. In PETSc, I could apply the above > silly trick, but I am not sure it is worth it. We should instead report it > to NVIDIA. > > Looking at the assembly code for the segfault line, we can find the > problem > movslq 52(%rsp), %rcx > movq 40(%rsp), %rax > movq 8(%rax,%rcx,8), %rax // Here %rax = &Jentry2, %rcx = i; The > instruction wrongly calculates Jentry2[2].valaddr as (%rax + %rcx*8)+8, > which should instead be (%rax + %rcx*16)+8 > vmovsd %xmm1, 8(%rax) > vmovsd %xmm0, (%rax) > > --Junchao Zhang > > > On Fri, Dec 17, 2021 at 7:58 PM Junchao Zhang > wrote: > >> Hi, Jon, >> I could reproduce the error exactly. I will have a look. >> Thanks for reporting. >> --Junchao Zhang >> >> >> On Fri, Dec 17, 2021 at 2:56 PM Jonathan D. Halverson < >> halverson at princeton.edu> wrote: >> >>> Hello, >>> >>> We are unable to build PETSc using the NVIDIA HPC SDK and >>> --with-scalar-type=complex. 
Below is our procedure: >>> >>> $ module load nvhpc/21.11 >>> >>> $ module load openmpi/nvhpc-21.11/4.1.2/64 >>> $ git clone -b release https://gitlab.com/petsc/petsc.git petsc; cd >>> petsc >>> >>> $ ./configure --with-debugging=1 --with-scalar-type=complex >>> PETSC_ARCH=openmpi-power >>> >>> $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power all >>> >>> $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power >>> check >>> >>> "make check" fails with a segmentation fault when running ex19. The >>> fortran test ex5f passes. >>> >>> The procedure above fails on x86_64 and POWER both running RHEL8. It >>> also fails using nvhpc 20.7. >>> >>> The procedure above works for "real" instead of "complex". >>> >>> A "hello world" MPI code using a complex data type works with our nvhpc >>> modules. >>> >>> The procedure above works successfully when GCC and an Open MPI library >>> built using GCC is used. >>> >>> The only trouble is the combination of PETSc with nvhpc and complex. Any >>> known issues? >>> >>> The build log for the procedure above is here: >>> https://tigress-web.princeton.edu/~jdh4/petsc_nvhpc_complex_17dec2021.log >>> >>> Jon >>> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Sat Dec 18 19:51:09 2021 From: bsmith at petsc.dev (Barry Smith) Date: Sat, 18 Dec 2021 20:51:09 -0500 Subject: [petsc-users] NVIDIA HPC SDK and complex data type In-Reply-To: References: Message-ID: Yes, Junchao deserves a bounty from NVIDIA for this find. > On Dec 18, 2021, at 8:22 PM, Matthew Knepley wrote: > > On Sat, Dec 18, 2021 at 7:03 PM Junchao Zhang > wrote: > I found it is a NVIDIA C/C++ compiler bug. I can reproduce it with > > Great find! > > Matt > > #include > #include > #include > > typedef double _Complex PetscScalar; > typedef struct { > int row; > PetscScalar *valaddr; > } MatEntry2; > > int main(int arc, char** argv) > { > int i=2; > MatEntry2 *Jentry2 = (MatEntry2*)malloc(64*sizeof(MatEntry2)); > PetscScalar a=1, b=1; > > printf("sizeof(MatEntry2)=%lu\n",sizeof(MatEntry2)); > Jentry2[2].valaddr = (PetscScalar*)malloc(16*sizeof(PetscScalar)); > *(Jentry2[i].valaddr) = a*b; // Segfault > > free(Jentry2[2].valaddr); > free(Jentry2); > return 0; > } > > $ nvc -O0 -o test test.c > $ ./test > sizeof(MatEntry2)=16 > Segmentation fault (core dumped) > > If I change *(Jentry2[i].valaddr) = a*b; to > > PetscScalar *p = Jentry2[2].valaddr; > *p = a*b; > > Then the code works fine. Using -O0 to -O2 will also avoid this error for this simple test, but not for PETSc. In PETSc, I could apply the above silly trick, but I am not sure it is worth it. We should instead report it to NVIDIA. > > Looking at the assembly code for the segfault line, we can find the problem > movslq 52(%rsp), %rcx > movq 40(%rsp), %rax > movq 8(%rax,%rcx,8), %rax // Here %rax = &Jentry2, %rcx = i; The instruction wrongly calculates Jentry2[2].valaddr as (%rax + %rcx*8)+8, which should instead be (%rax + %rcx*16)+8 > vmovsd %xmm1, 8(%rax) > vmovsd %xmm0, (%rax) > > --Junchao Zhang > > > On Fri, Dec 17, 2021 at 7:58 PM Junchao Zhang > wrote: > Hi, Jon, > I could reproduce the error exactly. I will have a look. > Thanks for reporting. > --Junchao Zhang > > > On Fri, Dec 17, 2021 at 2:56 PM Jonathan D. 
Halverson > wrote: > Hello, > > We are unable to build PETSc using the NVIDIA HPC SDK and --with-scalar-type=complex. Below is our procedure: > > $ module load nvhpc/21.11 > $ module load openmpi/nvhpc-21.11/4.1.2/64 > $ git clone -b release https://gitlab.com/petsc/petsc.git petsc; cd petsc > $ ./configure --with-debugging=1 --with-scalar-type=complex PETSC_ARCH=openmpi-power > $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power all > $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power check > > "make check" fails with a segmentation fault when running ex19. The fortran test ex5f passes. > > The procedure above fails on x86_64 and POWER both running RHEL8. It also fails using nvhpc 20.7. > > The procedure above works for "real" instead of "complex". > > A "hello world" MPI code using a complex data type works with our nvhpc modules. > > The procedure above works successfully when GCC and an Open MPI library built using GCC is used. > > The only trouble is the combination of PETSc with nvhpc and complex. Any known issues? > > The build log for the procedure above is here: > https://tigress-web.princeton.edu/~jdh4/petsc_nvhpc_complex_17dec2021.log > > Jon > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Sun Dec 19 17:38:16 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Sun, 19 Dec 2021 17:38:16 -0600 Subject: [petsc-users] NVIDIA HPC SDK and complex data type In-Reply-To: References: Message-ID: Since it will take a while for NVIDIA to fix the bug in their NVCHPC 21.11 (December 2021), I added a workaround to the MR in petsc, https://gitlab.com/petsc/petsc/-/merge_requests/4663 I tested it and it works with debugging (-O0) or no debugging (-O, or -O2). --Junchao Zhang On Sat, Dec 18, 2021 at 7:51 PM Barry Smith wrote: > > Yes, Junchao deserves a bounty from NVIDIA for this find. > > On Dec 18, 2021, at 8:22 PM, Matthew Knepley wrote: > > On Sat, Dec 18, 2021 at 7:03 PM Junchao Zhang > wrote: > >> I found it is a NVIDIA C/C++ compiler bug. I can reproduce it with >> > > Great find! > > Matt > > >> #include >> #include >> #include >> >> typedef double _Complex PetscScalar; >> typedef struct { >> int row; >> PetscScalar *valaddr; >> } MatEntry2; >> >> int main(int arc, char** argv) >> { >> int i=2; >> MatEntry2 *Jentry2 = (MatEntry2*)malloc(64*sizeof(MatEntry2)); >> PetscScalar a=1, b=1; >> >> printf("sizeof(MatEntry2)=%lu\n",sizeof(MatEntry2)); >> Jentry2[2].valaddr = (PetscScalar*)malloc(16*sizeof(PetscScalar)); >> *(Jentry2[i].valaddr) = a*b; // Segfault >> >> free(Jentry2[2].valaddr); >> free(Jentry2); >> return 0; >> } >> >> $ nvc -O0 -o test test.c >> $ ./test >> sizeof(MatEntry2)=16 >> Segmentation fault (core dumped) >> >> If I change *(Jentry2[i].valaddr) = a*b; to >> >> PetscScalar *p = Jentry2[2].valaddr; >> *p = a*b; >> >> Then the code works fine. Using -O0 to -O2 will also avoid this error >> for this simple test, but not for PETSc. In PETSc, I could apply the above >> silly trick, but I am not sure it is worth it. We should instead report it >> to NVIDIA. 
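For reference, a minimal consolidation of the reproducer and the workaround described above. This is only a sketch of the pattern, not the actual change made in the PETSc merge request; the only difference from the failing form is that the pointer held in the struct member is loaded into a named local before the complex product is stored through it:

#include <stdio.h>
#include <stdlib.h>

typedef double _Complex PetscScalar;

typedef struct {
  int          row;
  PetscScalar *valaddr;
} MatEntry2;

int main(void)
{
  int          i       = 2;
  MatEntry2   *Jentry2 = (MatEntry2*)malloc(64*sizeof(MatEntry2));
  PetscScalar  a = 1, b = 1;

  Jentry2[i].valaddr = (PetscScalar*)malloc(16*sizeof(PetscScalar));

  /* failing form with nvc 21.11:  *(Jentry2[i].valaddr) = a*b;   */
  /* workaround: load the pointer into a local, then store through it */
  PetscScalar *p = Jentry2[i].valaddr;
  *p = a*b;

  printf("store through struct member succeeded\n");
  free(Jentry2[i].valaddr);
  free(Jentry2);
  return 0;
}
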
>> >> Looking at the assembly code for the segfault line, we can find the >> problem >> movslq 52(%rsp), %rcx >> movq 40(%rsp), %rax >> movq 8(%rax,%rcx,8), %rax // Here %rax = &Jentry2, %rcx = i; The >> instruction wrongly calculates Jentry2[2].valaddr as (%rax + >> %rcx*8)+8, which should instead be (%rax + %rcx*16)+8 >> vmovsd %xmm1, 8(%rax) >> vmovsd %xmm0, (%rax) >> >> --Junchao Zhang >> >> >> On Fri, Dec 17, 2021 at 7:58 PM Junchao Zhang >> wrote: >> >>> Hi, Jon, >>> I could reproduce the error exactly. I will have a look. >>> Thanks for reporting. >>> --Junchao Zhang >>> >>> >>> On Fri, Dec 17, 2021 at 2:56 PM Jonathan D. Halverson < >>> halverson at princeton.edu> wrote: >>> >>>> Hello, >>>> >>>> We are unable to build PETSc using the NVIDIA HPC SDK and >>>> --with-scalar-type=complex. Below is our procedure: >>>> >>>> $ module load nvhpc/21.11 >>>> $ module load openmpi/nvhpc-21.11/4.1.2/64 >>>> $ git clone -b release https://gitlab.com/petsc/petsc.git petsc; cd >>>> petsc >>>> $ ./configure --with-debugging=1 --with-scalar-type=complex >>>> PETSC_ARCH=openmpi-power >>>> $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power all >>>> $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power >>>> check >>>> >>>> "make check" fails with a segmentation fault when running ex19. The >>>> fortran test ex5f passes. >>>> >>>> The procedure above fails on x86_64 and POWER both running RHEL8. It >>>> also fails using nvhpc 20.7. >>>> >>>> The procedure above works for "real" instead of "complex". >>>> >>>> A "hello world" MPI code using a complex data type works with our nvhpc >>>> modules. >>>> >>>> The procedure above works successfully when GCC and an Open MPI library >>>> built using GCC is used. >>>> >>>> The only trouble is the combination of PETSc with nvhpc and complex. >>>> Any known issues? >>>> >>>> The build log for the procedure above is here: >>>> >>>> https://tigress-web.princeton.edu/~jdh4/petsc_nvhpc_complex_17dec2021.log >>>> >>>> Jon >>>> >>> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roland.richter at ntnu.no Mon Dec 20 07:14:09 2021 From: roland.richter at ntnu.no (Roland Richter) Date: Mon, 20 Dec 2021 14:14:09 +0100 Subject: [petsc-users] Combining SUPERLU_DIST with CUDA fails Message-ID: Hei, I tried to combine CUDA with superlu_dist in petsc using the following configure-line: /./configure PETSC_ARCH=mpich-complex-linux-gcc-demo --CC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc --CXX=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx --FC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc --CFLAGS="-mavx2 -march=native -O3" --CXXFLAGS="-mavx2 -march=native -O3" --FFLAGS="-mavx2 -march=native -O3" --CUDAFLAGS=-allow-unsupported-compiler --CUDA-CXX=g++ --prefix=/opt/petsc --with-blaslapack=1 --with-mpi=1 --with-scalar-type=complex --download-suitesparse=1 --with-cuda --with-debugging=0 --with-openmp --download-superlu_dist --force/ but the configure-step fails with several errors correlated with CUDA and superlu_dist, the first one being /cublas_utils.c:21:37: error: ?CUDART_VERSION? undeclared (first use in this function); did you mean ?CUDA_VERSION??// //?? 21 |???? printf("CUDA version:?? v %d\n",CUDART_VERSION);// //????? |???????????????????????????????????? ^~~~~~~~~~~~~~// //????? 
|???????????????????????????????????? CUDA_VERSION/ Compiling superlu_dist separately works, though (including CUDA). Is there a bug somewhere in the configure-routine? I attached the full configure-log. Thanks! Regards, Roland -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 1244079 bytes Desc: not available URL: From halverson at Princeton.EDU Mon Dec 20 08:19:47 2021 From: halverson at Princeton.EDU (Jonathan D. Halverson) Date: Mon, 20 Dec 2021 14:19:47 +0000 Subject: [petsc-users] NVIDIA HPC SDK and complex data type In-Reply-To: References: Message-ID: Hi Junchao, Thank you very much for your quick work. The simple build procedure now works. Jon ________________________________ From: Junchao Zhang Sent: Sunday, December 19, 2021 6:38 PM To: petsc-users at mcs.anl.gov Cc: Jonathan D. Halverson Subject: Re: [petsc-users] NVIDIA HPC SDK and complex data type Since it will take a while for NVIDIA to fix the bug in their NVCHPC 21.11 (December 2021), I added a workaround to the MR in petsc, https://gitlab.com/petsc/petsc/-/merge_requests/4663 I tested it and it works with debugging (-O0) or no debugging (-O, or -O2). --Junchao Zhang On Sat, Dec 18, 2021 at 7:51 PM Barry Smith > wrote: Yes, Junchao deserves a bounty from NVIDIA for this find. On Dec 18, 2021, at 8:22 PM, Matthew Knepley > wrote: On Sat, Dec 18, 2021 at 7:03 PM Junchao Zhang > wrote: I found it is a NVIDIA C/C++ compiler bug. I can reproduce it with Great find! Matt #include #include #include typedef double _Complex PetscScalar; typedef struct { int row; PetscScalar *valaddr; } MatEntry2; int main(int arc, char** argv) { int i=2; MatEntry2 *Jentry2 = (MatEntry2*)malloc(64*sizeof(MatEntry2)); PetscScalar a=1, b=1; printf("sizeof(MatEntry2)=%lu\n",sizeof(MatEntry2)); Jentry2[2].valaddr = (PetscScalar*)malloc(16*sizeof(PetscScalar)); *(Jentry2[i].valaddr) = a*b; // Segfault free(Jentry2[2].valaddr); free(Jentry2); return 0; } $ nvc -O0 -o test test.c $ ./test sizeof(MatEntry2)=16 Segmentation fault (core dumped) If I change *(Jentry2[i].valaddr) = a*b; to PetscScalar *p = Jentry2[2].valaddr; *p = a*b; Then the code works fine. Using -O0 to -O2 will also avoid this error for this simple test, but not for PETSc. In PETSc, I could apply the above silly trick, but I am not sure it is worth it. We should instead report it to NVIDIA. Looking at the assembly code for the segfault line, we can find the problem movslq 52(%rsp), %rcx movq 40(%rsp), %rax movq 8(%rax,%rcx,8), %rax // Here %rax = &Jentry2, %rcx = i; The instruction wrongly calculates Jentry2[2].valaddr as (%rax + %rcx*8)+8, which should instead be (%rax + %rcx*16)+8 vmovsd %xmm1, 8(%rax) vmovsd %xmm0, (%rax) --Junchao Zhang On Fri, Dec 17, 2021 at 7:58 PM Junchao Zhang > wrote: Hi, Jon, I could reproduce the error exactly. I will have a look. Thanks for reporting. --Junchao Zhang On Fri, Dec 17, 2021 at 2:56 PM Jonathan D. Halverson > wrote: Hello, We are unable to build PETSc using the NVIDIA HPC SDK and --with-scalar-type=complex. 
Below is our procedure: $ module load nvhpc/21.11 $ module load openmpi/nvhpc-21.11/4.1.2/64 $ git clone -b release https://gitlab.com/petsc/petsc.git petsc; cd petsc $ ./configure --with-debugging=1 --with-scalar-type=complex PETSC_ARCH=openmpi-power $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power all $ make PETSC_DIR=/home/$USER/software/petsc PETSC_ARCH=openmpi-power check "make check" fails with a segmentation fault when running ex19. The fortran test ex5f passes. The procedure above fails on x86_64 and POWER both running RHEL8. It also fails using nvhpc 20.7. The procedure above works for "real" instead of "complex". A "hello world" MPI code using a complex data type works with our nvhpc modules. The procedure above works successfully when GCC and an Open MPI library built using GCC is used. The only trouble is the combination of PETSc with nvhpc and complex. Any known issues? The build log for the procedure above is here: https://tigress-web.princeton.edu/~jdh4/petsc_nvhpc_complex_17dec2021.log Jon -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Dec 20 09:29:09 2021 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 20 Dec 2021 10:29:09 -0500 Subject: [petsc-users] Combining SUPERLU_DIST with CUDA fails In-Reply-To: References: Message-ID: <2B3C240D-CEBB-4B45-A60E-ECD1F092B058@petsc.dev> Please try the branch balay/slu-without-omp-3 It is in MR https://gitlab.com/petsc/petsc/-/merge_requests/4635 > On Dec 20, 2021, at 8:14 AM, Roland Richter wrote: > > Hei, > > I tried to combine CUDA with superlu_dist in petsc using the following configure-line: > > ./configure PETSC_ARCH=mpich-complex-linux-gcc-demo --CC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc --CXX=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx --FC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc --CFLAGS="-mavx2 -march=native -O3" --CXXFLAGS="-mavx2 -march=native -O3" --FFLAGS="-mavx2 -march=native -O3" --CUDAFLAGS=-allow-unsupported-compiler --CUDA-CXX=g++ --prefix=/opt/petsc --with-blaslapack=1 --with-mpi=1 --with-scalar-type=complex --download-suitesparse=1 --with-cuda --with-debugging=0 --with-openmp --download-superlu_dist --force > > but the configure-step fails with several errors correlated with CUDA and superlu_dist, the first one being > > cublas_utils.c:21:37: error: ?CUDART_VERSION? undeclared (first use in this function); did you mean ?CUDA_VERSION?? > 21 | printf("CUDA version: v %d\n",CUDART_VERSION); > | ^~~~~~~~~~~~~~ > | CUDA_VERSION > > Compiling superlu_dist separately works, though (including CUDA). > > Is there a bug somewhere in the configure-routine? I attached the full configure-log. > > Thanks! > > Regards, > > Roland > > -------------- next part -------------- An HTML attachment was scrubbed... 
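For completeness, one way to test that branch from an existing clone is to check it out directly. This is only a sketch, assuming the clone was made from the GitLab repository named above and that the remote is called origin; reuse the same configure options as before:

$ cd petsc
$ git fetch origin
$ git checkout balay/slu-without-omp-3
$ ./configure ...   # same options as before
$ make all

With the branch checked out, configure.log should then report balay/slu-without-omp-3 as VERSION_BRANCH_GIT rather than master.
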
URL: From roland.richter at ntnu.no Mon Dec 20 09:50:47 2021 From: roland.richter at ntnu.no (Roland Richter) Date: Mon, 20 Dec 2021 16:50:47 +0100 Subject: [petsc-users] Combining SUPERLU_DIST with CUDA fails In-Reply-To: <2B3C240D-CEBB-4B45-A60E-ECD1F092B058@petsc.dev> References: <2B3C240D-CEBB-4B45-A60E-ECD1F092B058@petsc.dev> Message-ID: <3f258b07-a648-df64-f298-769c399d9732@ntnu.no> In that case it fails with /~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/cublas_utils.h:22:10: fatal error: cublas_v2.h: No such file or directory/ even though this header is available. I assume some header paths are not set correctly? Thanks, regards, Roland Am 20.12.21 um 16:29 schrieb Barry Smith: > > ? Please try the branch?balay/slu-without-omp-3 ?It is in > MR?https://gitlab.com/petsc/petsc/-/merge_requests/4635 > > > >> On Dec 20, 2021, at 8:14 AM, Roland Richter >> wrote: >> >> Hei, >> >> I tried to combine CUDA with superlu_dist in petsc using the >> following configure-line: >> >> /./configure PETSC_ARCH=mpich-complex-linux-gcc-demo >> --CC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc >> --CXX=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx >> --FC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc --CFLAGS="-mavx2 >> -march=native -O3" --CXXFLAGS="-mavx2 -march=native -O3" >> --FFLAGS="-mavx2 -march=native -O3" >> --CUDAFLAGS=-allow-unsupported-compiler --CUDA-CXX=g++ >> --prefix=/opt/petsc --with-blaslapack=1 --with-mpi=1 >> --with-scalar-type=complex --download-suitesparse=1 --with-cuda >> --with-debugging=0 --with-openmp --download-superlu_dist --force/ >> >> but the configure-step fails with several errors correlated with CUDA >> and superlu_dist, the first one being >> >> /cublas_utils.c:21:37: error: ?CUDART_VERSION? undeclared (first use >> in this function); did you mean ?CUDA_VERSION??// >> //?? 21 |???? printf("CUDA version:?? v %d\n",CUDART_VERSION);// >> //????? |???????????????????????????????????? ^~~~~~~~~~~~~~// >> //????? |???????????????????????????????????? CUDA_VERSION/ >> >> Compiling superlu_dist separately works, though (including CUDA). >> >> Is there a bug somewhere in the configure-routine? I attached the >> full configure-log. >> >> Thanks! >> >> Regards, >> >> Roland >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 1309280 bytes Desc: not available URL: From bsmith at petsc.dev Mon Dec 20 13:38:03 2021 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 20 Dec 2021 14:38:03 -0500 Subject: [petsc-users] Combining SUPERLU_DIST with CUDA fails In-Reply-To: <3f258b07-a648-df64-f298-769c399d9732@ntnu.no> References: <2B3C240D-CEBB-4B45-A60E-ECD1F092B058@petsc.dev> <3f258b07-a648-df64-f298-769c399d9732@ntnu.no> Message-ID: <2B1B2836-5107-430A-BDB8-E6165EFE6C65@petsc.dev> Are you sure you have the correct PETSc branch? From configure.log it has Defined "VERSION_GIT" to ""v3.16.2-466-g959e1fce86"" Defined "VERSION_DATE_GIT" to ""2021-12-18 11:17:24 -0600"" Defined "VERSION_BRANCH_GIT" to ""master"" It should have balay/slu-without-omp-3 for the branch. > On Dec 20, 2021, at 10:50 AM, Roland Richter wrote: > > In that case it fails with > > ~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/cublas_utils.h:22:10: fatal error: cublas_v2.h: No such file or directory > > even though this header is available. 
I assume some header paths are not set correctly? > > Thanks, > > regards, > > Roland > > Am 20.12.21 um 16:29 schrieb Barry Smith: >> >> Please try the branch balay/slu-without-omp-3 It is in MR https://gitlab.com/petsc/petsc/-/merge_requests/4635 >> >> >> >>> On Dec 20, 2021, at 8:14 AM, Roland Richter > wrote: >>> >>> Hei, >>> >>> I tried to combine CUDA with superlu_dist in petsc using the following configure-line: >>> >>> ./configure PETSC_ARCH=mpich-complex-linux-gcc-demo --CC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc --CXX=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx --FC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc --CFLAGS="-mavx2 -march=native -O3" --CXXFLAGS="-mavx2 -march=native -O3" --FFLAGS="-mavx2 -march=native -O3" --CUDAFLAGS=-allow-unsupported-compiler --CUDA-CXX=g++ --prefix=/opt/petsc --with-blaslapack=1 --with-mpi=1 --with-scalar-type=complex --download-suitesparse=1 --with-cuda --with-debugging=0 --with-openmp --download-superlu_dist --force >>> >>> but the configure-step fails with several errors correlated with CUDA and superlu_dist, the first one being >>> >>> cublas_utils.c:21:37: error: ?CUDART_VERSION? undeclared (first use in this function); did you mean ?CUDA_VERSION?? >>> 21 | printf("CUDA version: v %d\n",CUDART_VERSION); >>> | ^~~~~~~~~~~~~~ >>> | CUDA_VERSION >>> >>> Compiling superlu_dist separately works, though (including CUDA). >>> >>> Is there a bug somewhere in the configure-routine? I attached the full configure-log. >>> >>> Thanks! >>> >>> Regards, >>> >>> Roland >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roland.richter at ntnu.no Mon Dec 20 13:59:41 2021 From: roland.richter at ntnu.no (Roland Richter) Date: Mon, 20 Dec 2021 20:59:41 +0100 Subject: [petsc-users] Combining SUPERLU_DIST with CUDA fails In-Reply-To: <2B1B2836-5107-430A-BDB8-E6165EFE6C65@petsc.dev> References: <2B3C240D-CEBB-4B45-A60E-ECD1F092B058@petsc.dev> <3f258b07-a648-df64-f298-769c399d9732@ntnu.no> <2B1B2836-5107-430A-BDB8-E6165EFE6C65@petsc.dev> Message-ID: I introduced the changes from that patch directly, without checking out. Is that insufficient? Regards, Roland Am 20.12.2021 um 20:38 schrieb Barry Smith: > > ? Are you sure you have the correct PETSc branch? From configure.log > it has > > ? ? ? ? ? ? Defined "VERSION_GIT" to ""v3.16.2-466-g959e1fce86"" > ? ? ? ? ? ? Defined "VERSION_DATE_GIT" to ""2021-12-18 11:17:24 -0600"" > ? ? ? ? ? ? Defined "VERSION_BRANCH_GIT" to ""master"" > > It should have balay/slu-without-omp-3 for the branch. > > > >> On Dec 20, 2021, at 10:50 AM, Roland Richter >> wrote: >> >> In that case it fails with >> >> /~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/cublas_utils.h:22:10: >> fatal error: cublas_v2.h: No such file or directory/ >> >> even though this header is available. I assume some header paths are >> not set correctly? >> >> Thanks, >> >> regards, >> >> Roland >> >> Am 20.12.21 um 16:29 schrieb Barry Smith: >>> >>> ? 
Please try the branch?balay/slu-without-omp-3 ?It is in MR >>> https://gitlab.com/petsc/petsc/-/merge_requests/4635 >>> >>> >>> >>>> On Dec 20, 2021, at 8:14 AM, Roland Richter >>>> wrote: >>>> >>>> Hei, >>>> >>>> I tried to combine CUDA with superlu_dist in petsc using the >>>> following configure-line: >>>> >>>> /./configure PETSC_ARCH=mpich-complex-linux-gcc-demo >>>> --CC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc >>>> --CXX=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx >>>> --FC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc --CFLAGS="-mavx2 >>>> -march=native -O3" --CXXFLAGS="-mavx2 -march=native -O3" >>>> --FFLAGS="-mavx2 -march=native -O3" >>>> --CUDAFLAGS=-allow-unsupported-compiler --CUDA-CXX=g++ >>>> --prefix=/opt/petsc --with-blaslapack=1 --with-mpi=1 >>>> --with-scalar-type=complex --download-suitesparse=1 --with-cuda >>>> --with-debugging=0 --with-openmp --download-superlu_dist --force/ >>>> >>>> but the configure-step fails with several errors correlated with >>>> CUDA and superlu_dist, the first one being >>>> >>>> /cublas_utils.c:21:37: error: ?CUDART_VERSION? undeclared (first >>>> use in this function); did you mean ?CUDA_VERSION??// >>>> //?? 21 |???? printf("CUDA version:?? v %d\n",CUDART_VERSION);// >>>> //| ^~~~~~~~~~~~~~// >>>> //| CUDA_VERSION/ >>>> >>>> Compiling superlu_dist separately works, though (including CUDA). >>>> >>>> Is there a bug somewhere in the configure-routine? I attached the >>>> full configure-log. >>>> >>>> Thanks! >>>> >>>> Regards, >>>> >>>> Roland >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Dec 20 14:46:12 2021 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 20 Dec 2021 15:46:12 -0500 Subject: [petsc-users] Combining SUPERLU_DIST with CUDA fails In-Reply-To: References: <2B3C240D-CEBB-4B45-A60E-ECD1F092B058@petsc.dev> <3f258b07-a648-df64-f298-769c399d9732@ntnu.no> <2B1B2836-5107-430A-BDB8-E6165EFE6C65@petsc.dev> Message-ID: <9E99DFBE-BB73-481B-BBC5-517222080A15@petsc.dev> Hmm, the fix should now be supplying a -DCMAKE_CUDA_FLAGS to this cmake command (line 48 at https://gitlab.com/petsc/petsc/-/merge_requests/4635/diffs) I do not see that flag being set inside the configure.log so I am guessing you didn't get the complete fix. Executing: /usr/bin/cmake .. 
-DCMAKE_INSTALL_PREFIX=/opt/petsc -DCMAKE_INSTALL_NAME_DIR:STRING="/opt/petsc/lib" -DCMAKE_INSTALL_LIBDIR:STRING="lib" -DCMAKE_VERBOSE_MAKEFILE=1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" -DMPI_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib -DCMAKE_C_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_C_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_C_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" -DMPI_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" -DCMAKE_CXX_FLAGS:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" -DMPI_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" -DCMAKE_Fortran_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_EXE_LINKER_FLAGS:STRING=" -fopenmp -fopenmp" -DBUILD_SHARED_LIBS:BOOL=ON -DTPL_ENABLE_CUDALIB=TRUE -DTPL_CUDA_LIBRARIES="-Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda" -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 -DDEBUGlevel=0 -DPRNTlevel=0" -DUSE_XSDK_DEFAULTS=YES -DTPL_BLAS_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -L/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 -L/usr/lib64/gcc/x86_64-suse-linux/11 -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib -L/opt/intel/oneapi/vpl/2022.0.0/lib -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 
-Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s -lquadmath" -DTPL_LAPACK_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -L/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 -L/usr/lib64/gcc/x86_64-suse-linux/11 -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib -L/opt/intel/oneapi/vpl/2022.0.0/lib -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s -lquadmath" -Denable_parmetislib=FALSE -DTPL_ENABLE_PARMETISLIB=FALSE -DXSDK_ENABLE_Fortran=ON -Denable_tests=0 -Denable_examples=0 -DMPI_C_COMPILE_FLAGS:STRING="" -DMPI_C_INCLUDE_PATH:STRING="" -DMPI_C_HEADER_DIR:STRING="" -DMPI_C_LIBRARIES:STRING="" > On Dec 20, 2021, at 2:59 PM, Roland Richter wrote: > > I introduced the changes from that patch directly, without checking out. Is that insufficient? > > Regards, > > Roland > > Am 20.12.2021 um 20:38 schrieb Barry Smith: >> >> Are you sure you have the correct PETSc branch? From configure.log it has >> >> Defined "VERSION_GIT" to ""v3.16.2-466-g959e1fce86"" >> Defined "VERSION_DATE_GIT" to ""2021-12-18 11:17:24 -0600"" >> Defined "VERSION_BRANCH_GIT" to ""master"" >> >> It should have balay/slu-without-omp-3 for the branch. >> >> >> >>> On Dec 20, 2021, at 10:50 AM, Roland Richter > wrote: >>> >>> In that case it fails with >>> >>> ~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/cublas_utils.h:22:10: fatal error: cublas_v2.h: No such file or directory >>> >>> even though this header is available. I assume some header paths are not set correctly? 
>>> >>> Thanks, >>> >>> regards, >>> >>> Roland >>> >>> Am 20.12.21 um 16:29 schrieb Barry Smith: >>>> >>>> Please try the branch balay/slu-without-omp-3 It is in MR https://gitlab.com/petsc/petsc/-/merge_requests/4635 >>>> >>>> >>>> >>>>> On Dec 20, 2021, at 8:14 AM, Roland Richter > wrote: >>>>> >>>>> Hei, >>>>> >>>>> I tried to combine CUDA with superlu_dist in petsc using the following configure-line: >>>>> >>>>> ./configure PETSC_ARCH=mpich-complex-linux-gcc-demo --CC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc --CXX=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx --FC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc --CFLAGS="-mavx2 -march=native -O3" --CXXFLAGS="-mavx2 -march=native -O3" --FFLAGS="-mavx2 -march=native -O3" --CUDAFLAGS=-allow-unsupported-compiler --CUDA-CXX=g++ --prefix=/opt/petsc --with-blaslapack=1 --with-mpi=1 --with-scalar-type=complex --download-suitesparse=1 --with-cuda --with-debugging=0 --with-openmp --download-superlu_dist --force >>>>> >>>>> but the configure-step fails with several errors correlated with CUDA and superlu_dist, the first one being >>>>> >>>>> cublas_utils.c:21:37: error: ?CUDART_VERSION? undeclared (first use in this function); did you mean ?CUDA_VERSION?? >>>>> 21 | printf("CUDA version: v %d\n",CUDART_VERSION); >>>>> | ^~~~~~~~~~~~~~ >>>>> | CUDA_VERSION >>>>> >>>>> Compiling superlu_dist separately works, though (including CUDA). >>>>> >>>>> Is there a bug somewhere in the configure-routine? I attached the full configure-log. >>>>> >>>>> Thanks! >>>>> >>>>> Regards, >>>>> >>>>> Roland >>>>> >>>>> >>>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From roland.richter at ntnu.no Mon Dec 20 15:46:17 2021 From: roland.richter at ntnu.no (Roland Richter) Date: Mon, 20 Dec 2021 22:46:17 +0100 Subject: [petsc-users] Combining SUPERLU_DIST with CUDA fails In-Reply-To: <9E99DFBE-BB73-481B-BBC5-517222080A15@petsc.dev> References: <2B3C240D-CEBB-4B45-A60E-ECD1F092B058@petsc.dev> <3f258b07-a648-df64-f298-769c399d9732@ntnu.no> <2B1B2836-5107-430A-BDB8-E6165EFE6C65@petsc.dev> <9E99DFBE-BB73-481B-BBC5-517222080A15@petsc.dev> Message-ID: <99ba9ea7-5f70-34a9-58fe-27180096c614@ntnu.no> Yes, just checked, I only included the changes above the comment... Will test tomorrow, thanks for the help! Regards, Roland Am 20.12.2021 um 21:46 schrieb Barry Smith: > ? Hmm, the fix should now be supplying a -DCMAKE_CUDA_FLAGS to this > cmake command (line 48 at > https://gitlab.com/petsc/petsc/-/merge_requests/4635/diffs) I do not > see that flag being set inside the configure.log so I am guessing you > didn't get the complete fix. > > > Executing: /usr/bin/cmake .. 
-DCMAKE_INSTALL_PREFIX=/opt/petsc > -DCMAKE_INSTALL_NAME_DIR:STRING="/opt/petsc/lib" > -DCMAKE_INSTALL_LIBDIR:STRING="lib" -DCMAKE_VERBOSE_MAKEFILE=1 > -DCMAKE_BUILD_TYPE=Release > -DCMAKE_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" > -DMPI_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" > -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib > -DCMAKE_C_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" > -DCMAKE_C_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" > -DCMAKE_C_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC > -fopenmp" > -DCMAKE_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" > -DMPI_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" > -DCMAKE_CXX_FLAGS:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC > -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-mavx2 > -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" > -DCMAKE_CXX_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fopenmp > -fPIC -std=gnu++17 -fopenmp" > -DCMAKE_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" > -DMPI_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" > -DCMAKE_Fortran_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp > -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-mavx2 > -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" > -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC > -fopenmp -fallow-argument-mismatch" -DCMAKE_EXE_LINKER_FLAGS:STRING=" > -fopenmp -fopenmp" -DBUILD_SHARED_LIBS:BOOL=ON > -DTPL_ENABLE_CUDALIB=TRUE > -DTPL_CUDA_LIBRARIES="-Wl,-rpath,/usr/local/cuda-11.5/lib64 > -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse > -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda" > -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 > -DDEBUGlevel=0 -DPRNTlevel=0" -DUSE_XSDK_DEFAULTS=YES > -DTPL_BLAS_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 > -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse > -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm > -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release > -L/opt/intel/oneapi/mpi/2021.5.0/lib/release > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib > -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread > -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 > -L/usr/lib64/gcc/x86_64-suse-linux/11 > -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib > -L/opt/intel/oneapi/vpl/2022.0.0/lib > -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 > -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib > -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib > -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 > -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 > -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 > -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib > -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib > -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 > -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin > -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin > -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib > -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib > 
-Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 > -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp > -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp > -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s > -lquadmath" -DTPL_LAPACK_LIBRARIES="-lopenblas > -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 > -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand > -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release > -L/opt/intel/oneapi/mpi/2021.5.0/lib/release > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib > -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread > -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 > -L/usr/lib64/gcc/x86_64-suse-linux/11 > -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib > -L/opt/intel/oneapi/vpl/2022.0.0/lib > -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 > -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib > -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib > -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 > -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 > -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 > -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib > -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib > -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 > -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin > -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin > -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib > -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib > -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 > -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp > -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp > -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s > -lquadmath" -Denable_parmetislib=FALSE -DTPL_ENABLE_PARMETISLIB=FALSE > -DXSDK_ENABLE_Fortran=ON -Denable_tests=0 -Denable_examples=0 > -DMPI_C_COMPILE_FLAGS:STRING="" -DMPI_C_INCLUDE_PATH:STRING="" > -DMPI_C_HEADER_DIR:STRING="" -DMPI_C_LIBRARIES:STRING="" > > > > >> On Dec 20, 2021, at 2:59 PM, Roland Richter >> wrote: >> >> I introduced the changes from that patch directly, without checking >> out. Is that insufficient? >> >> Regards, >> >> Roland >> >> Am 20.12.2021 um 20:38 schrieb Barry Smith: >>> >>> ? Are you sure you have the correct PETSc branch? From configure.log >>> it has >>> >>> ? ? ? ? ? ? Defined "VERSION_GIT" to ""v3.16.2-466-g959e1fce86"" >>> ? ? ? ? ? ? Defined "VERSION_DATE_GIT" to ""2021-12-18 11:17:24 -0600"" >>> ? ? ? ? ? ? Defined "VERSION_BRANCH_GIT" to ""master"" >>> >>> It should have balay/slu-without-omp-3 for the branch. 
>>> >>> >>> >>>> On Dec 20, 2021, at 10:50 AM, Roland Richter >>>> wrote: >>>> >>>> In that case it fails with >>>> >>>> /~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/cublas_utils.h:22:10: >>>> fatal error: cublas_v2.h: No such file or directory/ >>>> >>>> even though this header is available. I assume some header paths >>>> are not set correctly? >>>> >>>> Thanks, >>>> >>>> regards, >>>> >>>> Roland >>>> >>>> Am 20.12.21 um 16:29 schrieb Barry Smith: >>>>> >>>>> ? Please try the branch?balay/slu-without-omp-3 ?It is in MR >>>>> https://gitlab.com/petsc/petsc/-/merge_requests/4635 >>>>> >>>>> >>>>> >>>>>> On Dec 20, 2021, at 8:14 AM, Roland Richter >>>>>> wrote: >>>>>> >>>>>> Hei, >>>>>> >>>>>> I tried to combine CUDA with superlu_dist in petsc using the >>>>>> following configure-line: >>>>>> >>>>>> /./configure PETSC_ARCH=mpich-complex-linux-gcc-demo >>>>>> --CC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc >>>>>> --CXX=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx >>>>>> --FC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc --CFLAGS="-mavx2 >>>>>> -march=native -O3" --CXXFLAGS="-mavx2 -march=native -O3" >>>>>> --FFLAGS="-mavx2 -march=native -O3" >>>>>> --CUDAFLAGS=-allow-unsupported-compiler --CUDA-CXX=g++ >>>>>> --prefix=/opt/petsc --with-blaslapack=1 --with-mpi=1 >>>>>> --with-scalar-type=complex --download-suitesparse=1 --with-cuda >>>>>> --with-debugging=0 --with-openmp --download-superlu_dist --force/ >>>>>> >>>>>> but the configure-step fails with several errors correlated with >>>>>> CUDA and superlu_dist, the first one being >>>>>> >>>>>> /cublas_utils.c:21:37: error: ?CUDART_VERSION? undeclared (first >>>>>> use in this function); did you mean ?CUDA_VERSION??// >>>>>> //?? 21 | printf("CUDA version:?? v %d\n",CUDART_VERSION);// >>>>>> //| ^~~~~~~~~~~~~~// >>>>>> //| CUDA_VERSION/ >>>>>> >>>>>> Compiling superlu_dist separately works, though (including CUDA). >>>>>> >>>>>> Is there a bug somewhere in the configure-routine? I attached the >>>>>> full configure-log. >>>>>> >>>>>> Thanks! >>>>>> >>>>>> Regards, >>>>>> >>>>>> Roland >>>>>> >>>>>> >>>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roland.richter at ntnu.no Tue Dec 21 01:24:34 2021 From: roland.richter at ntnu.no (Roland Richter) Date: Tue, 21 Dec 2021 08:24:34 +0100 Subject: [petsc-users] Combining SUPERLU_DIST with CUDA fails In-Reply-To: <99ba9ea7-5f70-34a9-58fe-27180096c614@ntnu.no> References: <2B3C240D-CEBB-4B45-A60E-ECD1F092B058@petsc.dev> <3f258b07-a648-df64-f298-769c399d9732@ntnu.no> <2B1B2836-5107-430A-BDB8-E6165EFE6C65@petsc.dev> <9E99DFBE-BB73-481B-BBC5-517222080A15@petsc.dev> <99ba9ea7-5f70-34a9-58fe-27180096c614@ntnu.no> Message-ID: I added/replaced all six lines which are in that commit, getting Executing: /usr/bin/cmake .. 
-DCMAKE_INSTALL_PREFIX=/opt/petsc -DCMAKE_INSTALL_NAME_DIR:STRING="/opt/petsc/lib" -DCMAKE_INSTALL_LIBDIR:STRING="lib" -DCMAKE_VERBOSE_MAKEFILE=1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" -DMPI_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib -DCMAKE_C_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_C_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_C_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" -DMPI_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" -DCMAKE_CXX_FLAGS:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" -DMPI_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" -DCMAKE_Fortran_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_EXE_LINKER_FLAGS:STRING=" -fopenmp -fopenmp" -DBUILD_SHARED_LIBS:BOOL=ON -DTPL_ENABLE_CUDALIB=TRUE -DTPL_CUDA_LIBRARIES="-Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda" -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 -DDEBUGlevel=0 -DPRNTlevel=0" -DCMAKE_CUDA_FLAGS="-allow-unsupported-compiler -Xcompiler -fPIC -O3 -ccbin /opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx -std=c++17 -gencode arch=compute_61,code=sm_61? 
-Wno-deprecated-gpu-targets -I"/opt/intel/oneapi/mpi/2021.5.0/include" " -DUSE_XSDK_DEFAULTS=YES -DTPL_BLAS_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -L/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 -L/usr/lib64/gcc/x86_64-suse-linux/11 -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib -L/opt/intel/oneapi/vpl/2022.0.0/lib -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s -lquadmath" -DTPL_LAPACK_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -L/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 -L/usr/lib64/gcc/x86_64-suse-linux/11 -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib -L/opt/intel/oneapi/vpl/2022.0.0/lib -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin 
-L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s -lquadmath" -Denable_parmetislib=FALSE -DTPL_ENABLE_PARMETISLIB=FALSE -DXSDK_ENABLE_Fortran=ON -Denable_tests=0 -Denable_examples=0 -DMPI_C_COMPILE_FLAGS:STRING="" -DMPI_C_INCLUDE_PATH:STRING="" -DMPI_C_HEADER_DIR:STRING="" -DMPI_C_LIBRARIES:STRING="" but the compilation still fails with the same error. Am 20.12.21 um 22:46 schrieb Roland Richter: > > Yes, just checked, I only included the changes above the comment... > > Will test tomorrow, thanks for the help! > > Regards, > > Roland > > Am 20.12.2021 um 21:46 schrieb Barry Smith: >> ? Hmm, the fix should now be supplying a -DCMAKE_CUDA_FLAGS to this >> cmake command (line 48 >> at?https://gitlab.com/petsc/petsc/-/merge_requests/4635/diffs) I do >> not see that flag being set inside the configure.log so I am guessing >> you didn't get the complete fix. >> >> >> Executing: /usr/bin/cmake .. -DCMAKE_INSTALL_PREFIX=/opt/petsc >> -DCMAKE_INSTALL_NAME_DIR:STRING="/opt/petsc/lib" >> -DCMAKE_INSTALL_LIBDIR:STRING="lib" -DCMAKE_VERBOSE_MAKEFILE=1 >> -DCMAKE_BUILD_TYPE=Release >> -DCMAKE_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" >> -DMPI_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" >> -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib >> -DCMAKE_C_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" >> -DCMAKE_C_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC >> -fopenmp" -DCMAKE_C_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 >> -fPIC -fopenmp" >> -DCMAKE_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" >> -DMPI_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" >> -DCMAKE_CXX_FLAGS:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC >> -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-mavx2 >> -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" >> -DCMAKE_CXX_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fopenmp >> -fPIC -std=gnu++17 -fopenmp" >> -DCMAKE_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" >> -DMPI_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" >> -DCMAKE_Fortran_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp >> -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-mavx2 >> -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" >> -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC >> -fopenmp -fallow-argument-mismatch" -DCMAKE_EXE_LINKER_FLAGS:STRING=" >> -fopenmp -fopenmp" -DBUILD_SHARED_LIBS:BOOL=ON >> -DTPL_ENABLE_CUDALIB=TRUE >> -DTPL_CUDA_LIBRARIES="-Wl,-rpath,/usr/local/cuda-11.5/lib64 >> -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse >> -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda" >> -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 >> -DDEBUGlevel=0 -DPRNTlevel=0" -DUSE_XSDK_DEFAULTS=YES >> -DTPL_BLAS_LIBRARIES="-lopenblas >> -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 >> -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand >> -L/usr/local/cuda-11.5/lib64/stubs -lcuda 
-lm -lstdc++ -ldl >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release >> -L/opt/intel/oneapi/mpi/2021.5.0/lib/release >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib >> -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread >> -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 >> -L/usr/lib64/gcc/x86_64-suse-linux/11 >> -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib >> -L/opt/intel/oneapi/vpl/2022.0.0/lib >> -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 >> -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib >> -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib >> -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 >> -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 >> -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 >> -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib >> -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib >> -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 >> -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin >> -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin >> -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib >> -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib >> -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 >> -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp >> -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp >> -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s >> -lquadmath" -DTPL_LAPACK_LIBRARIES="-lopenblas >> -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 >> -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand >> -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release >> -L/opt/intel/oneapi/mpi/2021.5.0/lib/release >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib >> -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread >> -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 >> -L/usr/lib64/gcc/x86_64-suse-linux/11 >> -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib >> -L/opt/intel/oneapi/vpl/2022.0.0/lib >> -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 >> -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib >> -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib >> -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 >> -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 >> -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 >> -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib >> -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib >> -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 >> -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin >> -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin >> 
-Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib >> -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib >> -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 >> -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp >> -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp >> -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s >> -lquadmath" -Denable_parmetislib=FALSE -DTPL_ENABLE_PARMETISLIB=FALSE >> -DXSDK_ENABLE_Fortran=ON -Denable_tests=0 -Denable_examples=0 >> -DMPI_C_COMPILE_FLAGS:STRING="" -DMPI_C_INCLUDE_PATH:STRING="" >> -DMPI_C_HEADER_DIR:STRING="" -DMPI_C_LIBRARIES:STRING="" >> >> >> >> >>> On Dec 20, 2021, at 2:59 PM, Roland Richter >>> wrote: >>> >>> I introduced the changes from that patch directly, without checking >>> out. Is that insufficient? >>> >>> Regards, >>> >>> Roland >>> >>> Am 20.12.2021 um 20:38 schrieb Barry Smith: >>>> >>>> ? Are you sure you have the correct PETSc branch? From >>>> configure.log it has >>>> >>>> ? ? ? ? ? ? Defined "VERSION_GIT" to ""v3.16.2-466-g959e1fce86"" >>>> ? ? ? ? ? ? Defined "VERSION_DATE_GIT" to ""2021-12-18 11:17:24 -0600"" >>>> ? ? ? ? ? ? Defined "VERSION_BRANCH_GIT" to ""master"" >>>> >>>> It should have balay/slu-without-omp-3 for the branch. >>>> >>>> >>>> >>>>> On Dec 20, 2021, at 10:50 AM, Roland Richter >>>>> wrote: >>>>> >>>>> In that case it fails with >>>>> >>>>> /~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/cublas_utils.h:22:10: >>>>> fatal error: cublas_v2.h: No such file or directory/ >>>>> >>>>> even though this header is available. I assume some header paths >>>>> are not set correctly? >>>>> >>>>> Thanks, >>>>> >>>>> regards, >>>>> >>>>> Roland >>>>> >>>>> Am 20.12.21 um 16:29 schrieb Barry Smith: >>>>>> >>>>>> ? Please try the branch?balay/slu-without-omp-3 ?It is in >>>>>> MR?https://gitlab.com/petsc/petsc/-/merge_requests/4635 >>>>>> >>>>>> >>>>>> >>>>>>> On Dec 20, 2021, at 8:14 AM, Roland Richter >>>>>>> wrote: >>>>>>> >>>>>>> Hei, >>>>>>> >>>>>>> I tried to combine CUDA with superlu_dist in petsc using the >>>>>>> following configure-line: >>>>>>> >>>>>>> /./configure PETSC_ARCH=mpich-complex-linux-gcc-demo >>>>>>> --CC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc >>>>>>> --CXX=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx >>>>>>> --FC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc --CFLAGS="-mavx2 >>>>>>> -march=native -O3" --CXXFLAGS="-mavx2 -march=native -O3" >>>>>>> --FFLAGS="-mavx2 -march=native -O3" >>>>>>> --CUDAFLAGS=-allow-unsupported-compiler --CUDA-CXX=g++ >>>>>>> --prefix=/opt/petsc --with-blaslapack=1 --with-mpi=1 >>>>>>> --with-scalar-type=complex --download-suitesparse=1 --with-cuda >>>>>>> --with-debugging=0 --with-openmp --download-superlu_dist --force/ >>>>>>> >>>>>>> but the configure-step fails with several errors correlated with >>>>>>> CUDA and superlu_dist, the first one being >>>>>>> >>>>>>> /cublas_utils.c:21:37: error: ?CUDART_VERSION? undeclared (first >>>>>>> use in this function); did you mean ?CUDA_VERSION??// >>>>>>> //?? 21 |???? printf("CUDA version:?? v %d\n",CUDART_VERSION);// >>>>>>> //????? |???????????????????????????????????? ^~~~~~~~~~~~~~// >>>>>>> //????? |???????????????????????????????????? CUDA_VERSION/ >>>>>>> >>>>>>> Compiling superlu_dist separately works, though (including CUDA). 
>>>>>>> >>>>>>> Is there a bug somewhere in the configure-routine? I attached >>>>>>> the full configure-log. >>>>>>> >>>>>>> Thanks! >>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> Roland >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 1352268 bytes Desc: not available URL: From bsmith at petsc.dev Tue Dec 21 12:04:24 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 21 Dec 2021 13:04:24 -0500 Subject: [petsc-users] Combining SUPERLU_DIST with CUDA fails In-Reply-To: References: <2B3C240D-CEBB-4B45-A60E-ECD1F092B058@petsc.dev> <3f258b07-a648-df64-f298-769c399d9732@ntnu.no> <2B1B2836-5107-430A-BDB8-E6165EFE6C65@petsc.dev> <9E99DFBE-BB73-481B-BBC5-517222080A15@petsc.dev> <99ba9ea7-5f70-34a9-58fe-27180096c614@ntnu.no> Message-ID: I think the problem comes from the following: > -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 -DDEBUGlevel=0 -DPRNTlevel=0" > which is ok, the location of the CUDA include files is passed in correctly but when the compile is done inside SuperLU_DIST's make cd ~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/petsc-build/SRC && /usr/local/cuda-11.5/bin/nvcc -forward-unknown-to-host-compiler -DSUPERLU_DIST_EXPORTS -Dsuperlu_dist_EXPORTS -I~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/petsc-build/SRC -isystem=/usr/local/cuda-11.5/include -DUSE_VENDOR_BLAS -allow-unsupported-compiler -Xcompiler -fPIC -O3 -ccbin /opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx -std=c++17 -gencode arch=compute_61,code=sm_61 -Wno-deprecated-gpu-targets -I/opt/intel/oneapi/mpi/2021.5.0/include -O3 -DNDEBUG --generate-code=arch=compute_61,code=[compute_61,sm_61] -Xcompiler=-fPIC -std=c++11 -MD -MT SRC/CMakeFiles/superlu_dist.dir/superlu_gpu_utils.cu.o -MF CMakeFiles/superlu_dist.dir/superlu_gpu_utils.cu.o.d -x cu -dc ~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/superlu_gpu_utils.cu -o CMakeFiles/superlu_dist.dir/superlu_gpu_utils.cu.o The -I/usr/local/cuda-11.5/include has been replaced with -isystem=/usr/local/cuda-11.5/include and I suspect the nvcc doesn't understand it so it ignores it. I assume somewhere inside the CMAKE processing it is replacing the -I with the -isystem= I don't know how one could fix this. I suggest you try installing SuperLU_DIST yourself directly. But I suspect the same problem will appear. It is possible that if you do not use the Intel compilers but use for example the GNU compilers the problem may not occur because cmake may not make the transformation. Barry Googling cmake isystem produces many messages about issues related to isystem and cmake but it would take a life-time to understand them. > On Dec 21, 2021, at 2:24 AM, Roland Richter wrote: > > I added/replaced all six lines which are in that commit, getting > > Executing: /usr/bin/cmake .. 
-DCMAKE_INSTALL_PREFIX=/opt/petsc -DCMAKE_INSTALL_NAME_DIR:STRING="/opt/petsc/lib" -DCMAKE_INSTALL_LIBDIR:STRING="lib" -DCMAKE_VERBOSE_MAKEFILE=1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" -DMPI_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib -DCMAKE_C_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_C_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_C_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" -DMPI_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" -DCMAKE_CXX_FLAGS:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" -DMPI_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" -DCMAKE_Fortran_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_EXE_LINKER_FLAGS:STRING=" -fopenmp -fopenmp" -DBUILD_SHARED_LIBS:BOOL=ON -DTPL_ENABLE_CUDALIB=TRUE -DTPL_CUDA_LIBRARIES="-Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda" -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 -DDEBUGlevel=0 -DPRNTlevel=0" -DCMAKE_CUDA_FLAGS="-allow-unsupported-compiler -Xcompiler -fPIC -O3 -ccbin /opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx -std=c++17 -gencode arch=compute_61,code=sm_61 -Wno-deprecated-gpu-targets -I"/opt/intel/oneapi/mpi/2021.5.0/include" " -DUSE_XSDK_DEFAULTS=YES -DTPL_BLAS_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -L/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 -L/usr/lib64/gcc/x86_64-suse-linux/11 -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib -L/opt/intel/oneapi/vpl/2022.0.0/lib -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin 
-L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s -lquadmath" -DTPL_LAPACK_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -L/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 -L/usr/lib64/gcc/x86_64-suse-linux/11 -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib -L/opt/intel/oneapi/vpl/2022.0.0/lib -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s -lquadmath" -Denable_parmetislib=FALSE -DTPL_ENABLE_PARMETISLIB=FALSE -DXSDK_ENABLE_Fortran=ON -Denable_tests=0 -Denable_examples=0 -DMPI_C_COMPILE_FLAGS:STRING="" -DMPI_C_INCLUDE_PATH:STRING="" -DMPI_C_HEADER_DIR:STRING="" -DMPI_C_LIBRARIES:STRING="" > > but the compilation still fails with the same error. > > Am 20.12.21 um 22:46 schrieb Roland Richter: >> Yes, just checked, I only included the changes above the comment... >> >> Will test tomorrow, thanks for the help! >> >> Regards, >> >> Roland >> >> Am 20.12.2021 um 21:46 schrieb Barry Smith: >>> Hmm, the fix should now be supplying a -DCMAKE_CUDA_FLAGS to this cmake command (line 48 at https://gitlab.com/petsc/petsc/-/merge_requests/4635/diffs ) I do not see that flag being set inside the configure.log so I am guessing you didn't get the complete fix. >>> >>> >>> Executing: /usr/bin/cmake .. 
-DCMAKE_INSTALL_PREFIX=/opt/petsc -DCMAKE_INSTALL_NAME_DIR:STRING="/opt/petsc/lib" -DCMAKE_INSTALL_LIBDIR:STRING="lib" -DCMAKE_VERBOSE_MAKEFILE=1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" -DMPI_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib -DCMAKE_C_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_C_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_C_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" -DMPI_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" -DCMAKE_CXX_FLAGS:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" -DMPI_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" -DCMAKE_Fortran_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_EXE_LINKER_FLAGS:STRING=" -fopenmp -fopenmp" -DBUILD_SHARED_LIBS:BOOL=ON -DTPL_ENABLE_CUDALIB=TRUE -DTPL_CUDA_LIBRARIES="-Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda" -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 -DDEBUGlevel=0 -DPRNTlevel=0" -DUSE_XSDK_DEFAULTS=YES -DTPL_BLAS_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -L/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 -L/usr/lib64/gcc/x86_64-suse-linux/11 -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib -L/opt/intel/oneapi/vpl/2022.0.0/lib -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 
-Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s -lquadmath" -DTPL_LAPACK_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -L/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 -L/usr/lib64/gcc/x86_64-suse-linux/11 -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib -L/opt/intel/oneapi/vpl/2022.0.0/lib -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s -lquadmath" -Denable_parmetislib=FALSE -DTPL_ENABLE_PARMETISLIB=FALSE -DXSDK_ENABLE_Fortran=ON -Denable_tests=0 -Denable_examples=0 -DMPI_C_COMPILE_FLAGS:STRING="" -DMPI_C_INCLUDE_PATH:STRING="" -DMPI_C_HEADER_DIR:STRING="" -DMPI_C_LIBRARIES:STRING="" >>> >>> >>> >>> >>>> On Dec 20, 2021, at 2:59 PM, Roland Richter > wrote: >>>> >>>> I introduced the changes from that patch directly, without checking out. Is that insufficient? >>>> >>>> Regards, >>>> >>>> Roland >>>> >>>> Am 20.12.2021 um 20:38 schrieb Barry Smith: >>>>> >>>>> Are you sure you have the correct PETSc branch? From configure.log it has >>>>> >>>>> Defined "VERSION_GIT" to ""v3.16.2-466-g959e1fce86"" >>>>> Defined "VERSION_DATE_GIT" to ""2021-12-18 11:17:24 -0600"" >>>>> Defined "VERSION_BRANCH_GIT" to ""master"" >>>>> >>>>> It should have balay/slu-without-omp-3 for the branch. >>>>> >>>>> >>>>> >>>>>> On Dec 20, 2021, at 10:50 AM, Roland Richter > wrote: >>>>>> >>>>>> In that case it fails with >>>>>> >>>>>> ~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/cublas_utils.h:22:10: fatal error: cublas_v2.h: No such file or directory >>>>>> >>>>>> even though this header is available. 
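[One way to probe the -I to -isystem= rewrite suspected above in isolation, outside the SuperLU_DIST build: compile a one-line file against a throwaway include directory, once with the plain -I form and once with the -isystem= form that appears in the generated nvcc line. Everything below is a sketch with made-up names (/tmp/isystest, magic.h); the real compile line also carries -forward-unknown-to-host-compiler and other flags, so this only tests the flag handling itself.

    mkdir -p /tmp/isystest/inc
    echo '#define MAGIC 42' > /tmp/isystest/inc/magic.h
    printf '#include <magic.h>\nint main(void){ return MAGIC; }\n' > /tmp/isystest/chk.cu
    # plain -I, the form passed in CUDA_ARCH_FLAGS
    /usr/local/cuda-11.5/bin/nvcc -I/tmp/isystest/inc -c /tmp/isystest/chk.cu -o /tmp/isystest/chk_I.o
    # the rewritten form that shows up in the generated compile line
    /usr/local/cuda-11.5/bin/nvcc -isystem=/tmp/isystest/inc -c /tmp/isystest/chk.cu -o /tmp/isystest/chk_isys.o
    # if only the second compile fails (unknown option, or magic.h not found),
    # the -I -> -isystem= rewrite is what breaks the include path
]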
I assume some header paths are not set correctly? >>>>>> >>>>>> Thanks, >>>>>> >>>>>> regards, >>>>>> >>>>>> Roland >>>>>> >>>>>> Am 20.12.21 um 16:29 schrieb Barry Smith: >>>>>>> >>>>>>> Please try the branch balay/slu-without-omp-3 It is in MR https://gitlab.com/petsc/petsc/-/merge_requests/4635 >>>>>>> >>>>>>> >>>>>>> >>>>>>>> On Dec 20, 2021, at 8:14 AM, Roland Richter > wrote: >>>>>>>> >>>>>>>> Hei, >>>>>>>> >>>>>>>> I tried to combine CUDA with superlu_dist in petsc using the following configure-line: >>>>>>>> >>>>>>>> ./configure PETSC_ARCH=mpich-complex-linux-gcc-demo --CC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc --CXX=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx --FC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc --CFLAGS="-mavx2 -march=native -O3" --CXXFLAGS="-mavx2 -march=native -O3" --FFLAGS="-mavx2 -march=native -O3" --CUDAFLAGS=-allow-unsupported-compiler --CUDA-CXX=g++ --prefix=/opt/petsc --with-blaslapack=1 --with-mpi=1 --with-scalar-type=complex --download-suitesparse=1 --with-cuda --with-debugging=0 --with-openmp --download-superlu_dist --force >>>>>>>> >>>>>>>> but the configure-step fails with several errors correlated with CUDA and superlu_dist, the first one being >>>>>>>> >>>>>>>> cublas_utils.c:21:37: error: ?CUDART_VERSION? undeclared (first use in this function); did you mean ?CUDA_VERSION?? >>>>>>>> 21 | printf("CUDA version: v %d\n",CUDART_VERSION); >>>>>>>> | ^~~~~~~~~~~~~~ >>>>>>>> | CUDA_VERSION >>>>>>>> >>>>>>>> Compiling superlu_dist separately works, though (including CUDA). >>>>>>>> >>>>>>>> Is there a bug somewhere in the configure-routine? I attached the full configure-log. >>>>>>>> >>>>>>>> Thanks! >>>>>>>> >>>>>>>> Regards, >>>>>>>> >>>>>>>> Roland >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roland.richter at ntnu.no Tue Dec 21 12:26:07 2021 From: roland.richter at ntnu.no (Roland Richter) Date: Tue, 21 Dec 2021 19:26:07 +0100 Subject: [petsc-users] Combining SUPERLU_DIST with CUDA fails In-Reply-To: References: <2B3C240D-CEBB-4B45-A60E-ECD1F092B058@petsc.dev> <3f258b07-a648-df64-f298-769c399d9732@ntnu.no> <2B1B2836-5107-430A-BDB8-E6165EFE6C65@petsc.dev> <9E99DFBE-BB73-481B-BBC5-517222080A15@petsc.dev> <99ba9ea7-5f70-34a9-58fe-27180096c614@ntnu.no> Message-ID: <1e6a42b7-683a-9434-2d9a-4c042d3f8c30@ntnu.no> I'm already using GCC, together with Intel MPI, thus that should not be the problem. But I'll take a look if there are ways to fix that problem, or to circumvent it without having to install superlu_dist separately. Thanks! Regards, Roland Am 21.12.2021 um 19:04 schrieb Barry Smith: > > ? 
I think the problem comes from the following: > >> -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 >> -DDEBUGlevel=0 -DPRNTlevel=0" >> > which is ok, the location of the CUDA include files is passed in > correctly but when the compile is done inside SuperLU_DIST's make > > cd > ~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/petsc-build/SRC > && /usr/local/cuda-11.5/bin/nvcc -forward-unknown-to-host-compiler > -DSUPERLU_DIST_EXPORTS -Dsuperlu_dist_EXPORTS > -I~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/petsc-build/SRC > -isystem=/usr/local/cuda-11.5/include -DUSE_VENDOR_BLAS > -allow-unsupported-compiler -Xcompiler -fPIC -O3 -ccbin > /opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx -std=c++17 -gencode > arch=compute_61,code=sm_61 ?-Wno-deprecated-gpu-targets > -I/opt/intel/oneapi/mpi/2021.5.0/include -O3 -DNDEBUG > --generate-code=arch=compute_61,code=[compute_61,sm_61] > -Xcompiler=-fPIC -std=c++11 -MD -MT > SRC/CMakeFiles/superlu_dist.dir/superlu_gpu_utils.cu.o -MF > CMakeFiles/superlu_dist.dir/superlu_gpu_utils.cu.o.d -x cu -dc > ~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/superlu_gpu_utils.cu > -o CMakeFiles/superlu_dist.dir/superlu_gpu_utils.cu.o > > The > > ?-I/usr/local/cuda-11.5/include > > has been replaced with > > -isystem=/usr/local/cuda-11.5/include > > and I suspect the nvcc doesn't understand it so it ignores it. I > assume somewhere inside the CMAKE processing it is replacing the -I > with the -isystem= > > I don't know how one could fix this. ?I suggest you try installing > SuperLU_DIST yourself directly. But I suspect the same problem will > appear. > > It is possible that if you do not use the Intel compilers but use for > example the GNU compilers the problem may not occur because cmake may > not make the transformation. > > Barry > > Googling cmake isystem produces many messages about issues related to > isystem and cmake but it would take a life-time to understand them. > > >> On Dec 21, 2021, at 2:24 AM, Roland Richter >> wrote: >> >> I added/replaced all six lines which are in that commit, getting >> >> Executing: /usr/bin/cmake .. 
-DCMAKE_INSTALL_PREFIX=/opt/petsc >> -DCMAKE_INSTALL_NAME_DIR:STRING="/opt/petsc/lib" >> -DCMAKE_INSTALL_LIBDIR:STRING="lib" -DCMAKE_VERBOSE_MAKEFILE=1 >> -DCMAKE_BUILD_TYPE=Release >> -DCMAKE_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" >> -DMPI_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" >> -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib >> -DCMAKE_C_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" >> -DCMAKE_C_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC >> -fopenmp" -DCMAKE_C_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 >> -fPIC -fopenmp" >> -DCMAKE_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" >> -DMPI_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" >> -DCMAKE_CXX_FLAGS:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC >> -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-mavx2 >> -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" >> -DCMAKE_CXX_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fopenmp >> -fPIC -std=gnu++17 -fopenmp" >> -DCMAKE_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" >> -DMPI_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" >> -DCMAKE_Fortran_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp >> -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-mavx2 >> -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" >> -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC >> -fopenmp -fallow-argument-mismatch" -DCMAKE_EXE_LINKER_FLAGS:STRING=" >> -fopenmp -fopenmp" -DBUILD_SHARED_LIBS:BOOL=ON >> -DTPL_ENABLE_CUDALIB=TRUE >> -DTPL_CUDA_LIBRARIES="-Wl,-rpath,/usr/local/cuda-11.5/lib64 >> -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse >> -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda" >> -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 >> -DDEBUGlevel=0 -DPRNTlevel=0" >> -DCMAKE_CUDA_FLAGS="-allow-unsupported-compiler -Xcompiler -fPIC -O3 >> -ccbin /opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx -std=c++17 -gencode >> arch=compute_61,code=sm_61 -Wno-deprecated-gpu-targets >> -I"/opt/intel/oneapi/mpi/2021.5.0/include" " -DUSE_XSDK_DEFAULTS=YES >> -DTPL_BLAS_LIBRARIES="-lopenblas >> -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 >> -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand >> -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release >> -L/opt/intel/oneapi/mpi/2021.5.0/lib/release >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib >> -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread >> -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 >> -L/usr/lib64/gcc/x86_64-suse-linux/11 >> -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib >> -L/opt/intel/oneapi/vpl/2022.0.0/lib >> -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 >> -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib >> -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib >> -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 >> -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 >> -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 >> -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib >> -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib >> -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 >> 
-L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin >> -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin >> -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib >> -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib >> -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 >> -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp >> -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp >> -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s >> -lquadmath" -DTPL_LAPACK_LIBRARIES="-lopenblas >> -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 >> -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand >> -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release >> -L/opt/intel/oneapi/mpi/2021.5.0/lib/release >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib >> -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread >> -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 >> -L/usr/lib64/gcc/x86_64-suse-linux/11 >> -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib >> -L/opt/intel/oneapi/vpl/2022.0.0/lib >> -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 >> -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib >> -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib >> -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 >> -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 >> -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 >> -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib >> -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib >> -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 >> -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin >> -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin >> -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib >> -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib >> -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 >> -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 >> -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp >> -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp >> -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release >> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s >> -lquadmath" -Denable_parmetislib=FALSE -DTPL_ENABLE_PARMETISLIB=FALSE >> -DXSDK_ENABLE_Fortran=ON -Denable_tests=0 -Denable_examples=0 >> -DMPI_C_COMPILE_FLAGS:STRING="" -DMPI_C_INCLUDE_PATH:STRING="" >> -DMPI_C_HEADER_DIR:STRING="" -DMPI_C_LIBRARIES:STRING="" >> >> but the compilation still fails with the same error. >> >> Am 20.12.21 um 22:46 schrieb Roland Richter: >>> >>> Yes, just checked, I only included the changes above the comment... >>> >>> Will test tomorrow, thanks for the help! >>> >>> Regards, >>> >>> Roland >>> >>> Am 20.12.2021 um 21:46 schrieb Barry Smith: >>>> ? 
Hmm, the fix should now be supplying a -DCMAKE_CUDA_FLAGS to this >>>> cmake command (line 48 at >>>> https://gitlab.com/petsc/petsc/-/merge_requests/4635/diffs) I do >>>> not see that flag being set inside the configure.log so I am >>>> guessing you didn't get the complete fix. >>>> >>>> >>>> Executing: /usr/bin/cmake .. -DCMAKE_INSTALL_PREFIX=/opt/petsc >>>> -DCMAKE_INSTALL_NAME_DIR:STRING="/opt/petsc/lib" >>>> -DCMAKE_INSTALL_LIBDIR:STRING="lib" -DCMAKE_VERBOSE_MAKEFILE=1 >>>> -DCMAKE_BUILD_TYPE=Release >>>> -DCMAKE_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" >>>> -DMPI_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" >>>> -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib >>>> -DCMAKE_C_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" >>>> -DCMAKE_C_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC >>>> -fopenmp" -DCMAKE_C_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 >>>> -fPIC -fopenmp" >>>> -DCMAKE_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" >>>> -DMPI_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" >>>> -DCMAKE_CXX_FLAGS:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC >>>> -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-mavx2 >>>> -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" >>>> -DCMAKE_CXX_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fopenmp >>>> -fPIC -std=gnu++17 -fopenmp" >>>> -DCMAKE_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" >>>> -DMPI_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" >>>> -DCMAKE_Fortran_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC >>>> -fopenmp -fallow-argument-mismatch" >>>> -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC >>>> -fopenmp -fallow-argument-mismatch" >>>> -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 >>>> -fPIC -fopenmp -fallow-argument-mismatch" >>>> -DCMAKE_EXE_LINKER_FLAGS:STRING=" -fopenmp -fopenmp" >>>> -DBUILD_SHARED_LIBS:BOOL=ON -DTPL_ENABLE_CUDALIB=TRUE >>>> -DTPL_CUDA_LIBRARIES="-Wl,-rpath,/usr/local/cuda-11.5/lib64 >>>> -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse >>>> -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda" >>>> -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 >>>> -DDEBUGlevel=0 -DPRNTlevel=0" -DUSE_XSDK_DEFAULTS=YES >>>> -DTPL_BLAS_LIBRARIES="-lopenblas >>>> -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 >>>> -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand >>>> -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl >>>> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release >>>> -L/opt/intel/oneapi/mpi/2021.5.0/lib/release >>>> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib >>>> -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread >>>> -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 >>>> -L/usr/lib64/gcc/x86_64-suse-linux/11 >>>> -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib >>>> -L/opt/intel/oneapi/vpl/2022.0.0/lib >>>> -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 >>>> -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 >>>> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib >>>> -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib >>>> -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 >>>> -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 >>>> -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 >>>> -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 >>>> -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 >>>> -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 >>>> 
-Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib >>>> -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib >>>> -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 >>>> -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 >>>> -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin >>>> -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin >>>> -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib >>>> -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib >>>> -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 >>>> -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 >>>> -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp >>>> -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp >>>> -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib >>>> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release >>>> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm >>>> -lgcc_s -lquadmath" -DTPL_LAPACK_LIBRARIES="-lopenblas >>>> -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 >>>> -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand >>>> -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl >>>> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release >>>> -L/opt/intel/oneapi/mpi/2021.5.0/lib/release >>>> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib >>>> -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread >>>> -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 >>>> -L/usr/lib64/gcc/x86_64-suse-linux/11 >>>> -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib >>>> -L/opt/intel/oneapi/vpl/2022.0.0/lib >>>> -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 >>>> -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 >>>> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib >>>> -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib >>>> -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 >>>> -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 >>>> -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 >>>> -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 >>>> -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 >>>> -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 >>>> -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib >>>> -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib >>>> -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 >>>> -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 >>>> -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin >>>> -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin >>>> -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib >>>> -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib >>>> -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 >>>> -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 >>>> -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp >>>> -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp >>>> -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib >>>> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release >>>> -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm >>>> -lgcc_s -lquadmath" -Denable_parmetislib=FALSE >>>> -DTPL_ENABLE_PARMETISLIB=FALSE -DXSDK_ENABLE_Fortran=ON >>>> -Denable_tests=0 -Denable_examples=0 >>>> -DMPI_C_COMPILE_FLAGS:STRING="" -DMPI_C_INCLUDE_PATH:STRING="" >>>> -DMPI_C_HEADER_DIR:STRING="" -DMPI_C_LIBRARIES:STRING="" >>>> >>>> >>>> >>>> >>>>> On Dec 20, 2021, at 2:59 PM, Roland Richter >>>>> wrote: >>>>> >>>>> I introduced the changes 
from that patch directly, without >>>>> checking out. Is that insufficient? >>>>> >>>>> Regards, >>>>> >>>>> Roland >>>>> >>>>> Am 20.12.2021 um 20:38 schrieb Barry Smith: >>>>>> >>>>>> ? Are you sure you have the correct PETSc branch? From >>>>>> configure.log it has >>>>>> >>>>>> ? ? ? ? ? ? Defined "VERSION_GIT" to ""v3.16.2-466-g959e1fce86"" >>>>>> ? ? ? ? ? ? Defined "VERSION_DATE_GIT" to ""2021-12-18 11:17:24 >>>>>> -0600"" >>>>>> ? ? ? ? ? ? Defined "VERSION_BRANCH_GIT" to ""master"" >>>>>> >>>>>> It should have balay/slu-without-omp-3 for the branch. >>>>>> >>>>>> >>>>>> >>>>>>> On Dec 20, 2021, at 10:50 AM, Roland Richter >>>>>>> wrote: >>>>>>> >>>>>>> In that case it fails with >>>>>>> >>>>>>> /~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/cublas_utils.h:22:10: >>>>>>> fatal error: cublas_v2.h: No such file or directory/ >>>>>>> >>>>>>> even though this header is available. I assume some header paths >>>>>>> are not set correctly? >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> regards, >>>>>>> >>>>>>> Roland >>>>>>> >>>>>>> Am 20.12.21 um 16:29 schrieb Barry Smith: >>>>>>>> >>>>>>>> ? Please try the branch?balay/slu-without-omp-3 ?It is in MR >>>>>>>> https://gitlab.com/petsc/petsc/-/merge_requests/4635 >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> On Dec 20, 2021, at 8:14 AM, Roland Richter >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>> Hei, >>>>>>>>> >>>>>>>>> I tried to combine CUDA with superlu_dist in petsc using the >>>>>>>>> following configure-line: >>>>>>>>> >>>>>>>>> /./configure PETSC_ARCH=mpich-complex-linux-gcc-demo >>>>>>>>> --CC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc >>>>>>>>> --CXX=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx >>>>>>>>> --FC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc --CFLAGS="-mavx2 >>>>>>>>> -march=native -O3" --CXXFLAGS="-mavx2 -march=native -O3" >>>>>>>>> --FFLAGS="-mavx2 -march=native -O3" >>>>>>>>> --CUDAFLAGS=-allow-unsupported-compiler --CUDA-CXX=g++ >>>>>>>>> --prefix=/opt/petsc --with-blaslapack=1 --with-mpi=1 >>>>>>>>> --with-scalar-type=complex --download-suitesparse=1 >>>>>>>>> --with-cuda --with-debugging=0 --with-openmp >>>>>>>>> --download-superlu_dist --force/ >>>>>>>>> >>>>>>>>> but the configure-step fails with several errors correlated >>>>>>>>> with CUDA and superlu_dist, the first one being >>>>>>>>> >>>>>>>>> /cublas_utils.c:21:37: error: ?CUDART_VERSION? undeclared >>>>>>>>> (first use in this function); did you mean ?CUDA_VERSION??// >>>>>>>>> //21 | printf("CUDA version:?? v %d\n",CUDART_VERSION);// >>>>>>>>> //|???????????????????????????????????? ^~~~~~~~~~~~~~// >>>>>>>>> //|???????????????????????????????????? CUDA_VERSION/ >>>>>>>>> >>>>>>>>> Compiling superlu_dist separately works, though (including CUDA). >>>>>>>>> >>>>>>>>> Is there a bug somewhere in the configure-routine? I attached >>>>>>>>> the full configure-log. >>>>>>>>> >>>>>>>>> Thanks! >>>>>>>>> >>>>>>>>> Regards, >>>>>>>>> >>>>>>>>> Roland >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xsli at lbl.gov Tue Dec 21 12:35:20 2021 From: xsli at lbl.gov (Xiaoye S. 
Li) Date: Tue, 21 Dec 2021 10:35:20 -0800 Subject: [petsc-users] Combining SUPERLU_DIST with CUDA fails In-Reply-To: <1e6a42b7-683a-9434-2d9a-4c042d3f8c30@ntnu.no> References: <2B3C240D-CEBB-4B45-A60E-ECD1F092B058@petsc.dev> <3f258b07-a648-df64-f298-769c399d9732@ntnu.no> <2B1B2836-5107-430A-BDB8-E6165EFE6C65@petsc.dev> <9E99DFBE-BB73-481B-BBC5-517222080A15@petsc.dev> <99ba9ea7-5f70-34a9-58fe-27180096c614@ntnu.no> <1e6a42b7-683a-9434-2d9a-4c042d3f8c30@ntnu.no> Message-ID: Seems -isystem is a gcc option. Can you try not to give "-I/usr/local/cuda-11.5/include" in CUDA_ARCH_FLAGS, see whether nvcc can find the proper include/ directory? Sherry On Tue, Dec 21, 2021 at 10:26 AM Roland Richter wrote: > I'm already using GCC, together with Intel MPI, thus that should not be > the problem. But I'll take a look if there are ways to fix that problem, or > to circumvent it without having to install superlu_dist separately. > > Thanks! > > Regards, > > Roland > Am 21.12.2021 um 19:04 schrieb Barry Smith: > > > I think the problem comes from the following: > > -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 > -DDEBUGlevel=0 -DPRNTlevel=0" > > which is ok, the location of the CUDA include files is passed in correctly > but when the compile is done inside SuperLU_DIST's make > > cd > ~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/petsc-build/SRC > && /usr/local/cuda-11.5/bin/nvcc -forward-unknown-to-host-compiler > -DSUPERLU_DIST_EXPORTS -Dsuperlu_dist_EXPORTS > -I~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/petsc-build/SRC > -isystem=/usr/local/cuda-11.5/include -DUSE_VENDOR_BLAS > -allow-unsupported-compiler -Xcompiler -fPIC -O3 -ccbin > /opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx -std=c++17 -gencode > arch=compute_61,code=sm_61 -Wno-deprecated-gpu-targets > -I/opt/intel/oneapi/mpi/2021.5.0/include -O3 -DNDEBUG > --generate-code=arch=compute_61,code=[compute_61,sm_61] -Xcompiler=-fPIC > -std=c++11 -MD -MT SRC/CMakeFiles/superlu_dist.dir/superlu_gpu_utils.cu.o > -MF CMakeFiles/superlu_dist.dir/superlu_gpu_utils.cu.o.d -x cu -dc > ~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/ > superlu_gpu_utils.cu -o CMakeFiles/superlu_dist.dir/superlu_gpu_utils.cu.o > > The > > -I/usr/local/cuda-11.5/include > > has been replaced with > > -isystem=/usr/local/cuda-11.5/include > > and I suspect the nvcc doesn't understand it so it ignores it. I assume > somewhere inside the CMAKE processing it is replacing the -I with the > -isystem= > > I don't know how one could fix this. I suggest you try installing > SuperLU_DIST yourself directly. But I suspect the same problem will appear. > > It is possible that if you do not use the Intel compilers but use for > example the GNU compilers the problem may not occur because cmake may not > make the transformation. > > Barry > > Googling cmake isystem produces many messages about issues related to > isystem and cmake but it would take a life-time to understand them. > > > On Dec 21, 2021, at 2:24 AM, Roland Richter > wrote: > > I added/replaced all six lines which are in that commit, getting > > Executing: /usr/bin/cmake .. 
-DCMAKE_INSTALL_PREFIX=/opt/petsc > -DCMAKE_INSTALL_NAME_DIR:STRING="/opt/petsc/lib" > -DCMAKE_INSTALL_LIBDIR:STRING="lib" -DCMAKE_VERBOSE_MAKEFILE=1 > -DCMAKE_BUILD_TYPE=Release > -DCMAKE_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" > -DMPI_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" > -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib > -DCMAKE_C_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" > -DCMAKE_C_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" > -DCMAKE_C_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" > -DCMAKE_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" > -DMPI_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" > -DCMAKE_CXX_FLAGS:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC > -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-mavx2 -march=native > -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" > -DCMAKE_CXX_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC > -std=gnu++17 -fopenmp" > -DCMAKE_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" > -DMPI_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" > -DCMAKE_Fortran_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp > -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-mavx2 > -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" > -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC > -fopenmp -fallow-argument-mismatch" -DCMAKE_EXE_LINKER_FLAGS:STRING=" > -fopenmp -fopenmp" -DBUILD_SHARED_LIBS:BOOL=ON -DTPL_ENABLE_CUDALIB=TRUE > -DTPL_CUDA_LIBRARIES="-Wl,-rpath,/usr/local/cuda-11.5/lib64 > -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse > -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda" > -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 > -DDEBUGlevel=0 -DPRNTlevel=0" > -DCMAKE_CUDA_FLAGS="-allow-unsupported-compiler -Xcompiler -fPIC -O3 -ccbin > /opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx -std=c++17 -gencode > arch=compute_61,code=sm_61 -Wno-deprecated-gpu-targets > -I"/opt/intel/oneapi/mpi/2021.5.0/include" " -DUSE_XSDK_DEFAULTS=YES > -DTPL_BLAS_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 > -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse > -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ > -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release > -L/opt/intel/oneapi/mpi/2021.5.0/lib/release > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib > -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread > -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 > -L/usr/lib64/gcc/x86_64-suse-linux/11 > -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib > -L/opt/intel/oneapi/vpl/2022.0.0/lib > -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 > -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib > -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib > -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 > -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 > -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 > -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib > -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib > -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 > -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 > 
-Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin > -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin > -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib > -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib > -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 > -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp > -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp > -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s > -lquadmath" -DTPL_LAPACK_LIBRARIES="-lopenblas > -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart > -lcufft -lcublas -lcusparse -lcusolver -lcurand > -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release > -L/opt/intel/oneapi/mpi/2021.5.0/lib/release > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib > -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread > -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 > -L/usr/lib64/gcc/x86_64-suse-linux/11 > -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib > -L/opt/intel/oneapi/vpl/2022.0.0/lib > -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 > -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib > -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib > -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 > -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 > -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 > -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib > -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib > -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 > -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin > -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin > -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib > -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib > -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 > -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp > -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp > -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s > -lquadmath" -Denable_parmetislib=FALSE -DTPL_ENABLE_PARMETISLIB=FALSE > -DXSDK_ENABLE_Fortran=ON -Denable_tests=0 -Denable_examples=0 > -DMPI_C_COMPILE_FLAGS:STRING="" -DMPI_C_INCLUDE_PATH:STRING="" > -DMPI_C_HEADER_DIR:STRING="" -DMPI_C_LIBRARIES:STRING="" > > but the compilation still fails with the same error. > Am 20.12.21 um 22:46 schrieb Roland Richter: > > Yes, just checked, I only included the changes above the comment... > > Will test tomorrow, thanks for the help! 
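[Independent of that, one way to be sure the whole MR is in the tree, rather than a hand-applied subset of its diff, is to fetch the branch named earlier (balay/slu-without-omp-3, from MR 4635) straight from the PETSc repository. A sketch, assuming PETSC_DIR points at the git clone and the working tree is clean:

    cd "$PETSC_DIR"
    git fetch https://gitlab.com/petsc/petsc.git balay/slu-without-omp-3
    git checkout FETCH_HEAD            # or: git checkout -b slu-test FETCH_HEAD
    # re-run ./configure afterwards so the updated superlu_dist build logic is picked up
]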
> > Regards, > > Roland > Am 20.12.2021 um 21:46 schrieb Barry Smith: > > Hmm, the fix should now be supplying a -DCMAKE_CUDA_FLAGS to this cmake > command (line 48 at > https://gitlab.com/petsc/petsc/-/merge_requests/4635/diffs) I do not see > that flag being set inside the configure.log so I am guessing you didn't > get the complete fix. > > > Executing: /usr/bin/cmake .. -DCMAKE_INSTALL_PREFIX=/opt/petsc > -DCMAKE_INSTALL_NAME_DIR:STRING="/opt/petsc/lib" > -DCMAKE_INSTALL_LIBDIR:STRING="lib" -DCMAKE_VERBOSE_MAKEFILE=1 > -DCMAKE_BUILD_TYPE=Release > -DCMAKE_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" > -DMPI_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" > -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib > -DCMAKE_C_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" > -DCMAKE_C_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" > -DCMAKE_C_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" > -DCMAKE_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" > -DMPI_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" > -DCMAKE_CXX_FLAGS:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC > -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-mavx2 -march=native > -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" > -DCMAKE_CXX_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC > -std=gnu++17 -fopenmp" > -DCMAKE_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" > -DMPI_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" > -DCMAKE_Fortran_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp > -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-mavx2 > -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" > -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC > -fopenmp -fallow-argument-mismatch" -DCMAKE_EXE_LINKER_FLAGS:STRING=" > -fopenmp -fopenmp" -DBUILD_SHARED_LIBS:BOOL=ON -DTPL_ENABLE_CUDALIB=TRUE > -DTPL_CUDA_LIBRARIES="-Wl,-rpath,/usr/local/cuda-11.5/lib64 > -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse > -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda" > -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 > -DDEBUGlevel=0 -DPRNTlevel=0" -DUSE_XSDK_DEFAULTS=YES > -DTPL_BLAS_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 > -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse > -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ > -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release > -L/opt/intel/oneapi/mpi/2021.5.0/lib/release > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib > -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread > -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 > -L/usr/lib64/gcc/x86_64-suse-linux/11 > -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib > -L/opt/intel/oneapi/vpl/2022.0.0/lib > -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 > -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib > -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib > -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 > -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 > -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 > -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib > -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib 
> -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 > -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin > -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin > -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib > -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib > -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 > -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp > -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp > -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s > -lquadmath" -DTPL_LAPACK_LIBRARIES="-lopenblas > -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart > -lcufft -lcublas -lcusparse -lcusolver -lcurand > -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release > -L/opt/intel/oneapi/mpi/2021.5.0/lib/release > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib > -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread > -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 > -L/usr/lib64/gcc/x86_64-suse-linux/11 > -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib > -L/opt/intel/oneapi/vpl/2022.0.0/lib > -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 > -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib > -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib > -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 > -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 > -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 > -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib > -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib > -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 > -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin > -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin > -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib > -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib > -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 > -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 > -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp > -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp > -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release > -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s > -lquadmath" -Denable_parmetislib=FALSE -DTPL_ENABLE_PARMETISLIB=FALSE > -DXSDK_ENABLE_Fortran=ON -Denable_tests=0 -Denable_examples=0 > -DMPI_C_COMPILE_FLAGS:STRING="" -DMPI_C_INCLUDE_PATH:STRING="" > -DMPI_C_HEADER_DIR:STRING="" -DMPI_C_LIBRARIES:STRING="" > > > > > On Dec 20, 2021, at 2:59 PM, Roland Richter > wrote: > > I introduced the changes from that patch directly, without checking out. > Is that insufficient? > > Regards, > > Roland > Am 20.12.2021 um 20:38 schrieb Barry Smith: > > > Are you sure you have the correct PETSc branch? 
From configure.log it > has > > Defined "VERSION_GIT" to ""v3.16.2-466-g959e1fce86"" > Defined "VERSION_DATE_GIT" to ""2021-12-18 11:17:24 -0600"" > Defined "VERSION_BRANCH_GIT" to ""master"" > > It should have balay/slu-without-omp-3 for the branch. > > > > On Dec 20, 2021, at 10:50 AM, Roland Richter > wrote: > > In that case it fails with > > *~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/cublas_utils.h:22:10: > fatal error: cublas_v2.h: No such file or directory* > > even though this header is available. I assume some header paths are not > set correctly? > > Thanks, > > regards, > > Roland > Am 20.12.21 um 16:29 schrieb Barry Smith: > > > Please try the branch balay/slu-without-omp-3 It is in MR > https://gitlab.com/petsc/petsc/-/merge_requests/4635 > > > > On Dec 20, 2021, at 8:14 AM, Roland Richter > wrote: > > Hei, > > I tried to combine CUDA with superlu_dist in petsc using the following > configure-line: > > *./configure PETSC_ARCH=mpich-complex-linux-gcc-demo > --CC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc > --CXX=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx > --FC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc --CFLAGS="-mavx2 > -march=native -O3" --CXXFLAGS="-mavx2 -march=native -O3" --FFLAGS="-mavx2 > -march=native -O3" --CUDAFLAGS=-allow-unsupported-compiler --CUDA-CXX=g++ > --prefix=/opt/petsc --with-blaslapack=1 --with-mpi=1 > --with-scalar-type=complex --download-suitesparse=1 --with-cuda > --with-debugging=0 --with-openmp --download-superlu_dist --force* > > but the configure-step fails with several errors correlated with CUDA and > superlu_dist, the first one being > > *cublas_utils.c:21:37: error: ?CUDART_VERSION? undeclared (first use in > this function); did you mean ?CUDA_VERSION??* > * 21 | printf("CUDA version: v %d\n",CUDART_VERSION);* > * | ^~~~~~~~~~~~~~* > * | CUDA_VERSION* > > Compiling superlu_dist separately works, though (including CUDA). > > Is there a bug somewhere in the configure-routine? I attached the full > configure-log. > > Thanks! > > Regards, > > Roland > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Dec 21 13:21:11 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 21 Dec 2021 14:21:11 -0500 Subject: [petsc-users] Combining SUPERLU_DIST with CUDA fails In-Reply-To: References: <2B3C240D-CEBB-4B45-A60E-ECD1F092B058@petsc.dev> <3f258b07-a648-df64-f298-769c399d9732@ntnu.no> <2B1B2836-5107-430A-BDB8-E6165EFE6C65@petsc.dev> <9E99DFBE-BB73-481B-BBC5-517222080A15@petsc.dev> <99ba9ea7-5f70-34a9-58fe-27180096c614@ntnu.no> <1e6a42b7-683a-9434-2d9a-4c042d3f8c30@ntnu.no> Message-ID: <7ABB7756-6978-4202-9003-6887876D4022@petsc.dev> Roland, The quickest way for you to debug is to go into $PETSC_DIR/$PETSC_ARCH/externalpackages/git.superlu_dist/petsc-build and then cut and paste the CMAKE command from the configure.log file (that we pass to SuperLU_DIST). You can then edit the command to try different options (as Sherry suggests first try removing the -I business). After running the cmake command you can then cut and paste the make command from configure.log This interactive approach is much faster for debugging cmake problems then trying to modify PETSc to get a proper cmake command for superlu_dist. Once you figure out the magic juice you can tell us and we can figure out how to add it to PETSc's process for building SuperLU_DIST. Barry > On Dec 21, 2021, at 1:35 PM, Xiaoye S. Li wrote: > > Seems -isystem is a gcc option. 
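[A rough sketch of that interactive loop, with the directory taken from the advice above and the flag change Sherry suggests; the grep is just one way to check whether the -I really comes out as -isystem= in what CMake generated:

    cd "$PETSC_DIR/$PETSC_ARCH/externalpackages/git.superlu_dist/petsc-build"
    # 1. copy the full "Executing: /usr/bin/cmake .." command for superlu_dist out of
    #    configure.log, edit it (e.g. drop -I/usr/local/cuda-11.5/include from
    #    -DCUDA_ARCH_FLAGS), and run it in this directory
    # 2. inspect the flags CMake generated for the CUDA compile lines
    grep -rn "isystem=" . 2>/dev/null
    # 3. copy the make command from configure.log and run it, e.g.
    make
]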
> Can you try not to give "-I/usr/local/cuda-11.5/include" in CUDA_ARCH_FLAGS, see whether nvcc can find the proper include/ directory? > > Sherry > > On Tue, Dec 21, 2021 at 10:26 AM Roland Richter > wrote: > I'm already using GCC, together with Intel MPI, thus that should not be the problem. But I'll take a look if there are ways to fix that problem, or to circumvent it without having to install superlu_dist separately. > > Thanks! > > Regards, > > Roland > > Am 21.12.2021 um 19:04 schrieb Barry Smith: >> >> I think the problem comes from the following: >> >>> -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 -DDEBUGlevel=0 -DPRNTlevel=0" >>> >> which is ok, the location of the CUDA include files is passed in correctly but when the compile is done inside SuperLU_DIST's make >> >> cd ~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/petsc-build/SRC && /usr/local/cuda-11.5/bin/nvcc -forward-unknown-to-host-compiler -DSUPERLU_DIST_EXPORTS -Dsuperlu_dist_EXPORTS -I~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/petsc-build/SRC -isystem=/usr/local/cuda-11.5/include -DUSE_VENDOR_BLAS -allow-unsupported-compiler -Xcompiler -fPIC -O3 -ccbin /opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx -std=c++17 -gencode arch=compute_61,code=sm_61 -Wno-deprecated-gpu-targets -I/opt/intel/oneapi/mpi/2021.5.0/include -O3 -DNDEBUG --generate-code=arch=compute_61,code=[compute_61,sm_61] -Xcompiler=-fPIC -std=c++11 -MD -MT SRC/CMakeFiles/superlu_dist.dir/superlu_gpu_utils.cu.o -MF CMakeFiles/superlu_dist.dir/superlu_gpu_utils.cu.o.d -x cu -dc ~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/superlu_gpu_utils.cu -o CMakeFiles/superlu_dist.dir/superlu_gpu_utils.cu.o >> >> The >> >> -I/usr/local/cuda-11.5/include >> >> has been replaced with >> >> -isystem=/usr/local/cuda-11.5/include >> >> and I suspect the nvcc doesn't understand it so it ignores it. I assume somewhere inside the CMAKE processing it is replacing the -I with the -isystem= >> >> I don't know how one could fix this. I suggest you try installing SuperLU_DIST yourself directly. But I suspect the same problem will appear. >> >> It is possible that if you do not use the Intel compilers but use for example the GNU compilers the problem may not occur because cmake may not make the transformation. >> >> Barry >> >> Googling cmake isystem produces many messages about issues related to isystem and cmake but it would take a life-time to understand them. >> >> >>> On Dec 21, 2021, at 2:24 AM, Roland Richter > wrote: >>> >>> I added/replaced all six lines which are in that commit, getting >>> >>> Executing: /usr/bin/cmake .. 
-DCMAKE_INSTALL_PREFIX=/opt/petsc -DCMAKE_INSTALL_NAME_DIR:STRING="/opt/petsc/lib" -DCMAKE_INSTALL_LIBDIR:STRING="lib" -DCMAKE_VERBOSE_MAKEFILE=1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" -DMPI_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib -DCMAKE_C_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_C_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_C_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" -DMPI_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" -DCMAKE_CXX_FLAGS:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" -DMPI_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" -DCMAKE_Fortran_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_EXE_LINKER_FLAGS:STRING=" -fopenmp -fopenmp" -DBUILD_SHARED_LIBS:BOOL=ON -DTPL_ENABLE_CUDALIB=TRUE -DTPL_CUDA_LIBRARIES="-Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda" -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 -DDEBUGlevel=0 -DPRNTlevel=0" -DCMAKE_CUDA_FLAGS="-allow-unsupported-compiler -Xcompiler -fPIC -O3 -ccbin /opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx -std=c++17 -gencode arch=compute_61,code=sm_61 -Wno-deprecated-gpu-targets -I"/opt/intel/oneapi/mpi/2021.5.0/include" " -DUSE_XSDK_DEFAULTS=YES -DTPL_BLAS_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -L/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 -L/usr/lib64/gcc/x86_64-suse-linux/11 -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib -L/opt/intel/oneapi/vpl/2022.0.0/lib -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin 
-L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s -lquadmath" -DTPL_LAPACK_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -L/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 -L/usr/lib64/gcc/x86_64-suse-linux/11 -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib -L/opt/intel/oneapi/vpl/2022.0.0/lib -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s -lquadmath" -Denable_parmetislib=FALSE -DTPL_ENABLE_PARMETISLIB=FALSE -DXSDK_ENABLE_Fortran=ON -Denable_tests=0 -Denable_examples=0 -DMPI_C_COMPILE_FLAGS:STRING="" -DMPI_C_INCLUDE_PATH:STRING="" -DMPI_C_HEADER_DIR:STRING="" -DMPI_C_LIBRARIES:STRING="" >>> >>> but the compilation still fails with the same error. >>> >>> Am 20.12.21 um 22:46 schrieb Roland Richter: >>>> Yes, just checked, I only included the changes above the comment... >>>> >>>> Will test tomorrow, thanks for the help! >>>> >>>> Regards, >>>> >>>> Roland >>>> >>>> Am 20.12.2021 um 21:46 schrieb Barry Smith: >>>>> Hmm, the fix should now be supplying a -DCMAKE_CUDA_FLAGS to this cmake command (line 48 at https://gitlab.com/petsc/petsc/-/merge_requests/4635/diffs ) I do not see that flag being set inside the configure.log so I am guessing you didn't get the complete fix. >>>>> >>>>> >>>>> Executing: /usr/bin/cmake .. 
-DCMAKE_INSTALL_PREFIX=/opt/petsc -DCMAKE_INSTALL_NAME_DIR:STRING="/opt/petsc/lib" -DCMAKE_INSTALL_LIBDIR:STRING="lib" -DCMAKE_VERBOSE_MAKEFILE=1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" -DMPI_C_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc" -DCMAKE_AR=/usr/bin/ar -DCMAKE_RANLIB=/usr/bin/ranlib -DCMAKE_C_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_C_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_C_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp" -DCMAKE_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" -DMPI_CXX_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx" -DCMAKE_CXX_FLAGS:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_CXX_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fopenmp -fPIC -std=gnu++17 -fopenmp" -DCMAKE_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" -DMPI_Fortran_COMPILER="/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc" -DCMAKE_Fortran_FLAGS:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_DEBUG:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_Fortran_FLAGS_RELEASE:STRING="-mavx2 -march=native -O3 -fPIC -fopenmp -fallow-argument-mismatch" -DCMAKE_EXE_LINKER_FLAGS:STRING=" -fopenmp -fopenmp" -DBUILD_SHARED_LIBS:BOOL=ON -DTPL_ENABLE_CUDALIB=TRUE -DTPL_CUDA_LIBRARIES="-Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda" -DCUDA_ARCH_FLAGS="-I/usr/local/cuda-11.5/include -arch=sm_61 -DDEBUGlevel=0 -DPRNTlevel=0" -DUSE_XSDK_DEFAULTS=YES -DTPL_BLAS_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -L/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 -L/usr/lib64/gcc/x86_64-suse-linux/11 -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib -L/opt/intel/oneapi/vpl/2022.0.0/lib -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 
-Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s -lquadmath" -DTPL_LAPACK_LIBRARIES="-lopenblas -Wl,-rpath,/usr/local/cuda-11.5/lib64 -L/usr/local/cuda-11.5/lib64 -lcudart -lcufft -lcublas -lcusparse -lcusolver -lcurand -L/usr/local/cuda-11.5/lib64/stubs -lcuda -lm -lstdc++ -ldl -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -L/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -L/opt/intel/oneapi/mpi/2021.5.0/lib -lmpifort -lmpi -lrt -lpthread -lgfortran -lm -Wl,-rpath,/usr/lib64/gcc/x86_64-suse-linux/11 -L/usr/lib64/gcc/x86_64-suse-linux/11 -Wl,-rpath,/opt/intel/oneapi/vpl/2022.0.0/lib -L/opt/intel/oneapi/vpl/2022.0.0/lib -Wl,-rpath,/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -L/opt/intel/oneapi/tbb/2021.5.0/lib/intel64/gcc4.8 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.5.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -L/opt/intel/oneapi/mkl/2022.0.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -L/opt/intel/oneapi/ipp/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -L/opt/intel/oneapi/ippcp/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -L/opt/intel/oneapi/dnnl/2022.0.1/cpu_dpcpp_gpu_dpcpp/lib -Wl,-rpath,/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -L/opt/intel/oneapi/dal/2021.5.1/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -L/opt/intel/oneapi/compiler/2022.0.1/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.0.1/linux/lib -L/opt/intel/oneapi/compiler/2022.0.1/linux/lib -Wl,-rpath,/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -L/opt/intel/oneapi/clck/2021.5.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -L/opt/intel/oneapi/ccl/2021.5.0/lib/cpu_gpu_dpcpp -Wl,-rpath,/usr/x86_64-suse-linux/lib -L/usr/x86_64-suse-linux/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.5.0/lib -lgfortran -lm -lgcc_s -lquadmath" -Denable_parmetislib=FALSE -DTPL_ENABLE_PARMETISLIB=FALSE -DXSDK_ENABLE_Fortran=ON -Denable_tests=0 -Denable_examples=0 -DMPI_C_COMPILE_FLAGS:STRING="" -DMPI_C_INCLUDE_PATH:STRING="" -DMPI_C_HEADER_DIR:STRING="" -DMPI_C_LIBRARIES:STRING="" >>>>> >>>>> >>>>> >>>>> >>>>>> On Dec 20, 2021, at 2:59 PM, Roland Richter > wrote: >>>>>> >>>>>> I introduced the changes from that patch directly, without checking out. Is that insufficient? >>>>>> >>>>>> Regards, >>>>>> >>>>>> Roland >>>>>> >>>>>> Am 20.12.2021 um 20:38 schrieb Barry Smith: >>>>>>> >>>>>>> Are you sure you have the correct PETSc branch? From configure.log it has >>>>>>> >>>>>>> Defined "VERSION_GIT" to ""v3.16.2-466-g959e1fce86"" >>>>>>> Defined "VERSION_DATE_GIT" to ""2021-12-18 11:17:24 -0600"" >>>>>>> Defined "VERSION_BRANCH_GIT" to ""master"" >>>>>>> >>>>>>> It should have balay/slu-without-omp-3 for the branch. 
>>>>>>> >>>>>>> >>>>>>> >>>>>>>> On Dec 20, 2021, at 10:50 AM, Roland Richter > wrote: >>>>>>>> >>>>>>>> In that case it fails with >>>>>>>> >>>>>>>> ~/Downloads/git-files/petsc/mpich-complex-linux-gcc-demo/externalpackages/git.superlu_dist/SRC/cublas_utils.h:22:10: fatal error: cublas_v2.h: No such file or directory >>>>>>>> >>>>>>>> even though this header is available. I assume some header paths are not set correctly? >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> regards, >>>>>>>> >>>>>>>> Roland >>>>>>>> >>>>>>>> Am 20.12.21 um 16:29 schrieb Barry Smith: >>>>>>>>> >>>>>>>>> Please try the branch balay/slu-without-omp-3 It is in MR https://gitlab.com/petsc/petsc/-/merge_requests/4635 >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> On Dec 20, 2021, at 8:14 AM, Roland Richter > wrote: >>>>>>>>>> >>>>>>>>>> Hei, >>>>>>>>>> >>>>>>>>>> I tried to combine CUDA with superlu_dist in petsc using the following configure-line: >>>>>>>>>> >>>>>>>>>> ./configure PETSC_ARCH=mpich-complex-linux-gcc-demo --CC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicc --CXX=/opt/intel/oneapi/mpi/2021.5.0/bin/mpicxx --FC=/opt/intel/oneapi/mpi/2021.5.0/bin/mpifc --CFLAGS="-mavx2 -march=native -O3" --CXXFLAGS="-mavx2 -march=native -O3" --FFLAGS="-mavx2 -march=native -O3" --CUDAFLAGS=-allow-unsupported-compiler --CUDA-CXX=g++ --prefix=/opt/petsc --with-blaslapack=1 --with-mpi=1 --with-scalar-type=complex --download-suitesparse=1 --with-cuda --with-debugging=0 --with-openmp --download-superlu_dist --force >>>>>>>>>> >>>>>>>>>> but the configure-step fails with several errors correlated with CUDA and superlu_dist, the first one being >>>>>>>>>> >>>>>>>>>> cublas_utils.c:21:37: error: ?CUDART_VERSION? undeclared (first use in this function); did you mean ?CUDA_VERSION?? >>>>>>>>>> 21 | printf("CUDA version: v %d\n",CUDART_VERSION); >>>>>>>>>> | ^~~~~~~~~~~~~~ >>>>>>>>>> | CUDA_VERSION >>>>>>>>>> >>>>>>>>>> Compiling superlu_dist separately works, though (including CUDA). >>>>>>>>>> >>>>>>>>>> Is there a bug somewhere in the configure-routine? I attached the full configure-log. >>>>>>>>>> >>>>>>>>>> Thanks! >>>>>>>>>> >>>>>>>>>> Regards, >>>>>>>>>> >>>>>>>>>> Roland >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Dec 24 16:56:40 2021 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 24 Dec 2021 17:56:40 -0500 Subject: [petsc-users] Nullspaces In-Reply-To: References: Message-ID: <7AA299EF-4757-4C0C-8CC4-E4857DBD4EE1@petsc.dev> I tried your code but it appears the ./gyroid_solution.txt contains a vector of all zeros. Is this intended? Actually VecDuplicateVecs() does not copy the values in the vector so your nsp[0] will contain the zero vector anyways. Would you be able to send the data that indicates what rows of the vector are associated with each subdomain? For example a vector with all 1s on the first domain and all 2s on the second domain? I think with this one should be able to construct the 2 dimensional null space. Barry > On Dec 16, 2021, at 11:09 AM, Marco Cisternino wrote: > > Hello Matthew, > as promised I prepared a minimal (112960 rows. I?m not able to produce anything smaller than this and triggering the issue) example of the behavior I was talking about some days ago. > What I did is to produce matrix, right hand side and initial solution of the linear system. 
> > As I told you before, this linear system is the discretization of the pressure equation of a predictor-corrector method for NS equations in the framework of finite volume method. > This case has homogeneous Neumann boundary conditions. Computational domain has two independent and separated sub-domains. > I discretize the weak formulation and I divide every row of the linear system by the volume of the relative cell. > The underlying mesh is not uniform, therefore cells have different volumes. > The issue I?m going to explain does not show up if the mesh is uniform, same volume for all the cells. > > I usually build the null space sub-domain by sub-domain with > MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, nConstants, constants, &nullspace); > Where nConstants = 2 and constants contains two normalized arrays with constant values on degrees of freedom relative to the associated sub-domain and zeros elsewhere. > > However, as a test I tried the constant over the whole domain using 2 alternatives that should produce the same null space: > MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace); > Vec* nsp; > VecDuplicateVecs(solution, 1, &nsp); > VecSet(nsp[0],1.0); > VecNormalize(nsp[0], nullptr); > MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, nsp, &nullspace); > > Once I created the null space I test it using: > MatNullSpaceTest(nullspace, m_A, &isNullSpaceValid); > > The case 1 pass the test while case 2 don?t. > > I have a small code for matrix loading, null spaces creation and testing. > Unfortunately I cannot implement a small code able to produce that linear system. > > As attachment you can find an archive containing the matrix, the initial solution (used to manually build the null space) and the rhs (not used in the test code) in binary format. > You can also find the testing code in the same archive. > I used petsc 3.12(gcc+openMPI) and petsc 3.15.2(intelOneAPI) same results. > If the attachment is not delivered, I can share a link to it. > > Thanks for any help. > > Marco Cisternino > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > ______________________ > Optimad Engineering Srl > Via Bligny 5, Torino, Italia. > +3901119719782 > www.optimad.it > > From: Marco Cisternino > > Sent: marted? 7 dicembre 2021 19:36 > To: Matthew Knepley > > Cc: petsc-users > > Subject: Re: [petsc-users] Nullspaces > > I will, as soon as possible... > > Scarica Outlook per Android > From: Matthew Knepley > > Sent: Tuesday, December 7, 2021 7:25:43 PM > To: Marco Cisternino > > Cc: petsc-users > > Subject: Re: [petsc-users] Nullspaces > > On Tue, Dec 7, 2021 at 11:19 AM Marco Cisternino > wrote: > Good morning, > > I?m still struggling with the Poisson equation with Neumann BCs. > > I discretize the equation by finite volume method and I divide every line of the linear system by the volume of the cell. I could avoid this division, but I?m trying to understand. > > My mesh is not uniform, i.e. cells have different volumes (it is an octree mesh). > > Moreover, in my computational domain there are 2 separated sub-domains. > > I build the null space and then I use MatNullSpaceTest to check it. > > > > If I do this: > > MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace); > > It works > > > This produces the normalized constant vector. 
> > If I do this: > > Vec nsp; > > VecDuplicate(m_rhs, &nsp); > > VecSet(nsp,1.0); > > VecNormalize(nsp, nullptr); > > MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace); > > It does not work > > > This is also the normalized constant vector. > > So you are saying that these two vectors give different results with MatNullSpaceTest()? > Something must be wrong in the code. Can you send a minimal example of this? I will go > through and debug it. > > Thanks, > > Matt > > Probably, I have wrong expectations, but should not it be the same? > > > > Thanks > > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > ______________________ > > Optimad Engineering Srl > > Via Bligny 5, Torino, Italia. > +3901119719782 > www.optimad.it > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sat Dec 25 07:59:18 2021 From: mfadams at lbl.gov (Mark Adams) Date: Sat, 25 Dec 2021 08:59:18 -0500 Subject: [petsc-users] Nullspaces In-Reply-To: References: Message-ID: If "triggering the issue" requires a substantial mesh, that makes me think there is a logic bug somewhere. Maybe use valgrid. Also you say you divide by the cell volume. Maybe I am not understanding this but that is basically diagonal scaling and that will change the null space (ie, not a constant anymore) On Thu, Dec 16, 2021 at 11:11 AM Marco Cisternino < marco.cisternino at optimad.it> wrote: > Hello Matthew, > > as promised I prepared a minimal (112960 rows. I?m not able to produce > anything smaller than this and triggering the issue) example of the > behavior I was talking about some days ago. > > What I did is to produce matrix, right hand side and initial solution of > the linear system. > > > > As I told you before, this linear system is the discretization of the > pressure equation of a predictor-corrector method for NS equations in the > framework of finite volume method. > > This case has homogeneous Neumann boundary conditions. Computational > domain has two independent and separated sub-domains. > > I discretize the weak formulation and I divide every row of the linear > system by the volume of the relative cell. > > The underlying mesh is not uniform, therefore cells have different > volumes. > > The issue I?m going to explain does not show up if the mesh is uniform, > same volume for all the cells. > > > > I usually build the null space sub-domain by sub-domain with > > MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, nConstants, constants, > &nullspace); > > Where nConstants = 2 and constants contains two normalized arrays with > constant values on degrees of freedom relative to the associated sub-domain > and zeros elsewhere. > > > > However, as a test I tried the constant over the whole domain using 2 > alternatives that should produce the same null space: > > 1. MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, > &nullspace); > 2. Vec* nsp; > > VecDuplicateVecs(solution, 1, &nsp); > > VecSet(nsp[0],1.0); > > VecNormalize(nsp[0], nullptr); > > MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, nsp, &nullspace); > > > > Once I created the null space I test it using: > > MatNullSpaceTest(nullspace, m_A, &isNullSpaceValid); > > > > The case 1 pass the test while case 2 don?t. 
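A minimal sketch of the two-sub-domain null space construction described above (two normalized indicator vectors, one per separated sub-domain), along the lines of the per-row sub-domain membership data Barry asks for. The array subdomainOfRow is a hypothetical input used only for illustration and is not part of the attached test code; the PETSc calls themselves are standard API.

/* Sketch only: subdomainOfRow[] (values 0 or 1) is a hypothetical,
   caller-provided marker of which separated sub-domain each locally
   owned row belongs to. */
#include <petscmat.h>

PetscErrorCode BuildTwoDomainNullSpace(Mat A, const PetscInt subdomainOfRow[], MatNullSpace *nullspace)
{
  Vec            constants[2];
  PetscInt       rStart, rEnd, row, k;
  PetscBool      isNullSpaceValid;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreateVecs(A, &constants[0], NULL);CHKERRQ(ierr);
  ierr = VecDuplicate(constants[0], &constants[1]);CHKERRQ(ierr);
  ierr = VecSet(constants[0], 0.0);CHKERRQ(ierr);
  ierr = VecSet(constants[1], 0.0);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rStart, &rEnd);CHKERRQ(ierr);
  for (row = rStart; row < rEnd; ++row) { /* build the indicator vector of each sub-domain */
    ierr = VecSetValue(constants[subdomainOfRow[row - rStart]], row, 1.0, INSERT_VALUES);CHKERRQ(ierr);
  }
  for (k = 0; k < 2; ++k) {
    ierr = VecAssemblyBegin(constants[k]);CHKERRQ(ierr);
    ierr = VecAssemblyEnd(constants[k]);CHKERRQ(ierr);
    ierr = VecNormalize(constants[k], NULL);CHKERRQ(ierr);
  }
  ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)A), PETSC_FALSE, 2, constants, nullspace);CHKERRQ(ierr);
  ierr = MatNullSpaceTest(*nullspace, A, &isNullSpaceValid);CHKERRQ(ierr);
  ierr = PetscPrintf(PetscObjectComm((PetscObject)A), "Null space valid? %s\n", isNullSpaceValid ? "yes" : "no");CHKERRQ(ierr);
  ierr = VecDestroy(&constants[0]);CHKERRQ(ierr);
  ierr = VecDestroy(&constants[1]);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

MatNullSpaceCreate() keeps its own reference to the vectors, so destroying them right after the call is safe.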
> > > > I have a small code for matrix loading, null spaces creation and testing. > > Unfortunately I cannot implement a small code able to produce that linear > system. > > > > As attachment you can find an archive containing the matrix, the initial > solution (used to manually build the null space) and the rhs (not used in > the test code) in binary format. > > You can also find the testing code in the same archive. > > I used petsc 3.12(gcc+openMPI) and petsc 3.15.2(intelOneAPI) same results. > > If the attachment is not delivered, I can share a link to it. > > > > Thanks for any help. > > > > Marco Cisternino > > > > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > > ______________________ > > Optimad Engineering Srl > > Via Bligny 5, Torino, Italia. > +3901119719782 > www.optimad.it > > > > *From:* Marco Cisternino > *Sent:* marted? 7 dicembre 2021 19:36 > *To:* Matthew Knepley > *Cc:* petsc-users > *Subject:* Re: [petsc-users] Nullspaces > > > > I will, as soon as possible... > > > > Scarica Outlook per Android > ------------------------------ > > *From:* Matthew Knepley > *Sent:* Tuesday, December 7, 2021 7:25:43 PM > *To:* Marco Cisternino > *Cc:* petsc-users > *Subject:* Re: [petsc-users] Nullspaces > > > > On Tue, Dec 7, 2021 at 11:19 AM Marco Cisternino < > marco.cisternino at optimad.it> wrote: > > Good morning, > > I?m still struggling with the Poisson equation with Neumann BCs. > > I discretize the equation by finite volume method and I divide every line > of the linear system by the volume of the cell. I could avoid this > division, but I?m trying to understand. > > My mesh is not uniform, i.e. cells have different volumes (it is an octree > mesh). > > Moreover, in my computational domain there are 2 separated sub-domains. > > I build the null space and then I use MatNullSpaceTest to check it. > > > > If I do this: > > MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace); > > It works > > > > This produces the normalized constant vector. > > > > If I do this: > > Vec nsp; > > VecDuplicate(m_rhs, &nsp); > > VecSet(nsp,1.0); > > VecNormalize(nsp, nullptr); > > MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace); > > It does not work > > > > This is also the normalized constant vector. > > > > So you are saying that these two vectors give different results with > MatNullSpaceTest()? > > Something must be wrong in the code. Can you send a minimal example of > this? I will go > > through and debug it. > > > > Thanks, > > > > Matt > > > > Probably, I have wrong expectations, but should not it be the same? > > > > Thanks > > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > > ______________________ > > Optimad Engineering Srl > > Via Bligny 5, Torino, Italia. > +3901119719782 > www.optimad.it > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lulu.liu at kaust.edu.sa Sat Dec 25 23:48:28 2021 From: lulu.liu at kaust.edu.sa (Lulu Liu) Date: Sun, 26 Dec 2021 13:48:28 +0800 Subject: [petsc-users] The exact Jacobian for ASPIN In-Reply-To: <36253D17-53C5-4CCB-B969-3AB698B4822A@petsc.dev> References: <36253D17-53C5-4CCB-B969-3AB698B4822A@petsc.dev> Message-ID: Dear Barry, Yes, J_{I} is the typo, and it should be J_{j}. 
The exact Jacobian of the preconditioned Jacobian is J_exact=\sum_{j=1}^{N}J_{j}(x-T_{i})J(x-T_{j}) (1) It seems that PETSc uses the approximate Jacobian, J_exact=\sum_{j=1}^{N}J_{j}(x-T_{i})J(x-\sum_{j=1}^{N}T_{j}) (2) In the RASPEN algorithm, it requires the exact Jacobian of ASPIN in the equation (1). Currently, the J_{j}(x-T_{i}) is only computed on each processor in PETSc, how can I get the global term J(x-T_{j})(j=1,2,..N)? Thanks. On Tue, Nov 9, 2021 at 11:18 AM Barry Smith wrote: > > Lulu, > > Sorry for not responding quicker. This question will take a little > research to figure out exactly what is needed, I am not sure it is > possible. > > I note below you have a J_{I} is that intentional? If so what does it > mean, or is a typo that should be J_{j} as in the next equation? > > Barry > > > On Nov 6, 2021, at 7:19 AM, Lulu Liu wrote: > > Hi, > For ASPIN, the local problem F_{j}(x-T_{j})=0 (j=1,2,...N) is solved. The > exact Jacobian of the preconditioned Jacobian is > J_exact=\sum_{j=1}^{N}J_{I}(x-T_{i})J(x-T_{j}). > > It seems that PETSc uses the approximate Jacobian, > J_exact=\sum_{j=1}^{N}J_{j}(x-T_{i})J(x-\sum_{j=1}^{N}T_{j}). > > I want to implement RASPEN, which requires the exact Jacobian of ASPIN. Is there > any easy way to compute J(x-T_{j}) (j=1,2,..N)? How can I get the global > vectors like x-T_{j} ? PETSc only provides the vector x-T_{j} on the > subdomain now. > > Thanks very much! > > -- > Best wishes, > Lulu Liu > > > ------------------------------ > This message and its contents, including attachments are intended solely > for the original recipient. If you are not the intended recipient or have > received this message in error, please notify me immediately and delete > this message from your computer system. Any unauthorized use or > distribution is prohibited. Please consider the environment before printing > this email. > > > -- Best wishes, Lulu Liu Applied Mathematics and Computational Science King Abdullah University of Science and Technology Tel??966?0544701599 -- This message and its contents, including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From FERRANJ2 at my.erau.edu Wed Dec 29 16:12:53 2021 From: FERRANJ2 at my.erau.edu (Ferrand, Jesus A.) Date: Wed, 29 Dec 2021 22:12:53 +0000 Subject: [petsc-users] DM misuse causes massive memory leak? Message-ID: Dear PETSc Team: I have a question about DM and PetscSection. Say I import a mesh (for FEM purposes) and create a DMPlex for it. I then use PetscSections to set degrees of freedom per "point" (by point I mean vertices, lines, faces, and cells). I then use PetscSectionGetStorageSize() to get the size of the global stiffness matrix (K) needed for my FEM problem. One last detail, this K I populate inside a rather large loop using an element stiffness matrix function of my own. Instead of using DMCreateMatrix(), I manually created a Mat using MatCreate(), MatSetType(), MatSetSizes(), and MatSetUp(). I come to find that said loop is painfully slow when I use the manually created matrix, but 20x faster when I use the Mat coming out of DMCreateMatrix(). 
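As the reply from Jed further down explains, the slowdown with the hand-built Mat comes from missing preallocation, which DMCreateMatrix() performs automatically from the DMPlex/PetscSection layout. Below is a minimal sketch of preallocating a hand-built AIJ matrix instead; the per-row bound maxNonzerosPerRow is an assumed estimate for illustration, not a value taken from the attached code.

/* Sketch only: create and preallocate a hand-built AIJ matrix of global size
   sizeK (the value PetscSectionGetStorageSize() returns in the attached code).
   maxNonzerosPerRow is a hypothetical per-row upper bound; DMCreateMatrix()
   computes exact counts from the mesh instead. */
#include <petscmat.h>

PetscErrorCode CreatePreallocatedK(MPI_Comm comm, PetscInt sizeK, PetscInt maxNonzerosPerRow, Mat *K)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreate(comm, K);CHKERRQ(ierr);
  ierr = MatSetSizes(*K, PETSC_DECIDE, PETSC_DECIDE, sizeK, sizeK);CHKERRQ(ierr);
  ierr = MatSetType(*K, MATAIJ);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(*K, maxNonzerosPerRow, NULL);CHKERRQ(ierr);                          /* takes effect when K is SEQAIJ */
  ierr = MatMPIAIJSetPreallocation(*K, maxNonzerosPerRow, NULL, maxNonzerosPerRow, NULL);CHKERRQ(ierr); /* takes effect when K is MPIAIJ */
  ierr = MatSetOption(*K, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE);CHKERRQ(ierr);                   /* tolerate an underestimate while testing */
  ierr = MatSetUp(*K);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With per-row counts in place, the MatSetValues()/MatAssembly loop in the attached code should run at a speed comparable to the DMCreateMatrix() case.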
My question is then: Is the manual Mat a noob mistake and is it somehow creating a memory leak with K? Just in case it's something else I'm attaching the code. The loop that populates K is between lines 221 and 278. Anything related to DM, DMPlex, and PetscSection is between lines 117 and 180. Machine Type: HP Laptop C-compiler: Gnu C OS: Ubuntu 20.04 PETSc version: 3.16.0 MPI Implementation: MPICH Hope you all had a Merry Christmas and that you will have a happy and productive New Year. :D Sincerely: J.A. Ferrand Embry-Riddle Aeronautical University - Daytona Beach FL M.Sc. Aerospace Engineering | May 2022 B.Sc. Aerospace Engineering B.Sc. Computational Mathematics Sigma Gamma Tau Tau Beta Pi Honors Program Phone: (386)-843-1829 Email(s): ferranj2 at my.erau.edu jesus.ferrand at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: gmshBACKUP2.c Type: text/x-csrc Size: 34421 bytes Desc: gmshBACKUP2.c URL: From jed at jedbrown.org Wed Dec 29 16:55:09 2021 From: jed at jedbrown.org (Jed Brown) Date: Wed, 29 Dec 2021 15:55:09 -0700 Subject: [petsc-users] DM misuse causes massive memory leak? In-Reply-To: References: Message-ID: <87tueraunm.fsf@jedbrown.org> "Ferrand, Jesus A." writes: > Dear PETSc Team: > > I have a question about DM and PetscSection. Say I import a mesh (for FEM purposes) and create a DMPlex for it. I then use PetscSections to set degrees of freedom per "point" (by point I mean vertices, lines, faces, and cells). I then use PetscSectionGetStorageSize() to get the size of the global stiffness matrix (K) needed for my FEM problem. One last detail, this K I populate inside a rather large loop using an element stiffness matrix function of my own. Instead of using DMCreateMatrix(), I manually created a Mat using MatCreate(), MatSetType(), MatSetSizes(), and MatSetUp(). I come to find that said loop is painfully slow when I use the manually created matrix, but 20x faster when I use the Mat coming out of DMCreateMatrix(). The sparse matrix hasn't been preallocated, which forces the data structure to do a lot of copies (as bad as O(n^2) complexity). DMCreateMatrix() preallocates for you. https://petsc.org/release/docs/manual/performance/#memory-allocation-for-sparse-matrix-assembly https://petsc.org/release/docs/manual/mat/#sec-matsparse > My question is then: Is the manual Mat a noob mistake and is it somehow creating a memory leak with K? Just in case it's something else I'm attaching the code. The loop that populates K is between lines 221 and 278. Anything related to DM, DMPlex, and PetscSection is between lines 117 and 180. > > Machine Type: HP Laptop > C-compiler: Gnu C > OS: Ubuntu 20.04 > PETSc version: 3.16.0 > MPI Implementation: MPICH > > Hope you all had a Merry Christmas and that you will have a happy and productive New Year. :D > > > Sincerely: > > J.A. Ferrand > > Embry-Riddle Aeronautical University - Daytona Beach FL > > M.Sc. Aerospace Engineering | May 2022 > > B.Sc. Aerospace Engineering > > B.Sc. 
Computational Mathematics > > > > Sigma Gamma Tau > > Tau Beta Pi > > Honors Program > > > > Phone: (386)-843-1829 > > Email(s): ferranj2 at my.erau.edu > > jesus.ferrand at gmail.com > //REFERENCE: https://github.com/FreeFem/FreeFem-sources/blob/master/plugin/mpi/PETSc-code.hpp > #include > static char help[] = "Imports a Gmsh mesh with boundary conditions and solves the elasticity equation.\n" > "Option prefix = opt_.\n"; > > struct preKE{//Preallocation before computing KE > Mat matB, > matBTCB; > //matKE; > PetscInt x_insert[3], > y_insert[3], > z_insert[3], > m,//Looping variables. > sizeKE,//size of the element stiffness matrix. > N,//Number of nodes in element. > x_in,y_in,z_in; //LI to index B matrix. > PetscReal J[3][3],//Jacobian matrix. > invJ[3][3],//Inverse of the Jacobian matrix. > detJ,//Determinant of the Jacobian. > dX[3], > dY[3], > dZ[3], > minor00, > minor01, > minor02,//Determinants of minors in a 3x3 matrix. > dPsidX, dPsidY, dPsidZ,//Shape function derivatives w.r.t global coordinates. > weight,//Multiplier of quadrature weights. > *dPsidXi,//Derivatives of shape functions w.r.t. Xi. > *dPsidEta,//Derivatives of shape functions w.r.t. Eta. > *dPsidZeta;//Derivatives of shape functions w.r.t Zeta. > PetscErrorCode ierr; > };//end struct. > > //Function declarations. > extern PetscErrorCode tetra4(PetscScalar*, PetscScalar*, PetscScalar*,struct preKE*, Mat*, Mat*); > extern PetscErrorCode ConstitutiveMatrix(Mat*,const char*,PetscInt); > extern PetscErrorCode InitializeKEpreallocation(struct preKE*,const char*); > > PetscErrorCode PetscViewerVTKWriteFunction(PetscObject vec,PetscViewer viewer){ > PetscErrorCode ierr; > ierr = VecView((Vec)vec,viewer); CHKERRQ(ierr); > return ierr; > } > > > > > int main(int argc, char **args){ > //DEFINITIONS OF PETSC's DMPLEX LINGO: > //POINT: A topology element (cell, face, edge, or vertex). > //CHART: It an interval from 0 to the number of "points." (the range of admissible linear indices) > //STRATUM: A subset of the "chart" which corresponds to all "points" at a given "level." > //LEVEL: This is either a "depth" or a "height". > //HEIGHT: Dimensionality of an element measured from 0D to 3D. Heights: cell = 0, face = 1, edge = 2, vertex = 3. > //DEPTH: Dimensionality of an element measured from 3D to 0D. Depths: cell = 3, face = 2, edge = 1, vertex = 0; > //CLOSURE: *of an element is the collection of all other elements that define it.I.e., the closure of a surface is the collection of vertices and edges that make it up. > //STAR: > //STANDARD LABELS: These are default tags that DMPlex has for its topology. ("depth") > PetscErrorCode ierr;//Error tracking variable. > DM dm;//Distributed memory object (useful for managing grids.) > DMLabel physicalgroups;//Identifies user-specified tags in gmsh (to impose BC's). > DMPolytopeType celltype;//When looping through cells, determines its type (tetrahedron, pyramid, hexahedron, etc.) > PetscSection s; > KSP ksp;//Krylov Sub-Space (linear solver object) > Mat K,//Global stiffness matrix (Square, assume unsymmetric). > KE,//Element stiffness matrix (Square, assume unsymmetric). > matC;//Constitutive matrix. > Vec XYZ,//Coordinate vector, contains spatial locations of mesh's vertices (NOTE: This vector self-destroys!). > U,//Displacement vector. > F;//Load Vector. > PetscViewer XYZviewer,//Viewer object to output mesh to ASCII format. > XYZpUviewer; //Viewer object to output displacements to ASCII format. 
> PetscBool interpolate = PETSC_TRUE,//Instructs Gmsh importer whether to generate faces and edges (Needed when using P2 or higher elements). > useCone = PETSC_TRUE,//Instructs "DMPlexGetTransitiveClosure()" whether to extract the closure or the star. > dirichletBC = PETSC_FALSE,//For use when assembling the K matrix. > neumannBC = PETSC_FALSE,//For use when assembling the F vector. > saveASCII = PETSC_FALSE,//Whether to save results in ASCII format. > saveVTK = PETSC_FALSE;//Whether to save results as VTK format. > PetscInt nc,//number of cells. (PETSc lingo for "elements") > nv,//number of vertices. (PETSc lingo for "nodes") > nf,//number of faces. (PETSc lingo for "surfaces") > ne,//number of edges. (PETSc lingo for "lines") > pStart,//starting LI of global elements. > pEnd,//ending LI of all elements. > cStart,//starting LI for cells global arrangement. > cEnd,//ending LI for cells in global arrangement. > vStart,//starting LI for vertices in global arrangement. > vEnd,//ending LI for vertices in global arrangement. > fStart,//starting LI for faces in global arrangement. > fEnd,//ending LI for faces in global arrangement. > eStart,//starting LI for edges in global arrangement. > eEnd,//ending LI for edges in global arrangement. > sizeK,//Size of the element stiffness matrix. > ii,jj,kk,//Dedicated looping variables. > indexXYZ,//Variable to access the elements of XYZ vector. > indexK,//Variable to access the elements of the U and F vectors (can reference rows and colums of K matrix.) > *closure = PETSC_NULL,//Pointer to the closure elements of a cell. > size_closure,//Size of the closure of a cell. > dim,//Dimension of the mesh. > //*edof,//Linear indices of dof's inside the K matrix. > dof = 3,//Degrees of freedom per node. > cells=0, edges=0, vertices=0, faces=0,//Topology counters when looping through cells. > pinXcode=10, pinZcode=11,forceZcode=12;//Gmsh codes to extract relevant "Face Sets." > PetscReal //*x_el,//Pointer to a vector that will store the x-coordinates of an element's vertices. > //*y_el,//Pointer to a vector that will store the y-coordinates of an element's vertices. > //*z_el,//Pointer to a vector that will store the z-coordinates of an element's vertices. > *xyz_el,//Pointer to xyz array in the XYZ vector. > traction = -10, > *KEdata, > t1,t2; //time keepers. > const char *gmshfile = "TopOptmeshfine2.msh";//Name of gmsh file to import. > > ierr = PetscInitialize(&argc,&args,NULL,help); if(ierr) return ierr; //And the machine shall work.... > > //MESH IMPORT================================================================= > //IMPORTANT NOTE: Gmsh only creates "cells" and "vertices," it does not create the "faces" or "edges." > //Gmsh probably can generate them, must figure out how to. > t1 = MPI_Wtime(); > ierr = DMPlexCreateGmshFromFile(PETSC_COMM_WORLD,gmshfile,interpolate,&dm); CHKERRQ(ierr);//Read Gmsh file and generate the DMPlex. > ierr = DMGetDimension(dm, &dim); CHKERRQ(ierr);//1-D, 2-D, or 3-D > ierr = DMPlexGetChart(dm, &pStart, &pEnd); CHKERRQ(ierr);//Extracts linear indices of cells, vertices, faces, and edges. > ierr = DMGetCoordinatesLocal(dm,&XYZ); CHKERRQ(ierr);//Extracts coordinates from mesh.(Contiguous storage: [x0,y0,z0,x1,y1,z1,...]) > ierr = VecGetArray(XYZ,&xyz_el); CHKERRQ(ierr);//Get pointer to vector's data. 
> t2 = MPI_Wtime(); > PetscPrintf(PETSC_COMM_WORLD,"Mesh Import time: %10f\n",t2-t1); > DMView(dm,PETSC_VIEWER_STDOUT_WORLD); > > //IMPORTANT NOTE: PETSc assumes that vertex-cell meshes are 2D even if they were 3D, so its ordering changes. > //Cells remain at height 0, but vertices move to height 1 from height 3. To prevent this from becoming an issue > //the "interpolate" variable is set to PETSC_TRUE to tell the mesh importer to generate faces and edges. > //PETSc, therefore, technically does additional meshing. Gotta figure out how to get this from Gmsh directly. > ierr = DMPlexGetDepthStratum(dm,3, &cStart, &cEnd);//Get LI of cells. > ierr = DMPlexGetDepthStratum(dm,2, &fStart, &fEnd);//Get LI of faces > ierr = DMPlexGetDepthStratum(dm,1, &eStart, &eEnd);//Get LI of edges. > ierr = DMPlexGetDepthStratum(dm,0, &vStart, &vEnd);//Get LI of vertices. > ierr = DMGetStratumSize(dm,"depth", 3, &nc);//Get number of cells. > ierr = DMGetStratumSize(dm,"depth", 2, &nf);//Get number of faces. > ierr = DMGetStratumSize(dm,"depth", 1, &ne);//Get number of edges. > ierr = DMGetStratumSize(dm,"depth", 0, &nv);//Get number of vertices. > /* > PetscPrintf(PETSC_COMM_WORLD,"global start = %10d\t global end = %10d\n",pStart,pEnd); > PetscPrintf(PETSC_COMM_WORLD,"#cells = %10d\t i = %10d\t i < %10d\n",nc,cStart,cEnd); > PetscPrintf(PETSC_COMM_WORLD,"#faces = %10d\t i = %10d\t i < %10d\n",nf,fStart,fEnd); > PetscPrintf(PETSC_COMM_WORLD,"#edges = %10d\t i = %10d\t i < %10d\n",ne,eStart,eEnd); > PetscPrintf(PETSC_COMM_WORLD,"#vertices = %10d\t i = %10d\t i < %10d\n",nv,vStart,vEnd); > */ > //MESH IMPORT================================================================= > > //NOTE: This section extremely hardcoded right now. > //Current setup would only support P1 meshes. > //MEMORY ALLOCATION ========================================================== > ierr = PetscSectionCreate(PETSC_COMM_WORLD, &s); CHKERRQ(ierr); > //The chart is akin to a contiguous memory storage allocation. Each chart entry is associated > //with a "thing," could be a vertex, face, cell, or edge, or anything else. > ierr = PetscSectionSetChart(s, pStart, pEnd); CHKERRQ(ierr); > //For each "thing" in the chart, additional room can be made. This is helpful for associating > //nodes to multiple degrees of freedom. These commands help associate nodes with > for(ii = cStart; ii < cEnd; ii++){//Cell loop. > ierr = PetscSectionSetDof(s, ii, 0);CHKERRQ(ierr);}//NOTE: Currently no dof's associated with cells. > for(ii = fStart; ii < fEnd; ii++){//Face loop. > ierr = PetscSectionSetDof(s, ii, 0);CHKERRQ(ierr);}//NOTE: Currently no dof's associated with faces. > for(ii = vStart; ii < vEnd; ii++){//Vertex loop. > ierr = PetscSectionSetDof(s, ii, dof);CHKERRQ(ierr);}//Sets x, y, and z displacements as dofs. > for(ii = eStart; ii < eEnd; ii++){//Edge loop > ierr = PetscSectionSetDof(s, ii, 0);CHKERRQ(ierr);}//NOTE: Currently no dof's associated with edges. > ierr = PetscSectionSetUp(s); CHKERRQ(ierr); > ierr = PetscSectionGetStorageSize(s,&sizeK);CHKERRQ(ierr);//Determine the size of the global stiffness matrix. > ierr = DMSetLocalSection(dm,s); CHKERRQ(ierr);//Associate the PetscSection with the DM object. > //PetscErrorCode DMCreateGlobalVector(DM dm,Vec *vec) > //ierr = DMCreateGlobalVector(dm,&U); CHKERRQ(ierr); > PetscSectionDestroy(&s); > //PetscPrintf(PETSC_COMM_WORLD,"sizeK = %10d\n",sizeK); > > //OBJECT SETUP================================================================ > //Global stiffness matrix. 
> //PetscErrorCode DMCreateMatrix(DM dm,Mat *mat) > > //This makes the loop fast. > ierr = DMCreateMatrix(dm,&K); > > //This makes the loop uber slow. > //ierr = MatCreate(PETSC_COMM_WORLD,&K); CHKERRQ(ierr); > //ierr = MatSetType(K,MATAIJ); CHKERRQ(ierr);// Global stiffness matrix set to some sparse type. > //ierr = MatSetSizes(K,PETSC_DECIDE,PETSC_DECIDE,sizeK,sizeK); CHKERRQ(ierr); > //ierr = MatSetUp(K); CHKERRQ(ierr); > > //Displacement vector. > ierr = VecCreate(PETSC_COMM_WORLD,&U); CHKERRQ(ierr); > ierr = VecSetType(U,VECSTANDARD); CHKERRQ(ierr);// Global stiffness matrix set to some sparse type. > ierr = VecSetSizes(U,PETSC_DECIDE,sizeK); CHKERRQ(ierr); > > //Load vector. > ierr = VecCreate(PETSC_COMM_WORLD,&F); CHKERRQ(ierr); > ierr = VecSetType(F,VECSTANDARD); CHKERRQ(ierr);// Global stiffness matrix set to some sparse type. > ierr = VecSetSizes(F,PETSC_DECIDE,sizeK); CHKERRQ(ierr); > //OBJECT SETUP================================================================ > > //WARNING: This loop is currently hardcoded for P1 elements only! Must Figure > //out a clever way to modify to accomodate Pn (n>1) elements. > > //BEGIN GLOBAL STIFFNESS MATRIX BUILDER======================================= > t1 = MPI_Wtime(); > > //PREALLOCATIONS============================================================== > ierr = ConstitutiveMatrix(&matC,"isotropic",0); CHKERRQ(ierr); > struct preKE preKEtetra4; > ierr = InitializeKEpreallocation(&preKEtetra4,"tetra4"); CHKERRQ(ierr); > ierr = MatCreate(PETSC_COMM_WORLD,&KE); CHKERRQ(ierr); //SEQUENTIAL > ierr = MatSetSizes(KE,PETSC_DECIDE,PETSC_DECIDE,12,12); CHKERRQ(ierr); //SEQUENTIAL > ierr = MatSetType(KE,MATDENSE); CHKERRQ(ierr); //SEQUENTIAL > ierr = MatSetUp(KE); CHKERRQ(ierr); > PetscReal x_tetra4[4], y_tetra4[4],z_tetra4[4], > x_hex8[8], y_hex8[8],z_hex8[8], > *x,*y,*z; > PetscInt *EDOF,edof_tetra4[12],edof_hex8[24]; > DMPolytopeType previous = DM_POLYTOPE_UNKNOWN; > //PREALLOCATIONS============================================================== > > > > for(ii=cStart;ii ierr = DMPlexGetTransitiveClosure(dm, ii, useCone, &size_closure, &closure); CHKERRQ(ierr); > ierr = DMPlexGetCellType(dm, ii, &celltype); CHKERRQ(ierr); > //IMPORTANT NOTE: MOST OF THIS LOOP SHOULD BE INCLUDED IN THE KE3D function. > if(previous != celltype){ > //PetscPrintf(PETSC_COMM_WORLD,"run \n"); > if(celltype == DM_POLYTOPE_TETRAHEDRON){ > x = x_tetra4; > y = y_tetra4; > z = z_tetra4; > EDOF = edof_tetra4; > }//end if. > else if(celltype == DM_POLYTOPE_HEXAHEDRON){ > x = x_hex8; > y = y_hex8; > z = z_hex8; > EDOF = edof_hex8; > }//end else if. > } > previous = celltype; > > //PetscPrintf(PETSC_COMM_WORLD,"Cell # %4i\t",ii); > cells=0; > edges=0; > vertices=0; > faces=0; > kk = 0; > for(jj=0;jj<(2*size_closure);jj+=2){//Scan the closure of the current cell. > //Use information from the DM's strata to determine composition of cell_ii. > if(vStart <= closure[jj] && closure[jj]< vEnd){//Check for vertices. > //PetscPrintf(PETSC_COMM_WORLD,"%5i\t",closure[jj]); > indexXYZ = dim*(closure[jj]-vStart);//Linear index of x-coordinate in the xyz_el array. > > *(x+vertices) = xyz_el[indexXYZ]; > *(y+vertices) = xyz_el[indexXYZ+1];//Extract Y-coordinates of the current vertex. > *(z+vertices) = xyz_el[indexXYZ+2];//Extract Y-coordinates of the current vertex. > *(EDOF + kk) = indexXYZ; > *(EDOF + kk+1) = indexXYZ+1; > *(EDOF + kk+2) = indexXYZ+2; > kk+=3; > vertices++;//Update vertex counter. 
> }//end if > else if(eStart <= closure[jj] && closure[jj]< eEnd){//Check for edge ID's > edges++; > }//end else ifindexK > else if(fStart <= closure[jj] && closure[jj]< fEnd){//Check for face ID's > faces++; > }//end else if > else if(cStart <= closure[jj] && closure[jj]< cEnd){//Check for cell ID's > cells++; > }//end else if > }//end "jj" loop. > ierr = tetra4(x,y,z,&preKEtetra4,&matC,&KE); CHKERRQ(ierr); //Generate the element stiffness matrix for this cell. > ierr = MatDenseGetArray(KE,&KEdata); CHKERRQ(ierr); > ierr = MatSetValues(K,12,EDOF,12,EDOF,KEdata,ADD_VALUES); CHKERRQ(ierr);//WARNING: HARDCODED FOR TETRAHEDRAL P1 ELEMENTS ONLY !!!!!!!!!!!!!!!!!!!!!!! > ierr = MatDenseRestoreArray(KE,&KEdata); CHKERRQ(ierr); > ierr = DMPlexRestoreTransitiveClosure(dm, ii,useCone, &size_closure, &closure); CHKERRQ(ierr); > }//end "ii" loop. > ierr = MatAssemblyBegin(K,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); > ierr = MatAssemblyEnd(K,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); > //ierr = MatView(K,PETSC_VIEWER_DRAW_WORLD); CHKERRQ(ierr); > //END GLOBAL STIFFNESS MATRIX BUILDER=========================================== > t2 = MPI_Wtime(); > PetscPrintf(PETSC_COMM_WORLD,"K build time: %10f\n",t2-t1); > > > > > > > > > t1 = MPI_Wtime(); > //BEGIN BOUNDARY CONDITION ENFORCEMENT========================================== > IS TrianglesIS, physicalsurfaceID;//, VerticesIS; > PetscInt numsurfvals, > //numRows, > dof_offset,numTri; > const PetscInt *surfvals, > //*pinZID, > *TriangleID; > PetscScalar diag =1; > PetscReal area,force; > //NOTE: Petsc can read/assign labels. Eeach label may posses multiple "values." > //These values act as tags within a tag. > //IMPORTANT NOTE: The below line needs a safety. If a mesh that does not feature > //face sets is imported, the code in its current state will crash!!!. This is currently > //hardcoded for the test mesh. > ierr = DMGetLabel(dm, "Face Sets", &physicalgroups); CHKERRQ(ierr);//Inspects Physical surface groups defined by gmsh (if any). > ierr = DMLabelGetValueIS(physicalgroups, &physicalsurfaceID); CHKERRQ(ierr);//Gets the physical surface ID's defined in gmsh (as specified in the .geo file). > ierr = ISGetIndices(physicalsurfaceID,&surfvals); CHKERRQ(ierr);//Get a pointer to the actual surface values. > ierr = DMLabelGetNumValues(physicalgroups, &numsurfvals); CHKERRQ(ierr);//Gets the number of different values that the label assigns. > for(ii=0;ii //PetscPrintf(PETSC_COMM_WORLD,"Values = %5i\n",surfvals[ii]); > //PROBLEM: The surface values are hardcoded in the gmsh file. We need to adopt standard "codes" > //that we can give to users when they make their meshes so that this code recognizes the Type > // of boundary conditions that are to be imposed. > if(surfvals[ii] == pinXcode){ > dof_offset = 0; > dirichletBC = PETSC_TRUE; > }//end if. > else if(surfvals[ii] == pinZcode){ > dof_offset = 2; > dirichletBC = PETSC_TRUE; > }//end else if. > else if(surfvals[ii] == forceZcode){ > dof_offset = 2; > neumannBC = PETSC_TRUE; > }//end else if. > > ierr = DMLabelGetStratumIS(physicalgroups, surfvals[ii], &TrianglesIS); CHKERRQ(ierr);//Get the ID's (as an IS) of the surfaces belonging to value 11. > //PROBLEM: DMPlexGetConeRecursiveVertices returns an array with repeated node ID's. For each repetition, the lines that enforce BC's unnecessarily re-run. > ierr = ISGetSize(TrianglesIS,&numTri); CHKERRQ(ierr); > ierr = ISGetIndices(TrianglesIS,&TriangleID); CHKERRQ(ierr);//Get a pointer to the actual surface values. 
> for(kk=0;kk ierr = DMPlexGetTransitiveClosure(dm, TriangleID[kk], useCone, &size_closure, &closure); CHKERRQ(ierr); > if(neumannBC){ > ierr = DMPlexComputeCellGeometryFVM(dm, TriangleID[kk], &area,PETSC_NULL,PETSC_NULL); CHKERRQ(ierr); > force = traction*area/3;//WARNING: The 3 here is hardcoded for a purely tetrahedral mesh only!!!!!!!!!! > } > for(jj=0;jj<(2*size_closure);jj+=2){ > //PetscErrorCode DMPlexComputeCellGeometryFVM(DM dm, PetscInt cell, PetscReal *vol, PetscReal centroid[], PetscReal normal[]) > if(vStart <= closure[jj] && closure[jj]< vEnd){//Check for vertices. > indexK = dof*(closure[jj] - vStart) + dof_offset; //Compute the dof ID's in the K matrix. > if(dirichletBC){//Boundary conditions requiring an edit of K matrix. > ierr = MatZeroRows(K,1,&indexK,diag,NULL,NULL); CHKERRQ(ierr); > }//end if. > else if(neumannBC){//Boundary conditions requiring an edit of RHS vector. > ierr = VecSetValue(F,indexK,force,ADD_VALUES); CHKERRQ(ierr); > }// end else if. > }//end if. > }//end "jj" loop. > ierr = DMPlexRestoreTransitiveClosure(dm, closure[jj],useCone, &size_closure, &closure); CHKERRQ(ierr); > }//end "kk" loop. > ierr = ISRestoreIndices(TrianglesIS,&TriangleID); CHKERRQ(ierr); > > /* > ierr = DMPlexGetConeRecursiveVertices(dm, TrianglesIS, &VerticesIS); CHKERRQ(ierr);//Get the ID's (as an IS) of the vertices that make up the surfaces of value 11. > ierr = ISGetSize(VerticesIS,&numRows); CHKERRQ(ierr);//Get number of flagged vertices (this includes repeated indices for faces that share nodes). > ierr = ISGetIndices(VerticesIS,&pinZID); CHKERRQ(ierr);//Get a pointer to the actual surface values. > if(dirichletBC){//Boundary conditions requiring an edit of K matrix. > for(kk=0;kk indexK = 3*(pinZID[kk] - vStart) + dof_offset; //Compute the dof ID's in the K matrix. (NOTE: the 3* ishardcoded for 3 degrees of freedom, tie this to a variable in the FUTURE.) > ierr = MatZeroRows(K,1,&indexK,diag,NULL,NULL); CHKERRQ(ierr); > }//end "kk" loop. > }//end if. > else if(neumannBC){//Boundary conditions requiring an edit of RHS vector. > for(kk=0;kk indexK = 3*(pinZID[kk] - vStart) + dof_offset; > ierr = VecSetValue(F,indexK,traction,INSERT_VALUES); CHKERRQ(ierr); > }//end "kk" loop. > }// end else if. > ierr = ISRestoreIndices(VerticesIS,&pinZID); CHKERRQ(ierr); > */ > dirichletBC = PETSC_FALSE; > neumannBC = PETSC_FALSE; > }//end "ii" loop. > ierr = ISRestoreIndices(physicalsurfaceID,&surfvals); CHKERRQ(ierr); > //ierr = ISRestoreIndices(VerticesIS,&pinZID); CHKERRQ(ierr); > ierr = ISDestroy(&physicalsurfaceID); CHKERRQ(ierr); > //ierr = ISDestroy(&VerticesIS); CHKERRQ(ierr); > ierr = ISDestroy(&TrianglesIS); CHKERRQ(ierr); > //END BOUNDARY CONDITION ENFORCEMENT============================================ > t2 = MPI_Wtime(); > PetscPrintf(PETSC_COMM_WORLD,"BC imposition time: %10f\n",t2-t1); > > /* > PetscInt kk = 0; > for(ii=vStart;ii kk++; > PetscPrintf(PETSC_COMM_WORLD,"Vertex #%4i\t x = %10.9f\ty = %10.9f\tz = %10.9f\n",ii,xyz_el[3*kk],xyz_el[3*kk+1],xyz_el[3*kk+2]); > }// end "ii" loop. 
> */
>
> t1 = MPI_Wtime();
> //SOLVER========================================================================
> ierr = KSPCreate(PETSC_COMM_WORLD,&ksp); CHKERRQ(ierr);
> ierr = KSPSetOperators(ksp,K,K); CHKERRQ(ierr);
> ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);
> ierr = KSPSolve(ksp,F,U); CHKERRQ(ierr);
> t2 = MPI_Wtime();
> //ierr = KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
> //SOLVER========================================================================
> PetscPrintf(PETSC_COMM_WORLD,"Solver time: %10f\n",t2-t1);
> ierr = VecRestoreArray(XYZ,&xyz_el); CHKERRQ(ierr);//Restore the pointer to the coordinate data.
>
> //BEGIN MAX/MIN DISPLACEMENTS===================================================
> IS ISux,ISuy,ISuz;
> Vec UX,UY,UZ;
> PetscReal UXmax,UYmax,UZmax,UXmin,UYmin,UZmin;
> ierr = ISCreateStride(PETSC_COMM_WORLD,nv,0,3,&ISux); CHKERRQ(ierr);
> ierr = ISCreateStride(PETSC_COMM_WORLD,nv,1,3,&ISuy); CHKERRQ(ierr);
> ierr = ISCreateStride(PETSC_COMM_WORLD,nv,2,3,&ISuz); CHKERRQ(ierr);
>
> //PetscErrorCode VecGetSubVector(Vec X,IS is,Vec *Y)
> ierr = VecGetSubVector(U,ISux,&UX); CHKERRQ(ierr);
> ierr = VecGetSubVector(U,ISuy,&UY); CHKERRQ(ierr);
> ierr = VecGetSubVector(U,ISuz,&UZ); CHKERRQ(ierr);
>
> //PetscErrorCode VecMax(Vec x,PetscInt *p,PetscReal *val)
> ierr = VecMax(UX,PETSC_NULL,&UXmax); CHKERRQ(ierr);
> ierr = VecMax(UY,PETSC_NULL,&UYmax); CHKERRQ(ierr);
> ierr = VecMax(UZ,PETSC_NULL,&UZmax); CHKERRQ(ierr);
>
> ierr = VecMin(UX,PETSC_NULL,&UXmin); CHKERRQ(ierr);
> ierr = VecMin(UY,PETSC_NULL,&UYmin); CHKERRQ(ierr);
> ierr = VecMin(UZ,PETSC_NULL,&UZmin); CHKERRQ(ierr);
>
> PetscPrintf(PETSC_COMM_WORLD,"%10f\t <= ux <= %10f\n",UXmin,UXmax);
> PetscPrintf(PETSC_COMM_WORLD,"%10f\t <= uy <= %10f\n",UYmin,UYmax);
> PetscPrintf(PETSC_COMM_WORLD,"%10f\t <= uz <= %10f\n",UZmin,UZmax);
>
> //BEGIN OUTPUT SOLUTION=========================================================
> if(saveASCII){
> ierr = PetscViewerASCIIOpen(PETSC_COMM_WORLD,"XYZ.txt",&XYZviewer); CHKERRQ(ierr);
> ierr = VecView(XYZ,XYZviewer); CHKERRQ(ierr);
> ierr = PetscViewerASCIIOpen(PETSC_COMM_WORLD,"U.txt",&XYZpUviewer); CHKERRQ(ierr);
> ierr = VecView(U,XYZpUviewer); CHKERRQ(ierr);
> PetscViewerDestroy(&XYZviewer); PetscViewerDestroy(&XYZpUviewer);
> }//end if.
> if(saveVTK){
> const char *meshfile = "starting_mesh.vtk",
> *deformedfile = "deformed_mesh.vtk";
> ierr = PetscViewerVTKOpen(PETSC_COMM_WORLD,meshfile,FILE_MODE_WRITE,&XYZviewer); CHKERRQ(ierr);
> //PetscErrorCode DMSetAuxiliaryVec(DM dm, DMLabel label, PetscInt value, Vec aux)
> DMLabel UXlabel,UYlabel, UZlabel;
> //PetscErrorCode DMLabelCreate(MPI_Comm comm, const char name[], DMLabel *label)
> ierr = DMLabelCreate(PETSC_COMM_WORLD, "X-Displacement", &UXlabel); CHKERRQ(ierr);
> ierr = DMLabelCreate(PETSC_COMM_WORLD, "Y-Displacement", &UYlabel); CHKERRQ(ierr);
> ierr = DMLabelCreate(PETSC_COMM_WORLD, "Z-Displacement", &UZlabel); CHKERRQ(ierr);
> ierr = DMSetAuxiliaryVec(dm,UXlabel, 1, UX); CHKERRQ(ierr);
> ierr = DMSetAuxiliaryVec(dm,UYlabel, 1, UY); CHKERRQ(ierr);
> ierr = DMSetAuxiliaryVec(dm,UZlabel, 1, UZ); CHKERRQ(ierr);
> //PetscErrorCode PetscViewerVTKAddField(PetscViewer viewer,PetscObject dm,PetscErrorCode (*PetscViewerVTKWriteFunction)(PetscObject,PetscViewer),PetscInt fieldnum,PetscViewerVTKFieldType fieldtype,PetscBool checkdm,PetscObject vec)
>
> //ierr = PetscViewerVTKAddField(XYZviewer, dm,PetscErrorCode (*PetscViewerVTKWriteFunction)(Vec,PetscViewer),PETSC_DEFAULT,PETSC_VTK_POINT_FIELD,PETSC_FALSE,UX);
> ierr = PetscViewerVTKAddField(XYZviewer, (PetscObject)dm,&PetscViewerVTKWriteFunction,PETSC_DEFAULT,PETSC_VTK_POINT_FIELD,PETSC_FALSE,(PetscObject)UX);
>
> ierr = DMPlexVTKWriteAll((PetscObject)dm, XYZviewer); CHKERRQ(ierr);
> ierr = VecAXPY(XYZ,1,U); CHKERRQ(ierr);//Add displacement field to the mesh coordinates to deform.
> ierr = PetscViewerVTKOpen(PETSC_COMM_WORLD,deformedfile,FILE_MODE_WRITE,&XYZpUviewer); CHKERRQ(ierr);
> ierr = DMPlexVTKWriteAll((PetscObject)dm, XYZpUviewer); CHKERRQ(ierr);
> PetscViewerDestroy(&XYZviewer); PetscViewerDestroy(&XYZpUviewer);
> }//end if.
> else{
> ierr = PetscPrintf(PETSC_COMM_WORLD,"No output format specified! Files not saved.\n"); CHKERRQ(ierr);
> }//end else.
>
> //END OUTPUT SOLUTION===========================================================
> VecDestroy(&UX); ISDestroy(&ISux);
> VecDestroy(&UY); ISDestroy(&ISuy);
> VecDestroy(&UZ); ISDestroy(&ISuz);
> //END MAX/MIN DISPLACEMENTS=====================================================
>
> //CLEANUP=====================================================================
> DMDestroy(&dm);
> KSPDestroy(&ksp);
> MatDestroy(&K); MatDestroy(&KE); MatDestroy(&matC); //MatDestroy(preKEtetra4.matB); MatDestroy(preKEtetra4.matBTCB);
> VecDestroy(&U); VecDestroy(&F);
>
> //DMLabelDestroy(&physicalgroups);//Destroying the DM destroys the label.
> //CLEANUP=====================================================================
> //PetscErrorCode PetscMallocDump(FILE *fp)
> //ierr = PetscMallocDump(NULL);
> return PetscFinalize();//And the machine shall rest....
> }//end main.
>
> PetscErrorCode tetra4(PetscScalar* X,PetscScalar* Y, PetscScalar* Z,struct preKE *P, Mat* matC, Mat* KE){
> //INPUTS:
> //X: Global X coordinates of the elemental nodes.
> //Y: Global Y coordinates of the elemental nodes.
> //Z: Global Z coordinates of the elemental nodes.
> //J: Jacobian matrix.
> //invJ: Inverse Jacobian matrix.
> PetscErrorCode ierr;
> //For current quadrature point, get dPsi/dXi_i Xi_i = {Xi,Eta,Zeta}
> /*
> P->dPsidXi[0] = +1.; P->dPsidEta[0] = 0.0; P->dPsidZeta[0] = 0.0;
> P->dPsidXi[1] = 0.0; P->dPsidEta[1] = +1.; P->dPsidZeta[1] = 0.0;
> P->dPsidXi[2] = 0.0; P->dPsidEta[2] = 0.0; P->dPsidZeta[2] = +1.;
> P->dPsidXi[3] = -1.; P->dPsidEta[3] = -1.; P->dPsidZeta[3] = -1.;
> */
> //Populate the Jacobian matrix.
> P->J[0][0] = X[0] - X[3];
> P->J[0][1] = Y[0] - Y[3];
> P->J[0][2] = Z[0] - Z[3];
> P->J[1][0] = X[1] - X[3];
> P->J[1][1] = Y[1] - Y[3];
> P->J[1][2] = Z[1] - Z[3];
> P->J[2][0] = X[2] - X[3];
> P->J[2][1] = Y[2] - Y[3];
> P->J[2][2] = Z[2] - Z[3];
>
> //Determinant of the 3x3 Jacobian. (Expansion along 1st row).
> P->minor00 = P->J[1][1]*P->J[2][2] - P->J[2][1]*P->J[1][2];//Reuse when finding InvJ.
> P->minor01 = P->J[1][0]*P->J[2][2] - P->J[2][0]*P->J[1][2];//Reuse when finding InvJ.
> P->minor02 = P->J[1][0]*P->J[2][1] - P->J[2][0]*P->J[1][1];//Reuse when finding InvJ.
> P->detJ = P->J[0][0]*P->minor00 - P->J[0][1]*P->minor01 + P->J[0][2]*P->minor02;
> //Inverse of the 3x3 Jacobian
> P->invJ[0][0] = +P->minor00/P->detJ;//Reuse precomputed minor.
> P->invJ[0][1] = -(P->J[0][1]*P->J[2][2] - P->J[0][2]*P->J[2][1])/P->detJ;
> P->invJ[0][2] = +(P->J[0][1]*P->J[1][2] - P->J[1][1]*P->J[0][2])/P->detJ;
> P->invJ[1][0] = -P->minor01/P->detJ;//Reuse precomputed minor.
> P->invJ[1][1] = +(P->J[0][0]*P->J[2][2] - P->J[0][2]*P->J[2][0])/P->detJ;
> P->invJ[1][2] = -(P->J[0][0]*P->J[1][2] - P->J[1][0]*P->J[0][2])/P->detJ;
> P->invJ[2][0] = +P->minor02/P->detJ;//Reuse precomputed minor.
> P->invJ[2][1] = -(P->J[0][0]*P->J[2][1] - P->J[0][1]*P->J[2][0])/P->detJ;
> P->invJ[2][2] = +(P->J[0][0]*P->J[1][1] - P->J[0][1]*P->J[1][0])/P->detJ;
>
> //*****************STRAIN MATRIX (B)**************************************
> for(P->m=0;P->m<P->N;P->m++){//Scan all shape functions.
>
> P->x_in = 0 + P->m*3;//Every 3rd column starting at 0
> P->y_in = P->x_in +1;//Every 3rd column starting at 1
> P->z_in = P->y_in +1;//Every 3rd column starting at 2
>
> P->dX[0] = P->invJ[0][0]*P->dPsidXi[P->m] + P->invJ[0][1]*P->dPsidEta[P->m] + P->invJ[0][2]*P->dPsidZeta[P->m];
> P->dY[0] = P->invJ[1][0]*P->dPsidXi[P->m] + P->invJ[1][1]*P->dPsidEta[P->m] + P->invJ[1][2]*P->dPsidZeta[P->m];
> P->dZ[0] = P->invJ[2][0]*P->dPsidXi[P->m] + P->invJ[2][1]*P->dPsidEta[P->m] + P->invJ[2][2]*P->dPsidZeta[P->m];
>
> P->dX[1] = P->dZ[0]; P->dX[2] = P->dY[0];
> P->dY[1] = P->dZ[0]; P->dY[2] = P->dX[0];
> P->dZ[1] = P->dX[0]; P->dZ[2] = P->dY[0];
>
> ierr = MatSetValues(P->matB,3,P->x_insert,1,&(P->x_in),P->dX,INSERT_VALUES); CHKERRQ(ierr);
> ierr = MatSetValues(P->matB,3,P->y_insert,1,&(P->y_in),P->dY,INSERT_VALUES); CHKERRQ(ierr);
> ierr = MatSetValues(P->matB,3,P->z_insert,1,&(P->z_in),P->dZ,INSERT_VALUES); CHKERRQ(ierr);
>
> }//end "m" loop.
> ierr = MatAssemblyBegin(P->matB,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> ierr = MatAssemblyEnd(P->matB,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> //*****************STRAIN MATRIX (B)**************************************
>
> //Compute the matrix product B^t*C*B, scale it by the quadrature weights and add to KE.
> P->weight = -P->detJ/6;
>
> ierr = MatZeroEntries(*KE); CHKERRQ(ierr);
> ierr = MatPtAP(*matC,P->matB,MAT_INITIAL_MATRIX,PETSC_DEFAULT,&(P->matBTCB)); CHKERRQ(ierr);
> ierr = MatScale(P->matBTCB,P->weight); CHKERRQ(ierr);
> ierr = MatAssemblyBegin(P->matBTCB,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> ierr = MatAssemblyEnd(P->matBTCB,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> ierr = MatAXPY(*KE,1,P->matBTCB,DIFFERENT_NONZERO_PATTERN); CHKERRQ(ierr);//Add contribution of current quadrature point to KE.
>
> //ierr = MatPtAP(*matC,P->matB,MAT_INITIAL_MATRIX,PETSC_DEFAULT,KE);CHKERRQ(ierr);
> //ierr = MatScale(*KE,P->weight); CHKERRQ(ierr);
>
> ierr = MatAssemblyBegin(*KE,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> ierr = MatAssemblyEnd(*KE,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
>
> //Cleanup
> return ierr;
> }//end tetra4.
>
> PetscErrorCode ConstitutiveMatrix(Mat *matC,const char* type,PetscInt materialID){
> PetscErrorCode ierr;
> PetscBool isotropic = PETSC_FALSE,
> orthotropic = PETSC_FALSE;
> //PetscErrorCode PetscStrcmp(const char a[],const char b[],PetscBool *flg)
> ierr = PetscStrcmp(type,"isotropic",&isotropic); CHKERRQ(ierr);
> ierr = PetscStrcmp(type,"orthotropic",&orthotropic); CHKERRQ(ierr);
> ierr = MatCreate(PETSC_COMM_WORLD,matC); CHKERRQ(ierr);
> ierr = MatSetSizes(*matC,PETSC_DECIDE,PETSC_DECIDE,6,6); CHKERRQ(ierr);
> ierr = MatSetType(*matC,MATAIJ); CHKERRQ(ierr);
> ierr = MatSetUp(*matC); CHKERRQ(ierr);
>
> if(isotropic){
> PetscReal E,nu, M,L,vals[3];
> switch(materialID){
> case 0://Hardcoded properties for isotropic material #0
> E = 200;
> nu = 1./3;
> break;
> case 1://Hardcoded properties for isotropic material #1
> E = 96;
> nu = 1./3;
> break;
> }//end switch.
> M = E/(2*(1+nu));//Lame's constant 1 ("mu").
> L = E*nu/((1+nu)*(1-2*nu));//Lame's constant 2 ("lambda").
> //PetscErrorCode MatSetValues(Mat mat,PetscInt m,const PetscInt idxm[],PetscInt n,const PetscInt idxn[],const PetscScalar v[],InsertMode addv)
> PetscInt idxn[3] = {0,1,2};
> vals[0] = L+2*M; vals[1] = L; vals[2] = vals[1];
> ierr = MatSetValues(*matC,1,&idxn[0],3,idxn,vals,INSERT_VALUES); CHKERRQ(ierr);
> vals[1] = vals[0]; vals[0] = vals[2];
> ierr = MatSetValues(*matC,1,&idxn[1],3,idxn,vals,INSERT_VALUES); CHKERRQ(ierr);
> vals[2] = vals[1]; vals[1] = vals[0];
> ierr = MatSetValues(*matC,1,&idxn[2],3,idxn,vals,INSERT_VALUES); CHKERRQ(ierr);
> ierr = MatSetValue(*matC,3,3,M,INSERT_VALUES); CHKERRQ(ierr);
> ierr = MatSetValue(*matC,4,4,M,INSERT_VALUES); CHKERRQ(ierr);
> ierr = MatSetValue(*matC,5,5,M,INSERT_VALUES); CHKERRQ(ierr);
> }//end if.
> /*
> else if(orthotropic){
> switch(materialID){
> case 0:
> break;
> case 1:
> break;
> }//end switch.
> }//end else if.
> */
> ierr = MatAssemblyBegin(*matC,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> ierr = MatAssemblyEnd(*matC,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> //MatView(*matC,0);
> return ierr;
> }//End ConstitutiveMatrix
>
> PetscErrorCode InitializeKEpreallocation(struct preKE *P,const char* type){
> PetscErrorCode ierr;
> PetscBool istetra4 = PETSC_FALSE,
> ishex8 = PETSC_FALSE;
> ierr = PetscStrcmp(type,"tetra4",&istetra4); CHKERRQ(ierr);
> ierr = PetscStrcmp(type,"hex8",&ishex8); CHKERRQ(ierr);
> if(istetra4){
> P->sizeKE = 12;
> P->N = 4;
> }//end if.
> else if(ishex8){
> P->sizeKE = 24;
> P->N = 8;
> }//end else if.
>
> P->x_insert[0] = 0; P->x_insert[1] = 3; P->x_insert[2] = 5;
> P->y_insert[0] = 1; P->y_insert[1] = 4; P->y_insert[2] = 5;
> P->z_insert[0] = 2; P->z_insert[1] = 3; P->z_insert[2] = 4;
> //Allocate memory for the differentiated shape function vectors.
> ierr = PetscMalloc1(P->N,&(P->dPsidXi)); CHKERRQ(ierr);
> ierr = PetscMalloc1(P->N,&(P->dPsidEta)); CHKERRQ(ierr);
> ierr = PetscMalloc1(P->N,&(P->dPsidZeta)); CHKERRQ(ierr);
>
> P->dPsidXi[0] = +1.; P->dPsidEta[0] = 0.0; P->dPsidZeta[0] = 0.0;
> P->dPsidXi[1] = 0.0; P->dPsidEta[1] = +1.; P->dPsidZeta[1] = 0.0;
> P->dPsidXi[2] = 0.0; P->dPsidEta[2] = 0.0; P->dPsidZeta[2] = +1.;
> P->dPsidXi[3] = -1.; P->dPsidEta[3] = -1.; P->dPsidZeta[3] = -1.;
>
> //Strain matrix.
> ierr = MatCreate(PETSC_COMM_WORLD,&(P->matB)); CHKERRQ(ierr);
> ierr = MatSetSizes(P->matB,PETSC_DECIDE,PETSC_DECIDE,6,P->sizeKE); CHKERRQ(ierr);//Hardcoded
> ierr = MatSetType(P->matB,MATAIJ); CHKERRQ(ierr);
> ierr = MatSetUp(P->matB); CHKERRQ(ierr);
>
> //Contribution matrix.
> ierr = MatCreate(PETSC_COMM_WORLD,&(P->matBTCB)); CHKERRQ(ierr);
> ierr = MatSetSizes(P->matBTCB,PETSC_DECIDE,PETSC_DECIDE,P->sizeKE,P->sizeKE); CHKERRQ(ierr);
> ierr = MatSetType(P->matBTCB,MATAIJ); CHKERRQ(ierr);
> ierr = MatSetUp(P->matBTCB); CHKERRQ(ierr);
>
> //Element stiffness matrix.
> //ierr = MatCreateSeqDense(PETSC_COMM_SELF,12,12,NULL,&KE); CHKERRQ(ierr); //PARALLEL
>
> return ierr;
> }

From jeremy at seamplex.com  Thu Dec 30 07:05:50 2021
From: jeremy at seamplex.com (Jeremy Theler)
Date: Thu, 30 Dec 2021 10:05:50 -0300
Subject: [petsc-users] DM misuse causes massive memory leak?
In-Reply-To: 
References: 
Message-ID: <2608670e65f4cc3923ed102bd75444bd17dd224c.camel@seamplex.com>

Hola Jesús

You were lucky you got 20x. See

https://petsc.org/release/docs/manualpages/Mat/MatSeqAIJSetPreallocation.html
https://petsc.org/release/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html

Quote: "For large problems you MUST preallocate memory or you will get
TERRIBLE performance, see the users' manual chapter on matrices."

A minimal sketch of what that preallocation looks like is appended after
the quoted message below.

--
jeremy


On Wed, 2021-12-29 at 22:12 +0000, Ferrand, Jesus A. wrote:
> Dear PETSc Team:
> 
> I have a question about DM and PetscSection. Say I import a mesh (for
> FEM purposes) and create a DMPlex for it. I then use PetscSections to
> set degrees of freedom per "point" (by point I mean vertices, lines,
> faces, and cells). I then use PetscSectionGetStorageSize() to get the
> size of the global stiffness matrix (K) needed for my FEM problem.
> One last detail, this K I populate inside a rather large loop using
> an element stiffness matrix function of my own. Instead of
> using DMCreateMatrix(), I manually created a Mat using MatCreate(),
> MatSetType(), MatSetSizes(), and MatSetUp(). I come to find that said
> loop is painfully slow when I use the manually created matrix, but
> 20x faster when I use the Mat coming out of DMCreateMatrix().
> 
> My question is then: Is the manual Mat a noob mistake and is it
> somehow creating a memory leak with K? Just in case it's something
> else I'm attaching the code. The loop that populates K is between
> lines 221 and 278. Anything related to DM, DMPlex, and PetscSection
> is between lines 117 and 180.
> 
> Machine Type: HP Laptop
> C-compiler: Gnu C
> OS: Ubuntu 20.04
> PETSc version: 3.16.0
> MPI Implementation: MPICH
> 
> Hope you all had a Merry Christmas and that you will have a happy and
> productive New Year. :D
> 
> Sincerely:
> J.A. Ferrand
> Embry-Riddle Aeronautical University - Daytona Beach FL
> M.Sc. Aerospace Engineering | May 2022
> B.Sc. Aerospace Engineering
> B.Sc. Computational Mathematics
> 
> Sigma Gamma Tau
> Tau Beta Pi
> Honors Program
> 
> Phone: (386)-843-1829
> Email(s): ferranj2 at my.erau.edu
>           jesus.ferrand at gmail.com
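
As a sketch only (none of this is from the attached file): the helper below
creates the global stiffness matrix with AIJ preallocation instead of a bare
MatCreate()/MatSetUp(). The helper name CreatePreallocatedK, the local_rows
argument (e.g. the value returned by PetscSectionGetStorageSize()), and the
per-row nonzero bounds nz_diag/nz_offdiag are illustrative assumptions; the
exact counts depend on the mesh connectivity, which is what DMCreateMatrix()
computes from the DMPlex/PetscSection automatically.

#include <petscmat.h>

/* Hypothetical helper: create and preallocate the global stiffness matrix
   before the MatSetValues() assembly loop. nz_diag and nz_offdiag are upper
   bounds on nonzeros per row in the diagonal and off-diagonal blocks. */
static PetscErrorCode CreatePreallocatedK(MPI_Comm comm, PetscInt local_rows,
                                          PetscInt nz_diag, PetscInt nz_offdiag,
                                          Mat *K)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatCreate(comm, K); CHKERRQ(ierr);
  ierr = MatSetSizes(*K, local_rows, local_rows, PETSC_DETERMINE, PETSC_DETERMINE); CHKERRQ(ierr);
  ierr = MatSetType(*K, MATAIJ); CHKERRQ(ierr);
  ierr = MatSetFromOptions(*K); CHKERRQ(ierr);
  /* Whichever preallocation routine matches the actual matrix type takes
     effect; the other one is silently ignored. */
  ierr = MatSeqAIJSetPreallocation(*K, nz_diag, NULL); CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(*K, nz_diag, NULL, nz_offdiag, NULL); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With the nonzero pattern reserved up front, the element loop that calls
MatSetValues() no longer triggers repeated mallocs, which is presumably where
the 20x gap against DMCreateMatrix() comes from.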