From matteo.semplice at uninsubria.it Thu Jul 1 03:04:26 2021
From: matteo.semplice at uninsubria.it (Matteo Semplice)
Date: Thu, 1 Jul 2021 10:04:26 +0200
Subject: [petsc-users] MatNest with Shell blocks for multipysics
Message-ID:

Hi.

We are designing a PETSc application that will employ a SNES solver on a multiphysics problem whose Jacobian will have a 2x2 block form, say A=[A00,A01;A10,A11]. We already have code for the top-left block A00 (a MatShell and a related Shell preconditioner) that we wish to reuse. We could implement the other blocks as Shells or assembled matrices. We would also like to compare our method with existing ones, so we'd like to be quite flexible in the choice of KSP and PC within the SNES. (To this end, implementing an assembled version of A00 and the other blocks would be easy.)

I am assuming that, in order to have one or more shell blocks, the full Jacobian should be a nested matrix, and I am wondering what is the best way to design the code.

We are going to use DMDAs to manipulate Vecs for both variable sets, so the DMComposite approach of https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex28.c.html is intriguing, but I have read in the comments that it has issues with the MatNest type.

My next guess would be to create the four submatrices ahead of time and then insert them in a MatNest, like in the Stokes example of https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html. However, in order to have shell blocks I guess it is almost mandatory to have the matrix partitioned among CPUs as the Vecs are, and I don't understand how Vecs end up being partitioned in ex70.

We could
- create a DMComposite and create the Vecs with it
- get the local sizes of the Vecs and subVecs for the two variable groups
- create the matrix as in ex70, using the shell type where/when needed, but instead of
      MatSetSizes(matblock, PETSC_DECIDE, PETSC_DECIDE, globalRows, globalCols)
  call
      MatSetSizes(matblock, localRows, localCols, PETSC_DETERMINE, PETSC_DETERMINE)
  using the local sizes of the subvectors.

Does this sound like a viable approach? Or do you have some different suggestions?

Thanks,
    Matteo

From varunhiremath at gmail.com Thu Jul 1 04:37:34 2021
From: varunhiremath at gmail.com (Varun Hiremath)
Date: Thu, 1 Jul 2021 02:37:34 -0700
Subject: [petsc-users] SLEPc: smallest eigenvalues
Message-ID:

Hi All,

I am trying to compute the smallest eigenvalues of a generalized system A*x = lambda*B*x. I don't explicitly know the matrix A (so I am using a shell matrix with a custom matmult function); however, the matrix B is explicitly known, so I compute inv(B)*A within the shell matrix and solve inv(B)*A*x = lambda*x.

To compute the smallest eigenvalues it is recommended to solve the inverted system, but since matrix A is not explicitly known I can't invert the system. Moreover, the size of the system can be really big, and with the default Krylov solver it is extremely slow. So is there a better way for me to compute the smallest eigenvalues of this system?

Thanks,
Varun

From jroman at dsic.upv.es Thu Jul 1 04:43:54 2021
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Thu, 1 Jul 2021 11:43:54 +0200
Subject: [petsc-users] SLEPc: smallest eigenvalues
In-Reply-To:
References:
Message-ID:

Is the problem symmetric (GHEP)? In that case, you can try LOBPCG on the pair (A,B). But this will likely be slow as well, unless you can provide a good preconditioner.
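A minimal setup along those lines might look like this (untested sketch; assumes A and B are the already-created Mats, error checking omitted):

    EPS eps;
    EPSCreate(PETSC_COMM_WORLD,&eps);
    EPSSetOperators(eps,A,B);              /* pass A and B themselves, not inv(B)*A */
    EPSSetProblemType(eps,EPS_GHEP);       /* only if the problem really is symmetric */
    EPSSetType(eps,EPSLOBPCG);
    EPSSetWhichEigenpairs(eps,EPS_SMALLEST_REAL);
    EPSSetFromOptions(eps);
    EPSSolve(eps);
    EPSDestroy(&eps);

Since EPSSetFromOptions() is called, the solver, tolerances and monitors can still be changed from the command line.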
Jose > El 1 jul 2021, a las 11:37, Varun Hiremath escribi?: > > Hi All, > > I am trying to compute the smallest eigenvalues of a generalized system A*x= lambda*B*x. I don't explicitly know the matrix A (so I am using a shell matrix with a custom matmult function) however, the matrix B is explicitly known so I compute inv(B)*A within the shell matrix and solve inv(B)*A*x = lambda*x. > > To compute the smallest eigenvalues it is recommended to solve the inverted system, but since matrix A is not explicitly known I can't invert the system. Moreover, the size of the system can be really big, and with the default Krylov solver, it is extremely slow. So is there a better way for me to compute the smallest eigenvalues of this system? > > Thanks, > Varun From varunhiremath at gmail.com Thu Jul 1 04:58:47 2021 From: varunhiremath at gmail.com (Varun Hiremath) Date: Thu, 1 Jul 2021 02:58:47 -0700 Subject: [petsc-users] SLEPc: smallest eigenvalues In-Reply-To: References: Message-ID: Sorry, no both A and B are general sparse matrices (non-hermitian). So is there anything else I could try? On Thu, Jul 1, 2021 at 2:43 AM Jose E. Roman wrote: > Is the problem symmetric (GHEP)? In that case, you can try LOBPCG on the > pair (A,B). But this will likely be slow as well, unless you can provide a > good preconditioner. > > Jose > > > > El 1 jul 2021, a las 11:37, Varun Hiremath > escribi?: > > > > Hi All, > > > > I am trying to compute the smallest eigenvalues of a generalized system > A*x= lambda*B*x. I don't explicitly know the matrix A (so I am using a > shell matrix with a custom matmult function) however, the matrix B is > explicitly known so I compute inv(B)*A within the shell matrix and solve > inv(B)*A*x = lambda*x. > > > > To compute the smallest eigenvalues it is recommended to solve the > inverted system, but since matrix A is not explicitly known I can't invert > the system. Moreover, the size of the system can be really big, and with > the default Krylov solver, it is extremely slow. So is there a better way > for me to compute the smallest eigenvalues of this system? > > > > Thanks, > > Varun > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Thu Jul 1 06:08:23 2021 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 1 Jul 2021 13:08:23 +0200 Subject: [petsc-users] SLEPc: smallest eigenvalues In-Reply-To: References: Message-ID: <179BDB69-1EC0-4334-A964-ABE29E33EFF8@dsic.upv.es> Smallest eigenvalue in magnitude or real part? > El 1 jul 2021, a las 11:58, Varun Hiremath escribi?: > > Sorry, no both A and B are general sparse matrices (non-hermitian). So is there anything else I could try? > > On Thu, Jul 1, 2021 at 2:43 AM Jose E. Roman wrote: > Is the problem symmetric (GHEP)? In that case, you can try LOBPCG on the pair (A,B). But this will likely be slow as well, unless you can provide a good preconditioner. > > Jose > > > > El 1 jul 2021, a las 11:37, Varun Hiremath escribi?: > > > > Hi All, > > > > I am trying to compute the smallest eigenvalues of a generalized system A*x= lambda*B*x. I don't explicitly know the matrix A (so I am using a shell matrix with a custom matmult function) however, the matrix B is explicitly known so I compute inv(B)*A within the shell matrix and solve inv(B)*A*x = lambda*x. > > > > To compute the smallest eigenvalues it is recommended to solve the inverted system, but since matrix A is not explicitly known I can't invert the system. 
Moreover, the size of the system can be really big, and with the default Krylov solver, it is extremely slow. So is there a better way for me to compute the smallest eigenvalues of this system? > > > > Thanks, > > Varun > From varunhiremath at gmail.com Thu Jul 1 06:23:16 2021 From: varunhiremath at gmail.com (Varun Hiremath) Date: Thu, 1 Jul 2021 04:23:16 -0700 Subject: [petsc-users] SLEPc: smallest eigenvalues In-Reply-To: <179BDB69-1EC0-4334-A964-ABE29E33EFF8@dsic.upv.es> References: <179BDB69-1EC0-4334-A964-ABE29E33EFF8@dsic.upv.es> Message-ID: I'm solving for the smallest eigenvalues in magnitude. Though is it cheaper to solve smallest in real part, as that might also work in my case? Thanks for your help. On Thu, Jul 1, 2021, 4:08 AM Jose E. Roman wrote: > Smallest eigenvalue in magnitude or real part? > > > > El 1 jul 2021, a las 11:58, Varun Hiremath > escribi?: > > > > Sorry, no both A and B are general sparse matrices (non-hermitian). So > is there anything else I could try? > > > > On Thu, Jul 1, 2021 at 2:43 AM Jose E. Roman wrote: > > Is the problem symmetric (GHEP)? In that case, you can try LOBPCG on the > pair (A,B). But this will likely be slow as well, unless you can provide a > good preconditioner. > > > > Jose > > > > > > > El 1 jul 2021, a las 11:37, Varun Hiremath > escribi?: > > > > > > Hi All, > > > > > > I am trying to compute the smallest eigenvalues of a generalized > system A*x= lambda*B*x. I don't explicitly know the matrix A (so I am using > a shell matrix with a custom matmult function) however, the matrix B is > explicitly known so I compute inv(B)*A within the shell matrix and solve > inv(B)*A*x = lambda*x. > > > > > > To compute the smallest eigenvalues it is recommended to solve the > inverted system, but since matrix A is not explicitly known I can't invert > the system. Moreover, the size of the system can be really big, and with > the default Krylov solver, it is extremely slow. So is there a better way > for me to compute the smallest eigenvalues of this system? > > > > > > Thanks, > > > Varun > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Thu Jul 1 06:29:19 2021 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 1 Jul 2021 13:29:19 +0200 Subject: [petsc-users] SLEPc: smallest eigenvalues In-Reply-To: References: <179BDB69-1EC0-4334-A964-ABE29E33EFF8@dsic.upv.es> Message-ID: <5B1750B3-E05F-45D7-929B-A5CF816B4A75@dsic.upv.es> For smallest real parts one could adapt ex34.c, but it is going to be costly https://slepc.upv.es/documentation/current/src/eps/tutorials/ex36.c.html Also, if eigenvalues are clustered around the origin, convergence may still be very slow. It is a tough problem, unless you are able to compute a good preconditioner of A (no need to compute the exact inverse). Jose > El 1 jul 2021, a las 13:23, Varun Hiremath escribi?: > > I'm solving for the smallest eigenvalues in magnitude. Though is it cheaper to solve smallest in real part, as that might also work in my case? Thanks for your help. > > On Thu, Jul 1, 2021, 4:08 AM Jose E. Roman wrote: > Smallest eigenvalue in magnitude or real part? > > > > El 1 jul 2021, a las 11:58, Varun Hiremath escribi?: > > > > Sorry, no both A and B are general sparse matrices (non-hermitian). So is there anything else I could try? > > > > On Thu, Jul 1, 2021 at 2:43 AM Jose E. Roman wrote: > > Is the problem symmetric (GHEP)? In that case, you can try LOBPCG on the pair (A,B). 
But this will likely be slow as well, unless you can provide a good preconditioner. > > > > Jose > > > > > > > El 1 jul 2021, a las 11:37, Varun Hiremath escribi?: > > > > > > Hi All, > > > > > > I am trying to compute the smallest eigenvalues of a generalized system A*x= lambda*B*x. I don't explicitly know the matrix A (so I am using a shell matrix with a custom matmult function) however, the matrix B is explicitly known so I compute inv(B)*A within the shell matrix and solve inv(B)*A*x = lambda*x. > > > > > > To compute the smallest eigenvalues it is recommended to solve the inverted system, but since matrix A is not explicitly known I can't invert the system. Moreover, the size of the system can be really big, and with the default Krylov solver, it is extremely slow. So is there a better way for me to compute the smallest eigenvalues of this system? > > > > > > Thanks, > > > Varun > > > From varunhiremath at gmail.com Thu Jul 1 06:36:27 2021 From: varunhiremath at gmail.com (Varun Hiremath) Date: Thu, 1 Jul 2021 04:36:27 -0700 Subject: [petsc-users] SLEPc: smallest eigenvalues In-Reply-To: <5B1750B3-E05F-45D7-929B-A5CF816B4A75@dsic.upv.es> References: <179BDB69-1EC0-4334-A964-ABE29E33EFF8@dsic.upv.es> <5B1750B3-E05F-45D7-929B-A5CF816B4A75@dsic.upv.es> Message-ID: Thanks. I actually do have a 1st order approximation of matrix A, that I can explicitly compute and also invert. Can I use that matrix as preconditioner to speed things up? Is there some example that explains how to setup and call SLEPc for this scenario? On Thu, Jul 1, 2021, 4:29 AM Jose E. Roman wrote: > For smallest real parts one could adapt ex34.c, but it is going to be > costly > https://slepc.upv.es/documentation/current/src/eps/tutorials/ex36.c.html > Also, if eigenvalues are clustered around the origin, convergence may > still be very slow. > > It is a tough problem, unless you are able to compute a good > preconditioner of A (no need to compute the exact inverse). > > Jose > > > > El 1 jul 2021, a las 13:23, Varun Hiremath > escribi?: > > > > I'm solving for the smallest eigenvalues in magnitude. Though is it > cheaper to solve smallest in real part, as that might also work in my case? > Thanks for your help. > > > > On Thu, Jul 1, 2021, 4:08 AM Jose E. Roman wrote: > > Smallest eigenvalue in magnitude or real part? > > > > > > > El 1 jul 2021, a las 11:58, Varun Hiremath > escribi?: > > > > > > Sorry, no both A and B are general sparse matrices (non-hermitian). So > is there anything else I could try? > > > > > > On Thu, Jul 1, 2021 at 2:43 AM Jose E. Roman > wrote: > > > Is the problem symmetric (GHEP)? In that case, you can try LOBPCG on > the pair (A,B). But this will likely be slow as well, unless you can > provide a good preconditioner. > > > > > > Jose > > > > > > > > > > El 1 jul 2021, a las 11:37, Varun Hiremath > escribi?: > > > > > > > > Hi All, > > > > > > > > I am trying to compute the smallest eigenvalues of a generalized > system A*x= lambda*B*x. I don't explicitly know the matrix A (so I am using > a shell matrix with a custom matmult function) however, the matrix B is > explicitly known so I compute inv(B)*A within the shell matrix and solve > inv(B)*A*x = lambda*x. > > > > > > > > To compute the smallest eigenvalues it is recommended to solve the > inverted system, but since matrix A is not explicitly known I can't invert > the system. Moreover, the size of the system can be really big, and with > the default Krylov solver, it is extremely slow. 
So is there a better way > for me to compute the smallest eigenvalues of this system? > > > > > > > > Thanks, > > > > Varun > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Thu Jul 1 06:45:57 2021 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 1 Jul 2021 13:45:57 +0200 Subject: [petsc-users] SLEPc: smallest eigenvalues In-Reply-To: References: <179BDB69-1EC0-4334-A964-ABE29E33EFF8@dsic.upv.es> <5B1750B3-E05F-45D7-929B-A5CF816B4A75@dsic.upv.es> Message-ID: <7031EC8B-A238-45AD-B4C2-FA8988022864@dsic.upv.es> Then I would try Davidson methods https://doi.org/10.1145/2543696 You can also try Krylov-Schur with "inexact" shift-and-invert, for instance, with preconditioned BiCGStab or GMRES, see section 3.4.1 of the users manual. In both cases, you have to pass matrix A in the call to EPSSetOperators() and the preconditioner matrix via STSetPreconditionerMat() - note this function was introduced in version 3.15. Jose > El 1 jul 2021, a las 13:36, Varun Hiremath escribi?: > > Thanks. I actually do have a 1st order approximation of matrix A, that I can explicitly compute and also invert. Can I use that matrix as preconditioner to speed things up? Is there some example that explains how to setup and call SLEPc for this scenario? > > On Thu, Jul 1, 2021, 4:29 AM Jose E. Roman wrote: > For smallest real parts one could adapt ex34.c, but it is going to be costly https://slepc.upv.es/documentation/current/src/eps/tutorials/ex36.c.html > Also, if eigenvalues are clustered around the origin, convergence may still be very slow. > > It is a tough problem, unless you are able to compute a good preconditioner of A (no need to compute the exact inverse). > > Jose > > > > El 1 jul 2021, a las 13:23, Varun Hiremath escribi?: > > > > I'm solving for the smallest eigenvalues in magnitude. Though is it cheaper to solve smallest in real part, as that might also work in my case? Thanks for your help. > > > > On Thu, Jul 1, 2021, 4:08 AM Jose E. Roman wrote: > > Smallest eigenvalue in magnitude or real part? > > > > > > > El 1 jul 2021, a las 11:58, Varun Hiremath escribi?: > > > > > > Sorry, no both A and B are general sparse matrices (non-hermitian). So is there anything else I could try? > > > > > > On Thu, Jul 1, 2021 at 2:43 AM Jose E. Roman wrote: > > > Is the problem symmetric (GHEP)? In that case, you can try LOBPCG on the pair (A,B). But this will likely be slow as well, unless you can provide a good preconditioner. > > > > > > Jose > > > > > > > > > > El 1 jul 2021, a las 11:37, Varun Hiremath escribi?: > > > > > > > > Hi All, > > > > > > > > I am trying to compute the smallest eigenvalues of a generalized system A*x= lambda*B*x. I don't explicitly know the matrix A (so I am using a shell matrix with a custom matmult function) however, the matrix B is explicitly known so I compute inv(B)*A within the shell matrix and solve inv(B)*A*x = lambda*x. > > > > > > > > To compute the smallest eigenvalues it is recommended to solve the inverted system, but since matrix A is not explicitly known I can't invert the system. Moreover, the size of the system can be really big, and with the default Krylov solver, it is extremely slow. So is there a better way for me to compute the smallest eigenvalues of this system? 
> > > > > > > > Thanks, > > > > Varun > > > > > > From varunhiremath at gmail.com Thu Jul 1 07:01:58 2021 From: varunhiremath at gmail.com (Varun Hiremath) Date: Thu, 1 Jul 2021 05:01:58 -0700 Subject: [petsc-users] SLEPc: smallest eigenvalues In-Reply-To: <7031EC8B-A238-45AD-B4C2-FA8988022864@dsic.upv.es> References: <179BDB69-1EC0-4334-A964-ABE29E33EFF8@dsic.upv.es> <5B1750B3-E05F-45D7-929B-A5CF816B4A75@dsic.upv.es> <7031EC8B-A238-45AD-B4C2-FA8988022864@dsic.upv.es> Message-ID: Thank you very much for these suggestions! We are currently using version 3.12, so I'll try to update to the latest version and try your suggestions. Let me get back to you, thanks! On Thu, Jul 1, 2021, 4:45 AM Jose E. Roman wrote: > Then I would try Davidson methods https://doi.org/10.1145/2543696 > You can also try Krylov-Schur with "inexact" shift-and-invert, for > instance, with preconditioned BiCGStab or GMRES, see section 3.4.1 of the > users manual. > > In both cases, you have to pass matrix A in the call to EPSSetOperators() > and the preconditioner matrix via STSetPreconditionerMat() - note this > function was introduced in version 3.15. > > Jose > > > > > El 1 jul 2021, a las 13:36, Varun Hiremath > escribi?: > > > > Thanks. I actually do have a 1st order approximation of matrix A, that I > can explicitly compute and also invert. Can I use that matrix as > preconditioner to speed things up? Is there some example that explains how > to setup and call SLEPc for this scenario? > > > > On Thu, Jul 1, 2021, 4:29 AM Jose E. Roman wrote: > > For smallest real parts one could adapt ex34.c, but it is going to be > costly > https://slepc.upv.es/documentation/current/src/eps/tutorials/ex36.c.html > > Also, if eigenvalues are clustered around the origin, convergence may > still be very slow. > > > > It is a tough problem, unless you are able to compute a good > preconditioner of A (no need to compute the exact inverse). > > > > Jose > > > > > > > El 1 jul 2021, a las 13:23, Varun Hiremath > escribi?: > > > > > > I'm solving for the smallest eigenvalues in magnitude. Though is it > cheaper to solve smallest in real part, as that might also work in my case? > Thanks for your help. > > > > > > On Thu, Jul 1, 2021, 4:08 AM Jose E. Roman wrote: > > > Smallest eigenvalue in magnitude or real part? > > > > > > > > > > El 1 jul 2021, a las 11:58, Varun Hiremath > escribi?: > > > > > > > > Sorry, no both A and B are general sparse matrices (non-hermitian). > So is there anything else I could try? > > > > > > > > On Thu, Jul 1, 2021 at 2:43 AM Jose E. Roman > wrote: > > > > Is the problem symmetric (GHEP)? In that case, you can try LOBPCG on > the pair (A,B). But this will likely be slow as well, unless you can > provide a good preconditioner. > > > > > > > > Jose > > > > > > > > > > > > > El 1 jul 2021, a las 11:37, Varun Hiremath < > varunhiremath at gmail.com> escribi?: > > > > > > > > > > Hi All, > > > > > > > > > > I am trying to compute the smallest eigenvalues of a generalized > system A*x= lambda*B*x. I don't explicitly know the matrix A (so I am using > a shell matrix with a custom matmult function) however, the matrix B is > explicitly known so I compute inv(B)*A within the shell matrix and solve > inv(B)*A*x = lambda*x. > > > > > > > > > > To compute the smallest eigenvalues it is recommended to solve the > inverted system, but since matrix A is not explicitly known I can't invert > the system. Moreover, the size of the system can be really big, and with > the default Krylov solver, it is extremely slow. 
So is there a better way > for me to compute the smallest eigenvalues of this system? > > > > > > > > > > Thanks, > > > > > Varun > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pjool at mek.dtu.dk Thu Jul 1 07:20:04 2021 From: pjool at mek.dtu.dk (=?iso-8859-1?Q?Peder_J=F8rgensgaard_Olesen?=) Date: Thu, 1 Jul 2021 12:20:04 +0000 Subject: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process In-Reply-To: <87y2arpg8a.fsf@jedbrown.org> References: <93521e6acde64da2af7c415ceee9273c@mek.dtu.dk> <61644b633c624282a29f4e0ea80e61c7@mek.dtu.dk> <3285e9ab0ba941e583998a3bb7a5c67c@mek.dtu.dk>, <87y2arpg8a.fsf@jedbrown.org> Message-ID: Dear Jed I'm not really sure what it is you're asking (that's on me, still a rookie in the field), but I'll try to describe what I've done: Each process is assigned an indexed subset of the tasks (the tasks are of constant size), and, for each task index, the relevant data is scattered as a SEQVEC to the process (this is done for all processes in each step, using an adaption of the code in Matt's link). This way each process only receives just the data it needs to complete the task. While I'm currently working with very moderate size data sets I'll eventually need to handle something rather more massive, so I want to economize memory where possible and give each process only the data it needs. Med venlig hilsen / Best regards Peder ________________________________ Fra: Jed Brown Sendt: 30. juni 2021 16:41:25 Til: Peder J?rgensgaard Olesen; Junchao Zhang Cc: petsc-users at mcs.anl.gov Emne: Re: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process Peder J?rgensgaard Olesen via petsc-users writes: > I'm distributing a set of independent tasks over different processes, so I'm afraid sending everything to the zeroth process would rather thoroughly defeat the purpose of what I'm doing. It sounds like you're going to run this a bunch of times. Does it have to be sequential (gather to one rank at a time) or can it be an alltoall? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Jul 1 07:42:25 2021 From: jed at jedbrown.org (Jed Brown) Date: Thu, 01 Jul 2021 06:42:25 -0600 Subject: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process In-Reply-To: References: <93521e6acde64da2af7c415ceee9273c@mek.dtu.dk> <61644b633c624282a29f4e0ea80e61c7@mek.dtu.dk> <3285e9ab0ba941e583998a3bb7a5c67c@mek.dtu.dk> <87y2arpg8a.fsf@jedbrown.org> Message-ID: <87pmw2nr2m.fsf@jedbrown.org> Peder J?rgensgaard Olesen writes: > Each process is assigned an indexed subset of the tasks (the tasks are of constant size), and, for each task index, the relevant data is scattered as a SEQVEC to the process (this is done for all processes in each step, using an adaption of the code in Matt's link). This way each process only receives just the data it needs to complete the task. While I'm currently working with very moderate size data sets I'll eventually need to handle something rather more massive, so I want to economize memory where possible and give each process only the data it needs. >From the sounds of it, this pattern ultimately boils down to MPI_Gather being called P times where P is the size of the communicator. 
This will work okay when P is small, but it's much less efficient than calling MPI_Alltoall (or MPI_Alltoallv), which you can do by creating one PetscSF that ships the needed data to each task and PETSCSF_PATTERN_ALLTOALL. You can see an example. https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/tests/ex3.c#L93-151 From junchao.zhang at gmail.com Thu Jul 1 09:38:29 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Thu, 1 Jul 2021 09:38:29 -0500 Subject: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process In-Reply-To: <87pmw2nr2m.fsf@jedbrown.org> References: <93521e6acde64da2af7c415ceee9273c@mek.dtu.dk> <61644b633c624282a29f4e0ea80e61c7@mek.dtu.dk> <3285e9ab0ba941e583998a3bb7a5c67c@mek.dtu.dk> <87y2arpg8a.fsf@jedbrown.org> <87pmw2nr2m.fsf@jedbrown.org> Message-ID: Peder, PETSCSF_PATTERN_ALLTOALL only supports MPI_Alltoall (not Alltoallv), and is only used by petsc internally at few places. I suggest you can go with Matt's approach. After it solves your problem, you can distill an example to demo the communication pattern. Then we can see how to efficiently support that in petsc. Thanks. --Junchao Zhang On Thu, Jul 1, 2021 at 7:42 AM Jed Brown wrote: > Peder J?rgensgaard Olesen writes: > > > Each process is assigned an indexed subset of the tasks (the tasks are > of constant size), and, for each task index, the relevant data is scattered > as a SEQVEC to the process (this is done for all processes in each step, > using an adaption of the code in Matt's link). This way each process only > receives just the data it needs to complete the task. While I'm currently > working with very moderate size data sets I'll eventually need to handle > something rather more massive, so I want to economize memory where possible > and give each process only the data it needs. > > From the sounds of it, this pattern ultimately boils down to MPI_Gather > being called P times where P is the size of the communicator. This will > work okay when P is small, but it's much less efficient than calling > MPI_Alltoall (or MPI_Alltoallv), which you can do by creating one PetscSF > that ships the needed data to each task and PETSCSF_PATTERN_ALLTOALL. You > can see an example. > > > https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/tests/ex3.c#L93-151 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Jul 1 10:52:07 2021 From: jed at jedbrown.org (Jed Brown) Date: Thu, 01 Jul 2021 09:52:07 -0600 Subject: [petsc-users] MatNest with Shell blocks for multipysics In-Reply-To: References: Message-ID: <87h7heniag.fsf@jedbrown.org> Matteo Semplice writes: > Hi. > > We are designing a PETSc application that will employ a SNES solver on a > multiphysics problem whose jacobian will have a 2x2 block form, say > A=[A00,A01;A10,A11]. We already have code for the top left block A_00 (a > MatShell and a related Shell preconditioner) that we wish to reuse. We > could implement the other blocks as Shells or assembled matrices. We'd > like also to compare our method with existing ones, so we'd like to be > quite flexible in the choice of KSP and PC within the SNES. (To this > end, implementing an assembled version of the A00 and the other blocks > would be easy) > > I am assuming that, in order to have one or more shell blocks, the full > jacobian should be a nested matrix, and I am wondering what is the best > way to design the code. 
> > We are going to use DMDA's to manipulate Vecs for both variable sets, so > the DMComposite approach of > https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex28.c.html > is intriguing, but I have read in the comments that it has issues with > MatNest type. I think ex28 is better organization of code. You can DMCreateMatrix() and then set types/preallocation for off-diagonal blocks of the MatNest. I think the comment is unclear and not quite what was intended and originally worked (which was to assemble the off-diagonal blocks despite bad preallocation). https://gitlab.com/petsc/petsc/-/commit/6bdeb4dbc27a59cf9af4930e08bd1f9937e47c2d https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex28.c.html#line410 Note that if you're using DMDA and have collocated fields, you can skip all this complexity. And if you have a scattered discretization, consider DMStag. ex28 is showing how to solve a coupled problem where there is no suitable structure to convey the relation between discretizations. > My next guess would be to create the four submatrices ahead and then > insert them in a MatNest, like in the Stokes example of > https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html. > However, in order to have shell blocks I guess it is almost mandatory to > have the matrix partitioned among cpus as the Vecs are and I don't > understand how Vecs end up being partitioned in ex70. From matteo.semplice at uninsubria.it Thu Jul 1 11:44:05 2021 From: matteo.semplice at uninsubria.it (Matteo Semplice) Date: Thu, 1 Jul 2021 18:44:05 +0200 Subject: [petsc-users] MatNest with Shell blocks for multipysics In-Reply-To: <87h7heniag.fsf@jedbrown.org> References: <87h7heniag.fsf@jedbrown.org> Message-ID: <15b032eb-e6e7-66c5-f35a-fe871234c6fe@uninsubria.it> Il 01/07/21 17:52, Jed Brown ha scritto: > I think ex28 is better organization of code. You can DMCreateMatrix() and then set types/preallocation for off-diagonal blocks of the MatNest. I think the comment is unclear and not quite what was intended and originally worked (which was to assemble the off-diagonal blocks despite bad preallocation). > > https://gitlab.com/petsc/petsc/-/commit/6bdeb4dbc27a59cf9af4930e08bd1f9937e47c2d > > https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex28.c.html#line410 Thanks! Yesterday I was unable to make it work, but I'll have another go with ex28 then... > Note that if you're using DMDA and have collocated fields, you can skip all this complexity. And if you have a scattered discretization, consider DMStag. ex28 is showing how to solve a coupled problem where there is no suitable structure to convey the relation between discretizations. The current discretization is in fact colocated with the variables on the same grid, and we might stick to that for a while. However, one key point in the design is that the jacobian will be [A00,A01;A10,A11] and we already have a taylor-made shell preconditioner for A00 and A00 implemented as a shell matrix; the preconditioner for the full Jacobian will be to neglect the A10 block and do a block-triangular solve inverting A00 approximately with the shell preconditioner. I do not understand how creating a DMDA with n0+n1 dofs will let me easily reuse my shell preconditioner code on the top-left block. 
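For concreteness, the A00 pieces we want to reuse look roughly like this (just a sketch; the local size nloc0, the context appctx and the callback names are ours):

    Mat A00;
    PC  pcA00;
    MatCreateShell(PETSC_COMM_WORLD, nloc0, nloc0, PETSC_DETERMINE, PETSC_DETERMINE,
                   (void*)appctx, &A00);
    MatShellSetOperation(A00, MATOP_MULT, (void (*)(void))A00MatMult);
    PCCreate(PETSC_COMM_WORLD, &pcA00);
    PCSetType(pcA00, PCSHELL);
    PCShellSetContext(pcA00, appctx);
    PCShellSetApply(pcA00, A00PCApply);    /* our existing shell preconditioner for A00 */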
Matteo From knepley at gmail.com Thu Jul 1 14:36:40 2021 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 1 Jul 2021 14:36:40 -0500 Subject: [petsc-users] MatNest with Shell blocks for multipysics In-Reply-To: <15b032eb-e6e7-66c5-f35a-fe871234c6fe@uninsubria.it> References: <87h7heniag.fsf@jedbrown.org> <15b032eb-e6e7-66c5-f35a-fe871234c6fe@uninsubria.it> Message-ID: On Thu, Jul 1, 2021 at 11:44 AM Matteo Semplice < matteo.semplice at uninsubria.it> wrote: > Il 01/07/21 17:52, Jed Brown ha scritto: > > I think ex28 is better organization of code. You can DMCreateMatrix() > and then set types/preallocation for off-diagonal blocks of the MatNest. I > think the comment is unclear and not quite what was intended and originally > worked (which was to assemble the off-diagonal blocks despite bad > preallocation). > > > > > https://gitlab.com/petsc/petsc/-/commit/6bdeb4dbc27a59cf9af4930e08bd1f9937e47c2d > > > > > https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex28.c.html#line410 > > Thanks! Yesterday I was unable to make it work, but I'll have another go > with ex28 then... > > > Note that if you're using DMDA and have collocated fields, you can skip > all this complexity. And if you have a scattered discretization, consider > DMStag. ex28 is showing how to solve a coupled problem where there is no > suitable structure to convey the relation between discretizations. > > The current discretization is in fact colocated with the variables on > the same grid, and we might stick to that for a while. > > However, one key point in the design is that the jacobian will be > [A00,A01;A10,A11] and we already have a taylor-made shell preconditioner > for A00 and A00 implemented as a shell matrix; the preconditioner for > the full Jacobian will be to neglect the A10 block and do a > block-triangular solve inverting A00 approximately with the shell > preconditioner. > Okay, if that is the case, then you should use DMDA to layout the Jacobian, with DMCreateMatrix(), and then PCFIELDSPLIT to preconditioner, since it will automatically split things into your two pieces, and you can use your custom PC for A00 and multiplicative to get your upper triangular PC, or you could use a Schur complement to see if stronger coupling was more effective. Thanks, Matt > I do not understand how creating a DMDA with n0+n1 dofs will let me > easily reuse my shell preconditioner code on the top-left block. > > Matteo > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jul 1 14:42:19 2021 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 1 Jul 2021 14:42:19 -0500 Subject: [petsc-users] MatNest with Shell blocks for multipysics In-Reply-To: <15b032eb-e6e7-66c5-f35a-fe871234c6fe@uninsubria.it> References: <87h7heniag.fsf@jedbrown.org> <15b032eb-e6e7-66c5-f35a-fe871234c6fe@uninsubria.it> Message-ID: <68848A0B-21F2-4129-8D7E-3847387E5916@petsc.dev> > On Jul 1, 2021, at 11:44 AM, Matteo Semplice wrote: > > Il 01/07/21 17:52, Jed Brown ha scritto: >> I think ex28 is better organization of code. You can DMCreateMatrix() and then set types/preallocation for off-diagonal blocks of the MatNest. 
I think the comment is unclear and not quite what was intended and originally worked (which was to assemble the off-diagonal blocks despite bad preallocation). >> >> https://gitlab.com/petsc/petsc/-/commit/6bdeb4dbc27a59cf9af4930e08bd1f9937e47c2d >> >> https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex28.c.html#line410 > > Thanks! Yesterday I was unable to make it work, but I'll have another go with ex28 then... > >> Note that if you're using DMDA and have collocated fields, you can skip all this complexity. And if you have a scattered discretization, consider DMStag. ex28 is showing how to solve a coupled problem where there is no suitable structure to convey the relation between discretizations. > > The current discretization is in fact colocated with the variables on the same grid, and we might stick to that for a while. > > However, one key point in the design is that the jacobian will be [A00,A01;A10,A11] and we already have a taylor-made shell preconditioner for A00 and A00 implemented as a shell matrix; the preconditioner for the full Jacobian will be to neglect the A10 block and do a block-triangular solve inverting A00 approximately with the shell preconditioner. > > I do not understand how creating a DMDA with n0+n1 dofs will let me easily reuse my shell preconditioner code on the top-left block. PCFIELDSPLIT (and friends) do not order the dof by block, rather they "pull out" the required pieces of the vector (using IS's) when needed. Your shell preconditioner will just operate on the "pulled out" vectors. If you use DMDAVecGetArray etc in your shell preconditioner you can create an auxiliary DMDA of that smaller dof to still be able to use the DMDAVecGetArray constructs. You definitely should use an "all dof" DMDA to define your entire problem and not try to "glue" together vectors using DMComposites or other such things. Taking apart is much easier in parallel computing then putting together. Barry > > Matteo > > From matteo.semplice at uninsubria.it Thu Jul 1 16:10:32 2021 From: matteo.semplice at uninsubria.it (Matteo Semplice) Date: Thu, 1 Jul 2021 23:10:32 +0200 Subject: [petsc-users] MatNest with Shell blocks for multipysics In-Reply-To: <68848A0B-21F2-4129-8D7E-3847387E5916@petsc.dev> References: <87h7heniag.fsf@jedbrown.org> <15b032eb-e6e7-66c5-f35a-fe871234c6fe@uninsubria.it> <68848A0B-21F2-4129-8D7E-3847387E5916@petsc.dev> Message-ID: <1b8d4b0a-bb4d-7e70-ddbc-adb487b23672@uninsubria.it> Thank you, Matthew and Barry! I can now see a way forward. Il 01/07/21 21:42, Barry Smith ha scritto: >> I do not understand how creating a DMDA with n0+n1 dofs will let me easily reuse my shell preconditioner code on the top-left block. > PCFIELDSPLIT (and friends) do not order the dof by block, rather they "pull out" the required pieces of the vector (using IS's) when needed. Your shell preconditioner will just operate on the "pulled out" vectors. If you use DMDAVecGetArray etc in your shell preconditioner you can create an auxiliary DMDA of that smaller dof to still be able to use the DMDAVecGetArray constructs. Just to be sure: - I create a DMDA with n0+n1 dof per node - the jacobian will be associated to this DMDA. (It is not crucial, but can this be a shell matrix?) - I create a multiplicative PCfieldsplit, assign the correct n0 and n1 fields to each split (and get the IS for the splits via PCFieldSplitGetIS, shuld I need them) - the routine A00PCApply for the shell preconditioner of the A00 block, will see a Vec which is really a subvector with n0 dofs per node. 
In order to use DMDA semantics on this one, I create a DMDA with n0 dofs using DMDACreateCompatibleDMDA and then DMDAVecGetArrayDOF using the smaller DMDA?

Best,
    Matteo

From bsmith at petsc.dev Thu Jul 1 16:56:39 2021
From: bsmith at petsc.dev (Barry Smith)
Date: Thu, 1 Jul 2021 16:56:39 -0500
Subject: [petsc-users] MatNest with Shell blocks for multipysics
In-Reply-To: <1b8d4b0a-bb4d-7e70-ddbc-adb487b23672@uninsubria.it>
References: <87h7heniag.fsf@jedbrown.org> <15b032eb-e6e7-66c5-f35a-fe871234c6fe@uninsubria.it> <68848A0B-21F2-4129-8D7E-3847387E5916@petsc.dev> <1b8d4b0a-bb4d-7e70-ddbc-adb487b23672@uninsubria.it>
Message-ID: <13EAA2F3-90E1-486D-BCC8-BD4FE829677E@petsc.dev>

Sounds good. Yes, the outermost Jacobian can be shell (or even nest, but then I think you need to "build" it yourself; I don't think the DMDA will give back an appropriate nest matrix).

> On Jul 1, 2021, at 4:10 PM, Matteo Semplice wrote:
>
> Thank you, Matthew and Barry!
>
> I can now see a way forward.
>
> Il 01/07/21 21:42, Barry Smith ha scritto:
>>> I do not understand how creating a DMDA with n0+n1 dofs will let me easily reuse my shell preconditioner code on the top-left block.
>> PCFIELDSPLIT (and friends) do not order the dof by block, rather they "pull out" the required pieces of the vector (using IS's) when needed. Your shell preconditioner will just operate on the "pulled out" vectors. If you use DMDAVecGetArray etc in your shell preconditioner you can create an auxiliary DMDA of that smaller dof to still be able to use the DMDAVecGetArray constructs.
> Just to be sure:
> - I create a DMDA with n0+n1 dof per node
>
> - the jacobian will be associated to this DMDA. (It is not crucial, but can this be a shell matrix?)
>
> - I create a multiplicative PCfieldsplit, assign the correct n0 and n1 fields to each split (and get the IS for the splits via PCFieldSplitGetIS, should I need them)
>
> - the routine A00PCApply for the shell preconditioner of the A00 block, will see a Vec which is really a subvector with n0 dofs per node. In order to use DMDA semantics on this one, I create a DMDA with n0 dofs using DMDACreateCompatibleDMDA and then DMDAVecGetArrayDOF using the smaller DMDA?
>
> Best,
>     Matteo

From jekozdon at nps.edu Thu Jul 1 23:25:42 2021
From: jekozdon at nps.edu (Kozdon, Jeremy (CIV))
Date: Fri, 2 Jul 2021 04:25:42 +0000
Subject: [petsc-users] PETSc with Julia Binary Builder
Message-ID: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu>

I have been talking with Boris Kaus and Patrick Sanan about trying to revive the Julia PETSc interface wrappers. One of the first things to get going is to use Julia's binary builder [1] to wrap builds of the PETSc library with more scalar, real, and int types; the current distribution is just Real, double, Int32. I've been working on a PR for this [2] but have been running into some build issues on some architectures [3].

I doubt that anyone here is an expert with Julia's binary builder system, but I was wondering if anyone who is better with the PETSc build system can see anything obvious from the configure.log [4] that might help me sort out what's going on.

This exact script worked on 2020-08-20 [5] to build the libraries, so something has obviously changed with either the Julia build system and/or one (or more!) of the dependency binaries.
For those that don't know, Julia's binary builder system essentially allows users to download binaries directly from the web for any system that the Julia Programing language distributes binaries for. So a (desktop) user can get MPI, PETSc, etc. without the headache of having to build anything from scratch; obviously on clusters you would still want to use system MPIs and what not. ---- [1] https://github.com/JuliaPackaging/BinaryBuilder.jl [2] https://github.com/JuliaPackaging/Yggdrasil/pull/3249 [3] https://github.com/JuliaPackaging/Yggdrasil/pull/3249#issuecomment-872698681 [4] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log [5] https://github.com/JuliaBinaryWrappers/PETSc_jll.jl/releases/tag/PETSc-v3.13.4%2B0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Fri Jul 2 02:05:26 2021 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Fri, 2 Jul 2021 09:05:26 +0200 Subject: [petsc-users] PETSc with Julia Binary Builder In-Reply-To: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu> References: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu> Message-ID: <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> As you mention in [4], the proximate cause of the configure failure is this link error [8]: Naively, that looks like a problem to be resolved at the level of the C++ compiler and MPI. Unless there are wrinkles of this build process that I don't understand (likely), this [6] looks non-standard to me: includedir="${prefix}/include" ... ./configure --prefix=${prefix} \ ... -with-mpi-include="${includedir}" \ ... Is it possible to configure using --with-mpi-dir, instead of the separate --with-mpi-include and --with-mpi-lib commands? As an aside, maybe Satish can say more, but I'm not sure if it's advisable to override variables in the make command [7]. [8] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log-L7795 [6] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-build_tarballs-jl-L45 [7] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-build_tarballs-jl-L55 > Am 02.07.2021 um 06:25 schrieb Kozdon, Jeremy (CIV) : > > I have been talking with Boris Kaus and Patrick Sanan about trying to revive the Julia PETSc interface wrappers. One of the first things to get going is to use Julia's binary builder [1] to wrap more scalar, real, and int type builds of the PETSc library; the current distribution is just Real, double, Int32. I've been working on a PR for this [2] but have been running into some build issues on some architectures [3]. > > I doubt that anyone here is an expert with Julia's binary builder system, but I was wondering if anyone who is better with the PETSc build system can see anything obvious from the configure.log [4] that might help me sort out what's going on. > > This exact script worked on 2020-08-20 [5] to build the libraries, se something has obviously changed with either the Julia build system and/or one (or more!) of the dependency binaries. > > For those that don't know, Julia's binary builder system essentially allows users to download binaries directly from the web for any system that the Julia Programing language distributes binaries for. So a (desktop) user can get MPI, PETSc, etc. without the headache of having to build anything from scratch; obviously on clusters you would still want to use system MPIs and what not. 
> ----
> [1] https://github.com/JuliaPackaging/BinaryBuilder.jl
> [2] https://github.com/JuliaPackaging/Yggdrasil/pull/3249
> [3] https://github.com/JuliaPackaging/Yggdrasil/pull/3249#issuecomment-872698681
> [4] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log
> [5] https://github.com/JuliaBinaryWrappers/PETSc_jll.jl/releases/tag/PETSc-v3.13.4%2B0

From pjool at mek.dtu.dk Fri Jul 2 04:06:58 2021
From: pjool at mek.dtu.dk (Peder Jørgensgaard Olesen)
Date: Fri, 2 Jul 2021 09:06:58 +0000
Subject: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process
In-Reply-To:
References: <93521e6acde64da2af7c415ceee9273c@mek.dtu.dk> <61644b633c624282a29f4e0ea80e61c7@mek.dtu.dk> <3285e9ab0ba941e583998a3bb7a5c67c@mek.dtu.dk>, <87y2arpg8a.fsf@jedbrown.org> <87pmw2nr2m.fsf@jedbrown.org>
Message-ID:

Matt's method seems to work well, though instead of editing the actual function I put the relevant parts directly into my code. I made the small example attached here.

I might look into Star Forests at some point, though it's not really touched upon in the manual (I will probably take a look at your paper, https://arxiv.org/abs/2102.13018).

Med venlig hilsen / Best regards

Peder

________________________________
From: Junchao Zhang
Sent: 1 July 2021 16:38:29
To: Jed Brown
Cc: Peder Jørgensgaard Olesen; petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process

Peder,
PETSCSF_PATTERN_ALLTOALL only supports MPI_Alltoall (not Alltoallv), and is only used by PETSc internally in a few places. I suggest you go with Matt's approach. After it solves your problem, you can distill an example to demonstrate the communication pattern. Then we can see how to efficiently support that in PETSc.
Thanks.
--Junchao Zhang

On Thu, Jul 1, 2021 at 7:42 AM Jed Brown wrote:

Peder Jørgensgaard Olesen writes:

> Each process is assigned an indexed subset of the tasks (the tasks are of constant size), and, for each task index, the relevant data is scattered as a SEQVEC to the process (this is done for all processes in each step, using an adaption of the code in Matt's link). This way each process only receives just the data it needs to complete the task. While I'm currently working with very moderate size data sets I'll eventually need to handle something rather more massive, so I want to economize memory where possible and give each process only the data it needs.

From the sounds of it, this pattern ultimately boils down to MPI_Gather being called P times where P is the size of the communicator. This will work okay when P is small, but it's much less efficient than calling MPI_Alltoall (or MPI_Alltoallv), which you can do by creating one PetscSF that ships the needed data to each task and PETSCSF_PATTERN_ALLTOALL. You can see an example.

https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/tests/ex3.c#L93-151
-------------- next part --------------
A non-text attachment was scrubbed...
Name: scatter_demo.c Type: text/x-csrc Size: 1957 bytes Desc: scatter_demo.c URL: From stefano.zampini at gmail.com Fri Jul 2 10:03:51 2021 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Fri, 2 Jul 2021 17:03:51 +0200 Subject: [petsc-users] PETSc with Julia Binary Builder In-Reply-To: <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> References: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu> <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> Message-ID: Patrick Should this be fixed in PETSc build system? https://github.com/JuliaPackaging/Yggdrasil/blob/master/P/PETSc/bundled/patches/petsc_name_mangle.patch > On Jul 2, 2021, at 9:05 AM, Patrick Sanan wrote: > > As you mention in [4], the proximate cause of the configure failure is this link error [8]: > > Naively, that looks like a problem to be resolved at the level of the C++ compiler and MPI. > > Unless there are wrinkles of this build process that I don't understand (likely), this [6] looks non-standard to me: > > includedir="${prefix}/include" > ... > ./configure --prefix=${prefix} \ > ... > -with-mpi-include="${includedir}" \ > ... > > > Is it possible to configure using --with-mpi-dir, instead of the separate --with-mpi-include and --with-mpi-lib commands? > > > As an aside, maybe Satish can say more, but I'm not sure if it's advisable to override variables in the make command [7]. > > [8] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log-L7795 > [6] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-build_tarballs-jl-L45 > [7] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-build_tarballs-jl-L55 > > >> Am 02.07.2021 um 06:25 schrieb Kozdon, Jeremy (CIV) : >> >> I have been talking with Boris Kaus and Patrick Sanan about trying to revive the Julia PETSc interface wrappers. One of the first things to get going is to use Julia's binary builder [1] to wrap more scalar, real, and int type builds of the PETSc library; the current distribution is just Real, double, Int32. I've been working on a PR for this [2] but have been running into some build issues on some architectures [3]. >> >> I doubt that anyone here is an expert with Julia's binary builder system, but I was wondering if anyone who is better with the PETSc build system can see anything obvious from the configure.log [4] that might help me sort out what's going on. >> >> This exact script worked on 2020-08-20 [5] to build the libraries, se something has obviously changed with either the Julia build system and/or one (or more!) of the dependency binaries. >> >> For those that don't know, Julia's binary builder system essentially allows users to download binaries directly from the web for any system that the Julia Programing language distributes binaries for. So a (desktop) user can get MPI, PETSc, etc. without the headache of having to build anything from scratch; obviously on clusters you would still want to use system MPIs and what not. 
>> >> ---- >> >> [1] https://github.com/JuliaPackaging/BinaryBuilder.jl >> [2] https://github.com/JuliaPackaging/Yggdrasil/pull/3249 >> [3] https://github.com/JuliaPackaging/Yggdrasil/pull/3249#issuecomment-872698681 >> [4] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log >> [5] https://github.com/JuliaBinaryWrappers/PETSc_jll.jl/releases/tag/PETSc-v3.13.4%2B0 > From knepley at gmail.com Fri Jul 2 10:04:33 2021 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 2 Jul 2021 10:04:33 -0500 Subject: [petsc-users] PETSc with Julia Binary Builder In-Reply-To: <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> References: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu> <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> Message-ID: On Fri, Jul 2, 2021 at 2:05 AM Patrick Sanan wrote: > As you mention in [4], the proximate cause of the configure failure is > this link error [8]: > That missing function was introduced in GCC 7.0, and is there only for i686, not x86_64. This looks like a bad GCC install to me. Matt > Naively, that looks like a problem to be resolved at the level of the C++ > compiler and MPI. > > Unless there are wrinkles of this build process that I don't understand > (likely), this [6] looks non-standard to me: > > includedir="${prefix}/include" > ... > ./configure --prefix=${prefix} \ > ... > -with-mpi-include="${includedir}" \ > ... > > > Is it possible to configure using --with-mpi-dir, instead of the separate > --with-mpi-include and --with-mpi-lib commands? > > > As an aside, maybe Satish can say more, but I'm not sure if it's advisable > to override variables in the make command [7]. > > [8] > https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log-L7795 > [6] > https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-build_tarballs-jl-L45 > [7] > https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-build_tarballs-jl-L55 > > > > Am 02.07.2021 um 06:25 schrieb Kozdon, Jeremy (CIV) : > > > > I have been talking with Boris Kaus and Patrick Sanan about trying to > revive the Julia PETSc interface wrappers. One of the first things to get > going is to use Julia's binary builder [1] to wrap more scalar, real, and > int type builds of the PETSc library; the current distribution is just > Real, double, Int32. I've been working on a PR for this [2] but have been > running into some build issues on some architectures [3]. > > > > I doubt that anyone here is an expert with Julia's binary builder > system, but I was wondering if anyone who is better with the PETSc build > system can see anything obvious from the configure.log [4] that might help > me sort out what's going on. > > > > This exact script worked on 2020-08-20 [5] to build the libraries, se > something has obviously changed with either the Julia build system and/or > one (or more!) of the dependency binaries. > > > > For those that don't know, Julia's binary builder system essentially > allows users to download binaries directly from the web for any system that > the Julia Programing language distributes binaries for. So a (desktop) user > can get MPI, PETSc, etc. without the headache of having to build anything > from scratch; obviously on clusters you would still want to use system MPIs > and what not. 
> > > > ---- > > > > [1] https://github.com/JuliaPackaging/BinaryBuilder.jl > > [2] https://github.com/JuliaPackaging/Yggdrasil/pull/3249 > > [3] > https://github.com/JuliaPackaging/Yggdrasil/pull/3249#issuecomment-872698681 > > [4] > https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log > > [5] > https://github.com/JuliaBinaryWrappers/PETSc_jll.jl/releases/tag/PETSc-v3.13.4%2B0 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Jul 2 10:24:58 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 2 Jul 2021 10:24:58 -0500 Subject: [petsc-users] PETSc with Julia Binary Builder In-Reply-To: <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> References: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu> <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> Message-ID: On Fri, 2 Jul 2021, Patrick Sanan wrote: > As you mention in [4], the proximate cause of the configure failure is this link error [8]: > > Naively, that looks like a problem to be resolved at the level of the C++ compiler and MPI. > > Unless there are wrinkles of this build process that I don't understand (likely), this [6] looks non-standard to me: > > includedir="${prefix}/include" > ... > ./configure --prefix=${prefix} \ > ... > -with-mpi-include="${includedir}" \ > ... > > > Is it possible to configure using --with-mpi-dir, instead of the separate --with-mpi-include and --with-mpi-lib commands? Well --with-mpi-dir is preferable if using mpicc/mpif90 etc from that location. Otherwise - if one really need to use native compilers [aka gcc/gfortran] - its appropriate to use --with-mpi-include/with-mpi-lib options. Best to verify if the correct values are used from 'mpicc -show' [or equivalent] for this install --with-mpi-lib="[/workspace/destdir/lib/libmpi.so,/workspace/destdir/lib/libmpifort.so]" This list appears to be in the wrong order - but then - the order usually doesn't matter for shared library usage. Satish > > > As an aside, maybe Satish can say more, but I'm not sure if it's advisable to override variables in the make command [7]. > > [8] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log-L7795 > [6] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-build_tarballs-jl-L45 > [7] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-build_tarballs-jl-L55 > > > > Am 02.07.2021 um 06:25 schrieb Kozdon, Jeremy (CIV) : > > > > I have been talking with Boris Kaus and Patrick Sanan about trying to revive the Julia PETSc interface wrappers. One of the first things to get going is to use Julia's binary builder [1] to wrap more scalar, real, and int type builds of the PETSc library; the current distribution is just Real, double, Int32. I've been working on a PR for this [2] but have been running into some build issues on some architectures [3]. > > > > I doubt that anyone here is an expert with Julia's binary builder system, but I was wondering if anyone who is better with the PETSc build system can see anything obvious from the configure.log [4] that might help me sort out what's going on. > > > > This exact script worked on 2020-08-20 [5] to build the libraries, se something has obviously changed with either the Julia build system and/or one (or more!) 
of the dependency binaries. > > > > For those that don't know, Julia's binary builder system essentially allows users to download binaries directly from the web for any system that the Julia Programing language distributes binaries for. So a (desktop) user can get MPI, PETSc, etc. without the headache of having to build anything from scratch; obviously on clusters you would still want to use system MPIs and what not. > > > > ---- > > > > [1] https://github.com/JuliaPackaging/BinaryBuilder.jl > > [2] https://github.com/JuliaPackaging/Yggdrasil/pull/3249 > > [3] https://github.com/JuliaPackaging/Yggdrasil/pull/3249#issuecomment-872698681 > > [4] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log > > [5] https://github.com/JuliaBinaryWrappers/PETSc_jll.jl/releases/tag/PETSc-v3.13.4%2B0 > From balay at mcs.anl.gov Fri Jul 2 10:35:38 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 2 Jul 2021 10:35:38 -0500 Subject: [petsc-users] PETSc with Julia Binary Builder In-Reply-To: References: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu> <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> Message-ID: On Fri, 2 Jul 2021, Matthew Knepley wrote: > On Fri, Jul 2, 2021 at 2:05 AM Patrick Sanan > wrote: > > > As you mention in [4], the proximate cause of the configure failure is > > this link error [8]: > > > > That missing function was introduced in GCC 7.0, and is there only for > i686, not x86_64. This looks like a bad GCC install to me. >>>>>>> Checking for program /opt/bin/i686-linux-gnu-libgfortran3-cxx11/cc...found Executing: cc -o /tmp/petsc-wfp3a1w4/config.setCompilers/conftest /tmp/petsc-wfp3a1w4/config.setCompilers/conftest.o -lpetsc-ufod4vtr9mqHvKIQiVAm Possible ERROR while running linker: exit code 1 stderr: /opt/i686-linux-gnu/bin/../lib/gcc/i686-linux-gnu/6.1.0/../../../../i686-linux-gnu/bin/ld: cannot find -lpetsc-ufod4vtr9mqHvKIQiVAm collect2: error: ld returned 1 exit status Running Executable WITHOUT threads to time it out Executing: cc --version stdout: i686-linux-gnu-gcc (GCC) 6.1.0 Checking for program /opt/bin/i686-linux-gnu-libgfortran3-cxx11/c++...found /workspace/destdir/lib/libstdc++.so: undefined reference to `__divmoddi4 at GCC_7.0.0' <<<<<< Yeah - its strange that there is a reference to @GCC_7.0.0 symbol from /workspace/destdir/lib/libstdc++.so - which appears to be gcc-6.1.0 install. And I'm confused by multiple paths - so its not clear if they all belong to the same compiler install. /opt/bin/i686-linux-gnu-libgfortran3-cxx11 /opt/i686-linux-gnu/bin/../lib/gcc/i686-linux-gnu/6.1.0/../../../../i686-linux-gnu/bin/ -> /opt/i686-linux-gnu/i686-linux-gnu/bin/ /workspace/destdir/lib/ So yeah - the compiler install is likely broken. Something to try is --with-cxx=0 Satish > > Matt > > > > Naively, that looks like a problem to be resolved at the level of the C++ > > compiler and MPI. > > > > Unless there are wrinkles of this build process that I don't understand > > (likely), this [6] looks non-standard to me: > > > > includedir="${prefix}/include" > > ... > > ./configure --prefix=${prefix} \ > > ... > > -with-mpi-include="${includedir}" \ > > ... > > > > > > Is it possible to configure using --with-mpi-dir, instead of the separate > > --with-mpi-include and --with-mpi-lib commands? > > > > > > As an aside, maybe Satish can say more, but I'm not sure if it's advisable > > to override variables in the make command [7]. 
> > > > [8] > > https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log-L7795 > > [6] > > https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-build_tarballs-jl-L45 > > [7] > > https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-build_tarballs-jl-L55 > > > > > > > Am 02.07.2021 um 06:25 schrieb Kozdon, Jeremy (CIV) : > > > > > > I have been talking with Boris Kaus and Patrick Sanan about trying to > > revive the Julia PETSc interface wrappers. One of the first things to get > > going is to use Julia's binary builder [1] to wrap more scalar, real, and > > int type builds of the PETSc library; the current distribution is just > > Real, double, Int32. I've been working on a PR for this [2] but have been > > running into some build issues on some architectures [3]. > > > > > > I doubt that anyone here is an expert with Julia's binary builder > > system, but I was wondering if anyone who is better with the PETSc build > > system can see anything obvious from the configure.log [4] that might help > > me sort out what's going on. > > > > > > This exact script worked on 2020-08-20 [5] to build the libraries, se > > something has obviously changed with either the Julia build system and/or > > one (or more!) of the dependency binaries. > > > > > > For those that don't know, Julia's binary builder system essentially > > allows users to download binaries directly from the web for any system that > > the Julia Programing language distributes binaries for. So a (desktop) user > > can get MPI, PETSc, etc. without the headache of having to build anything > > from scratch; obviously on clusters you would still want to use system MPIs > > and what not. > > > > > > ---- > > > > > > [1] https://github.com/JuliaPackaging/BinaryBuilder.jl > > > [2] https://github.com/JuliaPackaging/Yggdrasil/pull/3249 > > > [3] > > https://github.com/JuliaPackaging/Yggdrasil/pull/3249#issuecomment-872698681 > > > [4] > > https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log > > > [5] > > https://github.com/JuliaBinaryWrappers/PETSc_jll.jl/releases/tag/PETSc-v3.13.4%2B0 > > > > > > From jekozdon at nps.edu Fri Jul 2 10:54:14 2021 From: jekozdon at nps.edu (Kozdon, Jeremy (CIV)) Date: Fri, 2 Jul 2021 15:54:14 +0000 Subject: [petsc-users] PETSc with Julia Binary Builder In-Reply-To: References: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu> <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> Message-ID: <167CD2A2-1C3A-4CA4-B737-C97E8607B733@nps.edu> Thanks for all the feedback! Digging a bit deeper in the dependencies and it seems that the compilers have been updated but MPICH has not been rebuilt since then. Wondering if this is causing some of the issues. Going to try to manually rebuild MPICH to see if that helps. > On Jul 2, 2021, at 8:35 AM, Satish Balay via petsc-users wrote: > > So yeah - the compiler install is likely broken. Something to try is --with-cxx=0 For wrapping in Julia I would assume that the Fortran, Python, or c++ interfaces are not needed. Python is already off; is much lost by disabling Fortran and cxx in this case? Also, I happened to just be looking at the PETSc website and saw that note about not sending install questions to the users email list. My bad! Sorry all.
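To summarize the configure advice in this thread: if the MPI compiler wrappers shipped in the Julia prefix are usable, a reduced invocation along the lines Satish and Patrick suggest might look roughly like the sketch below. The ${prefix} paths and the choice to drop C++ and Fortran entirely are illustrative assumptions, not the actual Yggdrasil recipe; without a Fortran compiler, BLAS/LAPACK has to be supplied prebuilt (here OpenBLAS) rather than via --download-fblaslapack, and external packages that need Fortran or C++ (MUMPS, hypre, ...) are not available.

    ./configure --prefix=${prefix} \
        --with-mpi-dir=${prefix} \
        --with-cxx=0 --with-fc=0 \
        --with-blaslapack-lib=${prefix}/lib/libopenblas.so

If the native cc must be used instead of the mpicc wrapper, keep --with-mpi-include/--with-mpi-lib but check the values against 'mpicc -show', as suggested above.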
From balay at mcs.anl.gov Fri Jul 2 11:03:04 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 2 Jul 2021 11:03:04 -0500 Subject: [petsc-users] PETSc with Julia Binary Builder In-Reply-To: <167CD2A2-1C3A-4CA4-B737-C97E8607B733@nps.edu> References: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu> <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> <167CD2A2-1C3A-4CA4-B737-C97E8607B733@nps.edu> Message-ID: On Fri, 2 Jul 2021, Kozdon, Jeremy (CIV) wrote: > Thanks for all the feedback! > > Digging a bit deeper in the dependencies and it seems that the compiler have been updated but MPICH has not been rebuilt since then. Wondering if this is causing some of the issues. Going to try to manually rebuilt MPICH to see if that help. BTW: I noticed: > /workspace/destdir/lib/libstdc++.so: undefined reference to `__divmoddi4 at GCC_7.0.0' --prefix=/workspace/destdir --with-blaslapack-lib=/workspace/destdir/lib/libopenblas.so --with-mpi-include=/workspace/destdir/include So /workspace/destdir has a dupliate compile install - that is conflicting with /opt/bin/i686-linux-gnu-libgfortran3-cxx11/c++? And what compiler is used to build blas/mpi in this location? And the 'updated compiler' you refer to is in /workspace/destdir - and you shouldn't be using /opt/bin/i686-linux-gnu-libgfortran3-cxx11 ? > > > On Jul 2, 2021, at 8:35 AM, Satish Balay via petsc-users wrote: > > > > So yeah - the compiler install is likely broken. Something to try is --with-cxx=0 > > For wrapping in Julia I would assume that the Fortran, Python, or c++ interfaces are not needed. Python is already off, is much lost by disabling Fortran and cxx in this case? Likely not - but then you get fortran dependency from blas [-lgfortran - which can be easily specified to configure - if needed]. And if building PETSc with some externalpackages - like mumps/hypre - that require fortran or c++ compilers. > > Also, I happened to just be looking at the PETSc website and saw that note about not sending install questions for the users email list. My bad! Sorry all. Its fine to send install issues here. That recommendation [use petsc-maint] is primarily to avoid flooding mailing list users with huge configure.log attachments. Satish From bsmith at petsc.dev Fri Jul 2 11:45:51 2021 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 2 Jul 2021 11:45:51 -0500 Subject: [petsc-users] PETSc with Julia Binary Builder In-Reply-To: References: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu> <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> Message-ID: <72BADF3A-29E6-4661-8A8B-D3933E0EDFD5@petsc.dev> > On Jul 2, 2021, at 10:03 AM, Stefano Zampini wrote: > > Patrick > > Should this be fixed in PETSc build system? https://github.com/JuliaPackaging/Yggdrasil/blob/master/P/PETSc/bundled/patches/petsc_name_mangle.patch This line worries me. How does the get work here? If the first argument is a list while the second seems to be a (n empty) string then what will the [0] do with the empty string? It seems possible the use of get in Configure has to be double checked everywhere it is used to ensure consistency between whether a list or some other type is returned. Perhaps a custom "get" is needed for configure to make the code clean and always handle lists etc properly? Barry > >> On Jul 2, 2021, at 9:05 AM, Patrick Sanan wrote: >> >> As you mention in [4], the proximate cause of the configure failure is this link error [8]: >> >> Naively, that looks like a problem to be resolved at the level of the C++ compiler and MPI. 
>> >> Unless there are wrinkles of this build process that I don't understand (likely), this [6] looks non-standard to me: >> >> includedir="${prefix}/include" >> ... >> ./configure --prefix=${prefix} \ >> ... >> -with-mpi-include="${includedir}" \ >> ... >> >> >> Is it possible to configure using --with-mpi-dir, instead of the separate --with-mpi-include and --with-mpi-lib commands? >> >> >> As an aside, maybe Satish can say more, but I'm not sure if it's advisable to override variables in the make command [7]. >> >> [8] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log-L7795 >> [6] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-build_tarballs-jl-L45 >> [7] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-build_tarballs-jl-L55 >> >> >>> Am 02.07.2021 um 06:25 schrieb Kozdon, Jeremy (CIV) : >>> >>> I have been talking with Boris Kaus and Patrick Sanan about trying to revive the Julia PETSc interface wrappers. One of the first things to get going is to use Julia's binary builder [1] to wrap more scalar, real, and int type builds of the PETSc library; the current distribution is just Real, double, Int32. I've been working on a PR for this [2] but have been running into some build issues on some architectures [3]. >>> >>> I doubt that anyone here is an expert with Julia's binary builder system, but I was wondering if anyone who is better with the PETSc build system can see anything obvious from the configure.log [4] that might help me sort out what's going on. >>> >>> This exact script worked on 2020-08-20 [5] to build the libraries, se something has obviously changed with either the Julia build system and/or one (or more!) of the dependency binaries. >>> >>> For those that don't know, Julia's binary builder system essentially allows users to download binaries directly from the web for any system that the Julia Programing language distributes binaries for. So a (desktop) user can get MPI, PETSc, etc. without the headache of having to build anything from scratch; obviously on clusters you would still want to use system MPIs and what not. >>> >>> ---- >>> >>> [1] https://github.com/JuliaPackaging/BinaryBuilder.jl >>> [2] https://github.com/JuliaPackaging/Yggdrasil/pull/3249 >>> [3] https://github.com/JuliaPackaging/Yggdrasil/pull/3249#issuecomment-872698681 >>> [4] https://gist.github.com/jkozdon/c161fb15f2df23c3fbc0a5a095887ef8#file-configure-log >>> [5] https://github.com/JuliaBinaryWrappers/PETSc_jll.jl/releases/tag/PETSc-v3.13.4%2B0 >> > From jekozdon at nps.edu Fri Jul 2 15:29:05 2021 From: jekozdon at nps.edu (Kozdon, Jeremy (CIV)) Date: Fri, 2 Jul 2021 20:29:05 +0000 Subject: [petsc-users] PETSc with Julia Binary Builder In-Reply-To: References: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu> <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> <167CD2A2-1C3A-4CA4-B737-C97E8607B733@nps.edu> Message-ID: <10782CD0-C458-48A3-8424-B5FE2E931208@nps.edu> So it seems that if I rollback some of the compiler updates the current build recipe works, so it seems that the upstream dependency that sets the compiler libraries needs fixing. Thanks for all your help. Once I get this sorted, I will look into some of the other build weirdness that folks have pointed out. > On Jul 2, 2021, at 9:03 AM, Satish Balay wrote: > > On Fri, 2 Jul 2021, Kozdon, Jeremy (CIV) wrote: > >> Thanks for all the feedback! 
>> >> Digging a bit deeper in the dependencies and it seems that the compiler have been updated but MPICH has not been rebuilt since then. Wondering if this is causing some of the issues. Going to try to manually rebuilt MPICH to see if that help. > > BTW: I noticed: > >> /workspace/destdir/lib/libstdc++.so: undefined reference to `__divmoddi4 at GCC_7.0.0' > > --prefix=/workspace/destdir > --with-blaslapack-lib=/workspace/destdir/lib/libopenblas.so > --with-mpi-include=/workspace/destdir/include > > So /workspace/destdir has a dupliate compile install - that is conflicting with /opt/bin/i686-linux-gnu-libgfortran3-cxx11/c++? > > And what compiler is used to build blas/mpi in this location? > > And the 'updated compiler' you refer to is in /workspace/destdir - and you shouldn't be using /opt/bin/i686-linux-gnu-libgfortran3-cxx11 ? > > >> >>> On Jul 2, 2021, at 8:35 AM, Satish Balay via petsc-users wrote: >>> >>> So yeah - the compiler install is likely broken. Something to try is --with-cxx=0 >> >> For wrapping in Julia I would assume that the Fortran, Python, or c++ interfaces are not needed. Python is already off, is much lost by disabling Fortran and cxx in this case? > > Likely not - but then you get fortran dependency from blas [-lgfortran - which can be easily specified to configure - if needed]. And if building PETSc with some externalpackages - like mumps/hypre - that require fortran or c++ compilers. > >> >> Also, I happened to just be looking at the PETSc website and saw that note about not sending install questions for the users email list. My bad! Sorry all. > > Its fine to send install issues here. That recommendation [use petsc-maint] is primarily to avoid flooding mailing list users with huge configure.log attachments. > > Satish > From jekozdon at nps.edu Fri Jul 2 15:46:23 2021 From: jekozdon at nps.edu (Kozdon, Jeremy (CIV)) Date: Fri, 2 Jul 2021 20:46:23 +0000 Subject: [petsc-users] PETSc with Julia Binary Builder In-Reply-To: <72BADF3A-29E6-4661-8A8B-D3933E0EDFD5@petsc.dev> References: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu> <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> <72BADF3A-29E6-4661-8A8B-D3933E0EDFD5@petsc.dev> Message-ID: <60578B1E-1613-441F-B87F-776231C05CE3@nps.edu> BTW: > On Jul 2, 2021, at 9:45 AM, Barry Smith wrote: > > NPS WARNING: *external sender* verify before acting. > > >> On Jul 2, 2021, at 10:03 AM, Stefano Zampini wrote: >> >> Patrick >> >> Should this be fixed in PETSc build system? https://nam10.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2FJuliaPackaging%2FYggdrasil%2Fblob%2Fmaster%2FP%2FPETSc%2Fbundled%2Fpatches%2Fpetsc_name_mangle.patch&data=04%7C01%7Cjekozdon%40nps.edu%7Ce30e522e78d643e61c7d08d93d78ddf7%7C6d936231a51740ea9199f7578963378e%7C0%7C0%7C637608411608120561%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=q1T617kzxD9HsSoXQTl7N%2FRi%2B%2BoKHbOgZRy7KEnmZKY%3D&reserved=0 > > This line worries me. > > How does the get work here? If the first argument is a list while the second seems to be a (n empty) string then what will the [0] do with the empty string? It seems possible the use of get in Configure has to be double checked everywhere it is used to ensure consistency between whether a list or some other type is returned. Perhaps a custom "get" is needed for configure to make the code clean and always handle lists etc properly? 
> > Barry For the Julia build it's unclear to me why this was put in the original build script since configure option is --with-blaslapack-suffix="" which seems to just be the default anyway(?) From knepley at gmail.com Fri Jul 2 17:08:26 2021 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 2 Jul 2021 17:08:26 -0500 Subject: [petsc-users] PETSc with Julia Binary Builder In-Reply-To: <60578B1E-1613-441F-B87F-776231C05CE3@nps.edu> References: <0E3C0D6B-69D9-4D8A-A9B9-7F735F54B178@nps.edu> <83A80374-7985-4D09-B4B4-C4B028640B8D@gmail.com> <72BADF3A-29E6-4661-8A8B-D3933E0EDFD5@petsc.dev> <60578B1E-1613-441F-B87F-776231C05CE3@nps.edu> Message-ID: On Fri, Jul 2, 2021 at 3:46 PM Kozdon, Jeremy (CIV) wrote: > BTW: > > > On Jul 2, 2021, at 9:45 AM, Barry Smith wrote: > > > > NPS WARNING: *external sender* verify before acting. > > > > > >> On Jul 2, 2021, at 10:03 AM, Stefano Zampini > wrote: > >> > >> Patrick > >> > >> Should this be fixed in PETSc build system? > https://nam10.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2FJuliaPackaging%2FYggdrasil%2Fblob%2Fmaster%2FP%2FPETSc%2Fbundled%2Fpatches%2Fpetsc_name_mangle.patch&data=04%7C01%7Cjekozdon%40nps.edu%7Ce30e522e78d643e61c7d08d93d78ddf7%7C6d936231a51740ea9199f7578963378e%7C0%7C0%7C637608411608120561%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=q1T617kzxD9HsSoXQTl7N%2FRi%2B%2BoKHbOgZRy7KEnmZKY%3D&reserved=0 > > > > This line worries me. > > > > How does the get work here? If the first argument is a list while the > second seems to be a (n empty) string then what will the [0] do with the > empty string? It seems possible the use of get in Configure has to be > double checked everywhere it is used to ensure consistency between whether > a list or some other type is returned. Perhaps a custom "get" is needed for > configure to make the code clean and always handle lists etc properly? > > > > Barry > > For the Julia build it's unclear to me why this was put in the original > build script since configure option is > > --with-blaslapack-suffix="" > > which seems to just be the default anyway(?) Yes, that is right. Thanks, Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Fri Jul 2 21:42:48 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Fri, 2 Jul 2021 21:42:48 -0500 Subject: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process In-Reply-To: References: <93521e6acde64da2af7c415ceee9273c@mek.dtu.dk> <61644b633c624282a29f4e0ea80e61c7@mek.dtu.dk> <3285e9ab0ba941e583998a3bb7a5c67c@mek.dtu.dk> <87y2arpg8a.fsf@jedbrown.org> <87pmw2nr2m.fsf@jedbrown.org> Message-ID: Peder, Your example scatters a parallel vector to a sequential vector on one rank. It is a pattern like MPI_Gatherv. I want to see how you scatter parallel vectors to sequential vectors on every rank. --Junchao Zhang On Fri, Jul 2, 2021 at 4:07 AM Peder J?rgensgaard Olesen wrote: > Matt's method seems to work well, though instead of editing the actual > function I put the relevant parts directly into my code. I made the small > example attached here. 
> > > I might look into Star Forests at some point, though it's not really > touched upon in the manual (I will probably take a look at your paper, > https://arxiv.org/abs/2102.13018). > > > Med venlig hilsen / Best regards > > Peder > ------------------------------ > *Fra:* Junchao Zhang > *Sendt:* 1. juli 2021 16:38:29 > *Til:* Jed Brown > *Cc:* Peder J?rgensgaard Olesen; petsc-users at mcs.anl.gov > *Emne:* Re: Sv: [petsc-users] Scatter parallel Vec to sequential Vec on > non-zeroth process > > Peder, > PETSCSF_PATTERN_ALLTOALL only supports MPI_Alltoall (not Alltoallv), and > is only used by petsc internally at few places. > I suggest you can go with Matt's approach. After it solves your problem, > you can distill an example to demo the communication pattern. Then we can > see how to efficiently support that in petsc. > > Thanks. > --Junchao Zhang > > > On Thu, Jul 1, 2021 at 7:42 AM Jed Brown wrote: > >> Peder J?rgensgaard Olesen writes: >> >> > Each process is assigned an indexed subset of the tasks (the tasks are >> of constant size), and, for each task index, the relevant data is scattered >> as a SEQVEC to the process (this is done for all processes in each step, >> using an adaption of the code in Matt's link). This way each process only >> receives just the data it needs to complete the task. While I'm currently >> working with very moderate size data sets I'll eventually need to handle >> something rather more massive, so I want to economize memory where possible >> and give each process only the data it needs. >> >> From the sounds of it, this pattern ultimately boils down to MPI_Gather >> being called P times where P is the size of the communicator. This will >> work okay when P is small, but it's much less efficient than calling >> MPI_Alltoall (or MPI_Alltoallv), which you can do by creating one PetscSF >> that ships the needed data to each task and PETSCSF_PATTERN_ALLTOALL. You >> can see an example. >> >> >> https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/tests/ex3.c#L93-151 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pjool at mek.dtu.dk Sat Jul 3 13:27:47 2021 From: pjool at mek.dtu.dk (=?iso-8859-1?Q?Peder_J=F8rgensgaard_Olesen?=) Date: Sat, 3 Jul 2021 18:27:47 +0000 Subject: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process In-Reply-To: References: <93521e6acde64da2af7c415ceee9273c@mek.dtu.dk> <61644b633c624282a29f4e0ea80e61c7@mek.dtu.dk> <3285e9ab0ba941e583998a3bb7a5c67c@mek.dtu.dk> <87y2arpg8a.fsf@jedbrown.org> <87pmw2nr2m.fsf@jedbrown.org> , Message-ID: <90870e35d4c647d49bba3b6ce38c9409@mek.dtu.dk> Yeah, scattering a parallel vector to a sequential on one rank was exactly what I wanted to do (apologies if I didn't phrase that clearly). A code like the one I shared does just what I needed, replacing size-1 with the desired target rank in the if-statement. Isn't what you describe what VecScatterCreateToAll is for? Med venlig hilsen / Best regards Peder ________________________________ Fra: Junchao Zhang Sendt: 3. juli 2021 04:42:48 Til: Peder J?rgensgaard Olesen Cc: Jed Brown; petsc-users at mcs.anl.gov Emne: Re: Sv: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process Peder, Your example scatters a parallel vector to a sequential vector on one rank. It is a pattern like MPI_Gatherv. I want to see how you scatter parallel vectors to sequential vectors on every rank. 
--Junchao Zhang On Fri, Jul 2, 2021 at 4:07 AM Peder J?rgensgaard Olesen > wrote: Matt's method seems to work well, though instead of editing the actual function I put the relevant parts directly into my code. I made the small example attached here. I might look into Star Forests at some point, though it's not really touched upon in the manual (I will probably take a look at your paper, https://arxiv.org/abs/2102.13018). Med venlig hilsen / Best regards Peder ________________________________ Fra: Junchao Zhang > Sendt: 1. juli 2021 16:38:29 Til: Jed Brown Cc: Peder J?rgensgaard Olesen; petsc-users at mcs.anl.gov Emne: Re: Sv: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process Peder, PETSCSF_PATTERN_ALLTOALL only supports MPI_Alltoall (not Alltoallv), and is only used by petsc internally at few places. I suggest you can go with Matt's approach. After it solves your problem, you can distill an example to demo the communication pattern. Then we can see how to efficiently support that in petsc. Thanks. --Junchao Zhang On Thu, Jul 1, 2021 at 7:42 AM Jed Brown > wrote: Peder J?rgensgaard Olesen > writes: > Each process is assigned an indexed subset of the tasks (the tasks are of constant size), and, for each task index, the relevant data is scattered as a SEQVEC to the process (this is done for all processes in each step, using an adaption of the code in Matt's link). This way each process only receives just the data it needs to complete the task. While I'm currently working with very moderate size data sets I'll eventually need to handle something rather more massive, so I want to economize memory where possible and give each process only the data it needs. >From the sounds of it, this pattern ultimately boils down to MPI_Gather being called P times where P is the size of the communicator. This will work okay when P is small, but it's much less efficient than calling MPI_Alltoall (or MPI_Alltoallv), which you can do by creating one PetscSF that ships the needed data to each task and PETSCSF_PATTERN_ALLTOALL. You can see an example. https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/tests/ex3.c#L93-151 -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Sat Jul 3 22:36:29 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Sat, 3 Jul 2021 22:36:29 -0500 Subject: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process In-Reply-To: <90870e35d4c647d49bba3b6ce38c9409@mek.dtu.dk> References: <93521e6acde64da2af7c415ceee9273c@mek.dtu.dk> <61644b633c624282a29f4e0ea80e61c7@mek.dtu.dk> <3285e9ab0ba941e583998a3bb7a5c67c@mek.dtu.dk> <87y2arpg8a.fsf@jedbrown.org> <87pmw2nr2m.fsf@jedbrown.org> <90870e35d4c647d49bba3b6ce38c9409@mek.dtu.dk> Message-ID: VecScatterCreateToAll() scatters the MPI vector to a sequential vector on every rank (as if each rank has a duplicate of the same sequential vector). If the sample code you provided is what you want, it is fine and we just need to implement a minor optimization in petsc to make it efficient. But if you want to put the scatter in a loop as follows, then it is a very bad code. for (p=0; p wrote: > Yeah, scattering a parallel vector to a sequential on one rank was exactly > what I wanted to do (apologies if I didn't phrase that clearly). A code > like the one I shared does just what I needed, replacing size-1 with the > desired target rank in the if-statement. > > > Isn't what you describe what VecScatterCreateToAll is for? 
> > > > Med venlig hilsen / Best regards > > Peder > ------------------------------ > *Fra:* Junchao Zhang > *Sendt:* 3. juli 2021 04:42:48 > *Til:* Peder J?rgensgaard Olesen > *Cc:* Jed Brown; petsc-users at mcs.anl.gov > *Emne:* Re: Sv: [petsc-users] Scatter parallel Vec to sequential Vec on > non-zeroth process > > Peder, > Your example scatters a parallel vector to a sequential vector on one > rank. It is a pattern like MPI_Gatherv. > I want to see how you scatter parallel vectors to sequential vectors on > every rank. > > --Junchao Zhang > > > On Fri, Jul 2, 2021 at 4:07 AM Peder J?rgensgaard Olesen > wrote: > >> Matt's method seems to work well, though instead of editing the actual >> function I put the relevant parts directly into my code. I made the small >> example attached here. >> >> >> I might look into Star Forests at some point, though it's not really >> touched upon in the manual (I will probably take a look at your paper, >> https://arxiv.org/abs/2102.13018). >> >> >> Med venlig hilsen / Best regards >> >> Peder >> ------------------------------ >> *Fra:* Junchao Zhang >> *Sendt:* 1. juli 2021 16:38:29 >> *Til:* Jed Brown >> *Cc:* Peder J?rgensgaard Olesen; petsc-users at mcs.anl.gov >> *Emne:* Re: Sv: [petsc-users] Scatter parallel Vec to sequential Vec on >> non-zeroth process >> >> Peder, >> PETSCSF_PATTERN_ALLTOALL only supports MPI_Alltoall (not Alltoallv), >> and is only used by petsc internally at few places. >> I suggest you can go with Matt's approach. After it solves your >> problem, you can distill an example to demo the communication pattern. Then >> we can see how to efficiently support that in petsc. >> >> Thanks. >> --Junchao Zhang >> >> >> On Thu, Jul 1, 2021 at 7:42 AM Jed Brown wrote: >> >>> Peder J?rgensgaard Olesen writes: >>> >>> > Each process is assigned an indexed subset of the tasks (the tasks are >>> of constant size), and, for each task index, the relevant data is scattered >>> as a SEQVEC to the process (this is done for all processes in each step, >>> using an adaption of the code in Matt's link). This way each process only >>> receives just the data it needs to complete the task. While I'm currently >>> working with very moderate size data sets I'll eventually need to handle >>> something rather more massive, so I want to economize memory where possible >>> and give each process only the data it needs. >>> >>> From the sounds of it, this pattern ultimately boils down to MPI_Gather >>> being called P times where P is the size of the communicator. This will >>> work okay when P is small, but it's much less efficient than calling >>> MPI_Alltoall (or MPI_Alltoallv), which you can do by creating one PetscSF >>> that ships the needed data to each task and PETSCSF_PATTERN_ALLTOALL. You >>> can see an example. >>> >>> >>> https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/tests/ex3.c#L93-151 >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pjool at mek.dtu.dk Sun Jul 4 05:27:30 2021 From: pjool at mek.dtu.dk (=?iso-8859-1?Q?Peder_J=F8rgensgaard_Olesen?=) Date: Sun, 4 Jul 2021 10:27:30 +0000 Subject: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process In-Reply-To: References: <93521e6acde64da2af7c415ceee9273c@mek.dtu.dk> <61644b633c624282a29f4e0ea80e61c7@mek.dtu.dk> <3285e9ab0ba941e583998a3bb7a5c67c@mek.dtu.dk> <87y2arpg8a.fsf@jedbrown.org> <87pmw2nr2m.fsf@jedbrown.org> <90870e35d4c647d49bba3b6ce38c9409@mek.dtu.dk>, Message-ID: <1fad90330b674132a8a49aba4ee2cdad@mek.dtu.dk> I agree, using the loop you describe would definitely not be a clever way of doing it, nor is it at all what I was going for. The code with Matt's method indeed does what I needed. I'd be happy if it could be further optimized. Med venlig hilsen / Best regards Peder ________________________________ Fra: Junchao Zhang Sendt: 4. juli 2021 05:36:29 Til: Peder J?rgensgaard Olesen Cc: Jed Brown; petsc-users at mcs.anl.gov Emne: Re: Sv: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process VecScatterCreateToAll() scatters the MPI vector to a sequential vector on every rank (as if each rank has a duplicate of the same sequential vector). If the sample code you provided is what you want, it is fine and we just need to implement a minor optimization in petsc to make it efficient. But if you want to put the scatter in a loop as follows, then it is a very bad code. for (p=0; p> wrote: Yeah, scattering a parallel vector to a sequential on one rank was exactly what I wanted to do (apologies if I didn't phrase that clearly). A code like the one I shared does just what I needed, replacing size-1 with the desired target rank in the if-statement. Isn't what you describe what VecScatterCreateToAll is for? Med venlig hilsen / Best regards Peder ________________________________ Fra: Junchao Zhang > Sendt: 3. juli 2021 04:42:48 Til: Peder J?rgensgaard Olesen Cc: Jed Brown; petsc-users at mcs.anl.gov Emne: Re: Sv: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process Peder, Your example scatters a parallel vector to a sequential vector on one rank. It is a pattern like MPI_Gatherv. I want to see how you scatter parallel vectors to sequential vectors on every rank. --Junchao Zhang On Fri, Jul 2, 2021 at 4:07 AM Peder J?rgensgaard Olesen > wrote: Matt's method seems to work well, though instead of editing the actual function I put the relevant parts directly into my code. I made the small example attached here. I might look into Star Forests at some point, though it's not really touched upon in the manual (I will probably take a look at your paper, https://arxiv.org/abs/2102.13018). Med venlig hilsen / Best regards Peder ________________________________ Fra: Junchao Zhang > Sendt: 1. juli 2021 16:38:29 Til: Jed Brown Cc: Peder J?rgensgaard Olesen; petsc-users at mcs.anl.gov Emne: Re: Sv: [petsc-users] Scatter parallel Vec to sequential Vec on non-zeroth process Peder, PETSCSF_PATTERN_ALLTOALL only supports MPI_Alltoall (not Alltoallv), and is only used by petsc internally at few places. I suggest you can go with Matt's approach. After it solves your problem, you can distill an example to demo the communication pattern. Then we can see how to efficiently support that in petsc. Thanks. 
--Junchao Zhang On Thu, Jul 1, 2021 at 7:42 AM Jed Brown > wrote: Peder J?rgensgaard Olesen > writes: > Each process is assigned an indexed subset of the tasks (the tasks are of constant size), and, for each task index, the relevant data is scattered as a SEQVEC to the process (this is done for all processes in each step, using an adaption of the code in Matt's link). This way each process only receives just the data it needs to complete the task. While I'm currently working with very moderate size data sets I'll eventually need to handle something rather more massive, so I want to economize memory where possible and give each process only the data it needs. >From the sounds of it, this pattern ultimately boils down to MPI_Gather being called P times where P is the size of the communicator. This will work okay when P is small, but it's much less efficient than calling MPI_Alltoall (or MPI_Alltoallv), which you can do by creating one PetscSF that ships the needed data to each task and PETSCSF_PATTERN_ALLTOALL. You can see an example. https://gitlab.com/petsc/petsc/-/blob/main/src/vec/is/sf/tests/ex3.c#L93-151 -------------- next part -------------- An HTML attachment was scrubbed... URL: From thibault.bridelbertomeu at gmail.com Mon Jul 5 11:50:30 2021 From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu) Date: Mon, 5 Jul 2021 18:50:30 +0200 Subject: [petsc-users] HDF5 DM and VecView with MPI :: Crash Message-ID: Dear all, I keep having this error on one of the supercomputers I have access to : [1]PETSC ERROR: The EXACT line numbers in the error traceback are not available. [1]PETSC ERROR: instead the line number of the start of the function is given. [1]PETSC ERROR: #1 H5Dcreate2() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:690 [1]PETSC ERROR: #2 VecView_MPI_HDF5() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:594 [1]PETSC ERROR: #3 VecView_MPI() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:787 [1]PETSC ERROR: #4 VecView() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/interface/vector.c:576 [1]PETSC ERROR: #5 DMPlexCoordinatesView_HDF5_Internal() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plexhdf5.c:560 [1]PETSC ERROR: #6 DMPlexView_HDF5_Internal() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plexhdf5.c:802 [1]PETSC ERROR: #7 DMView_Plex() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plex.c:1366 [1]PETSC ERROR: #8 DMView() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/interface/dm.c:954 [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- The configure options are as follow : [1]PETSC ERROR: Configure options --with-clean=1 --prefix=/ccc/work/cont001/ocre/bridelbert/05-PETSC/build_uns3D_inti --with-make-np=8 --with-windows-graphics=0 --with-debugging=1 --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 --PETSC_ARCH=INTI_UNS3D --with-fc=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpifort --with-cc=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpicc --with-cxx=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpicxx --with-openmp=0 --download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p2.tar.gz --download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz --download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz 
--download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz --with-cmake-dir=/ccc/products/cmake-3.13.3/system/default --download-hdf5=/ccc/work/cont001/ocre/bridelbert/hdf5-1.12.0.tar.bz2 --download-netcdf=/ccc/work/cont001/ocre/bridelbert/netcdf-4.5.0.tar.gz --download-pnetcdf=/ccc/work/cont001/ocre/bridelbert/pnetcdf-1.12.1.tar.gz --download-exodusii=/ccc/work/cont001/ocre/bridelbert/v2021-01-20.tar.gz --download-zlib=/ccc/work/cont001/ocre/bridelbert/zlib-1.2.11.tar.gz The piece of code that is responsible is that one : call PetscViewerHDF5Open(PETSC_COMM_WORLD, "debug_initmesh.h5", FILE_MODE_WRITE, hdf5Viewer, ierr); CHKERRA(ierr) call PetscViewerPushFormat(hdf5Viewer, PETSC_VIEWER_HDF5_XDMF, ierr); CHKERRA(ierr) call DMView(dm, hdf5Viewer, ierr); CHKERRA(ierr) call PetscViewerPopFormat(hdf5Viewer, ierr); CHKERRA(ierr) call PetscViewerDestroy(hdf5Viewer, ierr); CHKERRA(ierr) I tried with gcc, intel compiler, openmpi 2.x.x or openmpi 4.x.x ... same problems ... can anyone please advise ? It's starting to make me quite crazy ... x( Thank you !!! Thibault -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Jul 5 18:08:05 2021 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 5 Jul 2021 18:08:05 -0500 Subject: [petsc-users] HDF5 DM and VecView with MPI :: Crash In-Reply-To: References: Message-ID: <09C51A04-19AF-4F2E-B020-BE4B6B4441B1@petsc.dev> Please send the error message that is printed to the screen. Also please send the exact PETSc version you are using. If possible also a code that reproduces the problem. Can you view other simpler things with HDF5? Like say just a vector? Barry > On Jul 5, 2021, at 11:50 AM, Thibault Bridel-Bertomeu wrote: > > Dear all, > > I keep having this error on one of the supercomputers I have access to : > > [1]PETSC ERROR: The EXACT line numbers in the error traceback are not available. > [1]PETSC ERROR: instead the line number of the start of the function is given. 
> [1]PETSC ERROR: #1 H5Dcreate2() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:690 > [1]PETSC ERROR: #2 VecView_MPI_HDF5() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:594 > [1]PETSC ERROR: #3 VecView_MPI() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:787 > [1]PETSC ERROR: #4 VecView() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/interface/vector.c:576 > [1]PETSC ERROR: #5 DMPlexCoordinatesView_HDF5_Internal() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plexhdf5.c:560 > [1]PETSC ERROR: #6 DMPlexView_HDF5_Internal() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plexhdf5.c:802 > [1]PETSC ERROR: #7 DMView_Plex() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plex.c:1366 > [1]PETSC ERROR: #8 DMView() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/interface/dm.c:954 > [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > > The configure options are as follow : > > [1]PETSC ERROR: Configure options --with-clean=1 --prefix=/ccc/work/cont001/ocre/bridelbert/05-PETSC/build_uns3D_inti --with-make-np=8 --with-windows-graphics=0 --with-debugging=1 --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 --PETSC_ARCH=INTI_UNS3D --with-fc=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpifort --with-cc=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpicc --with-cxx=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpicxx --with-openmp=0 --download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p2.tar.gz --download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz --download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz --download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz --with-cmake-dir=/ccc/products/cmake-3.13.3/system/default --download-hdf5=/ccc/work/cont001/ocre/bridelbert/hdf5-1.12.0.tar.bz2 --download-netcdf=/ccc/work/cont001/ocre/bridelbert/netcdf-4.5.0.tar.gz --download-pnetcdf=/ccc/work/cont001/ocre/bridelbert/pnetcdf-1.12.1.tar.gz --download-exodusii=/ccc/work/cont001/ocre/bridelbert/v2021-01-20.tar.gz --download-zlib=/ccc/work/cont001/ocre/bridelbert/zlib-1.2.11.tar.gz > > The piece of code that is responsible is that one : > > call PetscViewerHDF5Open(PETSC_COMM_WORLD, "debug_initmesh.h5", FILE_MODE_WRITE, hdf5Viewer, ierr); CHKERRA(ierr) > call PetscViewerPushFormat(hdf5Viewer, PETSC_VIEWER_HDF5_XDMF, ierr); CHKERRA(ierr) > call DMView(dm, hdf5Viewer, ierr); CHKERRA(ierr) > call PetscViewerPopFormat(hdf5Viewer, ierr); CHKERRA(ierr) > call PetscViewerDestroy(hdf5Viewer, ierr); CHKERRA(ierr) > > I tried with gcc, intel compiler, openmpi 2.x.x or openmpi 4.x.x ... same problems ... can anyone please advise ? It's starting to make me quite crazy ... x( > > Thank you !!! > > Thibault -------------- next part -------------- An HTML attachment was scrubbed... URL: From thibault.bridelbertomeu at gmail.com Tue Jul 6 01:51:25 2021 From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu) Date: Tue, 6 Jul 2021 08:51:25 +0200 Subject: [petsc-users] HDF5 DM and VecView with MPI :: Crash In-Reply-To: <09C51A04-19AF-4F2E-B020-BE4B6B4441B1@petsc.dev> References: <09C51A04-19AF-4F2E-B020-BE4B6B4441B1@petsc.dev> Message-ID: Hello Barry, Thank you for your answer. And sorry I forgot those important details ... 
Here is the complete error message for a DMView : [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [1]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [1]PETSC ERROR: likely location of problem given in stack below [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [1]PETSC ERROR: The EXACT line numbers in the error traceback are not available. [1]PETSC ERROR: instead the line number of the start of the function is given. [1]PETSC ERROR: #1 H5Dcreate2() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:690 [1]PETSC ERROR: #2 VecView_MPI_HDF5() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:594 [1]PETSC ERROR: #3 VecView_MPI() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:787 [1]PETSC ERROR: #4 VecView() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/interface/vector.c:576 [1]PETSC ERROR: #5 DMPlexCoordinatesView_HDF5_Internal() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plexhdf5.c:560 [1]PETSC ERROR: #6 DMPlexView_HDF5_Internal() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plexhdf5.c:802 [1]PETSC ERROR: #7 DMView_Plex() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plex.c:1366 [1]PETSC ERROR: #8 DMView() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/interface/dm.c:954 [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Signal received [1]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [1]PETSC ERROR: Petsc Development GIT revision: v3.15.1-558-g07f732cb94 GIT Date: 2021-07-04 15:58:55 +0000 [1]PETSC ERROR: /ccc/work/cont001/ocre/bridelbert/EULERIAN2D/bin/eulerian2D on a named r1login by bridelbert Mon Jul 5 18:45:41 2021 [1]PETSC ERROR: Configure options --with-clean=1 --prefix=/ccc/work/cont001/ocre/bridelbert/05-PETSC/build_uns3D_inti --with-make-np=8 --with-windows-graphics=0 --with-debugging=1 --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 --PETSC_ARCH=INTI_UNS3D --with-fc=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpifort --with-cc=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpicc --with-cxx=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpicxx --with-openmp=0 --download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p2.tar.gz --download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz --download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz --download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz --with-cmake-dir=/ccc/products/cmake-3.13.3/system/default --download-hdf5=/ccc/work/cont001/ocre/bridelbert/hdf5-1.12.0.tar.bz2 --download-netcdf=/ccc/work/cont001/ocre/bridelbert/netcdf-4.5.0.tar.gz --download-pnetcdf=/ccc/work/cont001/ocre/bridelbert/pnetcdf-1.12.1.tar.gz --download-exodusii=/ccc/work/cont001/ocre/bridelbert/v2021-01-20.tar.gz --download-zlib=/ccc/work/cont001/ocre/bridelbert/zlib-1.2.11.tar.gz [1]PETSC ERROR: #1 User provided function() at unknown file:0 [1]PETSC ERROR: Checking the memory for corruption. 
-------------------------------------------------------------------------- MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- Here is the complete message for a VecView : [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [1]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [1]PETSC ERROR: likely location of problem given in stack below [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [1]PETSC ERROR: The EXACT line numbers in the error traceback are not available. [1]PETSC ERROR: instead the line number of the start of the function is given. [1]PETSC ERROR: #1 H5Dcreate2() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:690 [1]PETSC ERROR: #2 VecView_MPI_HDF5() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:594 [1]PETSC ERROR: #3 VecView_MPI() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:787 [1]PETSC ERROR: #4 VecView_Plex_Local_HDF5_Internal() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plexhdf5.c:132 [1]PETSC ERROR: #5 VecView_Plex_HDF5_Internal() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plexhdf5.c:247 [1]PETSC ERROR: #6 VecView_Plex() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plex.c:391 [1]PETSC ERROR: #7 VecView() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/interface/vector.c:576 [1]PETSC ERROR: #8 ourmonitor() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/ts/interface/ftn-custom/ztsf.c:129 [1]PETSC ERROR: #9 TSMonitor() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/ts/interface/tsmon.c:31 [1]PETSC ERROR: #10 TSSolve() at /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/ts/interface/ts.c:3858 [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Signal received [1]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[1]PETSC ERROR: Petsc Development GIT revision: v3.15.1-558-g07f732cb94 GIT Date: 2021-07-04 15:58:55 +0000 [1]PETSC ERROR: /ccc/work/cont001/ocre/bridelbert/EULERIAN2D/bin/eulerian2D on a named r1login by bridelbert Tue Jul 6 08:46:43 2021 [1]PETSC ERROR: Configure options --with-clean=1 --prefix=/ccc/work/cont001/ocre/bridelbert/05-PETSC/build_uns3D_inti --with-make-np=8 --with-windows-graphics=0 --with-debugging=1 --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 --PETSC_ARCH=INTI_UNS3D --with-fc=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpifort --with-cc=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpicc --with-cxx=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpicxx --with-openmp=0 --download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p2.tar.gz --download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz --download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz --download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz --with-cmake-dir=/ccc/products/cmake-3.13.3/system/default --download-hdf5=/ccc/work/cont001/ocre/bridelbert/hdf5-1.12.0.tar.bz2 --download-netcdf=/ccc/work/cont001/ocre/bridelbert/netcdf-4.5.0.tar.gz --download-pnetcdf=/ccc/work/cont001/ocre/bridelbert/pnetcdf-1.12.1.tar.gz --download-exodusii=/ccc/work/cont001/ocre/bridelbert/v2021-01-20.tar.gz --download-zlib=/ccc/work/cont001/ocre/bridelbert/zlib-1.2.11.tar.gz [1]PETSC ERROR: #1 User provided function() at unknown file:0 [1]PETSC ERROR: Checking the memory for corruption. -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- I am currently on the "main" branch, my HEAD being at commit id. 07f732cb949ae259de817d126d140b8fa08e2d25 I have the same issue with the "master" branch actually, that's why I went with the "main", hoping something might have been fixed meanwhile. I cannot provide you with a MWE yet unfortunately because it's part of a bigger solver and I have to extract the workflow from it. I'll work on it so you have everything you need. Thanks !! Thibault Le mar. 6 juil. 2021 ? 01:08, Barry Smith a ?crit : > > Please send the error message that is printed to the screen. > > Also please send the exact PETSc version you are using. If possible > also a code that reproduces the problem. > > Can you view other simpler things with HDF5? Like say just a vector? > > Barry > > > > On Jul 5, 2021, at 11:50 AM, Thibault Bridel-Bertomeu < > thibault.bridelbertomeu at gmail.com> wrote: > > Dear all, > > I keep having this error on one of the supercomputers I have access to : > > [1]PETSC ERROR: The EXACT line numbers in the error traceback are not > available. > [1]PETSC ERROR: instead the line number of the start of the function is > given. 
> [1]PETSC ERROR: #1 H5Dcreate2() at > /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:690 > [1]PETSC ERROR: #2 VecView_MPI_HDF5() at > /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:594 > [1]PETSC ERROR: #3 VecView_MPI() at > /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/impls/mpi/pdvec.c:787 > [1]PETSC ERROR: #4 VecView() at > /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/vec/vec/interface/vector.c:576 > [1]PETSC ERROR: #5 DMPlexCoordinatesView_HDF5_Internal() at > /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plexhdf5.c:560 > [1]PETSC ERROR: #6 DMPlexView_HDF5_Internal() at > /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plexhdf5.c:802 > [1]PETSC ERROR: #7 DMView_Plex() at > /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/impls/plex/plex.c:1366 > [1]PETSC ERROR: #8 DMView() at > /ccc/work/cont001/ocre/bridelbert/05-PETSC/src/dm/interface/dm.c:954 > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > > The configure options are as follow : > > [1]PETSC ERROR: Configure options --with-clean=1 > --prefix=/ccc/work/cont001/ocre/bridelbert/05-PETSC/build_uns3D_inti > --with-make-np=8 --with-windows-graphics=0 --with-debugging=1 > --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 > --PETSC_ARCH=INTI_UNS3D > --with-fc=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpifort > --with-cc=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpicc > --with-cxx=/ccc/products/openmpi-4.0.3/gcc--8.3.0/default/bin/mpicxx > --with-openmp=0 > --download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p2.tar.gz > --download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz > --download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz > --download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz > --with-cmake-dir=/ccc/products/cmake-3.13.3/system/default > --download-hdf5=/ccc/work/cont001/ocre/bridelbert/hdf5-1.12.0.tar.bz2 > --download-netcdf=/ccc/work/cont001/ocre/bridelbert/netcdf-4.5.0.tar.gz > --download-pnetcdf=/ccc/work/cont001/ocre/bridelbert/pnetcdf-1.12.1.tar.gz > --download-exodusii=/ccc/work/cont001/ocre/bridelbert/v2021-01-20.tar.gz > --download-zlib=/ccc/work/cont001/ocre/bridelbert/zlib-1.2.11.tar.gz > > The piece of code that is responsible is that one : > > call PetscViewerHDF5Open(PETSC_COMM_WORLD, > "debug_initmesh.h5", FILE_MODE_WRITE, hdf5Viewer, ierr); CHKERRA(ierr) > call PetscViewerPushFormat(hdf5Viewer, > PETSC_VIEWER_HDF5_XDMF, ierr); CHKERRA(ierr) > call DMView(dm, hdf5Viewer, ierr); CHKERRA(ierr) > call PetscViewerPopFormat(hdf5Viewer, ierr); CHKERRA(ierr) > call PetscViewerDestroy(hdf5Viewer, ierr); CHKERRA(ierr) > > I tried with gcc, intel compiler, openmpi 2.x.x or openmpi 4.x.x ... same > problems ... can anyone please advise ? It's starting to make me quite > crazy ... x( > > Thank you !!! > > Thibault > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: sod.msh Type: application/octet-stream Size: 2007782 bytes Desc: not available URL: From mfadams at lbl.gov Tue Jul 6 07:23:17 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 6 Jul 2021 08:23:17 -0400 Subject: [petsc-users] Kokkos and HIP Message-ID: I am getting a make error on Spock at ORNL with Kokkos and HIP. This was building last week and this was a clean build. 
Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.log Type: application/octet-stream Size: 24216 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 2555087 bytes Desc: not available URL: From vijayskumar at gmail.com Tue Jul 6 08:30:14 2021 From: vijayskumar at gmail.com (Vijay S Kumar) Date: Tue, 6 Jul 2021 09:30:14 -0400 Subject: [petsc-users] PetSc and CrayPat: MPI assertion errors Message-ID: Hello all, By way of background, we have a PetSc-based solver that we run on our in-house Cray system. We are carrying out performance analysis using profilers in the CrayPat suite that provide more fine-grained performance-related information than the PetSc log_view summary. When instrumented using CrayPat perftools, it turns out that the MPI initialization (MPI_Init) internally invoked by PetscInitialize is not picked up by the profiler. That is, simply specifying the following: ierr = PetscInitialize(&argc,&argv,(char*)0,NULL);if (ierr) return ierr; results in the following runtime error: CrayPat/X: Version 7.1.1 Revision 7c0ddd79b 08/19/19 16:58:46 Attempting to use an MPI routine before initializing MPICH To circumvent this, we had to explicitly call MPI_Init prior to PetscInitialize: MPI_Init(&argc,&argv); ierr = PetscInitialize(&argc,&argv,(char*)0,NULL);if (ierr) return ierr; However, the side-effect of this above workaround seems to be several downstream runtime (assertion) errors with VecAssemblyBegin/End and MatAssemblyBeing/End statements: CrayPat/X: Version 7.1.1 Revision 7c0ddd79b 08/19/19 16:58:46 main.x: ../rtsum.c:5662: __pat_trsup_trace_waitsome_rtsum: Assertion `recv_count != MPI_UNDEFINED' failed. main at main.c:769 VecAssemblyEnd at 0x2aaab951b3ba VecAssemblyEnd_MPI_BTS at 0x2aaab950b179 MPI_Waitsome at 0x43a238 __pat_trsup_trace_waitsome_rtsum at 0x5f1a17 __GI___assert_fail at 0x2aaabc61e7d1 __assert_fail_base at 0x2aaabc61e759 __GI_abort at 0x2aaabc627740 __GI_raise at 0x2aaabc626160 Interestingly, we do not see such errors when there is no explicit MPI_Init, and no instrumentation for performance. Looking for someone to help throw more light on why PetSc Mat/Vec AssemblyEnd statements lead to such MPI-level assertion errors in cases where MPI_Init is explicitly called. (Or alternatively, is there a way to call PetscInitialize in a manner that ensures that the MPI initialization is picked up by the profilers in question?) We would highly appreciate any help/pointers, Thanks! Vijay -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Tue Jul 6 09:10:52 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 6 Jul 2021 09:10:52 -0500 Subject: [petsc-users] Kokkos and HIP In-Reply-To: References: Message-ID: Mark, It is because we reverted an MR that improperly fixed the problem. This new MR, https://gitlab.com/petsc/petsc/-/merge_requests/4150, still a workaround, should fix the problem. --Junchao Zhang On Tue, Jul 6, 2021 at 7:24 AM Mark Adams wrote: > I am getting a make error on Spock at ORNL with Kokkos and HIP. > This was building last week and this was a clean build. > Thanks, > Mark > -------------- next part -------------- An HTML attachment was scrubbed... 
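Returning to the HDF5 crash thread above: a minimal standalone check of the kind Barry suggests (view just a Vec into an HDF5 file, with no DMPlex involved) could look like the sketch below. It is written in C purely for brevity, uses made-up names, and is not taken from Thibault's solver; if this already fails in parallel, that would suggest the problem is in the HDF5/MPI-IO layer rather than anything DMPlex-specific.

    #include <petscvec.h>
    #include <petscviewerhdf5.h>

    int main(int argc, char **argv)
    {
      Vec            x;
      PetscViewer    viewer;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
      ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 100, &x);CHKERRQ(ierr);
      ierr = PetscObjectSetName((PetscObject)x, "testvec");CHKERRQ(ierr);
      ierr = VecSet(x, 1.0);CHKERRQ(ierr);
      ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "debug_vec.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
      ierr = VecView(x, viewer);CHKERRQ(ierr);
      ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
      ierr = VecDestroy(&x);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }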
URL: From junchao.zhang at gmail.com Tue Jul 6 09:13:47 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 6 Jul 2021 09:13:47 -0500 Subject: [petsc-users] PetSc and CrayPat: MPI assertion errors In-Reply-To: References: Message-ID: On Tue, Jul 6, 2021 at 8:31 AM Vijay S Kumar wrote: > Hello all, > > By way of background, we have a PetSc-based solver that we run on our > in-house Cray system. We are carrying out performance analysis using > profilers in the CrayPat suite that provide more fine-grained > performance-related information than the PetSc log_view summary. > > When instrumented using CrayPat perftools, it turns out that the MPI > initialization (MPI_Init) internally invoked by PetscInitialize is not > picked up by the profiler. That is, simply specifying the following: > ierr = PetscInitialize(&argc,&argv,(char*)0,NULL);if (ierr) > return ierr; > results in the following runtime error: > > CrayPat/X: Version 7.1.1 Revision 7c0ddd79b 08/19/19 > 16:58:46 > > Attempting to use an MPI routine before initializing MPICH > Do you happen to know what the MPI routine is? > > To circumvent this, we had to explicitly call MPI_Init prior to > PetscInitialize: > MPI_Init(&argc,&argv); > ierr = PetscInitialize(&argc,&argv,(char*)0,NULL);if (ierr) > return ierr; > > However, the side-effect of this above workaround seems to be several > downstream runtime (assertion) errors with VecAssemblyBegin/End and > MatAssemblyBeing/End statements: > > CrayPat/X: Version 7.1.1 Revision 7c0ddd79b 08/19/19 16:58:46 > main.x: ../rtsum.c:5662: __pat_trsup_trace_waitsome_rtsum: Assertion > `recv_count != MPI_UNDEFINED' failed. > > main at main.c:769 > VecAssemblyEnd at 0x2aaab951b3ba > VecAssemblyEnd_MPI_BTS at 0x2aaab950b179 > MPI_Waitsome at 0x43a238 > __pat_trsup_trace_waitsome_rtsum at 0x5f1a17 > __GI___assert_fail at 0x2aaabc61e7d1 > __assert_fail_base at 0x2aaabc61e759 > __GI_abort at 0x2aaabc627740 > __GI_raise at 0x2aaabc626160 > > > Interestingly, we do not see such errors when there is no explicit > MPI_Init, and no instrumentation for performance. > Looking for someone to help throw more light on why PetSc Mat/Vec > AssemblyEnd statements lead to such MPI-level assertion errors in cases > where MPI_Init is explicitly called. > (Or alternatively, is there a way to call PetscInitialize in a manner that > ensures that the MPI initialization is picked up by the profilers in > question?) > > We would highly appreciate any help/pointers, > > Thanks! > Vijay > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jul 6 09:26:20 2021 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 6 Jul 2021 10:26:20 -0400 Subject: [petsc-users] PetSc and CrayPat: MPI assertion errors In-Reply-To: References: Message-ID: On Tue, Jul 6, 2021 at 9:31 AM Vijay S Kumar wrote: > Hello all, > > By way of background, we have a PetSc-based solver that we run on our > in-house Cray system. We are carrying out performance analysis using > profilers in the CrayPat suite that provide more fine-grained > performance-related information than the PetSc log_view summary. > > When instrumented using CrayPat perftools, it turns out that the MPI > initialization (MPI_Init) internally invoked by PetscInitialize is not > picked up by the profiler. 
That is, simply specifying the following: > ierr = PetscInitialize(&argc,&argv,(char*)0,NULL);if (ierr) > return ierr; > results in the following runtime error: > > CrayPat/X: Version 7.1.1 Revision 7c0ddd79b 08/19/19 > 16:58:46 > > Attempting to use an MPI routine before initializing MPICH > > To circumvent this, we had to explicitly call MPI_Init prior to > PetscInitialize: > MPI_Init(&argc,&argv); > ierr = PetscInitialize(&argc,&argv,(char*)0,NULL);if (ierr) > return ierr; > > However, the side-effect of this above workaround seems to be several > downstream runtime (assertion) errors with VecAssemblyBegin/End and > MatAssemblyBeing/End statements: > > CrayPat/X: Version 7.1.1 Revision 7c0ddd79b 08/19/19 16:58:46 > main.x: ../rtsum.c:5662: __pat_trsup_trace_waitsome_rtsum: Assertion > `recv_count != MPI_UNDEFINED' failed. > > main at main.c:769 > VecAssemblyEnd at 0x2aaab951b3ba > VecAssemblyEnd_MPI_BTS at 0x2aaab950b179 > MPI_Waitsome at 0x43a238 > __pat_trsup_trace_waitsome_rtsum at 0x5f1a17 > __GI___assert_fail at 0x2aaabc61e7d1 > __assert_fail_base at 0x2aaabc61e759 > __GI_abort at 0x2aaabc627740 > __GI_raise at 0x2aaabc626160 > > > Interestingly, we do not see such errors when there is no explicit > MPI_Init, and no instrumentation for performance. > Looking for someone to help throw more light on why PetSc Mat/Vec > AssemblyEnd statements lead to such MPI-level assertion errors in cases > where MPI_Init is explicitly called. > (Or alternatively, is there a way to call PetscInitialize in a manner that > ensures that the MPI initialization is picked up by the profilers in > question?) > > We would highly appreciate any help/pointers, > There is no problem calling MPI_Init() before PetscInitialize(), although then you also have to call MPI_Finalize() explicitly at the end. Both errors appears to arise from the Cray instrumentation, which is evidently buggy. Did you try calling MPI_Init() yourself without instrumentation? Also, what info are you getting from CrayPat that we do not log? Thanks, Matt > Thanks! > Vijay > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron at jubileedev.com Tue Jul 6 07:46:21 2021 From: aaron at jubileedev.com (Aaron Scheinberg) Date: Tue, 6 Jul 2021 08:46:21 -0400 Subject: [petsc-users] Kokkos and HIP In-Reply-To: References: Message-ID: Hi Mark, I used the following flags for PETSc: cmake -DCMAKE_BUILD_TYPE=Release \ -DCMAKE_INSTALL_PREFIX=${KOKKOS_SRC_DIR}/install \ -DCMAKE_CXX_COMPILER=hipcc \ -DCMAKE_CXX_STANDARD=14 \ -DBUILD_TESTING=OFF \ -DKokkos_ENABLE_HIP=ON \ -DKokkos_ENABLE_OPENMP=ON \ -DKokkos_ENABLE_SERIAL=ON \ -DKokkos_ENABLE_AGGRESSIVE_VECTORIZATION=ON \ -DKokkos_ARCH_VEGA908=ON \ -DKokkos_ARCH_ZEN2=ON \ .. You can also use my installation (let me know if you don't have access): set(Kokkos_ROOT "/ccs/home/scheinberg/spock/kokkos/install") set(Cabana_ROOT "/ccs/home/scheinberg/spock/Cabana/install") I saw your email about the PETSc installation, will give it a try today. On Tue, Jul 6, 2021 at 8:23 AM Mark Adams wrote: > I am getting a make error on Spock at ORNL with Kokkos and HIP. > This was building last week and this was a clean build. > Thanks, > Mark > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mfadams at lbl.gov Tue Jul 6 10:57:02 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 6 Jul 2021 11:57:02 -0400 Subject: [petsc-users] Kokkos and HIP In-Reply-To: References: Message-ID: Thanks, Now I seem to be failing to get zlib. Mark On Tue, Jul 6, 2021 at 10:11 AM Junchao Zhang wrote: > Mark, > It is because we reverted an MR that improperly fixed the problem. This > new MR, https://gitlab.com/petsc/petsc/-/merge_requests/4150, still a > workaround, should fix the problem. > > --Junchao Zhang > > On Tue, Jul 6, 2021 at 7:24 AM Mark Adams wrote: > >> I am getting a make error on Spock at ORNL with Kokkos and HIP. >> This was building last week and this was a clean build. >> Thanks, >> Mark >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1346110 bytes Desc: not available URL: From jacob.fai at gmail.com Tue Jul 6 14:46:46 2021 From: jacob.fai at gmail.com (Jacob Faibussowitsch) Date: Tue, 6 Jul 2021 15:46:46 -0400 Subject: [petsc-users] [SLEPc] Computing Smallest Eigenvalue+Eigenvector of Many Small Matrices Message-ID: <4051E7AF-6A72-4797-A025-03EB63875795@gmail.com> Hello PETSc/SLEPc users, Similar to a recent question I am looking for an algorithm to compute the smallest eigenvalue and eigenvector for a bunch of matrices however I have a few extra ?restrictions?. All matrices have the following properties: - All matrices are the same size - All matrices are small (perhaps no larger than 12x12) - All matrices are SPD - I only need the smallest eigenpair So far my best bet seems to be Lanczos but I?m wondering if there is some wunder method I?ve overlooked. Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Jul 6 14:56:40 2021 From: jed at jedbrown.org (Jed Brown) Date: Tue, 06 Jul 2021 13:56:40 -0600 Subject: [petsc-users] [SLEPc] Computing Smallest Eigenvalue+Eigenvector of Many Small Matrices In-Reply-To: <4051E7AF-6A72-4797-A025-03EB63875795@gmail.com> References: <4051E7AF-6A72-4797-A025-03EB63875795@gmail.com> Message-ID: <8735srfc7b.fsf@jedbrown.org> Have you tried just calling LAPACK directly? (You could try dsyevx to see if there's something to gain by computing less than all the eigenvalues.) I'm not aware of a batched interface at this time, but that's what you'd want for performance. Jacob Faibussowitsch writes: > Hello PETSc/SLEPc users, > > Similar to a recent question I am looking for an algorithm to compute the smallest eigenvalue and eigenvector for a bunch of matrices however I have a few extra ?restrictions?. All matrices have the following properties: > > - All matrices are the same size > - All matrices are small (perhaps no larger than 12x12) > - All matrices are SPD > - I only need the smallest eigenpair > > So far my best bet seems to be Lanczos but I?m wondering if there is some wunder method I?ve overlooked. 
> > Best regards, > > Jacob Faibussowitsch > (Jacob Fai - booss - oh - vitch) From bsmith at petsc.dev Tue Jul 6 15:21:40 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 6 Jul 2021 15:21:40 -0500 Subject: [petsc-users] PetSc and CrayPat: MPI assertion errors In-Reply-To: References: Message-ID: <8F50FBFD-5B2B-409C-A380-F6C02639A640@petsc.dev> > On Jul 6, 2021, at 8:30 AM, Vijay S Kumar wrote: > > Hello all, > > By way of background, we have a PetSc-based solver that we run on our in-house Cray system. We are carrying out performance analysis using profilers in the CrayPat suite that provide more fine-grained performance-related information than the PetSc log_view summary. > > When instrumented using CrayPat perftools, it turns out that the MPI initialization (MPI_Init) internally invoked by PetscInitialize is not picked up by the profiler. That is, simply specifying the following: > ierr = PetscInitialize(&argc,&argv,(char*)0,NULL);if (ierr) return ierr; > results in the following runtime error: > CrayPat/X: Version 7.1.1 Revision 7c0ddd79b 08/19/19 16:58:46 > Attempting to use an MPI routine before initializing MPICH This is certainly unexpected behavior, PETSc is "just" an MPI application it does not do anything special for CrayPat. We do not expect that one would need to call MPI_Init() outside of PETSc to use a performance tool. Perhaps PETSc is not being configured/compiled with the correct flags for the CrayPat performance tools or its shared library is not being built appropriately. If CrayPat uses the PMPI_xxx wrapper model for MPI profiling it may cause these kinds of difficulties if the correct profile wrapper functions are not inserted during the build process. I would try running a standard PETSc program in a debugger with breakpoints for MPI_Init() (and possible others) to investigate what is happening exactly and maybe why. You can send to petsc-maint at mcs.anl.gov the configure.log and make.log that was generated. Barry > > To circumvent this, we had to explicitly call MPI_Init prior to PetscInitialize: > MPI_Init(&argc,&argv); > ierr = PetscInitialize(&argc,&argv,(char*)0,NULL);if (ierr) return ierr; > > However, the side-effect of this above workaround seems to be several downstream runtime (assertion) errors with VecAssemblyBegin/End and MatAssemblyBeing/End statements: > > CrayPat/X: Version 7.1.1 Revision 7c0ddd79b 08/19/19 16:58:46 > main.x: ../rtsum.c:5662: __pat_trsup_trace_waitsome_rtsum: Assertion `recv_count != MPI_UNDEFINED' failed. > > main at main.c:769 > VecAssemblyEnd at 0x2aaab951b3ba > VecAssemblyEnd_MPI_BTS at 0x2aaab950b179 > MPI_Waitsome at 0x43a238 > __pat_trsup_trace_waitsome_rtsum at 0x5f1a17 > __GI___assert_fail at 0x2aaabc61e7d1 > __assert_fail_base at 0x2aaabc61e759 > __GI_abort at 0x2aaabc627740 > __GI_raise at 0x2aaabc626160 > > Interestingly, we do not see such errors when there is no explicit MPI_Init, and no instrumentation for performance. > Looking for someone to help throw more light on why PetSc Mat/Vec AssemblyEnd statements lead to such MPI-level assertion errors in cases where MPI_Init is explicitly called. > (Or alternatively, is there a way to call PetscInitialize in a manner that ensures that the MPI initialization is picked up by the profilers in question?) > > We would highly appreciate any help/pointers, > > Thanks! > Vijay -------------- next part -------------- An HTML attachment was scrubbed... 
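For reference, a minimal sketch of the explicit-initialization ordering discussed in this thread (plain C main, error handling abbreviated; only the need to call MPI_Finalize yourself is stated above, the rest is standard PETSc behavior):

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      PetscErrorCode ierr;

      /* Initialize MPI first so a tool that intercepts MPI_Init (for example through
         PMPI wrappers, as speculated above for CrayPat) sees the call. */
      MPI_Init(&argc, &argv);

      /* PetscInitialize notices MPI is already up and does not call MPI_Init again. */
      ierr = PetscInitialize(&argc, &argv, (char*)0, NULL); if (ierr) return ierr;

      /* ... Vec/Mat assembly, KSP/SNES solves, etc. ... */

      ierr = PetscFinalize(); if (ierr) return ierr;

      /* Because the user called MPI_Init, PetscFinalize does not call MPI_Finalize;
         it must be called explicitly here. */
      MPI_Finalize();
      return 0;
    }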
URL: From mfadams at lbl.gov Tue Jul 6 16:29:59 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 6 Jul 2021 17:29:59 -0400 Subject: [petsc-users] download zlib error Message-ID: I am getting some sort of error in build zlib on Spock at ORNL. Other libraries are downloaded and I am sure the network is fine. Any ideas? Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1346110 bytes Desc: not available URL: From mfadams at lbl.gov Tue Jul 6 16:33:17 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 6 Jul 2021 17:33:17 -0400 Subject: [petsc-users] CUDA running out of memory in PtAP Message-ID: I am running out of memory in GAMG. It looks like this is from the new cuSparse RAP. I was able to run Hypre with twice as much work on the GPU as this run. Are there parameters to tweek for this perhaps or can I disable it? Thanks, Mark 0 SNES Function norm 5.442539952302e-04 [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [2]PETSC ERROR: GPU resources unavailable [2]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. Reports alloc failed; this indicates the GPU has run out resources [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [2]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e GIT Date: 2021-07-06 03:22:54 -0700 [2]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu11 by madams Tue Jul 6 13:37:43 2021 [2]PETSC ERROR: Configure options --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECI ES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --FFLAGS=" -g " - -COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 -- download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc [2]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/ aijcusparse.cu:2622 [2]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1159 [2]PETSC ERROR: #3 MatProductSymbolic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 [2]PETSC ERROR: #4 MatPtAP() at /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 [2]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 [2]PETSC ERROR: #6 PCSetUp_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 [2]PETSC ERROR: #7 PCSetUp() at /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 [2]PETSC ERROR: #8 KSPSetUp() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 [2]PETSC ERROR: #9 KSPSolve_Private() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 [2]PETSC ERROR: #10 KSPSolve() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 [2]PETSC ERROR: #11 SNESSolve_NEWTONLS() at /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 [2]PETSC ERROR: #12 SNESSolve() at 
/global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 [2]PETSC ERROR: #13 TSTheta_SNESSolve() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 [2]PETSC ERROR: #14 TSStep_Theta() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 [2]PETSC ERROR: #15 TSStep() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 [2]PETSC ERROR: #16 TSSolve() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 [2]PETSC ERROR: #17 main() at ex2.c:699 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Jul 6 17:25:26 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 6 Jul 2021 17:25:26 -0500 Subject: [petsc-users] CUDA running out of memory in PtAP In-Reply-To: References: Message-ID: <8A532350-E75C-46F8-AD18-A0DD0A25B6CC@petsc.dev> Stefano has mentioned this before. He reported cuSparse matrix-matrix vector products use a very amount of memory. > On Jul 6, 2021, at 4:33 PM, Mark Adams wrote: > > I am running out of memory in GAMG. It looks like this is from the new cuSparse RAP. > I was able to run Hypre with twice as much work on the GPU as this run. > Are there parameters to tweek for this perhaps or can I disable it? > > Thanks, > Mark > > 0 SNES Function norm 5.442539952302e-04 > [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [2]PETSC ERROR: GPU resources unavailable > [2]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. Reports alloc failed; this indicates the GPU has run out resources > [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [2]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e GIT Date: 2021-07-06 03:22:54 -0700 > [2]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu11 by madams Tue Jul 6 13:37:43 2021 > [2]PETSC ERROR: Configure options --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECI > ES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --FFLAGS=" -g " - > -COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 -- > download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc > [2]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/aijcusparse.cu:2622 > [2]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1159 > [2]PETSC ERROR: #3 MatProductSymbolic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 > [2]PETSC ERROR: #4 MatPtAP() at /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 > [2]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 > [2]PETSC ERROR: #6 PCSetUp_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 > [2]PETSC ERROR: #7 PCSetUp() at /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 > [2]PETSC ERROR: #8 KSPSetUp() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 > [2]PETSC ERROR: #9 KSPSolve_Private() at 
/global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 > [2]PETSC ERROR: #10 KSPSolve() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 > [2]PETSC ERROR: #11 SNESSolve_NEWTONLS() at /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 > [2]PETSC ERROR: #12 SNESSolve() at /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 > [2]PETSC ERROR: #13 TSTheta_SNESSolve() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 > [2]PETSC ERROR: #14 TSStep_Theta() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 > [2]PETSC ERROR: #15 TSStep() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 > [2]PETSC ERROR: #16 TSSolve() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 > [2]PETSC ERROR: #17 main() at ex2.c:699 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Jul 6 17:42:05 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 6 Jul 2021 17:42:05 -0500 Subject: [petsc-users] download zlib error In-Reply-To: References: Message-ID: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> Mark, You can try what the configure error message should be suggesting (it is not clear if that is being printed to your screen or no). ERROR: Unable to download package ZLIB from: http://www.zlib.net/zlib-1.2.11.tar.gz * If URL specified manually - perhaps there is a typo? * If your network is disconnected - please reconnect and rerun ./configure * Or perhaps you have a firewall blocking the download * You can run with --with-packages-download-dir=/adirectory and ./configure will instruct you what packages to download manually * or you can download the above URL manually, to /yourselectedlocation/zlib-1.2.11.tar.gz and use the configure option: --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz Barry > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote: > > I am getting some sort of error in build zlib on Spock at ORNL. > Other libraries are downloaded and I am sure the network is fine. > Any ideas? > Thanks, > Mark > From mfadams at lbl.gov Tue Jul 6 19:43:13 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 6 Jul 2021 20:43:13 -0400 Subject: [petsc-users] CUDA running out of memory in PtAP In-Reply-To: <8A532350-E75C-46F8-AD18-A0DD0A25B6CC@petsc.dev> References: <8A532350-E75C-46F8-AD18-A0DD0A25B6CC@petsc.dev> Message-ID: Can I turn off using cuSprarse for RAP? On Tue, Jul 6, 2021 at 6:25 PM Barry Smith wrote: > > Stefano has mentioned this before. He reported cuSparse matrix-matrix > vector products use a very amount of memory. > > On Jul 6, 2021, at 4:33 PM, Mark Adams wrote: > > I am running out of memory in GAMG. It looks like this is from the new > cuSparse RAP. > I was able to run Hypre with twice as much work on the GPU as this run. > Are there parameters to tweek for this perhaps or can I disable it? > > Thanks, > Mark > > 0 SNES Function norm 5.442539952302e-04 > [2]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [2]PETSC ERROR: GPU resources unavailable > [2]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. > Reports alloc failed; this indicates the GPU has run out resources > [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [2]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e > GIT Date: 2021-07-06 03:22:54 -0700 > [2]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu11 by madams > Tue Jul 6 13:37:43 2021 > [2]PETSC ERROR: Configure options > --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc > --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" > -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECI > ES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 > -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler > -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" > --FFLAGS=" -g " - > -COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" > --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" > --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 > -- > download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc > [2]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at > /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/ > aijcusparse.cu:2622 > [2]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at > /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1159 > [2]PETSC ERROR: #3 MatProductSymbolic() at > /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 > [2]PETSC ERROR: #4 MatPtAP() at > /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 > [2]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at > /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 > [2]PETSC ERROR: #6 PCSetUp_GAMG() at > /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 > [2]PETSC ERROR: #7 PCSetUp() at > /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 > [2]PETSC ERROR: #8 KSPSetUp() at > /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 > [2]PETSC ERROR: #9 KSPSolve_Private() at > /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 > [2]PETSC ERROR: #10 KSPSolve() at > /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 > [2]PETSC ERROR: #11 SNESSolve_NEWTONLS() at > /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 > [2]PETSC ERROR: #12 SNESSolve() at > /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 > [2]PETSC ERROR: #13 TSTheta_SNESSolve() at > /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 > [2]PETSC ERROR: #14 TSStep_Theta() at > /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 > [2]PETSC ERROR: #15 TSStep() at > /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 > [2]PETSC ERROR: #16 TSSolve() at > /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 > [2]PETSC ERROR: #17 main() at ex2.c:699 > > > -------------- next part -------------- An HTML attachment was scrubbed... 
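For anyone who wants to reproduce the failing product outside of GAMG, the operation in the trace above is the sparse triple product C = P^T*A*P formed by MatPtAP; a rough standalone sketch (A and P are assumed to be already-assembled AIJCUSPARSE matrices, the names are illustrative):

    Mat            A, P, C;   /* A: operator, P: prolongation; assumed assembled elsewhere */
    PetscErrorCode ierr;

    /* Same operation GAMG performs when building a coarse level: C = P^T * A * P */
    ierr = MatPtAP(A, P, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C); CHKERRQ(ierr);
    /* ... inspect C with MatView()/MatGetInfo(), then clean up ... */
    ierr = MatDestroy(&C); CHKERRQ(ierr);

Running just this product with the matrices GAMG generates would show whether the symbolic phase alone exhausts device memory.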
URL: From mfadams at lbl.gov Tue Jul 6 19:48:41 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 6 Jul 2021 20:48:41 -0400 Subject: [petsc-users] download zlib error In-Reply-To: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> Message-ID: I see this: ============================================================================================= Trying to download http://www.zlib.net/zlib-1.2.11.tar.gz for ZLIB ============================================================================================= ============================================================================================= Building and installing zlib; this may take several minutes ============================================================================================= ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Error building/install zlib files from /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11/zlib to /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 ******************************************************************************* And I did look in the configure log and saw these suggestions, but I was so confused by this output that I did not want to chase it down. I will try this --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz stuff. Thanks, On Tue, Jul 6, 2021 at 6:42 PM Barry Smith wrote: > > Mark, > > You can try what the configure error message should be suggesting (it > is not clear if that is being printed to your screen or no). > > ERROR: Unable to download package ZLIB from: > http://www.zlib.net/zlib-1.2.11.tar.gz > * If URL specified manually - perhaps there is a typo? > * If your network is disconnected - please reconnect and rerun ./configure > * Or perhaps you have a firewall blocking the download > * You can run with --with-packages-download-dir=/adirectory and > ./configure will instruct you what packages to download manually > * or you can download the above URL manually, to > /yourselectedlocation/zlib-1.2.11.tar.gz > and use the configure option: > --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz > > Barry > > > > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote: > > > > I am getting some sort of error in build zlib on Spock at ORNL. > > Other libraries are downloaded and I am sure the network is fine. > > Any ideas? > > Thanks, > > Mark > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Tue Jul 6 19:57:21 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 6 Jul 2021 20:57:21 -0400 Subject: [petsc-users] download zlib error In-Reply-To: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> Message-ID: On Tue, Jul 6, 2021 at 6:42 PM Barry Smith wrote: > > Mark, > > You can try what the configure error message should be suggesting (it > is not clear if that is being printed to your screen or no). > > ERROR: Unable to download package ZLIB from: > http://www.zlib.net/zlib-1.2.11.tar.gz My browser can not open this and I could not see a download button on this site. Can you download this? > > * If URL specified manually - perhaps there is a typo? 
> * If your network is disconnected - please reconnect and rerun ./configure > * Or perhaps you have a firewall blocking the download > * You can run with --with-packages-download-dir=/adirectory and > ./configure will instruct you what packages to download manually > * or you can download the above URL manually, to > /yourselectedlocation/zlib-1.2.11.tar.gz > and use the configure option: > --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz > > Barry > > > > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote: > > > > I am getting some sort of error in build zlib on Spock at ORNL. > > Other libraries are downloaded and I am sure the network is fine. > > Any ideas? > > Thanks, > > Mark > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Tue Jul 6 20:32:46 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 6 Jul 2021 20:32:46 -0500 Subject: [petsc-users] CUDA running out of memory in PtAP In-Reply-To: References: <8A532350-E75C-46F8-AD18-A0DD0A25B6CC@petsc.dev> Message-ID: Using cuda-10.x might solve the problem. --Junchao Zhang On Tue, Jul 6, 2021 at 7:43 PM Mark Adams wrote: > Can I turn off using cuSprarse for RAP? > > On Tue, Jul 6, 2021 at 6:25 PM Barry Smith wrote: > >> >> Stefano has mentioned this before. He reported cuSparse matrix-matrix >> vector products use a very amount of memory. >> >> On Jul 6, 2021, at 4:33 PM, Mark Adams wrote: >> >> I am running out of memory in GAMG. It looks like this is from the new >> cuSparse RAP. >> I was able to run Hypre with twice as much work on the GPU as this run. >> Are there parameters to tweek for this perhaps or can I disable it? >> >> Thanks, >> Mark >> >> 0 SNES Function norm 5.442539952302e-04 >> [2]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [2]PETSC ERROR: GPU resources unavailable >> [2]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. >> Reports alloc failed; this indicates the GPU has run out resources >> [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. 
>> [2]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e >> GIT Date: 2021-07-06 03:22:54 -0700 >> [2]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu11 by madams >> Tue Jul 6 13:37:43 2021 >> [2]PETSC ERROR: Configure options >> --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc >> --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" >> -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECI >> ES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 >> -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler >> -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" >> --FFLAGS=" -g " - >> -COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" >> --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" >> --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 >> -- >> download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc >> [2]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at >> /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/ >> aijcusparse.cu:2622 >> [2]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at >> /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1159 >> [2]PETSC ERROR: #3 MatProductSymbolic() at >> /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 >> [2]PETSC ERROR: #4 MatPtAP() at >> /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 >> [2]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at >> /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 >> [2]PETSC ERROR: #6 PCSetUp_GAMG() at >> /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 >> [2]PETSC ERROR: #7 PCSetUp() at >> /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 >> [2]PETSC ERROR: #8 KSPSetUp() at >> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 >> [2]PETSC ERROR: #9 KSPSolve_Private() at >> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 >> [2]PETSC ERROR: #10 KSPSolve() at >> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 >> [2]PETSC ERROR: #11 SNESSolve_NEWTONLS() at >> /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 >> [2]PETSC ERROR: #12 SNESSolve() at >> /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 >> [2]PETSC ERROR: #13 TSTheta_SNESSolve() at >> /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 >> [2]PETSC ERROR: #14 TSStep_Theta() at >> /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 >> [2]PETSC ERROR: #15 TSStep() at >> /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 >> [2]PETSC ERROR: #16 TSSolve() at >> /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 >> [2]PETSC ERROR: #17 main() at ex2.c:699 >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
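A simple way to see how much device memory is actually left right before the failing symbolic product is to ask the CUDA runtime directly; a small sketch (plain CUDA runtime API, not a PETSc interface; where to call it in the application is up to the user):

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Print free/total device memory; call before and after PCSetUp() to see the drop. */
    static void report_gpu_memory(const char *where)
    {
      size_t free_b = 0, total_b = 0;
      if (cudaMemGetInfo(&free_b, &total_b) == cudaSuccess) {
        printf("[%s] GPU memory: %.1f MiB free of %.1f MiB\n",
               where, free_b/1048576.0, total_b/1048576.0);
      }
    }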
URL: From bsmith at petsc.dev Tue Jul 6 22:53:29 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 6 Jul 2021 22:53:29 -0500 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> Message-ID: <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > zlib-1.2.11.tar.gz % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- --:--:-- 834k ~/Src/petsc (barry/2021-07-03/demonstrate-network-parallel-build=) arch-demonstrate-network-parallel-build $ tar -zxf zlib-1.2.11.tar.gz ~/Src/petsc (barry/2021-07-03/demonstrate-network-parallel-build=) arch-demonstrate-network-parallel-build $ ls zlib-1.2.11 CMakeLists.txt adler32.c deflate.c gzread.c inflate.h os400 watcom zlib.h ChangeLog amiga deflate.h gzwrite.c inftrees.c qnx win32 zlib.map FAQ compress.c doc infback.c inftrees.h test zconf.h zlib.pc.cmakein INDEX configure examples inffast.c make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in Makefile contrib gzclose.c inffast.h msdos trees.c zconf.h.in zlib2ansi Makefile.in crc32.c gzguts.h inffixed.h nintendods trees.h zlib.3 zutil.c README crc32.h gzlib.c inflate.c old uncompr.c zlib.3.pdf zutil.h > On Jul 6, 2021, at 7:57 PM, Mark Adams wrote: > > > > On Tue, Jul 6, 2021 at 6:42 PM Barry Smith > wrote: > > Mark, > > You can try what the configure error message should be suggesting (it is not clear if that is being printed to your screen or no). > > ERROR: Unable to download package ZLIB from: http://www.zlib.net/zlib-1.2.11.tar.gz > > My browser can not open this and I could not see a download button on this site. > > Can you download this? > > > * If URL specified manually - perhaps there is a typo? > * If your network is disconnected - please reconnect and rerun ./configure > * Or perhaps you have a firewall blocking the download > * You can run with --with-packages-download-dir=/adirectory and ./configure will instruct you what packages to download manually > * or you can download the above URL manually, to /yourselectedlocation/zlib-1.2.11.tar.gz > and use the configure option: > --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz > > Barry > > > > On Jul 6, 2021, at 4:29 PM, Mark Adams > wrote: > > > > I am getting some sort of error in build zlib on Spock at ORNL. > > Other libraries are downloaded and I am sure the network is fine. > > Any ideas? > > Thanks, > > Mark > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Wed Jul 7 02:31:36 2021 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 7 Jul 2021 09:31:36 +0200 Subject: [petsc-users] [SLEPc] Computing Smallest Eigenvalue+Eigenvector of Many Small Matrices In-Reply-To: <8735srfc7b.fsf@jedbrown.org> References: <4051E7AF-6A72-4797-A025-03EB63875795@gmail.com> <8735srfc7b.fsf@jedbrown.org> Message-ID: cuSolver has syevjBatched, which seems to fit your purpose. But I have never used it. Lanczos is not competitive for such small matrices. Jose > El 6 jul 2021, a las 21:56, Jed Brown escribi?: > > Have you tried just calling LAPACK directly? (You could try dsyevx to see if there's something to gain by computing less than all the eigenvalues.) I'm not aware of a batched interface at this time, but that's what you'd want for performance. 
> > Jacob Faibussowitsch writes: > >> Hello PETSc/SLEPc users, >> >> Similar to a recent question I am looking for an algorithm to compute the smallest eigenvalue and eigenvector for a bunch of matrices however I have a few extra ?restrictions?. All matrices have the following properties: >> >> - All matrices are the same size >> - All matrices are small (perhaps no larger than 12x12) >> - All matrices are SPD >> - I only need the smallest eigenpair >> >> So far my best bet seems to be Lanczos but I?m wondering if there is some wunder method I?ve overlooked. >> >> Best regards, >> >> Jacob Faibussowitsch >> (Jacob Fai - booss - oh - vitch) From stefano.zampini at gmail.com Wed Jul 7 02:55:35 2021 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 7 Jul 2021 09:55:35 +0200 Subject: [petsc-users] CUDA running out of memory in PtAP In-Reply-To: References: <8A532350-E75C-46F8-AD18-A0DD0A25B6CC@petsc.dev> Message-ID: <3EC35263-6F44-46D2-A3CA-5C4537D3AF99@gmail.com> This will select the CPU path -matmatmult_backend_cpu -matptap_backend_cpu > On Jul 7, 2021, at 2:43 AM, Mark Adams wrote: > > Can I turn off using cuSprarse for RAP? > > On Tue, Jul 6, 2021 at 6:25 PM Barry Smith > wrote: > > Stefano has mentioned this before. He reported cuSparse matrix-matrix vector products use a very amount of memory. > >> On Jul 6, 2021, at 4:33 PM, Mark Adams > wrote: >> >> I am running out of memory in GAMG. It looks like this is from the new cuSparse RAP. >> I was able to run Hypre with twice as much work on the GPU as this run. >> Are there parameters to tweek for this perhaps or can I disable it? >> >> Thanks, >> Mark >> >> 0 SNES Function norm 5.442539952302e-04 >> [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [2]PETSC ERROR: GPU resources unavailable >> [2]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. Reports alloc failed; this indicates the GPU has run out resources >> [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>> [2]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e GIT Date: 2021-07-06 03:22:54 -0700 >> [2]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu11 by madams Tue Jul 6 13:37:43 2021 >> [2]PETSC ERROR: Configure options --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECI >> ES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --FFLAGS=" -g " - >> -COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 -- >> download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc >> [2]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/aijcusparse.cu:2622 >> [2]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1159 >> [2]PETSC ERROR: #3 MatProductSymbolic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 >> [2]PETSC ERROR: #4 MatPtAP() at /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 >> [2]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 >> [2]PETSC ERROR: #6 PCSetUp_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 >> [2]PETSC ERROR: #7 PCSetUp() at /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 >> [2]PETSC ERROR: #8 KSPSetUp() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 >> [2]PETSC ERROR: #9 KSPSolve_Private() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 >> [2]PETSC ERROR: #10 KSPSolve() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 >> [2]PETSC ERROR: #11 SNESSolve_NEWTONLS() at /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 >> [2]PETSC ERROR: #12 SNESSolve() at /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 >> [2]PETSC ERROR: #13 TSTheta_SNESSolve() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 >> [2]PETSC ERROR: #14 TSStep_Theta() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 >> [2]PETSC ERROR: #15 TSStep() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 >> [2]PETSC ERROR: #16 TSSolve() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 >> [2]PETSC ERROR: #17 main() at ex2.c:699 > -------------- next part -------------- An HTML attachment was scrubbed... 
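If rebuilding the command line is inconvenient, the same two switches can also be pushed into the options database from code before the preconditioner is set up; a sketch (the option names are taken verbatim from the reply above and may change in future PETSc versions):

    PetscErrorCode ierr;

    /* Equivalent to passing -matmatmult_backend_cpu -matptap_backend_cpu on the
       command line; must be set before PCSetUp()/KSPSolve() triggers the products. */
    ierr = PetscOptionsSetValue(NULL, "-matmatmult_backend_cpu", NULL); CHKERRQ(ierr);
    ierr = PetscOptionsSetValue(NULL, "-matptap_backend_cpu",    NULL); CHKERRQ(ierr);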
URL: From mfadams at lbl.gov Wed Jul 7 07:46:31 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 08:46:31 -0400 Subject: [petsc-users] download zlib error In-Reply-To: <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: Apparently the same error with --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz On Tue, Jul 6, 2021 at 11:53 PM Barry Smith wrote: > $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > zlib-1.2.11.tar.gz > % Total % Received % Xferd Average Speed Time Time Time > Current > Dload Upload Total Spent Left > Speed > 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- --:--:-- > 834k > ~/Src/petsc* (barry/2021-07-03/demonstrate-network-parallel-build=)* > arch-demonstrate-network-parallel-build > $ tar -zxf zlib-1.2.11.tar.gz > ~/Src/petsc* (barry/2021-07-03/demonstrate-network-parallel-build=)* > arch-demonstrate-network-parallel-build > $ ls zlib-1.2.11 > CMakeLists.txt adler32.c deflate.c gzread.c inflate.h > os400 watcom zlib.h > ChangeLog amiga deflate.h gzwrite.c > inftrees.c qnx win32 zlib.map > FAQ compress.c doc infback.c > inftrees.h test zconf.h zlib.pc.cmakein > INDEX configure examples inffast.c > make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in > Makefile contrib gzclose.c inffast.h msdos > trees.c zconf.h.in zlib2ansi > Makefile.in crc32.c gzguts.h inffixed.h nintendods > trees.h zlib.3 zutil.c > README crc32.h gzlib.c inflate.c old > uncompr.c zlib.3.pdf zutil.h > > > > On Jul 6, 2021, at 7:57 PM, Mark Adams wrote: > > > > On Tue, Jul 6, 2021 at 6:42 PM Barry Smith wrote: > >> >> Mark, >> >> You can try what the configure error message should be suggesting (it >> is not clear if that is being printed to your screen or no). >> >> ERROR: Unable to download package ZLIB from: >> http://www.zlib.net/zlib-1.2.11.tar.gz > > > My browser can not open this and I could not see a download button on this > site. > > Can you download this? > > >> >> * If URL specified manually - perhaps there is a typo? >> * If your network is disconnected - please reconnect and rerun ./configure >> * Or perhaps you have a firewall blocking the download >> * You can run with --with-packages-download-dir=/adirectory and >> ./configure will instruct you what packages to download manually >> * or you can download the above URL manually, to >> /yourselectedlocation/zlib-1.2.11.tar.gz >> and use the configure option: >> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >> >> Barry >> >> >> > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote: >> > >> > I am getting some sort of error in build zlib on Spock at ORNL. >> > Other libraries are downloaded and I am sure the network is fine. >> > Any ideas? >> > Thanks, >> > Mark >> > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: configure.log Type: application/octet-stream Size: 1327259 bytes Desc: not available URL: From mfadams at lbl.gov Wed Jul 7 07:47:46 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 08:47:46 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: Also, this is in jczhang/fix-kokkos-includes On Wed, Jul 7, 2021 at 8:46 AM Mark Adams wrote: > Apparently the same error with > --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz > > On Tue, Jul 6, 2021 at 11:53 PM Barry Smith wrote: > >> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > zlib-1.2.11.tar.gz >> % Total % Received % Xferd Average Speed Time Time Time >> Current >> Dload Upload Total Spent Left >> Speed >> 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- --:--:-- >> 834k >> ~/Src/petsc* (barry/2021-07-03/demonstrate-network-parallel-build=)* >> arch-demonstrate-network-parallel-build >> $ tar -zxf zlib-1.2.11.tar.gz >> ~/Src/petsc* (barry/2021-07-03/demonstrate-network-parallel-build=)* >> arch-demonstrate-network-parallel-build >> $ ls zlib-1.2.11 >> CMakeLists.txt adler32.c deflate.c gzread.c inflate.h >> os400 watcom zlib.h >> ChangeLog amiga deflate.h gzwrite.c >> inftrees.c qnx win32 zlib.map >> FAQ compress.c doc infback.c >> inftrees.h test zconf.h zlib.pc.cmakein >> INDEX configure examples inffast.c >> make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in >> Makefile contrib gzclose.c inffast.h msdos >> trees.c zconf.h.in zlib2ansi >> Makefile.in crc32.c gzguts.h inffixed.h >> nintendods trees.h zlib.3 zutil.c >> README crc32.h gzlib.c inflate.c old >> uncompr.c zlib.3.pdf zutil.h >> >> >> >> On Jul 6, 2021, at 7:57 PM, Mark Adams wrote: >> >> >> >> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith wrote: >> >>> >>> Mark, >>> >>> You can try what the configure error message should be suggesting (it >>> is not clear if that is being printed to your screen or no). >>> >>> ERROR: Unable to download package ZLIB from: >>> http://www.zlib.net/zlib-1.2.11.tar.gz >> >> >> My browser can not open this and I could not see a download button on >> this site. >> >> Can you download this? >> >> >>> >>> * If URL specified manually - perhaps there is a typo? >>> * If your network is disconnected - please reconnect and rerun >>> ./configure >>> * Or perhaps you have a firewall blocking the download >>> * You can run with --with-packages-download-dir=/adirectory and >>> ./configure will instruct you what packages to download manually >>> * or you can download the above URL manually, to >>> /yourselectedlocation/zlib-1.2.11.tar.gz >>> and use the configure option: >>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>> >>> Barry >>> >>> >>> > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote: >>> > >>> > I am getting some sort of error in build zlib on Spock at ORNL. >>> > Other libraries are downloaded and I am sure the network is fine. >>> > Any ideas? >>> > Thanks, >>> > Mark >>> > >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 7 08:17:59 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 7 Jul 2021 09:17:59 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: It is hard to see the error. I suspect it is something crazy with the install. Can you run the build by hand? 
cd /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 -I${ROCM_PATH}/include" prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install and see what happens, and what the error code is? Thanks, Matt On Wed, Jul 7, 2021 at 8:48 AM Mark Adams wrote: > Also, this is in jczhang/fix-kokkos-includes > > On Wed, Jul 7, 2021 at 8:46 AM Mark Adams wrote: > >> Apparently the same error with >> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >> >> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith wrote: >> >>> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > zlib-1.2.11.tar.gz >>> % Total % Received % Xferd Average Speed Time Time Time >>> Current >>> Dload Upload Total Spent Left >>> Speed >>> 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- >>> --:--:-- 834k >>> ~/Src/petsc* (barry/2021-07-03/demonstrate-network-parallel-build=)* >>> arch-demonstrate-network-parallel-build >>> $ tar -zxf zlib-1.2.11.tar.gz >>> ~/Src/petsc* (barry/2021-07-03/demonstrate-network-parallel-build=)* >>> arch-demonstrate-network-parallel-build >>> $ ls zlib-1.2.11 >>> CMakeLists.txt adler32.c deflate.c gzread.c >>> inflate.h os400 watcom zlib.h >>> ChangeLog amiga deflate.h gzwrite.c >>> inftrees.c qnx win32 zlib.map >>> FAQ compress.c doc infback.c >>> inftrees.h test zconf.h zlib.pc.cmakein >>> INDEX configure examples inffast.c >>> make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in >>> Makefile contrib gzclose.c inffast.h msdos >>> trees.c zconf.h.in zlib2ansi >>> Makefile.in crc32.c gzguts.h inffixed.h >>> nintendods trees.h zlib.3 zutil.c >>> README crc32.h gzlib.c inflate.c old >>> uncompr.c zlib.3.pdf zutil.h >>> >>> >>> >>> On Jul 6, 2021, at 7:57 PM, Mark Adams wrote: >>> >>> >>> >>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith wrote: >>> >>>> >>>> Mark, >>>> >>>> You can try what the configure error message should be suggesting >>>> (it is not clear if that is being printed to your screen or no). >>>> >>>> ERROR: Unable to download package ZLIB from: >>>> http://www.zlib.net/zlib-1.2.11.tar.gz >>> >>> >>> My browser can not open this and I could not see a download button on >>> this site. >>> >>> Can you download this? >>> >>> >>>> >>>> * If URL specified manually - perhaps there is a typo? >>>> * If your network is disconnected - please reconnect and rerun >>>> ./configure >>>> * Or perhaps you have a firewall blocking the download >>>> * You can run with --with-packages-download-dir=/adirectory and >>>> ./configure will instruct you what packages to download manually >>>> * or you can download the above URL manually, to >>>> /yourselectedlocation/zlib-1.2.11.tar.gz >>>> and use the configure option: >>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>> >>>> Barry >>>> >>>> >>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote: >>>> > >>>> > I am getting some sort of error in build zlib on Spock at ORNL. >>>> > Other libraries are downloaded and I am sure the network is fine. >>>> > Any ideas? >>>> > Thanks, >>>> > Mark >>>> > >>>> >>>> >>> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mfadams at lbl.gov Wed Jul 7 08:25:15 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 09:25:15 -0400 Subject: [petsc-users] CUDA running out of memory in PtAP In-Reply-To: <3EC35263-6F44-46D2-A3CA-5C4537D3AF99@gmail.com> References: <8A532350-E75C-46F8-AD18-A0DD0A25B6CC@petsc.dev> <3EC35263-6F44-46D2-A3CA-5C4537D3AF99@gmail.com> Message-ID: Thanks, but that did not work. It looks like this is just in MPIAIJ, but I am using SeqAIJ. ex2 (below) uses PETSC_COMM_SELF everywhere. + srun -G 1 -n 16 -c 1 --cpu-bind=cores --ntasks-per-core=2 /global/homes/m/madams/mps-wrapper.sh ../ex2 -dm_landau_device_type cuda -dm_mat_type aijcusparse -dm_vec_type cuda -log_view -pc_type gamg -ksp_type gmres -pc_gamg_reuse_interpolation *-matmatmult_backend_cpu -matptap_backend_cpu *-dm_landau_ion_masses .0005,1,1,1,1,1,1,1,1 -dm_landau_ion_charges 1,2,3,4,5,6,7,8,9 -dm_landau_thermal_temps 1,1,1,1,1,1,1,1,1,1 -dm_landau_n 1.000003,.5,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7 0 starting nvidia-cuda-mps-control on cgpu17 mps ready: 2021-07-07T06:17:36-07:00 masses: e= 9.109e-31; ions in proton mass units: 5.000e-04 1.000e+00 ... charges: e=-1.602e-19; charges in elementary units: 1.000e+00 2.000e+00 thermal T (K): e= 1.160e+07 i= 1.160e+07 imp= 1.160e+07. v_0= 1.326e+07 n_0= 1.000e+20 t_0= 5.787e-06 domain= 5.000e+00 CalculateE j0=0. Ec = 0.050991 0 TS dt 1. time 0. 0) species-0: charge density= -1.6054532569865e+01 z-momentum= -1.9059929215360e-19 energy= 2.4178543516210e+04 0) species-1: charge density= 8.0258396545108e+00 z-momentum= 7.0660527288120e-20 energy= 1.2082380663859e+04 0) species-2: charge density= 6.3912608577597e-05 z-momentum= -1.1513901010709e-24 energy= 3.5799558195524e-01 0) species-3: charge density= 9.5868912866395e-05 z-momentum= -1.1513901010709e-24 energy= 3.5799558195524e-01 0) species-4: charge density= 1.2782521715519e-04 z-momentum= -1.1513901010709e-24 energy= 3.5799558195524e-01 [7]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [7]PETSC ERROR: GPU resources unavailable [7]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. Reports alloc failed; this indicates the GPU has run out resources [7]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[7]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e GIT Date: 2021-07-06 03:22:54 -0700 [7]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu17 by madams Wed Jul 7 06:17:38 2021 [7]PETSC ERROR: Configure options --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --FFLAGS=" -g " --COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 --download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc *[7]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/aijcusparse.cu:2622 *[7]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1146 [7]PETSC ERROR: #3 MatProductSymbolic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 [7]PETSC ERROR: #4 MatPtAP() at /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 [7]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 [7]PETSC ERROR: #6 PCSetUp_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 [7]PETSC ERROR: #7 PCSetUp() at /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 [7]PETSC ERROR: #8 KSPSetUp() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 [7]PETSC ERROR: #9 KSPSolve_Private() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 [7]PETSC ERROR: #10 KSPSolve() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 [7]PETSC ERROR: #11 SNESSolve_NEWTONLS() at /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 [7]PETSC ERROR: #12 SNESSolve() at /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 [7]PETSC ERROR: #13 TSTheta_SNESSolve() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 [7]PETSC ERROR: #14 TSStep_Theta() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 [7]PETSC ERROR: #15 TSStep() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 [7]PETSC ERROR: #16 TSSolve() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 [7]PETSC ERROR: #17 main() at ex2.c:699 [7]PETSC ERROR: PETSc Option Table entries: [7]PETSC ERROR: -dm_landau_amr_levels_max 0 [7]PETSC ERROR: -dm_landau_amr_post_refine 5 [7]PETSC ERROR: -dm_landau_device_type cuda [7]PETSC ERROR: -dm_landau_domain_radius 5 [7]PETSC ERROR: -dm_landau_Ez 0 [7]PETSC ERROR: -dm_landau_ion_charges 1,2,3,4,5,6,7,8,9 [7]PETSC ERROR: -dm_landau_ion_masses .0005,1,1,1,1,1,1,1,1 [7]PETSC ERROR: -dm_landau_n 1.000003,.5,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7 [7]PETSC ERROR: -dm_landau_thermal_temps 1,1,1,1,1,1,1,1,1,1 [7]PETSC ERROR: -dm_landau_type p4est [7]PETSC ERROR: -dm_mat_type aijcusparse [7]PETSC ERROR: -dm_preallocate_only [7]PETSC ERROR: -dm_vec_type cuda [7]PETSC ERROR: -ex2_connor_e_field_units [7]PETSC ERROR: -ex2_impurity_index 1 [7]PETSC ERROR: -ex2_plot_dt 200 [7]PETSC ERROR: -ex2_test_type none [7]PETSC ERROR: -ksp_type gmres [7]PETSC ERROR: -log_view *[7]PETSC ERROR: -matmatmult_backend_cpu[7]PETSC ERROR: -matptap_backend_cpu* [7]PETSC ERROR: -pc_gamg_reuse_interpolation [7]PETSC 
ERROR: -pc_type gamg [7]PETSC ERROR: -petscspace_degree 1 [7]PETSC ERROR: -snes_max_it 15 [7]PETSC ERROR: -snes_rtol 1.e-6 [7]PETSC ERROR: -snes_stol 1.e-6 [7]PETSC ERROR: -ts_adapt_scale_solve_failed 0.5 [7]PETSC ERROR: -ts_adapt_time_step_increase_delay 5 [7]PETSC ERROR: -ts_dt 1 [7]PETSC ERROR: -ts_exact_final_time stepover [7]PETSC ERROR: -ts_max_snes_failures -1 [7]PETSC ERROR: -ts_max_steps 10 [7]PETSC ERROR: -ts_max_time 300 [7]PETSC ERROR: -ts_rtol 1e-2 [7]PETSC ERROR: -ts_type beuler On Wed, Jul 7, 2021 at 4:07 AM Stefano Zampini wrote: > This will select the CPU path > > -matmatmult_backend_cpu -matptap_backend_cpu > > On Jul 7, 2021, at 2:43 AM, Mark Adams wrote: > > Can I turn off using cuSprarse for RAP? > > On Tue, Jul 6, 2021 at 6:25 PM Barry Smith wrote: > >> >> Stefano has mentioned this before. He reported cuSparse matrix-matrix >> vector products use a very amount of memory. >> >> On Jul 6, 2021, at 4:33 PM, Mark Adams wrote: >> >> I am running out of memory in GAMG. It looks like this is from the new >> cuSparse RAP. >> I was able to run Hypre with twice as much work on the GPU as this run. >> Are there parameters to tweek for this perhaps or can I disable it? >> >> Thanks, >> Mark >> >> 0 SNES Function norm 5.442539952302e-04 >> [2]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [2]PETSC ERROR: GPU resources unavailable >> [2]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. >> Reports alloc failed; this indicates the GPU has run out resources >> [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [2]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e >> GIT Date: 2021-07-06 03:22:54 -0700 >> [2]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu11 by madams >> Tue Jul 6 13:37:43 2021 >> [2]PETSC ERROR: Configure options >> --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc >> --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" >> -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECI >> ES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 >> -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler >> -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" >> --FFLAGS=" -g " - >> -COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" >> --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" >> --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 >> -- >> download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc >> [2]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at >> /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/ >> aijcusparse.cu:2622 >> [2]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at >> /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1159 >> [2]PETSC ERROR: #3 MatProductSymbolic() at >> /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 >> [2]PETSC ERROR: #4 MatPtAP() at >> /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 >> [2]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at >> /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 >> [2]PETSC ERROR: #6 PCSetUp_GAMG() at >> /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 >> [2]PETSC ERROR: #7 PCSetUp() at >> /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 >> [2]PETSC ERROR: #8 KSPSetUp() at >> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 >> [2]PETSC ERROR: #9 
KSPSolve_Private() at >> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 >> [2]PETSC ERROR: #10 KSPSolve() at >> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 >> [2]PETSC ERROR: #11 SNESSolve_NEWTONLS() at >> /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 >> [2]PETSC ERROR: #12 SNESSolve() at >> /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 >> [2]PETSC ERROR: #13 TSTheta_SNESSolve() at >> /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 >> [2]PETSC ERROR: #14 TSStep_Theta() at >> /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 >> [2]PETSC ERROR: #15 TSStep() at >> /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 >> [2]PETSC ERROR: #16 TSSolve() at >> /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 >> [2]PETSC ERROR: #17 main() at ex2.c:699 >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Jul 7 08:39:48 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 09:39:48 -0400 Subject: [petsc-users] CUDA running out of memory in PtAP In-Reply-To: References: <8A532350-E75C-46F8-AD18-A0DD0A25B6CC@petsc.dev> <3EC35263-6F44-46D2-A3CA-5C4537D3AF99@gmail.com> Message-ID: OK, I found where its not protected in sequential. On Wed, Jul 7, 2021 at 9:25 AM Mark Adams wrote: > Thanks, but that did not work. > > It looks like this is just in MPIAIJ, but I am using SeqAIJ. ex2 (below) > uses PETSC_COMM_SELF everywhere. > > + srun -G 1 -n 16 -c 1 --cpu-bind=cores --ntasks-per-core=2 > /global/homes/m/madams/mps-wrapper.sh ../ex2 -dm_landau_device_type cuda > -dm_mat_type aijcusparse -dm_vec_type cuda -log_view -pc_type gamg > -ksp_type gmres -pc_gamg_reuse_interpolation *-matmatmult_backend_cpu > -matptap_backend_cpu *-dm_landau_ion_masses .0005,1,1,1,1,1,1,1,1 > -dm_landau_ion_charges 1,2,3,4,5,6,7,8,9 -dm_landau_thermal_temps > 1,1,1,1,1,1,1,1,1,1 -dm_landau_n > 1.000003,.5,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7 > 0 starting nvidia-cuda-mps-control on cgpu17 > mps ready: 2021-07-07T06:17:36-07:00 > masses: e= 9.109e-31; ions in proton mass units: 5.000e-04 > 1.000e+00 ... > charges: e=-1.602e-19; charges in elementary units: 1.000e+00 > 2.000e+00 > thermal T (K): e= 1.160e+07 i= 1.160e+07 imp= 1.160e+07. v_0= 1.326e+07 > n_0= 1.000e+20 t_0= 5.787e-06 domain= 5.000e+00 > CalculateE j0=0. Ec = 0.050991 > 0 TS dt 1. time 0. > 0) species-0: charge density= -1.6054532569865e+01 z-momentum= > -1.9059929215360e-19 energy= 2.4178543516210e+04 > 0) species-1: charge density= 8.0258396545108e+00 z-momentum= > 7.0660527288120e-20 energy= 1.2082380663859e+04 > 0) species-2: charge density= 6.3912608577597e-05 z-momentum= > -1.1513901010709e-24 energy= 3.5799558195524e-01 > 0) species-3: charge density= 9.5868912866395e-05 z-momentum= > -1.1513901010709e-24 energy= 3.5799558195524e-01 > 0) species-4: charge density= 1.2782521715519e-04 z-momentum= > -1.1513901010709e-24 energy= 3.5799558195524e-01 > [7]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [7]PETSC ERROR: GPU resources unavailable > [7]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. > Reports alloc failed; this indicates the GPU has run out resources > [7]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [7]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e > GIT Date: 2021-07-06 03:22:54 -0700 > [7]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu17 by madams > Wed Jul 7 06:17:38 2021 > [7]PETSC ERROR: Configure options > --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc > --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" > -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g > -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g > -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 > -DLANDAU_MAX_Q=4" --FFLAGS=" -g " --COPTFLAGS=" -O3" --CXXOPTFLAGS=" > -O3" --FOPTFLAGS=" -O3" --download-fblaslapack=1 --with-debugging=0 > --with-mpiexec="srun -G 1" --with-cuda-gencodearch=70 --with-batch=0 > --with-cuda=1 --download-p4est=1 --download-hypre=1 --with-zlib=1 > PETSC_ARCH=arch-cori-gpu-opt-gcc > > *[7]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at > /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/aijcusparse.cu:2622 > *[7]PETSC ERROR: #2 > MatProductSymbolic_ABC_Basic() at > /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1146 > [7]PETSC ERROR: #3 MatProductSymbolic() at > /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 > [7]PETSC ERROR: #4 MatPtAP() at > /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 > [7]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at > /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 > [7]PETSC ERROR: #6 PCSetUp_GAMG() at > /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 > [7]PETSC ERROR: #7 PCSetUp() at > /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 > [7]PETSC ERROR: #8 KSPSetUp() at > /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 > [7]PETSC ERROR: #9 KSPSolve_Private() at > /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 > [7]PETSC ERROR: #10 KSPSolve() at > /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 > [7]PETSC ERROR: #11 SNESSolve_NEWTONLS() at > /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 > [7]PETSC ERROR: #12 SNESSolve() at > /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 > [7]PETSC ERROR: #13 TSTheta_SNESSolve() at > /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 > [7]PETSC ERROR: #14 TSStep_Theta() at > /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 > [7]PETSC ERROR: #15 TSStep() at > /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 > [7]PETSC ERROR: #16 TSSolve() at > /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 > [7]PETSC ERROR: #17 main() at ex2.c:699 > [7]PETSC ERROR: PETSc Option Table entries: > [7]PETSC ERROR: -dm_landau_amr_levels_max 0 > [7]PETSC ERROR: -dm_landau_amr_post_refine 5 > [7]PETSC ERROR: -dm_landau_device_type cuda > [7]PETSC ERROR: -dm_landau_domain_radius 5 > [7]PETSC ERROR: -dm_landau_Ez 0 > [7]PETSC ERROR: -dm_landau_ion_charges 1,2,3,4,5,6,7,8,9 > [7]PETSC ERROR: -dm_landau_ion_masses .0005,1,1,1,1,1,1,1,1 > [7]PETSC ERROR: -dm_landau_n > 1.000003,.5,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7 > [7]PETSC ERROR: -dm_landau_thermal_temps 1,1,1,1,1,1,1,1,1,1 > [7]PETSC ERROR: -dm_landau_type p4est > [7]PETSC ERROR: -dm_mat_type aijcusparse > [7]PETSC ERROR: -dm_preallocate_only > [7]PETSC ERROR: -dm_vec_type cuda > [7]PETSC ERROR: -ex2_connor_e_field_units > [7]PETSC ERROR: -ex2_impurity_index 1 > [7]PETSC ERROR: -ex2_plot_dt 200 > [7]PETSC ERROR: -ex2_test_type none > [7]PETSC ERROR: -ksp_type gmres > [7]PETSC ERROR: 
-log_view > > *[7]PETSC ERROR: -matmatmult_backend_cpu[7]PETSC ERROR: > -matptap_backend_cpu* > [7]PETSC ERROR: -pc_gamg_reuse_interpolation > [7]PETSC ERROR: -pc_type gamg > [7]PETSC ERROR: -petscspace_degree 1 > [7]PETSC ERROR: -snes_max_it 15 > [7]PETSC ERROR: -snes_rtol 1.e-6 > [7]PETSC ERROR: -snes_stol 1.e-6 > [7]PETSC ERROR: -ts_adapt_scale_solve_failed 0.5 > [7]PETSC ERROR: -ts_adapt_time_step_increase_delay 5 > [7]PETSC ERROR: -ts_dt 1 > [7]PETSC ERROR: -ts_exact_final_time stepover > [7]PETSC ERROR: -ts_max_snes_failures -1 > [7]PETSC ERROR: -ts_max_steps 10 > [7]PETSC ERROR: -ts_max_time 300 > [7]PETSC ERROR: -ts_rtol 1e-2 > [7]PETSC ERROR: -ts_type beuler > > On Wed, Jul 7, 2021 at 4:07 AM Stefano Zampini > wrote: > >> This will select the CPU path >> >> -matmatmult_backend_cpu -matptap_backend_cpu >> >> On Jul 7, 2021, at 2:43 AM, Mark Adams wrote: >> >> Can I turn off using cuSprarse for RAP? >> >> On Tue, Jul 6, 2021 at 6:25 PM Barry Smith wrote: >> >>> >>> Stefano has mentioned this before. He reported cuSparse matrix-matrix >>> vector products use a very amount of memory. >>> >>> On Jul 6, 2021, at 4:33 PM, Mark Adams wrote: >>> >>> I am running out of memory in GAMG. It looks like this is from the new >>> cuSparse RAP. >>> I was able to run Hypre with twice as much work on the GPU as this run. >>> Are there parameters to tweek for this perhaps or can I disable it? >>> >>> Thanks, >>> Mark >>> >>> 0 SNES Function norm 5.442539952302e-04 >>> [2]PETSC ERROR: --------------------- Error Message >>> -------------------------------------------------------------- >>> [2]PETSC ERROR: GPU resources unavailable >>> [2]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of >>> memory. Reports alloc failed; this indicates the GPU has run out resources >>> [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >>> for trouble shooting. 
>>> [2]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e >>> GIT Date: 2021-07-06 03:22:54 -0700 >>> [2]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu11 by madams >>> Tue Jul 6 13:37:43 2021 >>> [2]PETSC ERROR: Configure options >>> --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc >>> --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" >>> -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECI >>> ES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 >>> -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler >>> -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" >>> --FFLAGS=" -g " - >>> -COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" >>> --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" >>> --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 >>> -- >>> download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc >>> [2]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at >>> /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/ >>> aijcusparse.cu:2622 >>> [2]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at >>> /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1159 >>> [2]PETSC ERROR: #3 MatProductSymbolic() at >>> /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 >>> [2]PETSC ERROR: #4 MatPtAP() at >>> /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 >>> [2]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at >>> /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 >>> [2]PETSC ERROR: #6 PCSetUp_GAMG() at >>> /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 >>> [2]PETSC ERROR: #7 PCSetUp() at >>> /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 >>> [2]PETSC ERROR: #8 KSPSetUp() at >>> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 >>> [2]PETSC ERROR: #9 KSPSolve_Private() at >>> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 >>> [2]PETSC ERROR: #10 KSPSolve() at >>> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 >>> [2]PETSC ERROR: #11 SNESSolve_NEWTONLS() at >>> /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 >>> [2]PETSC ERROR: #12 SNESSolve() at >>> /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 >>> [2]PETSC ERROR: #13 TSTheta_SNESSolve() at >>> /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 >>> [2]PETSC ERROR: #14 TSStep_Theta() at >>> /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 >>> [2]PETSC ERROR: #15 TSStep() at >>> /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 >>> [2]PETSC ERROR: #16 TSSolve() at >>> /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 >>> [2]PETSC ERROR: #17 main() at ex2.c:699 >>> >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Wed Jul 7 08:50:26 2021 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 7 Jul 2021 15:50:26 +0200 Subject: [petsc-users] CUDA running out of memory in PtAP In-Reply-To: References: <8A532350-E75C-46F8-AD18-A0DD0A25B6CC@petsc.dev> <3EC35263-6F44-46D2-A3CA-5C4537D3AF99@gmail.com> Message-ID: Do you want me to open an MR to handle the sequential case? > On Jul 7, 2021, at 3:39 PM, Mark Adams wrote: > > OK, I found where its not protected in sequential. > > On Wed, Jul 7, 2021 at 9:25 AM Mark Adams > wrote: > Thanks, but that did not work. > > It looks like this is just in MPIAIJ, but I am using SeqAIJ. 
ex2 (below) uses PETSC_COMM_SELF everywhere. > > + srun -G 1 -n 16 -c 1 --cpu-bind=cores --ntasks-per-core=2 /global/homes/m/madams/mps-wrapper.sh ../ex2 -dm_landau_device_type cuda -dm_mat_type aijcusparse -dm_vec_type cuda -log_view -pc_type gamg -ksp_type gmres -pc_gamg_reuse_interpolation -matmatmult_backend_cpu -matptap_backend_cpu -dm_landau_ion_masses .0005,1,1,1,1,1,1,1,1 -dm_landau_ion_charges 1,2,3,4,5,6,7,8,9 -dm_landau_thermal_temps 1,1,1,1,1,1,1,1,1,1 -dm_landau_n 1.000003,.5,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7 > 0 starting nvidia-cuda-mps-control on cgpu17 > mps ready: 2021-07-07T06:17:36-07:00 > masses: e= 9.109e-31; ions in proton mass units: 5.000e-04 1.000e+00 ... > charges: e=-1.602e-19; charges in elementary units: 1.000e+00 2.000e+00 > thermal T (K): e= 1.160e+07 i= 1.160e+07 imp= 1.160e+07. v_0= 1.326e+07 n_0= 1.000e+20 t_0= 5.787e-06 domain= 5.000e+00 > CalculateE j0=0. Ec = 0.050991 > 0 TS dt 1. time 0. > 0) species-0: charge density= -1.6054532569865e+01 z-momentum= -1.9059929215360e-19 energy= 2.4178543516210e+04 > 0) species-1: charge density= 8.0258396545108e+00 z-momentum= 7.0660527288120e-20 energy= 1.2082380663859e+04 > 0) species-2: charge density= 6.3912608577597e-05 z-momentum= -1.1513901010709e-24 energy= 3.5799558195524e-01 > 0) species-3: charge density= 9.5868912866395e-05 z-momentum= -1.1513901010709e-24 energy= 3.5799558195524e-01 > 0) species-4: charge density= 1.2782521715519e-04 z-momentum= -1.1513901010709e-24 energy= 3.5799558195524e-01 > [7]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [7]PETSC ERROR: GPU resources unavailable > [7]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. Reports alloc failed; this indicates the GPU has run out resources > [7]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [7]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e GIT Date: 2021-07-06 03:22:54 -0700 > [7]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu17 by madams Wed Jul 7 06:17:38 2021 > [7]PETSC ERROR: Configure options --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --FFLAGS=" -g " --COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 --download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc > [7]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/aijcusparse.cu:2622 > [7]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1146 > [7]PETSC ERROR: #3 MatProductSymbolic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 > [7]PETSC ERROR: #4 MatPtAP() at /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 > [7]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 > [7]PETSC ERROR: #6 PCSetUp_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 > [7]PETSC ERROR: #7 PCSetUp() at /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 > [7]PETSC ERROR: #8 KSPSetUp() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 > [7]PETSC ERROR: #9 KSPSolve_Private() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 > [7]PETSC ERROR: #10 KSPSolve() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 > [7]PETSC ERROR: #11 SNESSolve_NEWTONLS() at /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 > [7]PETSC ERROR: #12 SNESSolve() at /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 > [7]PETSC ERROR: #13 TSTheta_SNESSolve() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 > [7]PETSC ERROR: #14 TSStep_Theta() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 > [7]PETSC ERROR: #15 TSStep() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 > [7]PETSC ERROR: #16 TSSolve() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 > [7]PETSC ERROR: #17 main() at ex2.c:699 > [7]PETSC ERROR: PETSc Option Table entries: > [7]PETSC ERROR: -dm_landau_amr_levels_max 0 > [7]PETSC ERROR: -dm_landau_amr_post_refine 5 > [7]PETSC ERROR: -dm_landau_device_type cuda > [7]PETSC ERROR: -dm_landau_domain_radius 5 > [7]PETSC ERROR: -dm_landau_Ez 0 > [7]PETSC ERROR: -dm_landau_ion_charges 1,2,3,4,5,6,7,8,9 > [7]PETSC ERROR: -dm_landau_ion_masses .0005,1,1,1,1,1,1,1,1 > [7]PETSC ERROR: -dm_landau_n 1.000003,.5,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7 > [7]PETSC ERROR: -dm_landau_thermal_temps 1,1,1,1,1,1,1,1,1,1 > [7]PETSC ERROR: -dm_landau_type p4est > [7]PETSC ERROR: -dm_mat_type aijcusparse > [7]PETSC ERROR: -dm_preallocate_only > [7]PETSC ERROR: -dm_vec_type cuda > [7]PETSC ERROR: -ex2_connor_e_field_units > [7]PETSC ERROR: -ex2_impurity_index 1 > [7]PETSC ERROR: -ex2_plot_dt 200 > [7]PETSC ERROR: -ex2_test_type none > [7]PETSC ERROR: -ksp_type gmres > [7]PETSC ERROR: -log_view > [7]PETSC ERROR: -matmatmult_backend_cpu > [7]PETSC 
ERROR: -matptap_backend_cpu > [7]PETSC ERROR: -pc_gamg_reuse_interpolation > [7]PETSC ERROR: -pc_type gamg > [7]PETSC ERROR: -petscspace_degree 1 > [7]PETSC ERROR: -snes_max_it 15 > [7]PETSC ERROR: -snes_rtol 1.e-6 > [7]PETSC ERROR: -snes_stol 1.e-6 > [7]PETSC ERROR: -ts_adapt_scale_solve_failed 0.5 > [7]PETSC ERROR: -ts_adapt_time_step_increase_delay 5 > [7]PETSC ERROR: -ts_dt 1 > [7]PETSC ERROR: -ts_exact_final_time stepover > [7]PETSC ERROR: -ts_max_snes_failures -1 > [7]PETSC ERROR: -ts_max_steps 10 > [7]PETSC ERROR: -ts_max_time 300 > [7]PETSC ERROR: -ts_rtol 1e-2 > [7]PETSC ERROR: -ts_type beuler > > On Wed, Jul 7, 2021 at 4:07 AM Stefano Zampini > wrote: > This will select the CPU path > > -matmatmult_backend_cpu -matptap_backend_cpu > >> On Jul 7, 2021, at 2:43 AM, Mark Adams > wrote: >> >> Can I turn off using cuSprarse for RAP? >> >> On Tue, Jul 6, 2021 at 6:25 PM Barry Smith > wrote: >> >> Stefano has mentioned this before. He reported cuSparse matrix-matrix vector products use a very amount of memory. >> >>> On Jul 6, 2021, at 4:33 PM, Mark Adams > wrote: >>> >>> I am running out of memory in GAMG. It looks like this is from the new cuSparse RAP. >>> I was able to run Hypre with twice as much work on the GPU as this run. >>> Are there parameters to tweek for this perhaps or can I disable it? >>> >>> Thanks, >>> Mark >>> >>> 0 SNES Function norm 5.442539952302e-04 >>> [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >>> [2]PETSC ERROR: GPU resources unavailable >>> [2]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. Reports alloc failed; this indicates the GPU has run out resources >>> [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>>> [2]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e GIT Date: 2021-07-06 03:22:54 -0700 >>> [2]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu11 by madams Tue Jul 6 13:37:43 2021 >>> [2]PETSC ERROR: Configure options --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECI >>> ES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --FFLAGS=" -g " - >>> -COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 -- >>> download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc >>> [2]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/aijcusparse.cu:2622 >>> [2]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1159 >>> [2]PETSC ERROR: #3 MatProductSymbolic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 >>> [2]PETSC ERROR: #4 MatPtAP() at /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 >>> [2]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 >>> [2]PETSC ERROR: #6 PCSetUp_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 >>> [2]PETSC ERROR: #7 PCSetUp() at /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 >>> [2]PETSC ERROR: #8 KSPSetUp() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 >>> [2]PETSC ERROR: #9 KSPSolve_Private() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 >>> [2]PETSC ERROR: #10 KSPSolve() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 >>> [2]PETSC ERROR: #11 SNESSolve_NEWTONLS() at /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 >>> [2]PETSC ERROR: #12 SNESSolve() at /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 >>> [2]PETSC ERROR: #13 TSTheta_SNESSolve() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 >>> [2]PETSC ERROR: #14 TSStep_Theta() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 >>> [2]PETSC ERROR: #15 TSStep() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 >>> [2]PETSC ERROR: #16 TSSolve() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 >>> [2]PETSC ERROR: #17 main() at ex2.c:699 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Jul 7 09:24:24 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 10:24:24 -0400 Subject: [petsc-users] CUDA running out of memory in PtAP In-Reply-To: References: <8A532350-E75C-46F8-AD18-A0DD0A25B6CC@petsc.dev> <3EC35263-6F44-46D2-A3CA-5C4537D3AF99@gmail.com> Message-ID: I think that is a good idea. I am trying to do it myself but it is getting messy. Thanks, On Wed, Jul 7, 2021 at 9:50 AM Stefano Zampini wrote: > Do you want me to open an MR to handle the sequential case? > > On Jul 7, 2021, at 3:39 PM, Mark Adams wrote: > > OK, I found where its not protected in sequential. > > On Wed, Jul 7, 2021 at 9:25 AM Mark Adams wrote: > >> Thanks, but that did not work. >> >> It looks like this is just in MPIAIJ, but I am using SeqAIJ. 
ex2 (below) >> uses PETSC_COMM_SELF everywhere. >> >> + srun -G 1 -n 16 -c 1 --cpu-bind=cores --ntasks-per-core=2 >> /global/homes/m/madams/mps-wrapper.sh ../ex2 -dm_landau_device_type cuda >> -dm_mat_type aijcusparse -dm_vec_type cuda -log_view -pc_type gamg >> -ksp_type gmres -pc_gamg_reuse_interpolation *-matmatmult_backend_cpu >> -matptap_backend_cpu *-dm_landau_ion_masses .0005,1,1,1,1,1,1,1,1 >> -dm_landau_ion_charges 1,2,3,4,5,6,7,8,9 -dm_landau_thermal_temps >> 1,1,1,1,1,1,1,1,1,1 -dm_landau_n >> 1.000003,.5,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7 >> 0 starting nvidia-cuda-mps-control on cgpu17 >> mps ready: 2021-07-07T06:17:36-07:00 >> masses: e= 9.109e-31; ions in proton mass units: 5.000e-04 >> 1.000e+00 ... >> charges: e=-1.602e-19; charges in elementary units: 1.000e+00 >> 2.000e+00 >> thermal T (K): e= 1.160e+07 i= 1.160e+07 imp= 1.160e+07. v_0= 1.326e+07 >> n_0= 1.000e+20 t_0= 5.787e-06 domain= 5.000e+00 >> CalculateE j0=0. Ec = 0.050991 >> 0 TS dt 1. time 0. >> 0) species-0: charge density= -1.6054532569865e+01 z-momentum= >> -1.9059929215360e-19 energy= 2.4178543516210e+04 >> 0) species-1: charge density= 8.0258396545108e+00 z-momentum= >> 7.0660527288120e-20 energy= 1.2082380663859e+04 >> 0) species-2: charge density= 6.3912608577597e-05 z-momentum= >> -1.1513901010709e-24 energy= 3.5799558195524e-01 >> 0) species-3: charge density= 9.5868912866395e-05 z-momentum= >> -1.1513901010709e-24 energy= 3.5799558195524e-01 >> 0) species-4: charge density= 1.2782521715519e-04 z-momentum= >> -1.1513901010709e-24 energy= 3.5799558195524e-01 >> [7]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [7]PETSC ERROR: GPU resources unavailable >> [7]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. >> Reports alloc failed; this indicates the GPU has run out resources >> [7]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. 
>> [7]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e >> GIT Date: 2021-07-06 03:22:54 -0700 >> [7]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu17 by madams >> Wed Jul 7 06:17:38 2021 >> [7]PETSC ERROR: Configure options >> --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc >> --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" >> -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g >> -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g >> -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 >> -DLANDAU_MAX_Q=4" --FFLAGS=" -g " --COPTFLAGS=" -O3" --CXXOPTFLAGS=" >> -O3" --FOPTFLAGS=" -O3" --download-fblaslapack=1 --with-debugging=0 >> --with-mpiexec="srun -G 1" --with-cuda-gencodearch=70 --with-batch=0 >> --with-cuda=1 --download-p4est=1 --download-hypre=1 --with-zlib=1 >> PETSC_ARCH=arch-cori-gpu-opt-gcc >> >> *[7]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at >> /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/aijcusparse.cu:2622 >> *[7]PETSC ERROR: #2 >> MatProductSymbolic_ABC_Basic() at >> /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1146 >> [7]PETSC ERROR: #3 MatProductSymbolic() at >> /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 >> [7]PETSC ERROR: #4 MatPtAP() at >> /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 >> [7]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at >> /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 >> [7]PETSC ERROR: #6 PCSetUp_GAMG() at >> /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 >> [7]PETSC ERROR: #7 PCSetUp() at >> /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 >> [7]PETSC ERROR: #8 KSPSetUp() at >> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 >> [7]PETSC ERROR: #9 KSPSolve_Private() at >> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 >> [7]PETSC ERROR: #10 KSPSolve() at >> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 >> [7]PETSC ERROR: #11 SNESSolve_NEWTONLS() at >> /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 >> [7]PETSC ERROR: #12 SNESSolve() at >> /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 >> [7]PETSC ERROR: #13 TSTheta_SNESSolve() at >> /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 >> [7]PETSC ERROR: #14 TSStep_Theta() at >> /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 >> [7]PETSC ERROR: #15 TSStep() at >> /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 >> [7]PETSC ERROR: #16 TSSolve() at >> /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 >> [7]PETSC ERROR: #17 main() at ex2.c:699 >> [7]PETSC ERROR: PETSc Option Table entries: >> [7]PETSC ERROR: -dm_landau_amr_levels_max 0 >> [7]PETSC ERROR: -dm_landau_amr_post_refine 5 >> [7]PETSC ERROR: -dm_landau_device_type cuda >> [7]PETSC ERROR: -dm_landau_domain_radius 5 >> [7]PETSC ERROR: -dm_landau_Ez 0 >> [7]PETSC ERROR: -dm_landau_ion_charges 1,2,3,4,5,6,7,8,9 >> [7]PETSC ERROR: -dm_landau_ion_masses .0005,1,1,1,1,1,1,1,1 >> [7]PETSC ERROR: -dm_landau_n >> 1.000003,.5,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7 >> [7]PETSC ERROR: -dm_landau_thermal_temps 1,1,1,1,1,1,1,1,1,1 >> [7]PETSC ERROR: -dm_landau_type p4est >> [7]PETSC ERROR: -dm_mat_type aijcusparse >> [7]PETSC ERROR: -dm_preallocate_only >> [7]PETSC ERROR: -dm_vec_type cuda >> [7]PETSC ERROR: -ex2_connor_e_field_units >> [7]PETSC ERROR: -ex2_impurity_index 1 >> [7]PETSC ERROR: -ex2_plot_dt 200 >> [7]PETSC ERROR: 
-ex2_test_type none >> [7]PETSC ERROR: -ksp_type gmres >> [7]PETSC ERROR: -log_view >> >> *[7]PETSC ERROR: -matmatmult_backend_cpu[7]PETSC ERROR: >> -matptap_backend_cpu* >> [7]PETSC ERROR: -pc_gamg_reuse_interpolation >> [7]PETSC ERROR: -pc_type gamg >> [7]PETSC ERROR: -petscspace_degree 1 >> [7]PETSC ERROR: -snes_max_it 15 >> [7]PETSC ERROR: -snes_rtol 1.e-6 >> [7]PETSC ERROR: -snes_stol 1.e-6 >> [7]PETSC ERROR: -ts_adapt_scale_solve_failed 0.5 >> [7]PETSC ERROR: -ts_adapt_time_step_increase_delay 5 >> [7]PETSC ERROR: -ts_dt 1 >> [7]PETSC ERROR: -ts_exact_final_time stepover >> [7]PETSC ERROR: -ts_max_snes_failures -1 >> [7]PETSC ERROR: -ts_max_steps 10 >> [7]PETSC ERROR: -ts_max_time 300 >> [7]PETSC ERROR: -ts_rtol 1e-2 >> [7]PETSC ERROR: -ts_type beuler >> >> On Wed, Jul 7, 2021 at 4:07 AM Stefano Zampini >> wrote: >> >>> This will select the CPU path >>> >>> -matmatmult_backend_cpu -matptap_backend_cpu >>> >>> On Jul 7, 2021, at 2:43 AM, Mark Adams wrote: >>> >>> Can I turn off using cuSprarse for RAP? >>> >>> On Tue, Jul 6, 2021 at 6:25 PM Barry Smith wrote: >>> >>>> >>>> Stefano has mentioned this before. He reported cuSparse matrix-matrix >>>> vector products use a very amount of memory. >>>> >>>> On Jul 6, 2021, at 4:33 PM, Mark Adams wrote: >>>> >>>> I am running out of memory in GAMG. It looks like this is from the new >>>> cuSparse RAP. >>>> I was able to run Hypre with twice as much work on the GPU as this run. >>>> Are there parameters to tweek for this perhaps or can I disable it? >>>> >>>> Thanks, >>>> Mark >>>> >>>> 0 SNES Function norm 5.442539952302e-04 >>>> [2]PETSC ERROR: --------------------- Error Message >>>> -------------------------------------------------------------- >>>> [2]PETSC ERROR: GPU resources unavailable >>>> [2]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of >>>> memory. Reports alloc failed; this indicates the GPU has run out resources >>>> [2]PETSC ERROR: See >>>> https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble >>>> shooting. 
>>>> [2]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e >>>> GIT Date: 2021-07-06 03:22:54 -0700 >>>> [2]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu11 by >>>> madams Tue Jul 6 13:37:43 2021 >>>> [2]PETSC ERROR: Configure options >>>> --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc >>>> --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" >>>> -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECI >>>> ES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 >>>> -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler >>>> -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" >>>> --FFLAGS=" -g " - >>>> -COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" >>>> --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" >>>> --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 >>>> -- >>>> download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc >>>> [2]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() >>>> at /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/ >>>> aijcusparse.cu:2622 >>>> [2]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at >>>> /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1159 >>>> [2]PETSC ERROR: #3 MatProductSymbolic() at >>>> /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 >>>> [2]PETSC ERROR: #4 MatPtAP() at >>>> /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 >>>> [2]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at >>>> /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 >>>> [2]PETSC ERROR: #6 PCSetUp_GAMG() at >>>> /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 >>>> [2]PETSC ERROR: #7 PCSetUp() at >>>> /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 >>>> [2]PETSC ERROR: #8 KSPSetUp() at >>>> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 >>>> [2]PETSC ERROR: #9 KSPSolve_Private() at >>>> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 >>>> [2]PETSC ERROR: #10 KSPSolve() at >>>> /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 >>>> [2]PETSC ERROR: #11 SNESSolve_NEWTONLS() at >>>> /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 >>>> [2]PETSC ERROR: #12 SNESSolve() at >>>> /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 >>>> [2]PETSC ERROR: #13 TSTheta_SNESSolve() at >>>> /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 >>>> [2]PETSC ERROR: #14 TSStep_Theta() at >>>> /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 >>>> [2]PETSC ERROR: #15 TSStep() at >>>> /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 >>>> [2]PETSC ERROR: #16 TSSolve() at >>>> /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 >>>> [2]PETSC ERROR: #17 main() at ex2.c:699 >>>> >>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... 
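A short, hedged sketch (not taken from the thread): one way to check whether the CPU-fallback flags Stefano suggests are actually being consumed is to add -options_left, which makes PETSc report any options that were never queried. Per the exchange above, at the time these flags were only read in the MPIAIJ product path, so on a SeqAIJ (PETSC_COMM_SELF) run such as ex2 they would show up as unused:

    # illustration only; remaining ex2 options as in the runs quoted above
    srun -G 1 -n 16 ../ex2 -pc_type gamg -dm_mat_type aijcusparse -dm_vec_type cuda \
         -matmatmult_backend_cpu -matptap_backend_cpu -options_left
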
URL: From mfadams at lbl.gov Wed Jul 7 10:04:17 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 11:04:17 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: Thanks, 08:30 jczhang/fix-kokkos-includes= /gpfs/alpine/csc314/scratch/adams/petsc$ cd /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 -I${ROCM_PATH}/include" prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install Checking for shared library support... Building shared library libz.so.1.2.11 with cc. Checking for size_t... Yes. Checking for off64_t... Yes. Checking for fseeko... Yes. Checking for strerror... No. Checking for unistd.h... Yes. Checking for stdarg.h... Yes. Checking whether to use vs[n]printf() or s[n]printf()... using vs[n]printf(). Checking for vsnprintf() in stdio.h... No. WARNING: vsnprintf() not found, falling back to vsprintf(). zlib can build but will be open to possible buffer-overflow security vulnerabilities. Checking for return value of vsprintf()... Yes. Checking for attribute(visibility) support... Yes. cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o test/example.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 
-I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o minigzip.o test/minigzip.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/adler32.o adler32.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/crc32.o crc32.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/deflate.o deflate.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/infback.o infback.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inffast.o inffast.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inflate.o inflate.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inftrees.o inftrees.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/trees.o trees.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/zutil.o zutil.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/compress.o compress.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/uncompr.o uncompr.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC 
-D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzclose.o gzclose.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzlib.o gzlib.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzread.o gzread.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzwrite.o gzwrite.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o example64.o test/example.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o minigzip64.o test/minigzip.c ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o gzwrite.o cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo gzlib.lo gzread.lo gzwrite.lo -lc cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. libz.a cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
libz.a ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] clang-11: error: linker command failed with exit code 1 (use -v to see invocation) clang-11: error: linker command failed with exit code 1 (use -v to see invocation) gmake: *** [Makefile:292: minigzip] Error 1 gmake: *** Waiting for unfinished jobs.... 
gmake: *** [Makefile:289: example] Error 1 ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] clang-11: error: linker command failed with exit code 1 (use -v to see invocation) gmake: *** [Makefile:304: minigzip64] Error 1 ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] clang-11: error: linker command failed with exit code 1 (use -v to see invocation) gmake: *** [Makefile:301: example64] Error 1 rm -f libz.so libz.so.1 ln -s libz.so.1.2.11 libz.so ln -s libz.so.1.2.11 libz.so.1 11:03 2 jczhang/fix-kokkos-includes= /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley wrote: > It is hard to see the error. I suspect it is something crazy with the > install. 
Can you run the build by hand? > > cd > /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 > && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 > -I${ROCM_PATH}/include" > prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" > ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install > > and see what happens, and what the error code is? > > Thanks, > > Matt > > On Wed, Jul 7, 2021 at 8:48 AM Mark Adams wrote: > >> Also, this is in jczhang/fix-kokkos-includes >> >> On Wed, Jul 7, 2021 at 8:46 AM Mark Adams wrote: >> >>> Apparently the same error with >>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>> >>> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith wrote: >>> >>>> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > zlib-1.2.11.tar.gz >>>> % Total % Received % Xferd Average Speed Time Time Time >>>> Current >>>> Dload Upload Total Spent Left >>>> Speed >>>> 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- >>>> --:--:-- 834k >>>> ~/Src/petsc* (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>> arch-demonstrate-network-parallel-build >>>> $ tar -zxf zlib-1.2.11.tar.gz >>>> ~/Src/petsc* (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>> arch-demonstrate-network-parallel-build >>>> $ ls zlib-1.2.11 >>>> CMakeLists.txt adler32.c deflate.c gzread.c >>>> inflate.h os400 watcom zlib.h >>>> ChangeLog amiga deflate.h gzwrite.c >>>> inftrees.c qnx win32 zlib.map >>>> FAQ compress.c doc infback.c >>>> inftrees.h test zconf.h zlib.pc.cmakein >>>> INDEX configure examples inffast.c >>>> make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in >>>> Makefile contrib gzclose.c inffast.h msdos >>>> trees.c zconf.h.in zlib2ansi >>>> Makefile.in crc32.c gzguts.h inffixed.h >>>> nintendods trees.h zlib.3 zutil.c >>>> README crc32.h gzlib.c inflate.c old >>>> uncompr.c zlib.3.pdf zutil.h >>>> >>>> >>>> >>>> On Jul 6, 2021, at 7:57 PM, Mark Adams wrote: >>>> >>>> >>>> >>>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith wrote: >>>> >>>>> >>>>> Mark, >>>>> >>>>> You can try what the configure error message should be suggesting >>>>> (it is not clear if that is being printed to your screen or no). >>>>> >>>>> ERROR: Unable to download package ZLIB from: >>>>> http://www.zlib.net/zlib-1.2.11.tar.gz >>>> >>>> >>>> My browser can not open this and I could not see a download button on >>>> this site. >>>> >>>> Can you download this? >>>> >>>> >>>>> >>>>> * If URL specified manually - perhaps there is a typo? >>>>> * If your network is disconnected - please reconnect and rerun >>>>> ./configure >>>>> * Or perhaps you have a firewall blocking the download >>>>> * You can run with --with-packages-download-dir=/adirectory and >>>>> ./configure will instruct you what packages to download manually >>>>> * or you can download the above URL manually, to >>>>> /yourselectedlocation/zlib-1.2.11.tar.gz >>>>> and use the configure option: >>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>> >>>>> Barry >>>>> >>>>> >>>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote: >>>>> > >>>>> > I am getting some sort of error in build zlib on Spock at ORNL. >>>>> > Other libraries are downloaded and I am sure the network is fine. >>>>> > Any ideas? >>>>> > Thanks, >>>>> > Mark >>>>> > >>>>> >>>>> >>>> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Wed Jul 7 10:08:16 2021 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 7 Jul 2021 17:08:16 +0200 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: Mark On Spock, you can use https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as a template for your configuration. You need to add libraries as LDFLAGS to resolve the hsa symbols > On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: > > Thanks, > > 08:30 jczhang/fix-kokkos-includes= /gpfs/alpine/csc314/scratch/adams/petsc$ cd /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 -I${ROCM_PATH}/include" prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install > Checking for shared library support... > Building shared library libz.so.1.2.11 with cc. > Checking for size_t... Yes. > Checking for off64_t... Yes. > Checking for fseeko... Yes. > Checking for strerror... No. > Checking for unistd.h... Yes. > Checking for stdarg.h... Yes. > Checking whether to use vs[n]printf() or s[n]printf()... using vs[n]printf(). > Checking for vsnprintf() in stdio.h... No. > WARNING: vsnprintf() not found, falling back to vsprintf(). zlib > can build but will be open to possible buffer-overflow security > vulnerabilities. > Checking for return value of vsprintf()... Yes. > Checking for attribute(visibility) support... Yes. > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o example.o test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o minigzip.o test/minigzip.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-D_FILE_OFFSET_BITS=64 -c -o example64.o test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o minigzip64.o test/minigzip.c > ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o gzwrite.o > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a > cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo gzlib.lo gzread.lo gzwrite.lo -lc > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. libz.a > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to 
hsa_amd_memory_unlock [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] > > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see invocation) > clang-11: error: linker command failed with exit code 1 (use -v to see invocation) > gmake: *** [Makefile:292: minigzip] Error 1 > gmake: *** Waiting for unfinished jobs.... > gmake: *** [Makefile:289: example] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see invocation) > gmake: *** [Makefile:304: minigzip64] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] > 
ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see invocation) > gmake: *** [Makefile:301: example64] Error 1 > rm -f libz.so libz.so.1 > ln -s libz.so.1.2.11 libz.so > ln -s libz.so.1.2.11 libz.so.1 > 11:03 2 jczhang/fix-kokkos-includes= /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ > > On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley > wrote: > It is hard to see the error. I suspect it is something crazy with the install. Can you run the build by hand? > > cd /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 -I${ROCM_PATH}/include" prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install > > and see what happens, and what the error code is? > > Thanks, > > Matt > > On Wed, Jul 7, 2021 at 8:48 AM Mark Adams > wrote: > Also, this is in jczhang/fix-kokkos-includes > > On Wed, Jul 7, 2021 at 8:46 AM Mark Adams > wrote: > Apparently the same error with --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz > > On Tue, Jul 6, 2021 at 11:53 PM Barry Smith > wrote: > $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > zlib-1.2.11.tar.gz > % Total % Received % Xferd Average Speed Time Time Time Current > Dload Upload Total Spent Left Speed > 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- --:--:-- 834k > ~/Src/petsc (barry/2021-07-03/demonstrate-network-parallel-build=) arch-demonstrate-network-parallel-build > $ tar -zxf zlib-1.2.11.tar.gz > ~/Src/petsc (barry/2021-07-03/demonstrate-network-parallel-build=) arch-demonstrate-network-parallel-build > $ ls zlib-1.2.11 > CMakeLists.txt adler32.c deflate.c gzread.c inflate.h os400 watcom zlib.h > ChangeLog amiga deflate.h gzwrite.c inftrees.c qnx win32 zlib.map > FAQ compress.c doc infback.c inftrees.h test zconf.h zlib.pc.cmakein > INDEX configure examples inffast.c make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in > Makefile contrib gzclose.c inffast.h msdos trees.c zconf.h.in zlib2ansi > Makefile.in crc32.c gzguts.h inffixed.h nintendods trees.h zlib.3 zutil.c > README crc32.h gzlib.c inflate.c old uncompr.c zlib.3.pdf zutil.h > > > >> On Jul 6, 2021, at 7:57 PM, Mark Adams > wrote: >> >> >> >> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith > wrote: >> >> Mark, >> >> You can try what the configure error message should be suggesting (it is not clear if that is being printed to your screen or no). 
>> >> ERROR: Unable to download package ZLIB from: http://www.zlib.net/zlib-1.2.11.tar.gz >> >> My browser can not open this and I could not see a download button on this site. >> >> Can you download this? >> >> >> * If URL specified manually - perhaps there is a typo? >> * If your network is disconnected - please reconnect and rerun ./configure >> * Or perhaps you have a firewall blocking the download >> * You can run with --with-packages-download-dir=/adirectory and ./configure will instruct you what packages to download manually >> * or you can download the above URL manually, to /yourselectedlocation/zlib-1.2.11.tar.gz >> and use the configure option: >> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >> >> Barry >> >> >> > On Jul 6, 2021, at 4:29 PM, Mark Adams > wrote: >> > >> > I am getting some sort of error in build zlib on Spock at ORNL. >> > Other libraries are downloaded and I am sure the network is fine. >> > Any ideas? >> > Thanks, >> > Mark >> > >> > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 7 10:49:02 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 7 Jul 2021 11:49:02 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: Cray is idiotic! We have to add MPI and who knows what other libraries to compile anything? Matt On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini wrote: > Mark > > On Spock, you can use > https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as > a template for your configuration. You need to add libraries as LDFLAGS to > resolve the hsa symbols > > On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: > > Thanks, > > 08:30 jczhang/fix-kokkos-includes= > /gpfs/alpine/csc314/scratch/adams/petsc$ cd > /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 > && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 > -I${ROCM_PATH}/include" > prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" > ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install > Checking for shared library support... > Building shared library libz.so.1.2.11 with cc. > Checking for size_t... Yes. > Checking for off64_t... Yes. > Checking for fseeko... Yes. > Checking for strerror... No. > Checking for unistd.h... Yes. > Checking for stdarg.h... Yes. > Checking whether to use vs[n]printf() or s[n]printf()... using > vs[n]printf(). > Checking for vsnprintf() in stdio.h... No. > WARNING: vsnprintf() not found, falling back to vsprintf(). zlib > can build but will be open to possible buffer-overflow security > vulnerabilities. > Checking for return value of vsprintf()... Yes. > Checking for attribute(visibility) support... Yes. > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o example.o > test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o minigzip.o > test/minigzip.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-D_FILE_OFFSET_BITS=64 -c -o > example64.o test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o > minigzip64.o test/minigzip.c > ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o > inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o > gzwrite.o > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a > cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC > -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o > libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo > inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo > gzlib.lo gzread.lo gzwrite.lo -lc > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. > libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. > libz.a > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_iterate_agents > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_signal_load_scacquire > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: 
error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_unlock > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_signal_destroy > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:292: minigzip] Error 1 > gmake: *** Waiting for unfinished jobs.... > gmake: *** [Makefile:289: example] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:304: minigzip64] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents 
[--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:301: example64] Error 1 > rm -f libz.so libz.so.1 > ln -s libz.so.1.2.11 libz.so > ln -s libz.so.1.2.11 libz.so.1 > 11:03 2 jczhang/fix-kokkos-includes= > /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ > > On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley wrote: > >> It is hard to see the error. I suspect it is something crazy with the >> install. Can you run the build by hand? >> >> cd >> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I${ROCM_PATH}/include" >> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >> >> and see what happens, and what the error code is? 
>> >> Thanks, >> >> Matt >> >> On Wed, Jul 7, 2021 at 8:48 AM Mark Adams wrote: >> >>> Also, this is in jczhang/fix-kokkos-includes >>> >>> On Wed, Jul 7, 2021 at 8:46 AM Mark Adams wrote: >>> >>>> Apparently the same error with >>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>> >>>> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith wrote: >>>> >>>>> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > zlib-1.2.11.tar.gz >>>>> % Total % Received % Xferd Average Speed Time Time >>>>> Time Current >>>>> Dload Upload Total Spent >>>>> Left Speed >>>>> 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- >>>>> --:--:-- 834k >>>>> ~/Src/petsc* (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>> arch-demonstrate-network-parallel-build >>>>> $ tar -zxf zlib-1.2.11.tar.gz >>>>> ~/Src/petsc* (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>> arch-demonstrate-network-parallel-build >>>>> $ ls zlib-1.2.11 >>>>> CMakeLists.txt adler32.c deflate.c gzread.c >>>>> inflate.h os400 watcom zlib.h >>>>> ChangeLog amiga deflate.h gzwrite.c >>>>> inftrees.c qnx win32 zlib.map >>>>> FAQ compress.c doc infback.c >>>>> inftrees.h test zconf.h zlib.pc.cmakein >>>>> INDEX configure examples inffast.c >>>>> make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in >>>>> Makefile contrib gzclose.c inffast.h msdos >>>>> trees.c zconf.h.in zlib2ansi >>>>> Makefile.in crc32.c gzguts.h inffixed.h >>>>> nintendods trees.h zlib.3 zutil.c >>>>> README crc32.h gzlib.c inflate.c old >>>>> uncompr.c zlib.3.pdf zutil.h >>>>> >>>>> >>>>> >>>>> On Jul 6, 2021, at 7:57 PM, Mark Adams wrote: >>>>> >>>>> >>>>> >>>>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith wrote: >>>>> >>>>>> >>>>>> Mark, >>>>>> >>>>>> You can try what the configure error message should be suggesting >>>>>> (it is not clear if that is being printed to your screen or no). >>>>>> >>>>>> ERROR: Unable to download package ZLIB from: >>>>>> http://www.zlib.net/zlib-1.2.11.tar.gz >>>>> >>>>> >>>>> My browser can not open this and I could not see a download button on >>>>> this site. >>>>> >>>>> Can you download this? >>>>> >>>>> >>>>>> >>>>>> * If URL specified manually - perhaps there is a typo? >>>>>> * If your network is disconnected - please reconnect and rerun >>>>>> ./configure >>>>>> * Or perhaps you have a firewall blocking the download >>>>>> * You can run with --with-packages-download-dir=/adirectory and >>>>>> ./configure will instruct you what packages to download manually >>>>>> * or you can download the above URL manually, to >>>>>> /yourselectedlocation/zlib-1.2.11.tar.gz >>>>>> and use the configure option: >>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>> >>>>>> Barry >>>>>> >>>>>> >>>>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote: >>>>>> > >>>>>> > I am getting some sort of error in build zlib on Spock at ORNL. >>>>>> > Other libraries are downloaded and I am sure the network is fine. >>>>>> > Any ideas? >>>>>> > Thanks, >>>>>> > Mark >>>>>> > >>>>>> >>>>>> >>>>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mfadams at lbl.gov Wed Jul 7 11:08:07 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 12:08:07 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: Humm, I get this error (I just copied your whole file into here): 12:06 jczhang/fix-kokkos-includes= /gpfs/alpine/csc314/scratch/adams/petsc$ ~/arch-spock-dbg-cray-kokkos.py Traceback (most recent call last): File "/ccs/home/adams/arch-spock-dbg-cray-kokkos.py", line 27, in '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64', TypeError: bad operand type for unary +: 'str' On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini wrote: > Mark > > On Spock, you can use > https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as > a template for your configuration. You need to add libraries as LDFLAGS to > resolve the hsa symbols > > On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: > > Thanks, > > 08:30 jczhang/fix-kokkos-includes= > /gpfs/alpine/csc314/scratch/adams/petsc$ cd > /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 > && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 > -I${ROCM_PATH}/include" > prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" > ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install > Checking for shared library support... > Building shared library libz.so.1.2.11 with cc. > Checking for size_t... Yes. > Checking for off64_t... Yes. > Checking for fseeko... Yes. > Checking for strerror... No. > Checking for unistd.h... Yes. > Checking for stdarg.h... Yes. > Checking whether to use vs[n]printf() or s[n]printf()... using > vs[n]printf(). > Checking for vsnprintf() in stdio.h... No. > WARNING: vsnprintf() not found, falling back to vsprintf(). zlib > can build but will be open to possible buffer-overflow security > vulnerabilities. > Checking for return value of vsprintf()... Yes. > Checking for attribute(visibility) support... Yes. > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o example.o > test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o minigzip.o > test/minigzip.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-D_FILE_OFFSET_BITS=64 -c -o > example64.o test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o > minigzip64.o test/minigzip.c > ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o > inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o > gzwrite.o > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a > cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC > -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o > libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo > inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo > gzlib.lo gzread.lo gzwrite.lo -lc > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. > libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. > libz.a > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_iterate_agents > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_signal_load_scacquire > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: 
error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_unlock > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_signal_destroy > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:292: minigzip] Error 1 > gmake: *** Waiting for unfinished jobs.... > gmake: *** [Makefile:289: example] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:304: minigzip64] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents 
[--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:301: example64] Error 1 > rm -f libz.so libz.so.1 > ln -s libz.so.1.2.11 libz.so > ln -s libz.so.1.2.11 libz.so.1 > 11:03 2 jczhang/fix-kokkos-includes= > /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ > > On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley wrote: > >> It is hard to see the error. I suspect it is something crazy with the >> install. Can you run the build by hand? >> >> cd >> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I${ROCM_PATH}/include" >> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >> >> and see what happens, and what the error code is? 
>> >> Thanks, >> >> Matt >> >> On Wed, Jul 7, 2021 at 8:48 AM Mark Adams wrote: >> >>> Also, this is in jczhang/fix-kokkos-includes >>> >>> On Wed, Jul 7, 2021 at 8:46 AM Mark Adams wrote: >>> >>>> Apparently the same error with >>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>> >>>> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith wrote: >>>> >>>>> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > zlib-1.2.11.tar.gz >>>>> % Total % Received % Xferd Average Speed Time Time >>>>> Time Current >>>>> Dload Upload Total Spent >>>>> Left Speed >>>>> 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- >>>>> --:--:-- 834k >>>>> ~/Src/petsc* (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>> arch-demonstrate-network-parallel-build >>>>> $ tar -zxf zlib-1.2.11.tar.gz >>>>> ~/Src/petsc* (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>> arch-demonstrate-network-parallel-build >>>>> $ ls zlib-1.2.11 >>>>> CMakeLists.txt adler32.c deflate.c gzread.c >>>>> inflate.h os400 watcom zlib.h >>>>> ChangeLog amiga deflate.h gzwrite.c >>>>> inftrees.c qnx win32 zlib.map >>>>> FAQ compress.c doc infback.c >>>>> inftrees.h test zconf.h zlib.pc.cmakein >>>>> INDEX configure examples inffast.c >>>>> make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in >>>>> Makefile contrib gzclose.c inffast.h msdos >>>>> trees.c zconf.h.in zlib2ansi >>>>> Makefile.in crc32.c gzguts.h inffixed.h >>>>> nintendods trees.h zlib.3 zutil.c >>>>> README crc32.h gzlib.c inflate.c old >>>>> uncompr.c zlib.3.pdf zutil.h >>>>> >>>>> >>>>> >>>>> On Jul 6, 2021, at 7:57 PM, Mark Adams wrote: >>>>> >>>>> >>>>> >>>>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith wrote: >>>>> >>>>>> >>>>>> Mark, >>>>>> >>>>>> You can try what the configure error message should be suggesting >>>>>> (it is not clear if that is being printed to your screen or no). >>>>>> >>>>>> ERROR: Unable to download package ZLIB from: >>>>>> http://www.zlib.net/zlib-1.2.11.tar.gz >>>>> >>>>> >>>>> My browser can not open this and I could not see a download button on >>>>> this site. >>>>> >>>>> Can you download this? >>>>> >>>>> >>>>>> >>>>>> * If URL specified manually - perhaps there is a typo? >>>>>> * If your network is disconnected - please reconnect and rerun >>>>>> ./configure >>>>>> * Or perhaps you have a firewall blocking the download >>>>>> * You can run with --with-packages-download-dir=/adirectory and >>>>>> ./configure will instruct you what packages to download manually >>>>>> * or you can download the above URL manually, to >>>>>> /yourselectedlocation/zlib-1.2.11.tar.gz >>>>>> and use the configure option: >>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>> >>>>>> Barry >>>>>> >>>>>> >>>>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote: >>>>>> > >>>>>> > I am getting some sort of error in build zlib on Spock at ORNL. >>>>>> > Other libraries are downloaded and I am sure the network is fine. >>>>>> > Any ideas? >>>>>> > Thanks, >>>>>> > Mark >>>>>> > >>>>>> >>>>>> >>>>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefano.zampini at gmail.com Wed Jul 7 11:13:13 2021 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 7 Jul 2021 18:13:13 +0200 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: There's an extra comma Il Mer 7 Lug 2021, 18:08 Mark Adams ha scritto: > Humm, I get this error (I just copied your whole file into here): > > 12:06 jczhang/fix-kokkos-includes= > /gpfs/alpine/csc314/scratch/adams/petsc$ ~/arch-spock-dbg-cray-kokkos.py > Traceback (most recent call last): > File "/ccs/home/adams/arch-spock-dbg-cray-kokkos.py", line 27, in > > '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64', > TypeError: bad operand type for unary +: 'str' > > On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini > wrote: > >> Mark >> >> On Spock, you can use >> https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as >> a template for your configuration. You need to add libraries as LDFLAGS to >> resolve the hsa symbols >> >> On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: >> >> Thanks, >> >> 08:30 jczhang/fix-kokkos-includes= >> /gpfs/alpine/csc314/scratch/adams/petsc$ cd >> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I${ROCM_PATH}/include" >> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >> Checking for shared library support... >> Building shared library libz.so.1.2.11 with cc. >> Checking for size_t... Yes. >> Checking for off64_t... Yes. >> Checking for fseeko... Yes. >> Checking for strerror... No. >> Checking for unistd.h... Yes. >> Checking for stdarg.h... Yes. >> Checking whether to use vs[n]printf() or s[n]printf()... using >> vs[n]printf(). >> Checking for vsnprintf() in stdio.h... No. >> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >> can build but will be open to possible buffer-overflow security >> vulnerabilities. >> Checking for return value of vsprintf()... Yes. >> Checking for attribute(visibility) support... Yes. >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o example.o test/example.c
>> [... rest of the quoted zlib build log, ld.lld link errors, and earlier
>> quoted replies omitted; the full log appears earlier in the thread ...]
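For reference, the stray comma is in the line quoted in the traceback above,

    '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64',

which splits the option into two list elements and leaves Python applying a unary + to the string 'lib -lhsa-runtime64', exactly the TypeError shown. A minimal sketch of the corrected entry, assuming only that ROCM_PATH is set in the environment (the '/opt/rocm' fallback below is just for illustration), and presumably with a '/' added before lib so that -L points at the ROCM lib directory:

    import os

    # Sketch of the relevant configure_options entry from a script like
    # arch-spock-dbg-cray-kokkos.py: the stray comma is removed and a '/'
    # is inserted before 'lib' so the -L path resolves.
    configure_options = [
        '--LDFLAGS=-L' + os.environ.get('ROCM_PATH', '/opt/rocm')
        + '/lib -lhsa-runtime64',
    ]
    print(configure_options[0])
    # e.g. --LDFLAGS=-L/sw/spock/spack-envs/views/rocm-4.1.0/lib -lhsa-runtime64

This is only an illustration of the fix being described here; the full option list should follow the arch-olcf-spock.py template linked above.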
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mfadams at lbl.gov Wed Jul 7 11:29:39 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 12:29:39 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: Ok, I tried that but now I get this error. On Wed, Jul 7, 2021 at 12:13 PM Stefano Zampini wrote: > There's an extra comma > > Il Mer 7 Lug 2021, 18:08 Mark Adams ha scritto: > >> Humm, I get this error (I just copied your whole file into here): >> >> 12:06 jczhang/fix-kokkos-includes= >> /gpfs/alpine/csc314/scratch/adams/petsc$ ~/arch-spock-dbg-cray-kokkos.py >> Traceback (most recent call last): >> File "/ccs/home/adams/arch-spock-dbg-cray-kokkos.py", line 27, in >> >> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64', >> TypeError: bad operand type for unary +: 'str' >> >> On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini < >> stefano.zampini at gmail.com> wrote: >> >>> Mark >>> >>> On Spock, you can use >>> https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as >>> a template for your configuration. You need to add libraries as LDFLAGS to >>> resolve the hsa symbols >>> >>> On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: >>> >>> Thanks, >>> >>> 08:30 jczhang/fix-kokkos-includes= >>> /gpfs/alpine/csc314/scratch/adams/petsc$ cd >>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I${ROCM_PATH}/include" >>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>> Checking for shared library support... >>> Building shared library libz.so.1.2.11 with cc. >>> Checking for size_t... Yes. >>> Checking for off64_t... Yes. >>> Checking for fseeko... Yes. >>> Checking for strerror... No. >>> Checking for unistd.h... Yes. >>> Checking for stdarg.h... Yes. >>> Checking whether to use vs[n]printf() or s[n]printf()... using >>> vs[n]printf(). >>> Checking for vsnprintf() in stdio.h... No. >>> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >>> can build but will be open to possible buffer-overflow security >>> vulnerabilities. >>> Checking for return value of vsprintf()... Yes. >>> Checking for attribute(visibility) support... Yes. >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o example.o test/example.c
>>> [... rest of the quoted zlib build log, ld.lld link errors, and earlier
>>> quoted replies omitted; the full log appears earlier in the thread ...]
>>>>>>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith wrote:
>>>>>>>
>>>>>>>> Mark,
>>>>>>>>
>>>>>>>> You can try what the configure error message should be suggesting
>>>>>>>> (it is not clear if that is being printed to your screen or no).
>>>>>>>>
>>>>>>>> ERROR: Unable to download package ZLIB from:
>>>>>>>> http://www.zlib.net/zlib-1.2.11.tar.gz
>>>>>>>>
>>>>>>>> * If URL specified manually - perhaps there is a typo?
>>>>>>>> * If your network is disconnected - please reconnect and rerun >>>>>>>> ./configure >>>>>>>> * Or perhaps you have a firewall blocking the download >>>>>>>> * You can run with --with-packages-download-dir=/adirectory and >>>>>>>> ./configure will instruct you what packages to download manually >>>>>>>> * or you can download the above URL manually, to >>>>>>>> /yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>> and use the configure option: >>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>> >>>>>>>> Barry >>>>>>>> >>>>>>>> >>>>>>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote: >>>>>>>> > >>>>>>>> > I am getting some sort of error in build zlib on Spock at ORNL. >>>>>>>> > Other libraries are downloaded and I am sure the network is fine. >>>>>>>> > Any ideas? >>>>>>>> > Thanks, >>>>>>>> > Mark >>>>>>>> > >>>>>>>> >>>>>>>> >>>>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 73890 bytes Desc: not available URL: From knepley at gmail.com Wed Jul 7 11:47:51 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 7 Jul 2021 12:47:51 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: Did you look in /sw/spock/spack-envs/views/rocm-4.1.0lib ? Matt On Wed, Jul 7, 2021 at 12:29 PM Mark Adams wrote: > Ok, I tried that but now I get this error. > > On Wed, Jul 7, 2021 at 12:13 PM Stefano Zampini > wrote: > >> There's an extra comma >> >> Il Mer 7 Lug 2021, 18:08 Mark Adams ha scritto: >> >>> Humm, I get this error (I just copied your whole file into here): >>> >>> 12:06 jczhang/fix-kokkos-includes= >>> /gpfs/alpine/csc314/scratch/adams/petsc$ ~/arch-spock-dbg-cray-kokkos.py >>> Traceback (most recent call last): >>> File "/ccs/home/adams/arch-spock-dbg-cray-kokkos.py", line 27, in >>> >>> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64', >>> TypeError: bad operand type for unary +: 'str' >>> >>> On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini < >>> stefano.zampini at gmail.com> wrote: >>> >>>> Mark >>>> >>>> On Spock, you can use >>>> https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as >>>> a template for your configuration. You need to add libraries as LDFLAGS to >>>> resolve the hsa symbols >>>> >>>> On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: >>>> >>>> Thanks, >>>> >>>> 08:30 jczhang/fix-kokkos-includes= >>>> /gpfs/alpine/csc314/scratch/adams/petsc$ cd >>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I${ROCM_PATH}/include" >>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>> Checking for shared library support... >>>> Building shared library libz.so.1.2.11 with cc. >>>> Checking for size_t... Yes. >>>> Checking for off64_t... Yes. >>>> Checking for fseeko... Yes. >>>> Checking for strerror... No. 
>>>> Checking for unistd.h... Yes. >>>> Checking for stdarg.h... Yes. >>>> Checking whether to use vs[n]printf() or s[n]printf()... using >>>> vs[n]printf(). >>>> Checking for vsnprintf() in stdio.h... No. >>>> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >>>> can build but will be open to possible buffer-overflow security >>>> vulnerabilities. >>>> Checking for return value of vsprintf()... Yes. >>>> Checking for attribute(visibility) support... Yes. >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o >>>> test/example.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> 
-DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o minigzip.o >>>> test/minigzip.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/adler32.o adler32.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/crc32.o crc32.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/deflate.o deflate.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/infback.o infback.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/inffast.o inffast.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/inflate.o inflate.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/inftrees.o inftrees.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/trees.o trees.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/zutil.o zutil.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/compress.o compress.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/uncompr.o uncompr.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/gzclose.o gzclose.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/gzlib.o gzlib.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> 
-I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/gzread.o gzread.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/gzwrite.o gzwrite.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>> example64.o test/example.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>> minigzip64.o test/minigzip.c >>>> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o >>>> inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o >>>> gzwrite.o >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >>>> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC >>>> -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o >>>> libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo >>>> inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo >>>> gzlib.lo gzread.lo gzwrite.lo -lc >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. >>>> libz.a >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
>>>> libz.a >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_allocate >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_load_scacquire >>>> [--no-allow-shlib-undefined] >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_memory_pool_allocate >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_agent_iterate_memory_pools >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_iterate_agents >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_get_info >>>> [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_signal_load_scacquire >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_memory_unlock >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_signal_destroy >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agents_allow_access >>>> [--no-allow-shlib-undefined] >>>> >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_get_info >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agents_allow_access >>>> [--no-allow-shlib-undefined] >>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>> invocation) >>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>> invocation) >>>> gmake: *** [Makefile:292: minigzip] Error 1 >>>> gmake: *** Waiting for unfinished jobs.... 
>>>> gmake: *** [Makefile:289: example] Error 1 >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_allocate >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_load_scacquire >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_get_info >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agents_allow_access >>>> [--no-allow-shlib-undefined] >>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>> invocation) >>>> gmake: *** [Makefile:304: minigzip64] Error 1 >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_allocate >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_load_scacquire >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_get_info >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agents_allow_access >>>> [--no-allow-shlib-undefined] >>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>> invocation) >>>> gmake: *** [Makefile:301: example64] Error 1 >>>> rm -f libz.so libz.so.1 >>>> ln -s libz.so.1.2.11 libz.so >>>> ln -s 
libz.so.1.2.11 libz.so.1 >>>> 11:03 2 jczhang/fix-kokkos-includes= >>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ >>>> >>>> On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley >>>> wrote: >>>> >>>>> It is hard to see the error. I suspect it is something crazy with the >>>>> install. Can you run the build by hand? >>>>> >>>>> cd >>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I${ROCM_PATH}/include" >>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>> >>>>> and see what happens, and what the error code is? >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> On Wed, Jul 7, 2021 at 8:48 AM Mark Adams wrote: >>>>> >>>>>> Also, this is in jczhang/fix-kokkos-includes >>>>>> >>>>>> On Wed, Jul 7, 2021 at 8:46 AM Mark Adams wrote: >>>>>> >>>>>>> Apparently the same error with >>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>> >>>>>>> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith >>>>>>> wrote: >>>>>>> >>>>>>>> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > zlib-1.2.11.tar.gz >>>>>>>> % Total % Received % Xferd Average Speed Time Time >>>>>>>> Time Current >>>>>>>> Dload Upload Total Spent >>>>>>>> Left Speed >>>>>>>> 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- >>>>>>>> --:--:-- 834k >>>>>>>> ~/Src/petsc* >>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>> $ tar -zxf zlib-1.2.11.tar.gz >>>>>>>> ~/Src/petsc* >>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>> $ ls zlib-1.2.11 >>>>>>>> CMakeLists.txt adler32.c deflate.c gzread.c >>>>>>>> inflate.h os400 watcom zlib.h >>>>>>>> ChangeLog amiga deflate.h gzwrite.c >>>>>>>> inftrees.c qnx win32 zlib.map >>>>>>>> FAQ compress.c doc infback.c >>>>>>>> inftrees.h test zconf.h zlib.pc.cmakein >>>>>>>> INDEX configure examples inffast.c >>>>>>>> make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in >>>>>>>> Makefile contrib gzclose.c inffast.h >>>>>>>> msdos trees.c zconf.h.in zlib2ansi >>>>>>>> Makefile.in crc32.c gzguts.h inffixed.h >>>>>>>> nintendods trees.h zlib.3 zutil.c >>>>>>>> README crc32.h gzlib.c inflate.c old >>>>>>>> uncompr.c zlib.3.pdf zutil.h >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Jul 6, 2021, at 7:57 PM, Mark Adams wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith >>>>>>>> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> Mark, >>>>>>>>> >>>>>>>>> You can try what the configure error message should be >>>>>>>>> suggesting (it is not clear if that is being printed to your screen or no). >>>>>>>>> >>>>>>>>> ERROR: Unable to download package ZLIB from: >>>>>>>>> http://www.zlib.net/zlib-1.2.11.tar.gz >>>>>>>> >>>>>>>> >>>>>>>> My browser can not open this and I could not see a download button >>>>>>>> on this site. >>>>>>>> >>>>>>>> Can you download this? >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> * If URL specified manually - perhaps there is a typo? 
>>>>>>>>> * If your network is disconnected - please reconnect and rerun >>>>>>>>> ./configure >>>>>>>>> * Or perhaps you have a firewall blocking the download >>>>>>>>> * You can run with --with-packages-download-dir=/adirectory and >>>>>>>>> ./configure will instruct you what packages to download manually >>>>>>>>> * or you can download the above URL manually, to >>>>>>>>> /yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>> and use the configure option: >>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>> >>>>>>>>> Barry >>>>>>>>> >>>>>>>>> >>>>>>>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote: >>>>>>>>> > >>>>>>>>> > I am getting some sort of error in build zlib on Spock at ORNL. >>>>>>>>> > Other libraries are downloaded and I am sure the network is fine. >>>>>>>>> > Any ideas? >>>>>>>>> > Thanks, >>>>>>>>> > Mark >>>>>>>>> > >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>>>> >>>> >>>> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Jul 7 12:05:49 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 13:05:49 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: Thanks, it was missing the / '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', On Wed, Jul 7, 2021 at 12:48 PM Matthew Knepley wrote: > Did you look in /sw/spock/spack-envs/views/rocm-4.1.0lib ? > > Matt > > On Wed, Jul 7, 2021 at 12:29 PM Mark Adams wrote: > >> Ok, I tried that but now I get this error. >> >> On Wed, Jul 7, 2021 at 12:13 PM Stefano Zampini < >> stefano.zampini at gmail.com> wrote: >> >>> There's an extra comma >>> >>> Il Mer 7 Lug 2021, 18:08 Mark Adams ha scritto: >>> >>>> Humm, I get this error (I just copied your whole file into here): >>>> >>>> 12:06 jczhang/fix-kokkos-includes= >>>> /gpfs/alpine/csc314/scratch/adams/petsc$ ~/arch-spock-dbg-cray-kokkos.py >>>> Traceback (most recent call last): >>>> File "/ccs/home/adams/arch-spock-dbg-cray-kokkos.py", line 27, in >>>> >>>> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64', >>>> TypeError: bad operand type for unary +: 'str' >>>> >>>> On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini < >>>> stefano.zampini at gmail.com> wrote: >>>> >>>>> Mark >>>>> >>>>> On Spock, you can use >>>>> https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as >>>>> a template for your configuration. 
You need to add libraries as LDFLAGS to >>>>> resolve the hsa symbols >>>>> >>>>> On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: >>>>> >>>>> Thanks, >>>>> >>>>> 08:30 jczhang/fix-kokkos-includes= >>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ cd >>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I${ROCM_PATH}/include" >>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>> Checking for shared library support... >>>>> Building shared library libz.so.1.2.11 with cc. >>>>> Checking for size_t... Yes. >>>>> Checking for off64_t... Yes. >>>>> Checking for fseeko... Yes. >>>>> Checking for strerror... No. >>>>> Checking for unistd.h... Yes. >>>>> Checking for stdarg.h... Yes. >>>>> Checking whether to use vs[n]printf() or s[n]printf()... using >>>>> vs[n]printf(). >>>>> Checking for vsnprintf() in stdio.h... No. >>>>> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >>>>> can build but will be open to possible buffer-overflow security >>>>> vulnerabilities. >>>>> Checking for return value of vsprintf()... Yes. >>>>> Checking for attribute(visibility) support... Yes. >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o >>>>> test/example.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> 
-I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o minigzip.o >>>>> test/minigzip.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/adler32.o adler32.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/crc32.o crc32.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/deflate.o deflate.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/infback.o infback.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/inffast.o inffast.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/inflate.o inflate.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/inftrees.o inftrees.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/trees.o trees.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/zutil.o zutil.c >>>>> cc -fPIC 
-fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/compress.o compress.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/uncompr.o uncompr.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/gzclose.o gzclose.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/gzlib.o gzlib.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/gzread.o gzread.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>> -c -o objs/gzwrite.o gzwrite.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>> example64.o test/example.c >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>> minigzip64.o test/minigzip.c >>>>> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o >>>>> inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o >>>>> gzwrite.o >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >>>>> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC >>>>> -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o >>>>> libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo >>>>> inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo >>>>> gzlib.lo gzread.lo gzwrite.lo -lc >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. >>>>> libz.a >>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
>>>>> libz.a >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>> [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>> [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_signal_load_scacquire >>>>> [--no-allow-shlib-undefined] >>>>> ld.lldld.lld: : error: error: >>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>> hsa_amd_memory_pool_allocate >>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>> >>>>> ld.lldld.lld: : error: error: >>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>> hsa_amd_agent_iterate_memory_pools >>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>> >>>>> ld.lldld.lld: : error: error: >>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>> hsa_iterate_agents >>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>> [--no-allow-shlib-undefined] >>>>> >>>>> ld.lldld.lld: : error: error: >>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>> hsa_signal_load_scacquire >>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>> >>>>> ld.lldld.lld: : error: error: >>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>> hsa_amd_memory_unlock >>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>> >>>>> ld.lldld.lld: : error: error: >>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>> hsa_signal_destroy >>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_agents_allow_access >>>>> [--no-allow-shlib-undefined] >>>>> >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>> [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_agents_allow_access >>>>> [--no-allow-shlib-undefined] >>>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>>> invocation) >>>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>>> invocation) >>>>> gmake: *** [Makefile:292: minigzip] Error 1 >>>>> gmake: *** Waiting for unfinished jobs.... 
>>>>> gmake: *** [Makefile:289: example] Error 1 >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>> [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>> [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_signal_load_scacquire >>>>> [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>> [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_agents_allow_access >>>>> [--no-allow-shlib-undefined] >>>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>>> invocation) >>>>> gmake: *** [Makefile:304: minigzip64] Error 1 >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>> [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>> [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_signal_load_scacquire >>>>> [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>> [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>> undefined reference to hsa_amd_agents_allow_access >>>>> [--no-allow-shlib-undefined] >>>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>>> invocation) >>>>> gmake: *** [Makefile:301: example64] Error 1 >>>>> rm -f libz.so 
libz.so.1 >>>>> ln -s libz.so.1.2.11 libz.so >>>>> ln -s libz.so.1.2.11 libz.so.1 >>>>> 11:03 2 jczhang/fix-kokkos-includes= >>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ >>>>> >>>>> On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley >>>>> wrote: >>>>> >>>>>> It is hard to see the error. I suspect it is something crazy with the >>>>>> install. Can you run the build by hand? >>>>>> >>>>>> cd >>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I${ROCM_PATH}/include" >>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>> >>>>>> and see what happens, and what the error code is? >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> On Wed, Jul 7, 2021 at 8:48 AM Mark Adams wrote: >>>>>> >>>>>>> Also, this is in jczhang/fix-kokkos-includes >>>>>>> >>>>>>> On Wed, Jul 7, 2021 at 8:46 AM Mark Adams wrote: >>>>>>> >>>>>>>> Apparently the same error with >>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>> >>>>>>>> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith >>>>>>>> wrote: >>>>>>>> >>>>>>>>> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > zlib-1.2.11.tar.gz >>>>>>>>> % Total % Received % Xferd Average Speed Time Time >>>>>>>>> Time Current >>>>>>>>> Dload Upload Total Spent >>>>>>>>> Left Speed >>>>>>>>> 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- >>>>>>>>> --:--:-- 834k >>>>>>>>> ~/Src/petsc* >>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>> $ tar -zxf zlib-1.2.11.tar.gz >>>>>>>>> ~/Src/petsc* >>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>> $ ls zlib-1.2.11 >>>>>>>>> CMakeLists.txt adler32.c deflate.c gzread.c >>>>>>>>> inflate.h os400 watcom zlib.h >>>>>>>>> ChangeLog amiga deflate.h gzwrite.c >>>>>>>>> inftrees.c qnx win32 zlib.map >>>>>>>>> FAQ compress.c doc infback.c >>>>>>>>> inftrees.h test zconf.h zlib.pc.cmakein >>>>>>>>> INDEX configure examples inffast.c >>>>>>>>> make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in >>>>>>>>> Makefile contrib gzclose.c inffast.h >>>>>>>>> msdos trees.c zconf.h.in zlib2ansi >>>>>>>>> Makefile.in crc32.c gzguts.h inffixed.h >>>>>>>>> nintendods trees.h zlib.3 zutil.c >>>>>>>>> README crc32.h gzlib.c inflate.c >>>>>>>>> old uncompr.c zlib.3.pdf zutil.h >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Jul 6, 2021, at 7:57 PM, Mark Adams wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Mark, >>>>>>>>>> >>>>>>>>>> You can try what the configure error message should be >>>>>>>>>> suggesting (it is not clear if that is being printed to your screen or no). >>>>>>>>>> >>>>>>>>>> ERROR: Unable to download package ZLIB from: >>>>>>>>>> http://www.zlib.net/zlib-1.2.11.tar.gz >>>>>>>>> >>>>>>>>> >>>>>>>>> My browser can not open this and I could not see a download button >>>>>>>>> on this site. >>>>>>>>> >>>>>>>>> Can you download this? >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> * If URL specified manually - perhaps there is a typo? 
>>>>>>>>>> * If your network is disconnected - please reconnect and rerun >>>>>>>>>> ./configure >>>>>>>>>> * Or perhaps you have a firewall blocking the download >>>>>>>>>> * You can run with --with-packages-download-dir=/adirectory and >>>>>>>>>> ./configure will instruct you what packages to download manually >>>>>>>>>> * or you can download the above URL manually, to >>>>>>>>>> /yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>> and use the configure option: >>>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>> >>>>>>>>>> Barry >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote: >>>>>>>>>> > >>>>>>>>>> > I am getting some sort of error in build zlib on Spock at ORNL. >>>>>>>>>> > Other libraries are downloaded and I am sure the network is >>>>>>>>>> fine. >>>>>>>>>> > Any ideas? >>>>>>>>>> > Thanks, >>>>>>>>>> > Mark >>>>>>>>>> > >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>> >>>>>> >>>>> >>>>> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Jul 7 12:18:56 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 13:18:56 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: Well, still getting these hsa errors: 13:07 jczhang/fix-kokkos-includes= /gpfs/alpine/csc314/scratch/adams/petsc$ !136 cd /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 -I${ROCM_PATH}/include" prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install Checking for shared library support... Building shared library libz.so.1.2.11 with cc. Checking for size_t... Yes. Checking for off64_t... Yes. Checking for fseeko... Yes. Checking for strerror... No. Checking for unistd.h... Yes. Checking for stdarg.h... Yes. Checking whether to use vs[n]printf() or s[n]printf()... using vs[n]printf(). Checking for vsnprintf() in stdio.h... No. WARNING: vsnprintf() not found, falling back to vsprintf(). zlib can build but will be open to possible buffer-overflow security vulnerabilities. Checking for return value of vsprintf()... Yes. Checking for attribute(visibility) support... Yes. cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o example.o test/example.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o minigzip.o test/minigzip.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/adler32.o adler32.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/crc32.o crc32.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/deflate.o deflate.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/infback.o infback.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inffast.o inffast.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inflate.o inflate.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inftrees.o inftrees.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/trees.o trees.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/zutil.o zutil.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/compress.o compress.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/uncompr.o uncompr.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzclose.o gzclose.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzlib.o gzlib.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzread.o gzread.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzwrite.o gzwrite.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-D_FILE_OFFSET_BITS=64 -c -o example64.o test/example.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o minigzip64.o test/minigzip.c ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o gzwrite.o cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo gzlib.lo gzread.lo gzwrite.lo -lc cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. libz.a cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. libz.a ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] ld.lldld.lld: : 
error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] clang-11: error: linker command failed with exit code 1 (use -v to see invocation) clang-11: error: linker command failed with exit code 1 (use -v to see invocation) gmake: *** [Makefile:289: example] Error 1 gmake: *** Waiting for unfinished jobs.... gmake: *** [Makefile:292: minigzip] Error 1 ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] clang-11: error: linker command failed with exit code 1 (use -v to see invocation) gmake: *** [Makefile:304: minigzip64] Error 1 ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock 
[--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] clang-11: error: linker command failed with exit code 1 (use -v to see invocation) gmake: *** [Makefile:301: example64] Error 1 rm -f libz.so libz.so.1 ln -s libz.so.1.2.11 libz.so ln -s libz.so.1.2.11 libz.so.1 On Wed, Jul 7, 2021 at 1:05 PM Mark Adams wrote: > Thanks, it was missing the / > > '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', > > On Wed, Jul 7, 2021 at 12:48 PM Matthew Knepley wrote: > >> Did you look in /sw/spock/spack-envs/views/rocm-4.1.0lib ? >> >> Matt >> >> On Wed, Jul 7, 2021 at 12:29 PM Mark Adams wrote: >> >>> Ok, I tried that but now I get this error. >>> >>> On Wed, Jul 7, 2021 at 12:13 PM Stefano Zampini < >>> stefano.zampini at gmail.com> wrote: >>> >>>> There's an extra comma >>>> >>>> Il Mer 7 Lug 2021, 18:08 Mark Adams ha scritto: >>>> >>>>> Humm, I get this error (I just copied your whole file into here): >>>>> >>>>> 12:06 jczhang/fix-kokkos-includes= >>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ ~/arch-spock-dbg-cray-kokkos.py >>>>> Traceback (most recent call last): >>>>> File "/ccs/home/adams/arch-spock-dbg-cray-kokkos.py", line 27, in >>>>> >>>>> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64', >>>>> TypeError: bad operand type for unary +: 'str' >>>>> >>>>> On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini < >>>>> stefano.zampini at gmail.com> wrote: >>>>> >>>>>> Mark >>>>>> >>>>>> On Spock, you can use >>>>>> https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as >>>>>> a template for your configuration. You need to add libraries as LDFLAGS to >>>>>> resolve the hsa symbols >>>>>> >>>>>> On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: >>>>>> >>>>>> Thanks, >>>>>> >>>>>> 08:30 jczhang/fix-kokkos-includes= >>>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ cd >>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I${ROCM_PATH}/include" >>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>> Checking for shared library support... >>>>>> Building shared library libz.so.1.2.11 with cc. >>>>>> Checking for size_t... Yes. >>>>>> Checking for off64_t... Yes. >>>>>> Checking for fseeko... Yes. >>>>>> Checking for strerror... No. >>>>>> Checking for unistd.h... Yes. >>>>>> Checking for stdarg.h... Yes. >>>>>> Checking whether to use vs[n]printf() or s[n]printf()... using >>>>>> vs[n]printf(). >>>>>> Checking for vsnprintf() in stdio.h... No. >>>>>> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >>>>>> can build but will be open to possible buffer-overflow security >>>>>> vulnerabilities. 
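
For context, the option being discussed above is one entry in the Python list of configure options inside /ccs/home/adams/arch-spock-dbg-cray-kokkos.py. The earlier Traceback ("bad operand type for unary +: 'str'") comes from the stray comma: it ends the list entry at os.environ['ROCM_PATH'] and leaves +'lib -lhsa-runtime64' as a unary plus applied to a string. The later fix only added the missing "/" before lib. A minimal sketch of what the corrected entry would presumably look like, assuming the usual configure_options list used by the PETSc example scripts (the surrounding entries are placeholders, not taken from the actual file):

    import os

    configure_options = [
        # ... compilers, --with-hip, and the other options from the
        #     arch-olcf-spock.py template would go here (omitted) ...
        # one concatenated string: no comma before the '+', and a '/' before lib
        '--LDFLAGS=-L' + os.environ['ROCM_PATH'] + '/lib -lhsa-runtime64',
    ]

Note that these LDFLAGS are only seen by PETSc's configure. The hand-run zlib build above sets only CC and CFLAGS, so its link steps still go through the bare Cray cc wrapper, which pulls in libmpi_gtl_hsa.so (the library ld.lld is complaining about), and with lld's --no-allow-shlib-undefined the link of even zlib's tiny test programs fails unless something like LDFLAGS="-L${ROCM_PATH}/lib -lhsa-runtime64" is presumably passed to that build as well.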
>>>>>> Checking for return value of vsprintf()... Yes. >>>>>> Checking for attribute(visibility) support... Yes. >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o >>>>>> test/example.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c 
>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o minigzip.o >>>>>> test/minigzip.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/adler32.o adler32.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/crc32.o crc32.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/deflate.o deflate.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/infback.o infback.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/inffast.o inffast.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/inflate.o inflate.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/inftrees.o inftrees.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/trees.o trees.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/zutil.o zutil.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/compress.o compress.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/uncompr.o uncompr.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/gzclose.o gzclose.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/gzlib.o gzlib.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o 
objs/gzread.o gzread.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>> -c -o objs/gzwrite.o gzwrite.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>>> example64.o test/example.c >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>>> minigzip64.o test/minigzip.c >>>>>> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o >>>>>> inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o >>>>>> gzread.o gzwrite.o >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >>>>>> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC >>>>>> -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o >>>>>> libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo >>>>>> inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo >>>>>> gzlib.lo gzread.lo gzwrite.lo -lc >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. >>>>>> libz.a >>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
>>>>>> libz.a >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>>> [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>>> [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_signal_load_scacquire >>>>>> [--no-allow-shlib-undefined] >>>>>> ld.lldld.lld: : error: error: >>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>> hsa_amd_memory_pool_allocate >>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>> >>>>>> ld.lldld.lld: : error: error: >>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>> hsa_amd_agent_iterate_memory_pools >>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>> >>>>>> ld.lldld.lld: : error: error: >>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>> hsa_iterate_agents >>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>> [--no-allow-shlib-undefined] >>>>>> >>>>>> ld.lldld.lld: : error: error: >>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>> hsa_signal_load_scacquire >>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>> >>>>>> ld.lldld.lld: : error: error: >>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>> hsa_amd_memory_unlock >>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>> >>>>>> ld.lldld.lld: : error: error: >>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>> hsa_signal_destroy >>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>> [--no-allow-shlib-undefined] >>>>>> >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>> [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>> [--no-allow-shlib-undefined] >>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>> see invocation) >>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>> see invocation) >>>>>> gmake: *** [Makefile:292: minigzip] Error 1 
>>>>>> gmake: *** Waiting for unfinished jobs.... >>>>>> gmake: *** [Makefile:289: example] Error 1 >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>>> [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>>> [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_signal_load_scacquire >>>>>> [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>> [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>> [--no-allow-shlib-undefined] >>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>> see invocation) >>>>>> gmake: *** [Makefile:304: minigzip64] Error 1 >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>>> [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>>> [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_signal_load_scacquire >>>>>> [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>> [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>> [--no-allow-shlib-undefined] >>>>>> clang-11: error: linker command failed with exit code 1 (use -v 
to >>>>>> see invocation) >>>>>> gmake: *** [Makefile:301: example64] Error 1 >>>>>> rm -f libz.so libz.so.1 >>>>>> ln -s libz.so.1.2.11 libz.so >>>>>> ln -s libz.so.1.2.11 libz.so.1 >>>>>> 11:03 2 jczhang/fix-kokkos-includes= >>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ >>>>>> >>>>>> On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley >>>>>> wrote: >>>>>> >>>>>>> It is hard to see the error. I suspect it is something crazy with >>>>>>> the install. Can you run the build by hand? >>>>>>> >>>>>>> cd >>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I${ROCM_PATH}/include" >>>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>>> >>>>>>> and see what happens, and what the error code is? >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> On Wed, Jul 7, 2021 at 8:48 AM Mark Adams wrote: >>>>>>> >>>>>>>> Also, this is in jczhang/fix-kokkos-includes >>>>>>>> >>>>>>>> On Wed, Jul 7, 2021 at 8:46 AM Mark Adams wrote: >>>>>>>> >>>>>>>>> Apparently the same error with >>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>> >>>>>>>>> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > >>>>>>>>>> zlib-1.2.11.tar.gz >>>>>>>>>> % Total % Received % Xferd Average Speed Time Time >>>>>>>>>> Time Current >>>>>>>>>> Dload Upload Total Spent >>>>>>>>>> Left Speed >>>>>>>>>> 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- >>>>>>>>>> --:--:-- 834k >>>>>>>>>> ~/Src/petsc* >>>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>>> $ tar -zxf zlib-1.2.11.tar.gz >>>>>>>>>> ~/Src/petsc* >>>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>>> $ ls zlib-1.2.11 >>>>>>>>>> CMakeLists.txt adler32.c deflate.c gzread.c >>>>>>>>>> inflate.h os400 watcom zlib.h >>>>>>>>>> ChangeLog amiga deflate.h gzwrite.c >>>>>>>>>> inftrees.c qnx win32 zlib.map >>>>>>>>>> FAQ compress.c doc infback.c >>>>>>>>>> inftrees.h test zconf.h zlib.pc.cmakein >>>>>>>>>> INDEX configure examples inffast.c >>>>>>>>>> make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in >>>>>>>>>> Makefile contrib gzclose.c inffast.h >>>>>>>>>> msdos trees.c zconf.h.in zlib2ansi >>>>>>>>>> Makefile.in crc32.c gzguts.h inffixed.h >>>>>>>>>> nintendods trees.h zlib.3 zutil.c >>>>>>>>>> README crc32.h gzlib.c inflate.c >>>>>>>>>> old uncompr.c zlib.3.pdf zutil.h >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Jul 6, 2021, at 7:57 PM, Mark Adams wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Mark, >>>>>>>>>>> >>>>>>>>>>> You can try what the configure error message should be >>>>>>>>>>> suggesting (it is not clear if that is being printed to your screen or no). >>>>>>>>>>> >>>>>>>>>>> ERROR: Unable to download package ZLIB from: >>>>>>>>>>> http://www.zlib.net/zlib-1.2.11.tar.gz >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> My browser can not open this and I could not see a download >>>>>>>>>> button on this site. >>>>>>>>>> >>>>>>>>>> Can you download this? 
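For reference, the manual-download route in the options quoted just below comes down to fetching the tarball once by hand (for example with the curl command Barry shows above) and then pointing the configure script at the local copy. A minimal sketch in the arch-*.py style used elsewhere in this thread; the tarball location is the thread's own placeholder, not a real path on this system:

    #!/usr/bin/env python
    # Sketch only: assumes it is run from the top of a PETSc source tree and that
    # the tarball was fetched beforehand, e.g.
    #   curl http://www.zlib.net/zlib-1.2.11.tar.gz > /yourselectedlocation/zlib-1.2.11.tar.gz
    if __name__ == '__main__':
      import sys, os
      sys.path.insert(0, os.path.abspath('config'))
      import configure
      configure_options = [
        # point configure at the local tarball so it does not try to download zlib itself
        '--download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz',
        # ... the rest of the usual options for this arch ...
      ]
      configure.petsc_configure(configure_options)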
>>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> * If URL specified manually - perhaps there is a typo? >>>>>>>>>>> * If your network is disconnected - please reconnect and rerun >>>>>>>>>>> ./configure >>>>>>>>>>> * Or perhaps you have a firewall blocking the download >>>>>>>>>>> * You can run with --with-packages-download-dir=/adirectory and >>>>>>>>>>> ./configure will instruct you what packages to download manually >>>>>>>>>>> * or you can download the above URL manually, to >>>>>>>>>>> /yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>> and use the configure option: >>>>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>> >>>>>>>>>>> Barry >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams >>>>>>>>>>> wrote: >>>>>>>>>>> > >>>>>>>>>>> > I am getting some sort of error in build zlib on Spock at ORNL. >>>>>>>>>>> > Other libraries are downloaded and I am sure the network is >>>>>>>>>>> fine. >>>>>>>>>>> > Any ideas? >>>>>>>>>>> > Thanks, >>>>>>>>>>> > Mark >>>>>>>>>>> > >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>>> >>>>>>> >>>>>> >>>>>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1445691 bytes Desc: not available URL: From bsmith at petsc.dev Wed Jul 7 12:26:26 2021 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 7 Jul 2021 12:26:26 -0500 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: You will need to pass the -L arguments appropriately to zlib's ./configure so it can link its shared library appropriately. That is, the zlib configure requires the value obtained with L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', > On Jul 7, 2021, at 12:18 PM, Mark Adams wrote: > > Well, still getting these hsa errors: > > 13:07 jczhang/fix-kokkos-includes= /gpfs/alpine/csc314/scratch/adams/petsc$ !136 > cd /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 -I${ROCM_PATH}/include" prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install > Checking for shared library support... > Building shared library libz.so.1.2.11 with cc. > Checking for size_t... Yes. > Checking for off64_t... Yes. > Checking for fseeko... Yes. > Checking for strerror... No. > Checking for unistd.h... Yes. > Checking for stdarg.h... Yes. > Checking whether to use vs[n]printf() or s[n]printf()... using vs[n]printf(). > Checking for vsnprintf() in stdio.h... No. > WARNING: vsnprintf() not found, falling back to vsprintf(). zlib > can build but will be open to possible buffer-overflow security > vulnerabilities. > Checking for return value of vsprintf()... Yes. 
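As an aside on the configure-script fragment quoted above: the original entry had a stray comma after os.environ['ROCM_PATH'] (which splits the string in two and produces the "bad operand type for unary +" TypeError quoted further down in the thread) as well as a missing '/' before lib. A corrected form, as a sketch only, would be:

    # Sketch: corrected LDFLAGS entry for the arch-spock-*.py configure script.
    # Assumes ROCM_PATH is set in the environment (the Spock ROCm module does this)
    # and that libhsa-runtime64.so lives under ${ROCM_PATH}/lib.
    import os

    configure_options = [
      '--LDFLAGS=-L' + os.environ['ROCM_PATH'] + '/lib -lhsa-runtime64',
      # ... remaining options unchanged ...
    ]

Note this only fixes the Python syntax. As pointed out above, the same -L/-l flags still have to reach zlib's own ./configure and the link lines of its test programs for the hsa_* symbols dragged in by libmpi_gtl_hsa.so to resolve; when rerunning that build by hand, the likely equivalent is passing them via LDFLAGS in the environment, though whether zlib's configure propagates them to every link line should be verified.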
> Checking for attribute(visibility) support... Yes. > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o minigzip.o test/minigzip.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-D_FILE_OFFSET_BITS=64 -c -o example64.o test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o minigzip64.o test/minigzip.c > ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o gzwrite.o > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a > cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo gzlib.lo gzread.lo gzwrite.lo -lc > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. libz.a > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info 
[--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] > > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see invocation) > clang-11: error: linker command failed with exit code 1 (use -v to see invocation) > gmake: *** [Makefile:289: example] Error 1 > gmake: *** Waiting for unfinished jobs.... > gmake: *** [Makefile:292: minigzip] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see invocation) > gmake: *** [Makefile:304: minigzip64] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] > ld.lld: 
error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see invocation) > gmake: *** [Makefile:301: example64] Error 1 > rm -f libz.so libz.so.1 > ln -s libz.so.1.2.11 libz.so > ln -s libz.so.1.2.11 libz.so.1 > > On Wed, Jul 7, 2021 at 1:05 PM Mark Adams > wrote: > Thanks, it was missing the / > > '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', > > On Wed, Jul 7, 2021 at 12:48 PM Matthew Knepley > wrote: > Did you look in /sw/spock/spack-envs/views/rocm-4.1.0lib ? > > Matt > > On Wed, Jul 7, 2021 at 12:29 PM Mark Adams > wrote: > Ok, I tried that but now I get this error. > > On Wed, Jul 7, 2021 at 12:13 PM Stefano Zampini > wrote: > There's an extra comma > > Il Mer 7 Lug 2021, 18:08 Mark Adams > ha scritto: > Humm, I get this error (I just copied your whole file into here): > > 12:06 jczhang/fix-kokkos-includes= /gpfs/alpine/csc314/scratch/adams/petsc$ ~/arch-spock-dbg-cray-kokkos.py > Traceback (most recent call last): > File "/ccs/home/adams/arch-spock-dbg-cray-kokkos.py", line 27, in > '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64', > TypeError: bad operand type for unary +: 'str' > > On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini > wrote: > Mark > > On Spock, you can use https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as a template for your configuration. You need to add libraries as LDFLAGS to resolve the hsa symbols > >> On Jul 7, 2021, at 5:04 PM, Mark Adams > wrote: >> >> Thanks, >> >> 08:30 jczhang/fix-kokkos-includes= /gpfs/alpine/csc314/scratch/adams/petsc$ cd /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 -I${ROCM_PATH}/include" prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >> Checking for shared library support... >> Building shared library libz.so.1.2.11 with cc. >> Checking for size_t... Yes. >> Checking for off64_t... Yes. >> Checking for fseeko... Yes. >> Checking for strerror... No. >> Checking for unistd.h... Yes. >> Checking for stdarg.h... Yes. >> Checking whether to use vs[n]printf() or s[n]printf()... using vs[n]printf(). >> Checking for vsnprintf() in stdio.h... No. >> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >> can build but will be open to possible buffer-overflow security >> vulnerabilities. >> Checking for return value of vsprintf()... Yes. >> Checking for attribute(visibility) support... Yes. 
>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o test/example.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o minigzip.o test/minigzip.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/adler32.o adler32.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/crc32.o crc32.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/deflate.o deflate.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/infback.o infback.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inffast.o inffast.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inflate.o inflate.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inftrees.o inftrees.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/trees.o trees.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/zutil.o zutil.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/compress.o compress.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/uncompr.o uncompr.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzclose.o gzclose.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzlib.o gzlib.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzread.o gzread.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzwrite.o gzwrite.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-D_FILE_OFFSET_BITS=64 -c -o example64.o test/example.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o minigzip64.o test/minigzip.c >> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o gzwrite.o >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo gzlib.lo gzread.lo gzwrite.lo -lc >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. libz.a >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. libz.a >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] >> ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined 
reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] >> >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] >> clang-11: error: linker command failed with exit code 1 (use -v to see invocation) >> clang-11: error: linker command failed with exit code 1 (use -v to see invocation) >> gmake: *** [Makefile:292: minigzip] Error 1 >> gmake: *** Waiting for unfinished jobs.... >> gmake: *** [Makefile:289: example] Error 1 >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] >> clang-11: error: linker command failed with exit code 1 (use -v to see invocation) >> gmake: *** [Makefile:304: minigzip64] Error 1 >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to 
hsa_signal_load_scacquire [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] >> clang-11: error: linker command failed with exit code 1 (use -v to see invocation) >> gmake: *** [Makefile:301: example64] Error 1 >> rm -f libz.so libz.so.1 >> ln -s libz.so.1.2.11 libz.so >> ln -s libz.so.1.2.11 libz.so.1 >> 11:03 2 jczhang/fix-kokkos-includes= /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ >> >> On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley > wrote: >> It is hard to see the error. I suspect it is something crazy with the install. Can you run the build by hand? >> >> cd /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 -I${ROCM_PATH}/include" prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >> >> and see what happens, and what the error code is? >> >> Thanks, >> >> Matt >> >> On Wed, Jul 7, 2021 at 8:48 AM Mark Adams > wrote: >> Also, this is in jczhang/fix-kokkos-includes >> >> On Wed, Jul 7, 2021 at 8:46 AM Mark Adams > wrote: >> Apparently the same error with --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >> >> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith > wrote: >> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > zlib-1.2.11.tar.gz >> % Total % Received % Xferd Average Speed Time Time Time Current >> Dload Upload Total Spent Left Speed >> 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- --:--:-- 834k >> ~/Src/petsc (barry/2021-07-03/demonstrate-network-parallel-build=) arch-demonstrate-network-parallel-build >> $ tar -zxf zlib-1.2.11.tar.gz >> ~/Src/petsc (barry/2021-07-03/demonstrate-network-parallel-build=) arch-demonstrate-network-parallel-build >> $ ls zlib-1.2.11 >> CMakeLists.txt adler32.c deflate.c gzread.c inflate.h os400 watcom zlib.h >> ChangeLog amiga deflate.h gzwrite.c inftrees.c qnx win32 zlib.map >> FAQ compress.c doc infback.c inftrees.h test zconf.h zlib.pc.cmakein >> INDEX configure examples inffast.c make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in >> Makefile contrib gzclose.c inffast.h msdos trees.c zconf.h.in zlib2ansi >> Makefile.in crc32.c gzguts.h inffixed.h nintendods trees.h zlib.3 zutil.c >> README crc32.h gzlib.c inflate.c old uncompr.c zlib.3.pdf zutil.h >> >> >> >>> On Jul 6, 2021, at 7:57 PM, Mark Adams > wrote: >>> >>> >>> >>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith > wrote: >>> >>> Mark, >>> >>> You can try what the configure error message should be suggesting (it is not clear if that is being printed to your screen or no). 
>>> >>> ERROR: Unable to download package ZLIB from: http://www.zlib.net/zlib-1.2.11.tar.gz >>> >>> My browser can not open this and I could not see a download button on this site. >>> >>> Can you download this? >>> >>> >>> * If URL specified manually - perhaps there is a typo? >>> * If your network is disconnected - please reconnect and rerun ./configure >>> * Or perhaps you have a firewall blocking the download >>> * You can run with --with-packages-download-dir=/adirectory and ./configure will instruct you what packages to download manually >>> * or you can download the above URL manually, to /yourselectedlocation/zlib-1.2.11.tar.gz >>> and use the configure option: >>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>> >>> Barry >>> >>> >>> > On Jul 6, 2021, at 4:29 PM, Mark Adams > wrote: >>> > >>> > I am getting some sort of error in build zlib on Spock at ORNL. >>> > Other libraries are downloaded and I am sure the network is fine. >>> > Any ideas? >>> > Thanks, >>> > Mark >>> > >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Jul 7 12:40:18 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 13:40:18 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: On Wed, Jul 7, 2021 at 1:26 PM Barry Smith wrote: > > You will need to pass the -L arguments appropriately to zlib's > ./configure so it can link its shared library appropriately. That is, the > zlib configure requires the value obtained with > L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', > It's not clear to me how to do that. I added the -L to my configure script. It is not clear to me how to modify Matt's command. > On Jul 7, 2021, at 12:18 PM, Mark Adams wrote: > > Well, still getting these hsa errors: > > 13:07 jczhang/fix-kokkos-includes= > /gpfs/alpine/csc314/scratch/adams/petsc$ !136 > cd > /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 > && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 > -I${ROCM_PATH}/include" > prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" > ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install > Checking for shared library support... > Building shared library libz.so.1.2.11 with cc. > Checking for size_t... Yes. > Checking for off64_t... Yes. > Checking for fseeko... Yes. > Checking for strerror... No. > Checking for unistd.h... Yes. > Checking for stdarg.h... Yes. > Checking whether to use vs[n]printf() or s[n]printf()... using > vs[n]printf(). > Checking for vsnprintf() in stdio.h... No. > WARNING: vsnprintf() not found, falling back to vsprintf(). zlib > can build but will be open to possible buffer-overflow security > vulnerabilities. > Checking for return value of vsprintf()... Yes. > Checking for attribute(visibility) support... Yes. 
> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o > test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o minigzip.o > test/minigzip.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-D_FILE_OFFSET_BITS=64 -c -o > example64.o test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o > minigzip64.o test/minigzip.c > ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o > inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o > gzwrite.o > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a > cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC > -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o > libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo > inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo > gzlib.lo gzread.lo gzwrite.lo -lc > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. > libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. > libz.a > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_iterate_agents > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_signal_load_scacquire > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_unlock > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_signal_destroy > 
[--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_lock > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_pool_free > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:289: example] Error 1 > gmake: *** Waiting for unfinished jobs.... > gmake: *** [Makefile:292: minigzip] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:304: minigzip64] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined 
reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:301: example64] Error 1 > rm -f libz.so libz.so.1 > ln -s libz.so.1.2.11 libz.so > ln -s libz.so.1.2.11 libz.so.1 > > On Wed, Jul 7, 2021 at 1:05 PM Mark Adams wrote: > >> Thanks, it was missing the / >> >> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', >> >> On Wed, Jul 7, 2021 at 12:48 PM Matthew Knepley >> wrote: >> >>> Did you look in /sw/spock/spack-envs/views/rocm-4.1.0lib ? >>> >>> Matt >>> >>> On Wed, Jul 7, 2021 at 12:29 PM Mark Adams wrote: >>> >>>> Ok, I tried that but now I get this error. >>>> >>>> On Wed, Jul 7, 2021 at 12:13 PM Stefano Zampini < >>>> stefano.zampini at gmail.com> wrote: >>>> >>>>> There's an extra comma >>>>> >>>>> Il Mer 7 Lug 2021, 18:08 Mark Adams ha scritto: >>>>> >>>>>> Humm, I get this error (I just copied your whole file into here): >>>>>> >>>>>> 12:06 jczhang/fix-kokkos-includes= >>>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ ~/arch-spock-dbg-cray-kokkos.py >>>>>> Traceback (most recent call last): >>>>>> File "/ccs/home/adams/arch-spock-dbg-cray-kokkos.py", line 27, in >>>>>> >>>>>> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64', >>>>>> TypeError: bad operand type for unary +: 'str' >>>>>> >>>>>> On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini < >>>>>> stefano.zampini at gmail.com> wrote: >>>>>> >>>>>>> Mark >>>>>>> >>>>>>> On Spock, you can use >>>>>>> https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as >>>>>>> a template for your configuration. You need to add libraries as LDFLAGS to >>>>>>> resolve the hsa symbols >>>>>>> >>>>>>> On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> 08:30 jczhang/fix-kokkos-includes= >>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ cd >>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I${ROCM_PATH}/include" >>>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>>> Checking for shared library support... >>>>>>> Building shared library libz.so.1.2.11 with cc. >>>>>>> Checking for size_t... Yes. >>>>>>> Checking for off64_t... Yes. >>>>>>> Checking for fseeko... Yes. 
>>>>>>> Checking for strerror... No. >>>>>>> Checking for unistd.h... Yes. >>>>>>> Checking for stdarg.h... Yes. >>>>>>> Checking whether to use vs[n]printf() or s[n]printf()... using >>>>>>> vs[n]printf(). >>>>>>> Checking for vsnprintf() in stdio.h... No. >>>>>>> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >>>>>>> can build but will be open to possible buffer-overflow security >>>>>>> vulnerabilities. >>>>>>> Checking for return value of vsprintf()... Yes. >>>>>>> Checking for attribute(visibility) support... Yes. >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o >>>>>>> test/example.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR 
-DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o minigzip.o >>>>>>> test/minigzip.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/adler32.o adler32.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/crc32.o crc32.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/deflate.o deflate.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/infback.o infback.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/inffast.o inffast.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/inflate.o inflate.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/inftrees.o inftrees.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/trees.o trees.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/zutil.o zutil.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/compress.o compress.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/uncompr.o uncompr.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 
-DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/gzclose.o gzclose.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/gzlib.o gzlib.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/gzread.o gzread.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>> -c -o objs/gzwrite.o gzwrite.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>>>> example64.o test/example.c >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>>>> minigzip64.o test/minigzip.c >>>>>>> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o >>>>>>> inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o >>>>>>> gzread.o gzwrite.o >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >>>>>>> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC >>>>>>> -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o >>>>>>> libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo >>>>>>> inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo >>>>>>> gzlib.lo gzread.lo gzwrite.lo -lc >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. >>>>>>> libz.a >>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
>>>>>>> libz.a >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>>>> [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>>>> [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_signal_load_scacquire >>>>>>> [--no-allow-shlib-undefined] >>>>>>> ld.lldld.lld: : error: error: >>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>> hsa_amd_memory_pool_allocate >>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>> >>>>>>> ld.lldld.lld: : error: error: >>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>> hsa_amd_agent_iterate_memory_pools >>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>> >>>>>>> ld.lldld.lld: : error: error: >>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>> hsa_iterate_agents >>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>> [--no-allow-shlib-undefined] >>>>>>> >>>>>>> ld.lldld.lld: : error: error: >>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>> hsa_signal_load_scacquire >>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>> >>>>>>> ld.lldld.lld: : error: error: >>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>> hsa_amd_memory_unlock >>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>> >>>>>>> ld.lldld.lld: : error: error: >>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>> hsa_signal_destroy >>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>> [--no-allow-shlib-undefined] >>>>>>> >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>> [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>> [--no-allow-shlib-undefined] >>>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>>> see invocation) >>>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>>> see 
invocation) >>>>>>> gmake: *** [Makefile:292: minigzip] Error 1 >>>>>>> gmake: *** Waiting for unfinished jobs.... >>>>>>> gmake: *** [Makefile:289: example] Error 1 >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>>>> [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>>>> [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_signal_load_scacquire >>>>>>> [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>> [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>> [--no-allow-shlib-undefined] >>>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>>> see invocation) >>>>>>> gmake: *** [Makefile:304: minigzip64] Error 1 >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>>>> [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>>>> [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_signal_load_scacquire >>>>>>> [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>> [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>> undefined reference to 
hsa_amd_agents_allow_access >>>>>>> [--no-allow-shlib-undefined] >>>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>>> see invocation) >>>>>>> gmake: *** [Makefile:301: example64] Error 1 >>>>>>> rm -f libz.so libz.so.1 >>>>>>> ln -s libz.so.1.2.11 libz.so >>>>>>> ln -s libz.so.1.2.11 libz.so.1 >>>>>>> 11:03 2 jczhang/fix-kokkos-includes= >>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ >>>>>>> >>>>>>> On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley >>>>>>> wrote: >>>>>>> >>>>>>>> It is hard to see the error. I suspect it is something crazy with >>>>>>>> the install. Can you run the build by hand? >>>>>>>> >>>>>>>> cd >>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I${ROCM_PATH}/include" >>>>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>>>> >>>>>>>> and see what happens, and what the error code is? >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> Matt >>>>>>>> >>>>>>>> On Wed, Jul 7, 2021 at 8:48 AM Mark Adams wrote: >>>>>>>> >>>>>>>>> Also, this is in jczhang/fix-kokkos-includes >>>>>>>>> >>>>>>>>> On Wed, Jul 7, 2021 at 8:46 AM Mark Adams wrote: >>>>>>>>> >>>>>>>>>> Apparently the same error with >>>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>> >>>>>>>>>> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > >>>>>>>>>>> zlib-1.2.11.tar.gz >>>>>>>>>>> % Total % Received % Xferd Average Speed Time Time >>>>>>>>>>> Time Current >>>>>>>>>>> Dload Upload Total Spent >>>>>>>>>>> Left Speed >>>>>>>>>>> 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- >>>>>>>>>>> --:--:-- 834k >>>>>>>>>>> ~/Src/petsc* >>>>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>>>> $ tar -zxf zlib-1.2.11.tar.gz >>>>>>>>>>> ~/Src/petsc* >>>>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>>>> $ ls zlib-1.2.11 >>>>>>>>>>> CMakeLists.txt adler32.c deflate.c gzread.c >>>>>>>>>>> inflate.h os400 watcom zlib.h >>>>>>>>>>> ChangeLog amiga deflate.h gzwrite.c >>>>>>>>>>> inftrees.c qnx win32 zlib.map >>>>>>>>>>> FAQ compress.c doc infback.c >>>>>>>>>>> inftrees.h test zconf.h zlib.pc.cmakein >>>>>>>>>>> INDEX configure examples inffast.c >>>>>>>>>>> make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in >>>>>>>>>>> Makefile contrib gzclose.c inffast.h >>>>>>>>>>> msdos trees.c zconf.h.in zlib2ansi >>>>>>>>>>> Makefile.in crc32.c gzguts.h inffixed.h >>>>>>>>>>> nintendods trees.h zlib.3 zutil.c >>>>>>>>>>> README crc32.h gzlib.c inflate.c >>>>>>>>>>> old uncompr.c zlib.3.pdf zutil.h >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Jul 6, 2021, at 7:57 PM, Mark Adams wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Mark, >>>>>>>>>>>> >>>>>>>>>>>> You can try what the configure error message should be >>>>>>>>>>>> suggesting (it is not clear if that is being printed to your screen or no). 
>>>>>>>>>>>>
>>>>>>>>>>>> ERROR: Unable to download package ZLIB from:
>>>>>>>>>>>> http://www.zlib.net/zlib-1.2.11.tar.gz
>>>>>>>>>>>
>>>>>>>>>>> My browser can not open this and I could not see a download button on this site.
>>>>>>>>>>>
>>>>>>>>>>> Can you download this?
>>>>>>>>>>>
>>>>>>>>>>>> * If URL specified manually - perhaps there is a typo?
>>>>>>>>>>>> * If your network is disconnected - please reconnect and rerun ./configure
>>>>>>>>>>>> * Or perhaps you have a firewall blocking the download
>>>>>>>>>>>> * You can run with --with-packages-download-dir=/adirectory and ./configure will instruct you what packages to download manually
>>>>>>>>>>>> * or you can download the above URL manually, to /yourselectedlocation/zlib-1.2.11.tar.gz
>>>>>>>>>>>>   and use the configure option:
>>>>>>>>>>>>   --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz
>>>>>>>>>>>>
>>>>>>>>>>>> Barry
>>>>>>>>>>>>
>>>>>>>>>>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams wrote:
>>>>>>>>>>>> >
>>>>>>>>>>>> > I am getting some sort of error in build zlib on Spock at ORNL.
>>>>>>>>>>>> > Other libraries are downloaded and I am sure the network is fine.
>>>>>>>>>>>> > Any ideas?
>>>>>>>>>>>> > Thanks,
>>>>>>>>>>>> > Mark
>>>>>>>>>>>> >
>>>>>>>>
>>>>>>>> --
>>>>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>>>>>> -- Norbert Wiener
>>>>>>>>
>>>>>>>> https://www.cse.buffalo.edu/~knepley/
>>>
>>> --
>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zjorti at lanl.gov Wed Jul 7 12:51:09 2021
From: zjorti at lanl.gov (Jorti, Zakariae)
Date: Wed, 7 Jul 2021 17:51:09 +0000
Subject: [petsc-users] Problem with PCFIELDSPLIT
Message-ID: <415b50d703ea443b86c86b117ffd23e8@lanl.gov>

Hi,

I am trying to build a PCFIELDSPLIT preconditioner for a matrix

    J = [A00  A01]
        [A10  A11]

that has the following shape:

    M_{user}^{-1} = [I   -ksp(A00) A01] [ksp(A00)     0   ] [      I          0]
                    [0         I      ] [    0     ksp(T) ] [-A10 ksp(A00)    I]

where T is a user-defined Schur complement approximation that replaces the true Schur complement S := A11 - A10 ksp(A00) A01.

I am trying to do something similar to this example (lines 41--45 and 116--121): https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html

The problem I have is that I manage to replace S with T on a separate single linear system but not for the linear systems generated by my time-dependent PDE. Even if I set the preconditioner M_{user}^{-1} correctly, the T matrix gets replaced by S in the preconditioner once I call TSSolve. Do you have any suggestions how to fix this knowing that the matrix J does not change over time?

Many thanks.

Best regards,

Zakariae
-------------- next part --------------
An HTML attachment was scrubbed...
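For readers of the PCFIELDSPLIT question above, here is a minimal sketch of the setup it describes: a Schur-complement fieldsplit using the full factorization written above, with a user-supplied matrix T from which the preconditioner of the Schur block is built. This is not Zakariae's code; the routine name, the index sets isU and isP, and the way T is passed in are assumptions for illustration, and in a TS run the KSP would typically be the one reached through the TS's SNES before TSSolve is called.

#include <petscksp.h>

/* Sketch: configure a PC as a Schur fieldsplit with the full factorization
   and a user-provided matrix T used to build the Schur-block preconditioner.
   isU and isP are the index sets of the two fields (assumed for illustration). */
PetscErrorCode SetupSchurFieldSplit(KSP ksp, IS isU, IS isP, Mat T)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "0", isU);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "1", isP);CHKERRQ(ierr);
  ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
  ierr = PCFieldSplitSetSchurFactType(pc, PC_FIELDSPLIT_SCHUR_FACT_FULL);CHKERRQ(ierr);
  /* use T in place of the true Schur complement S when building the preconditioner */
  ierr = PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_USER, T);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The sketch only shows the intended configuration; it is not a fix for the behavior reported under TSSolve.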
URL: From knepley at gmail.com Wed Jul 7 13:02:55 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 7 Jul 2021 14:02:55 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: On Wed, Jul 7, 2021 at 1:40 PM Mark Adams wrote: > > > On Wed, Jul 7, 2021 at 1:26 PM Barry Smith wrote: > >> >> You will need to pass the -L arguments appropriately to zlib's >> ./configure so it can link its shared library appropriately. That is, the >> zlib configure requires the value obtained with >> L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', >> > > It's not clear to me how to do that. I added the -L to my configure > script. It is not clear to me how to modify Matt's command. > Can you try this? knepley/feature-orientation-rethink *$:/PETSc3/petsc/petsc-dev$ git diff config/BuildSystem/config/packages/zlib.py diff --git a/config/BuildSystem/config/packages/zlib.py b/config/BuildSystem/config/packages/zlib.py index fbf9bdf4a0a..b76d3625364 100644 --- a/config/BuildSystem/config/packages/zlib.py +++ b/config/BuildSystem/config/packages/zlib.py @@ -25,6 +25,7 @@ class Configure(config.package.Package): self.pushLanguage('C') args.append('CC="'+self.getCompiler()+'"') args.append('CFLAGS="'+self.updatePackageCFlags(self.getCompilerFlags())+'"') + args.append('LDFLAGS="'+self.getLinkerFlags()+'"') args.append('prefix="'+self.installDir+'"') self.popLanguage() args=' '.join(args) Matt > >> On Jul 7, 2021, at 12:18 PM, Mark Adams wrote: >> >> Well, still getting these hsa errors: >> >> 13:07 jczhang/fix-kokkos-includes= >> /gpfs/alpine/csc314/scratch/adams/petsc$ !136 >> cd >> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I${ROCM_PATH}/include" >> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >> Checking for shared library support... >> Building shared library libz.so.1.2.11 with cc. >> Checking for size_t... Yes. >> Checking for off64_t... Yes. >> Checking for fseeko... Yes. >> Checking for strerror... No. >> Checking for unistd.h... Yes. >> Checking for stdarg.h... Yes. >> Checking whether to use vs[n]printf() or s[n]printf()... using >> vs[n]printf(). >> Checking for vsnprintf() in stdio.h... No. >> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >> can build but will be open to possible buffer-overflow security >> vulnerabilities. >> Checking for return value of vsprintf()... Yes. >> Checking for attribute(visibility) support... Yes. >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o example.o >> test/example.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o minigzip.o >> test/minigzip.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/adler32.o adler32.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/crc32.o crc32.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/deflate.o deflate.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/infback.o infback.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/inffast.o inffast.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/inflate.o inflate.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/inftrees.o inftrees.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/trees.o trees.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/zutil.o zutil.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/compress.o compress.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/uncompr.o uncompr.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/gzclose.o gzclose.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/gzlib.o gzlib.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/gzread.o gzread.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >> -c -o objs/gzwrite.o gzwrite.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-D_FILE_OFFSET_BITS=64 -c -o >> example64.o test/example.c >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >> minigzip64.o test/minigzip.c >> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o >> inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o >> gzwrite.o >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC >> -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o >> libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo >> inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo >> gzlib.lo gzread.lo gzwrite.lo -lc >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. >> libz.a >> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. >> libz.a >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_pool_allocate >> [--no-allow-shlib-undefined] >> ld.lldld.lld: : error: error: >> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >> hsa_amd_memory_pool_allocate >> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_agent_iterate_memory_pools >> [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: >> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >> hsa_amd_agent_iterate_memory_pools >> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: >> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >> hsa_iterate_agents >> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_signal_load_scacquire >> [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: >> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >> hsa_signal_load_scacquire >> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: >> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >> hsa_amd_memory_unlock >> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: >> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >> 
hsa_signal_destroy >> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_pool_get_info >> [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: >> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >> hsa_amd_memory_pool_get_info >> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: >> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >> hsa_amd_memory_lock >> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >> >> ld.lldld.lld: : error: error: >> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >> hsa_amd_memory_pool_free >> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_agents_allow_access >> [--no-allow-shlib-undefined] >> >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_agents_allow_access >> [--no-allow-shlib-undefined] >> clang-11: error: linker command failed with exit code 1 (use -v to see >> invocation) >> clang-11: error: linker command failed with exit code 1 (use -v to see >> invocation) >> gmake: *** [Makefile:289: example] Error 1 >> gmake: *** Waiting for unfinished jobs.... >> gmake: *** [Makefile:292: minigzip] Error 1 >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_pool_allocate >> [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_agent_iterate_memory_pools >> [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_signal_load_scacquire >> [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_pool_get_info >> [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_agents_allow_access >> [--no-allow-shlib-undefined] >> clang-11: error: linker command failed with exit code 1 (use -v to see >> invocation) >> gmake: *** [Makefile:304: minigzip64] Error 1 >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_pool_allocate >> [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_agent_iterate_memory_pools >> [--no-allow-shlib-undefined] 
>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_signal_load_scacquire >> [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_pool_get_info >> [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >> undefined reference to hsa_amd_agents_allow_access >> [--no-allow-shlib-undefined] >> clang-11: error: linker command failed with exit code 1 (use -v to see >> invocation) >> gmake: *** [Makefile:301: example64] Error 1 >> rm -f libz.so libz.so.1 >> ln -s libz.so.1.2.11 libz.so >> ln -s libz.so.1.2.11 libz.so.1 >> >> On Wed, Jul 7, 2021 at 1:05 PM Mark Adams wrote: >> >>> Thanks, it was missing the / >>> >>> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', >>> >>> On Wed, Jul 7, 2021 at 12:48 PM Matthew Knepley >>> wrote: >>> >>>> Did you look in /sw/spock/spack-envs/views/rocm-4.1.0lib ? >>>> >>>> Matt >>>> >>>> On Wed, Jul 7, 2021 at 12:29 PM Mark Adams wrote: >>>> >>>>> Ok, I tried that but now I get this error. >>>>> >>>>> On Wed, Jul 7, 2021 at 12:13 PM Stefano Zampini < >>>>> stefano.zampini at gmail.com> wrote: >>>>> >>>>>> There's an extra comma >>>>>> >>>>>> Il Mer 7 Lug 2021, 18:08 Mark Adams ha scritto: >>>>>> >>>>>>> Humm, I get this error (I just copied your whole file into here): >>>>>>> >>>>>>> 12:06 jczhang/fix-kokkos-includes= >>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ ~/arch-spock-dbg-cray-kokkos.py >>>>>>> Traceback (most recent call last): >>>>>>> File "/ccs/home/adams/arch-spock-dbg-cray-kokkos.py", line 27, in >>>>>>> >>>>>>> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64', >>>>>>> TypeError: bad operand type for unary +: 'str' >>>>>>> >>>>>>> On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini < >>>>>>> stefano.zampini at gmail.com> wrote: >>>>>>> >>>>>>>> Mark >>>>>>>> >>>>>>>> On Spock, you can use >>>>>>>> https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as >>>>>>>> a template for your configuration. You need to add libraries as LDFLAGS to >>>>>>>> resolve the hsa symbols >>>>>>>> >>>>>>>> On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> 08:30 jczhang/fix-kokkos-includes= >>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ cd >>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I${ROCM_PATH}/include" >>>>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>>>> Checking for shared library support... 
>>>>>>>> Building shared library libz.so.1.2.11 with cc. >>>>>>>> Checking for size_t... Yes. >>>>>>>> Checking for off64_t... Yes. >>>>>>>> Checking for fseeko... Yes. >>>>>>>> Checking for strerror... No. >>>>>>>> Checking for unistd.h... Yes. >>>>>>>> Checking for stdarg.h... Yes. >>>>>>>> Checking whether to use vs[n]printf() or s[n]printf()... using >>>>>>>> vs[n]printf(). >>>>>>>> Checking for vsnprintf() in stdio.h... No. >>>>>>>> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >>>>>>>> can build but will be open to possible buffer-overflow security >>>>>>>> vulnerabilities. >>>>>>>> Checking for return value of vsprintf()... Yes. >>>>>>>> Checking for attribute(visibility) support... Yes. >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o >>>>>>>> test/example.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR 
-DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o minigzip.o >>>>>>>> test/minigzip.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/adler32.o adler32.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/crc32.o crc32.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/deflate.o deflate.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/infback.o infback.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/inffast.o inffast.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/inflate.o inflate.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/inftrees.o inftrees.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/trees.o trees.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/zutil.o zutil.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/compress.o compress.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> 
-D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/uncompr.o uncompr.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/gzclose.o gzclose.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/gzlib.o gzlib.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/gzread.o gzread.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>> -c -o objs/gzwrite.o gzwrite.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>>>>> example64.o test/example.c >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>>>>> minigzip64.o test/minigzip.c >>>>>>>> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o >>>>>>>> inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o >>>>>>>> gzread.o gzwrite.o >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >>>>>>>> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC >>>>>>>> -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o >>>>>>>> libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo >>>>>>>> inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo >>>>>>>> gzlib.lo gzread.lo gzwrite.lo -lc >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. >>>>>>>> libz.a >>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
>>>>>>>> libz.a >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_signal_load_scacquire >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> ld.lldld.lld: : error: error: >>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>> hsa_amd_memory_pool_allocate >>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>>> >>>>>>>> ld.lldld.lld: : error: error: >>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>> hsa_amd_agent_iterate_memory_pools >>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>>> >>>>>>>> ld.lldld.lld: : error: error: >>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>> hsa_iterate_agents >>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> >>>>>>>> ld.lldld.lld: : error: error: >>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>> hsa_signal_load_scacquire >>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>> >>>>>>>> ld.lldld.lld: : error: error: >>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>> hsa_amd_memory_unlock >>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>> >>>>>>>> ld.lldld.lld: : error: error: >>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>> hsa_signal_destroy >>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>>>> see invocation) >>>>>>>> clang-11: error: 
linker command failed with exit code 1 (use -v to >>>>>>>> see invocation) >>>>>>>> gmake: *** [Makefile:292: minigzip] Error 1 >>>>>>>> gmake: *** Waiting for unfinished jobs.... >>>>>>>> gmake: *** [Makefile:289: example] Error 1 >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_signal_load_scacquire >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>>>> see invocation) >>>>>>>> gmake: *** [Makefile:304: minigzip64] Error 1 >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_signal_load_scacquire >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] 
>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>>> [--no-allow-shlib-undefined] >>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>>>> see invocation) >>>>>>>> gmake: *** [Makefile:301: example64] Error 1 >>>>>>>> rm -f libz.so libz.so.1 >>>>>>>> ln -s libz.so.1.2.11 libz.so >>>>>>>> ln -s libz.so.1.2.11 libz.so.1 >>>>>>>> 11:03 2 jczhang/fix-kokkos-includes= >>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ >>>>>>>> >>>>>>>> On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley >>>>>>>> wrote: >>>>>>>> >>>>>>>>> It is hard to see the error. I suspect it is something crazy with >>>>>>>>> the install. Can you run the build by hand? >>>>>>>>> >>>>>>>>> cd >>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I${ROCM_PATH}/include" >>>>>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>>>>> >>>>>>>>> and see what happens, and what the error code is? >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> >>>>>>>>> Matt >>>>>>>>> >>>>>>>>> On Wed, Jul 7, 2021 at 8:48 AM Mark Adams wrote: >>>>>>>>> >>>>>>>>>> Also, this is in jczhang/fix-kokkos-includes >>>>>>>>>> >>>>>>>>>> On Wed, Jul 7, 2021 at 8:46 AM Mark Adams >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> Apparently the same error with >>>>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>> >>>>>>>>>>> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > >>>>>>>>>>>> zlib-1.2.11.tar.gz >>>>>>>>>>>> % Total % Received % Xferd Average Speed Time Time >>>>>>>>>>>> Time Current >>>>>>>>>>>> Dload Upload Total Spent >>>>>>>>>>>> Left Speed >>>>>>>>>>>> 100 593k 100 593k 0 0 835k 0 --:--:-- --:--:-- >>>>>>>>>>>> --:--:-- 834k >>>>>>>>>>>> ~/Src/petsc* >>>>>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>>>>> $ tar -zxf zlib-1.2.11.tar.gz >>>>>>>>>>>> ~/Src/petsc* >>>>>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>>>>> $ ls zlib-1.2.11 >>>>>>>>>>>> CMakeLists.txt adler32.c deflate.c gzread.c >>>>>>>>>>>> inflate.h os400 watcom zlib.h >>>>>>>>>>>> ChangeLog amiga deflate.h gzwrite.c >>>>>>>>>>>> inftrees.c qnx win32 zlib.map >>>>>>>>>>>> FAQ compress.c doc infback.c >>>>>>>>>>>> inftrees.h test zconf.h >>>>>>>>>>>> zlib.pc.cmakein >>>>>>>>>>>> INDEX configure examples inffast.c >>>>>>>>>>>> make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in >>>>>>>>>>>> Makefile contrib gzclose.c inffast.h >>>>>>>>>>>> msdos trees.c zconf.h.in zlib2ansi >>>>>>>>>>>> Makefile.in crc32.c gzguts.h inffixed.h >>>>>>>>>>>> nintendods trees.h zlib.3 zutil.c >>>>>>>>>>>> README crc32.h gzlib.c inflate.c >>>>>>>>>>>> old uncompr.c zlib.3.pdf zutil.h >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Jul 6, 2021, at 7:57 PM, Mark Adams wrote: >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Mark, >>>>>>>>>>>>> >>>>>>>>>>>>> You can try what the configure error message should be >>>>>>>>>>>>> suggesting 
(it is not clear if that is being printed to your screen or no). >>>>>>>>>>>>> >>>>>>>>>>>>> ERROR: Unable to download package ZLIB from: >>>>>>>>>>>>> http://www.zlib.net/zlib-1.2.11.tar.gz >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> My browser can not open this and I could not see a download >>>>>>>>>>>> button on this site. >>>>>>>>>>>> >>>>>>>>>>>> Can you download this? >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> * If URL specified manually - perhaps there is a typo? >>>>>>>>>>>>> * If your network is disconnected - please reconnect and rerun >>>>>>>>>>>>> ./configure >>>>>>>>>>>>> * Or perhaps you have a firewall blocking the download >>>>>>>>>>>>> * You can run with --with-packages-download-dir=/adirectory >>>>>>>>>>>>> and ./configure will instruct you what packages to download manually >>>>>>>>>>>>> * or you can download the above URL manually, to >>>>>>>>>>>>> /yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>>>> and use the configure option: >>>>>>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>>>> >>>>>>>>>>>>> Barry >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> > >>>>>>>>>>>>> > I am getting some sort of error in build zlib on Spock at >>>>>>>>>>>>> ORNL. >>>>>>>>>>>>> > Other libraries are downloaded and I am sure the network is >>>>>>>>>>>>> fine. >>>>>>>>>>>>> > Any ideas? >>>>>>>>>>>>> > Thanks, >>>>>>>>>>>>> > Mark >>>>>>>>>>>>> > >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>> experiments lead. >>>>>>>>> -- Norbert Wiener >>>>>>>>> >>>>>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >> >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 7 13:11:10 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 7 Jul 2021 14:11:10 -0400 Subject: [petsc-users] Problem with PCFIELDSPLIT In-Reply-To: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> Message-ID: On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users < petsc-users at mcs.anl.gov> wrote: > Hi, > > > I am trying to build a PCFIELDSPLIT preconditioner for a matrix > > J = [A00 A01] > > [A10 A11] > > that has the following shape: > > > M_{user}^{-1} = [I -ksp(A00) A01] [ksp(A00) 0] [I > 0] > > [0 I] [0 > ksp(T)] [-A10 ksp(A00) I ] > > > where T is a user-defined Schur complement approximation that replaces the > true Schur complement S:= A11 - A10 ksp(A00) A01. > > > I am trying to do something similar to this example (lines 41--45 and > 116--121): > https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html > > > The problem I have is that I manage to replace S with T on a > separate single linear system but not for the linear systems generated by > my time-dependent PDE. 
Even if I set the preconditioner M_{user}^{-1} > correctly, the T matrix gets replaced by S in the preconditioner once I > call TSSolve. > > Do you have any suggestions how to fix this knowing that the matrix J does > not change over time? > > I don't like how it is done in that example for this very reason. When I want to use a custom preconditioning matrix for the Schur complement, I always give a preconditioning matrix M to the outer solve. Then PCFIELDSPLIT automatically pulls the correct block from M, (1,1) for the Schur complement, for that preconditioning matrix without extra code. Can you do this? Thanks, Matt > Many thanks. > > > Best regards, > > > Zakariae > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From adenchfi at hawk.iit.edu Wed Jul 7 13:13:06 2021 From: adenchfi at hawk.iit.edu (Adam Denchfield) Date: Wed, 7 Jul 2021 13:13:06 -0500 Subject: [petsc-users] [Ext] Re: [SLEPc] Computing Smallest Eigenvalue+Eigenvector of Many Small Matrices In-Reply-To: References: <4051E7AF-6A72-4797-A025-03EB63875795@gmail.com> <8735srfc7b.fsf@jedbrown.org> Message-ID: syevjBatched from cuSolver is quite good once it's configured fine. It's a direct solve for all eigenpairs, works for batches of small matrices with sizes up to (I believe) 32x32. The default CUDA example using it works except if you have "too many" small matrices, in which case you'll overload the GPU memory and need to further batch the calls. I found it to be fast enough for my needs. Regards, *Adam Denchfield* *Ph.D Student, Physics* University of Illinois in Chicago B.S. Applied Physics (2018) Illinois Institute of Technology Email: adenchfi at hawk.iit.edu On Wed, Jul 7, 2021 at 2:31 AM Jose E. Roman wrote: > cuSolver has syevjBatched, which seems to fit your purpose. But I have > never used it. > > Lanczos is not competitive for such small matrices. > > Jose > > > > El 6 jul 2021, a las 21:56, Jed Brown escribi?: > > > > Have you tried just calling LAPACK directly? (You could try dsyevx to > see if there's something to gain by computing less than all the > eigenvalues.) I'm not aware of a batched interface at this time, but that's > what you'd want for performance. > > > > Jacob Faibussowitsch writes: > > > >> Hello PETSc/SLEPc users, > >> > >> Similar to a recent question I am looking for an algorithm to compute > the smallest eigenvalue and eigenvector for a bunch of matrices however I > have a few extra ?restrictions?. All matrices have the following properties: > >> > >> - All matrices are the same size > >> - All matrices are small (perhaps no larger than 12x12) > >> - All matrices are SPD > >> - I only need the smallest eigenpair > >> > >> So far my best bet seems to be Lanczos but I?m wondering if there is > some wunder method I?ve overlooked. > >> > >> Best regards, > >> > >> Jacob Faibussowitsch > >> (Jacob Fai - booss - oh - vitch) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zjorti at lanl.gov Wed Jul 7 13:33:16 2021 From: zjorti at lanl.gov (Jorti, Zakariae) Date: Wed, 7 Jul 2021 18:33:16 +0000 Subject: [petsc-users] [EXTERNAL] Re: Problem with PCFIELDSPLIT In-Reply-To: References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov>, Message-ID: Hi Matt, Thanks for your quick reply. 
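Just so I am sure I am reading your suggestion right: do you mean keeping J as the
operator of the TS and handing it a second, preconditioning matrix whose (1,1) block
is my T, so that PCFIELDSPLIT picks T up on its own? Something like the rough sketch
below (Mpre, FormIJacobian and the user context are only placeholder names on my
side, not code I actually have):

    /* Build a preconditioning matrix with T in the (1,1) slot; J itself is left alone.
       blocks[] is in row-major order: [A00 A01; A10 T]. */
    Mat blocks[4] = {A00, A01, A10, T};
    Mat Mpre;
    MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, blocks, &Mpre);

    /* Hand both matrices to the TS: Amat = J, Pmat = Mpre
       (or TSSetRHSJacobian, whichever form I end up using). */
    TSSetIJacobian(ts, J, Mpre, FormIJacobian, &user);

    /* The Schur fieldsplit then builds its Schur preconditioner from the
       (1,1) block of Mpre, i.e. my T (A11 is the default SchurPre choice). */
    TSGetKSP(ts, &ksp);
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCFIELDSPLIT);
    PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);
    PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_A11, NULL);

That is my best guess at what you mean, but I am not sure it is right.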
I have not completely understood your suggestion, could you please elaborate a bit more?
For your convenience, here is how I am proceeding for the moment in my code:

    TSGetKSP(ts,&ksp);
    KSPGetPC(ksp,&pc);
    PCSetType(pc,PCFIELDSPLIT);
    PCFieldSplitSetDetectSaddlePoint(pc,PETSC_TRUE);
    PCSetUp(pc);
    PCFieldSplitGetSubKSP(pc, &n, &subksp);
    KSPGetPC(subksp[1], &(subpc[1]));
    KSPSetOperators(subksp[1],T,T);
    KSPSetUp(subksp[1]);
    PetscFree(subksp);
    TSSolve(ts,X);

Thank you.

Best,

Zakariae
________________________________
From: Matthew Knepley
Sent: Wednesday, July 7, 2021 12:11:10 PM
To: Jorti, Zakariae
Cc: petsc-users at mcs.anl.gov; Tang, Qi; Tang, Xianzhu
Subject: [EXTERNAL] Re: [petsc-users] Problem with PCFIELDSPLIT

On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users wrote:

Hi,

I am trying to build a PCFIELDSPLIT preconditioner for a matrix

    J = [ A00  A01 ]
        [ A10  A11 ]

that has the following shape:

    M_{user}^{-1} = [ I  -ksp(A00) A01 ] [ ksp(A00)    0    ] [       I         0 ]
                    [ 0        I       ] [    0      ksp(T) ] [ -A10 ksp(A00)   I ]

where T is a user-defined Schur complement approximation that replaces the
true Schur complement S := A11 - A10 ksp(A00) A01.

I am trying to do something similar to this example (lines 41--45 and 116--121):
https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html

The problem I have is that I manage to replace S with T on a separate single
linear system but not for the linear systems generated by my time-dependent PDE.
Even if I set the preconditioner M_{user}^{-1} correctly, the T matrix gets
replaced by S in the preconditioner once I call TSSolve.

Do you have any suggestions how to fix this knowing that the matrix J does
not change over time?

I don't like how it is done in that example for this very reason. When I want to
use a custom preconditioning matrix for the Schur complement, I always give a
preconditioning matrix M to the outer solve. Then PCFIELDSPLIT automatically pulls
the correct block from M, (1,1) for the Schur complement, for that preconditioning
matrix without extra code. Can you do this?

  Thanks,

     Matt

Many thanks.

Best regards,

Zakariae

--
What most experimenters take for granted before they begin their experiments is
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From thibault.bridelbertomeu at gmail.com  Wed Jul  7 13:40:53 2021
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Wed, 7 Jul 2021 20:40:53 +0200
Subject: [petsc-users] Scaling of the Petsc Binary Viewer
Message-ID:

Dear all,

I have been having issues with large Vec (based on DMPlex) and massive MPI
I/O ... it looks like the data that is written by the Petsc Binary Viewer
is gibberish for large meshes split on a high number of processes. For
instance, I am using a mesh that has around 50 million cells, split on 1024
processors.
The computation seems to run fine, the timestep computed from the data
makes sense so I think internally everything is fine. But when I look at
the solution (one example attached) it's noise - at this point it should
show a bow shock developing on the left near the step.
The piece of code I use is below for the output:

    ! save the DM output sequence number, and disable it for this dump
    call DMGetOutputSequenceNumber(dm, save_seqnum, save_seqval, ierr); CHKERRA(ierr)
    call DMSetOutputSequenceNumber(dm, -1, 0.d0, ierr); CHKERRA(ierr)
    write(filename,'(A,I8.8,A)') "restart_", stepnum, ".bin"
    ! binary viewer with collective MPI-IO writes
    call PetscViewerCreate(PETSC_COMM_WORLD, binViewer, ierr); CHKERRA(ierr)
    call PetscViewerSetType(binViewer, PETSCVIEWERBINARY, ierr); CHKERRA(ierr)
    call PetscViewerFileSetMode(binViewer, FILE_MODE_WRITE, ierr); CHKERRA(ierr)
    call PetscViewerBinarySetUseMPIIO(binViewer, PETSC_TRUE, ierr); CHKERRA(ierr)
    call PetscViewerFileSetName(binViewer, trim(filename), ierr); CHKERRA(ierr)
    call VecView(X, binViewer, ierr); CHKERRA(ierr)
    call PetscViewerDestroy(binViewer, ierr); CHKERRA(ierr)
    ! restore the sequence number afterwards
    call DMSetOutputSequenceNumber(dm, save_seqnum, save_seqval, ierr); CHKERRA(ierr)

I do not think there is anything wrong with it, but of course I would be happy to
hear your feedback.
Nonetheless, my question is: how far have you tested the binary MPI I/O of a Vec?
Does it make sense that for a 50 million cell mesh split on 1024 processes, it could
somehow fail? Or is it my python drawing method that is completely incapable of
handling this dataset? (ParaView displays the same thing though, so I'm not sure...)

Thank you very much for your advice and help!!!

Thibault
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: restart_00001000.png
Type: image/png
Size: 20597 bytes
Desc: not available
URL:

From dave.mayhem23 at gmail.com  Wed Jul  7 13:54:33 2021
From: dave.mayhem23 at gmail.com (Dave May)
Date: Wed, 7 Jul 2021 20:54:33 +0200
Subject: [petsc-users] Scaling of the Petsc Binary Viewer
In-Reply-To:
References:
Message-ID:

On Wed 7. Jul 2021 at 20:41, Thibault Bridel-Bertomeu <
thibault.bridelbertomeu at gmail.com> wrote:

> Dear all,
>
> I have been having issues with large Vec (based on DMPlex) and massive MPI
> I/O ... it looks like the data that is written by the Petsc Binary Viewer
> is gibberish for large meshes split on a high number of processes. For
> instance, I am using a mesh that has around 50 million cells, split on 1024
> processors.
> The computation seems to run fine, the timestep computed from the data
> makes sense so I think internally everything is fine. But when I look at
> the solution (one example attached) it's noise - at this point it should
> show a bow shock developing on the left near the step.
> The piece of code I use is below for the output : > > call DMGetOutputSequenceNumber(dm, save_seqnum, > save_seqval, ierr); CHKERRA(ierr) > call DMSetOutputSequenceNumber(dm, -1, 0.d0, ierr); > CHKERRA(ierr) > write(filename,'(A,I8.8,A)') "restart_", stepnum, ".bin" > call PetscViewerCreate(PETSC_COMM_WORLD, binViewer, ierr); > CHKERRA(ierr) > call PetscViewerSetType(binViewer, PETSCVIEWERBINARY, > ierr); CHKERRA(ierr) > call PetscViewerFileSetMode(binViewer, FILE_MODE_WRITE, > ierr); CHKERRA(ierr); > call PetscViewerBinarySetUseMPIIO(binViewer, PETSC_TRUE, > ierr); CHKERRA(ierr); > > Do you get the correct output if you don?t call the function above (or equivalently use PETSC_FALSE) call PetscViewerFileSetName(binViewer, trim(filename), ierr); CHKERRA(ierr) > call VecView(X, binViewer, ierr); CHKERRA(ierr) > call PetscViewerDestroy(binViewer, ierr); CHKERRA(ierr) > call DMSetOutputSequenceNumber(dm, save_seqnum, > save_seqval, ierr); CHKERRA(ierr) > > I do not think there is anything wrong with it but of course I would be > happy to hear your feedback. > Nonetheless my question was : how far have you tested the binary mpi i/o > of a Vec ? Does it make some sense that for a 50 million cell mesh split on > 1024 processes, it could somehow fail ? > Or is it my python drawing method that is completely incapable of handling > this dataset ? (paraview displays the same thing though so I'm not sure ...) > Are you using the python provided tools within petsc to load the Vec from file? Thanks, Dave > Thank you very much for your advice and help !!! > > Thibault > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Jul 7 14:45:06 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 15:45:06 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: No diffs. I added this: diff --git a/config/BuildSystem/config/packages/zlib.py b/config/BuildSystem/config/packages/zlib.py index fbf9bdf4a0..b76d362536 100644 --- a/config/BuildSystem/config/packages/zlib.py +++ b/config/BuildSystem/config/packages/zlib.py @@ -25,6 +25,7 @@ class Configure(config.package.Package): self.pushLanguage('C') args.append('CC="'+self.getCompiler()+'"') args.append('CFLAGS="'+self.updatePackageCFlags(self.getCompilerFlags())+'"') + args.append('LDFLAGS="'+self.getLinkerFlags()+'"') args.append('prefix="'+self.installDir+'"') self.popLanguage() args=' '.join(args) lines 1-12/12 (END) but it still fails. 15:20 jczhang/fix-kokkos-includes *= /gpfs/alpine/csc314/scratch/adams/petsc$ cd /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 -I${ROCM_PATH}/include" prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install Checking for shared library support... Building shared library libz.so.1.2.11 with cc. Checking for size_t... Yes. Checking for off64_t... Yes. Checking for fseeko... Yes. Checking for strerror... No. Checking for unistd.h... Yes. Checking for stdarg.h... Yes. Checking whether to use vs[n]printf() or s[n]printf()... using vs[n]printf(). Checking for vsnprintf() in stdio.h... No. WARNING: vsnprintf() not found, falling back to vsprintf(). zlib can build but will be open to possible buffer-overflow security vulnerabilities. 
Checking for return value of vsprintf()... Yes. Checking for attribute(visibility) support... Yes. cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o test/example.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o minigzip.o test/minigzip.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/adler32.o adler32.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/crc32.o crc32.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/deflate.o deflate.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/infback.o infback.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inffast.o inffast.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inflate.o inflate.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/inftrees.o inftrees.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/trees.o trees.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/zutil.o zutil.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/compress.o compress.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/uncompr.o uncompr.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzclose.o gzclose.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzlib.o gzlib.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzread.o gzread.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC -c -o objs/gzwrite.o gzwrite.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-D_FILE_OFFSET_BITS=64 -c -o example64.o test/example.c cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o minigzip64.o test/minigzip.c ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o gzwrite.o cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo gzlib.lo gzread.lo gzwrite.lo -lc cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. libz.a cc -fPIC -fstack-protector -Qunused-arguments -g -O0 -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. libz.a ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] ld.lldld.lld: : error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] ld.lld: ld.lld: error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined]ld.lld : ld.lld: error: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] ld.lld: 
error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] clang-11: error: linker command failed with exit code 1 (use -v to see invocation) clang-11: error: linker command failed with exit code 1 (use -v to see invocation) gmake: *** [Makefile:292: minigzip] Error 1 gmake: *** Waiting for unfinished jobs.... gmake: *** [Makefile:289: example] Error 1 ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] clang-11: error: linker command failed with exit code 1 (use -v to see invocation) gmake: *** [Makefile:304: minigzip64] Error 1 ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_load_scacquire [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to 
hsa_amd_memory_unlock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to hsa_amd_agents_allow_access [--no-allow-shlib-undefined] clang-11: error: linker command failed with exit code 1 (use -v to see invocation) gmake: *** [Makefile:301: example64] Error 1 rm -f libz.so libz.so.1 ln -s libz.so.1.2.11 libz.so On Wed, Jul 7, 2021 at 2:03 PM Matthew Knepley wrote: > On Wed, Jul 7, 2021 at 1:40 PM Mark Adams wrote: > >> >> >> On Wed, Jul 7, 2021 at 1:26 PM Barry Smith wrote: >> >>> >>> You will need to pass the -L arguments appropriately to zlib's >>> ./configure so it can link its shared library appropriately. That is, the >>> zlib configure requires the value obtained with >>> L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', >>> >> >> It's not clear to me how to do that. I added the -L to my configure >> script. It is not clear to me how to modify Matt's command. >> > > Can you try this? > > knepley/feature-orientation-rethink *$:/PETSc3/petsc/petsc-dev$ git diff > config/BuildSystem/config/packages/zlib.py > diff --git a/config/BuildSystem/config/packages/zlib.py > b/config/BuildSystem/config/packages/zlib.py > index fbf9bdf4a0a..b76d3625364 100644 > --- a/config/BuildSystem/config/packages/zlib.py > +++ b/config/BuildSystem/config/packages/zlib.py > @@ -25,6 +25,7 @@ class Configure(config.package.Package): > > self.pushLanguage('C') > args.append('CC="'+self.getCompiler()+'"') > > args.append('CFLAGS="'+self.updatePackageCFlags(self.getCompilerFlags())+'"') > + args.append('LDFLAGS="'+self.getLinkerFlags()+'"') > args.append('prefix="'+self.installDir+'"') > self.popLanguage() > args=' '.join(args) > > Matt > > >> >>> On Jul 7, 2021, at 12:18 PM, Mark Adams wrote: >>> >>> Well, still getting these hsa errors: >>> >>> 13:07 jczhang/fix-kokkos-includes= >>> /gpfs/alpine/csc314/scratch/adams/petsc$ !136 >>> cd >>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I${ROCM_PATH}/include" >>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>> Checking for shared library support... >>> Building shared library libz.so.1.2.11 with cc. >>> Checking for size_t... Yes. >>> Checking for off64_t... Yes. >>> Checking for fseeko... Yes. >>> Checking for strerror... No. >>> Checking for unistd.h... Yes. >>> Checking for stdarg.h... Yes. >>> Checking whether to use vs[n]printf() or s[n]printf()... using >>> vs[n]printf(). >>> Checking for vsnprintf() in stdio.h... No. >>> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >>> can build but will be open to possible buffer-overflow security >>> vulnerabilities. >>> Checking for return value of vsprintf()... Yes. >>> Checking for attribute(visibility) support... Yes. 
>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o >>> test/example.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o minigzip.o >>> test/minigzip.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/adler32.o adler32.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/crc32.o crc32.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/deflate.o deflate.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/infback.o infback.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/inffast.o inffast.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/inflate.o inflate.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/inftrees.o inftrees.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/trees.o trees.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/zutil.o zutil.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/compress.o compress.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/uncompr.o uncompr.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/gzclose.o gzclose.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/gzlib.o gzlib.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/gzread.o gzread.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>> -c -o objs/gzwrite.o gzwrite.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include 
-D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>> example64.o test/example.c >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>> minigzip64.o test/minigzip.c >>> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o >>> inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o >>> gzwrite.o >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >>> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC >>> -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o >>> libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo >>> inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo >>> gzlib.lo gzread.lo gzwrite.lo -lc >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. >>> libz.a >>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
>>> libz.a >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_pool_allocate >>> [--no-allow-shlib-undefined] >>> ld.lldld.lld: : error: error: >>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>> hsa_amd_memory_pool_allocate >>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_agent_iterate_memory_pools >>> [--no-allow-shlib-undefined] >>> >>> ld.lldld.lld: : error: error: >>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>> hsa_amd_agent_iterate_memory_pools >>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>> >>> ld.lldld.lld: : error: error: >>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>> hsa_iterate_agents >>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_signal_load_scacquire >>> [--no-allow-shlib-undefined] >>> >>> ld.lldld.lld: : error: error: >>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>> hsa_signal_load_scacquire >>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>> >>> ld.lldld.lld: : error: error: >>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>> hsa_amd_memory_unlock >>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>> >>> ld.lldld.lld: : error: error: >>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>> hsa_signal_destroy >>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_pool_get_info >>> [--no-allow-shlib-undefined] >>> >>> ld.lldld.lld: : error: error: >>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>> hsa_amd_memory_pool_get_info >>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>> >>> ld.lldld.lld: : error: error: >>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>> hsa_amd_memory_lock >>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>> >>> ld.lldld.lld: : error: error: >>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>> hsa_amd_memory_pool_free >>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_agents_allow_access >>> [--no-allow-shlib-undefined] >>> >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_agents_allow_access >>> [--no-allow-shlib-undefined] >>> clang-11: error: linker command failed with exit code 1 (use -v to see >>> invocation) >>> clang-11: error: linker command failed with exit code 1 (use -v to see >>> invocation) >>> gmake: *** [Makefile:289: example] Error 1 >>> gmake: *** Waiting for unfinished jobs.... 
>>> gmake: *** [Makefile:292: minigzip] Error 1 >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_pool_allocate >>> [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_agent_iterate_memory_pools >>> [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_signal_load_scacquire >>> [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_pool_get_info >>> [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_agents_allow_access >>> [--no-allow-shlib-undefined] >>> clang-11: error: linker command failed with exit code 1 (use -v to see >>> invocation) >>> gmake: *** [Makefile:304: minigzip64] Error 1 >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_pool_allocate >>> [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_agent_iterate_memory_pools >>> [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_signal_load_scacquire >>> [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_pool_get_info >>> [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>> undefined reference to hsa_amd_agents_allow_access >>> [--no-allow-shlib-undefined] >>> clang-11: error: linker command failed with exit code 1 (use -v to see >>> invocation) >>> gmake: *** [Makefile:301: example64] Error 1 >>> rm -f libz.so libz.so.1 >>> ln -s libz.so.1.2.11 libz.so >>> ln -s libz.so.1.2.11 libz.so.1 >>> >>> On Wed, Jul 7, 2021 at 1:05 PM Mark 
Adams wrote: >>> >>>> Thanks, it was missing the / >>>> >>>> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', >>>> >>>> On Wed, Jul 7, 2021 at 12:48 PM Matthew Knepley >>>> wrote: >>>> >>>>> Did you look in /sw/spock/spack-envs/views/rocm-4.1.0lib ? >>>>> >>>>> Matt >>>>> >>>>> On Wed, Jul 7, 2021 at 12:29 PM Mark Adams wrote: >>>>> >>>>>> Ok, I tried that but now I get this error. >>>>>> >>>>>> On Wed, Jul 7, 2021 at 12:13 PM Stefano Zampini < >>>>>> stefano.zampini at gmail.com> wrote: >>>>>> >>>>>>> There's an extra comma >>>>>>> >>>>>>> Il Mer 7 Lug 2021, 18:08 Mark Adams ha scritto: >>>>>>> >>>>>>>> Humm, I get this error (I just copied your whole file into here): >>>>>>>> >>>>>>>> 12:06 jczhang/fix-kokkos-includes= >>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ ~/arch-spock-dbg-cray-kokkos.py >>>>>>>> Traceback (most recent call last): >>>>>>>> File "/ccs/home/adams/arch-spock-dbg-cray-kokkos.py", line 27, in >>>>>>>> >>>>>>>> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64', >>>>>>>> TypeError: bad operand type for unary +: 'str' >>>>>>>> >>>>>>>> On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini < >>>>>>>> stefano.zampini at gmail.com> wrote: >>>>>>>> >>>>>>>>> Mark >>>>>>>>> >>>>>>>>> On Spock, you can use >>>>>>>>> https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as >>>>>>>>> a template for your configuration. You need to add libraries as LDFLAGS to >>>>>>>>> resolve the hsa symbols >>>>>>>>> >>>>>>>>> On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> >>>>>>>>> 08:30 jczhang/fix-kokkos-includes= >>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ cd >>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I${ROCM_PATH}/include" >>>>>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>>>>> Checking for shared library support... >>>>>>>>> Building shared library libz.so.1.2.11 with cc. >>>>>>>>> Checking for size_t... Yes. >>>>>>>>> Checking for off64_t... Yes. >>>>>>>>> Checking for fseeko... Yes. >>>>>>>>> Checking for strerror... No. >>>>>>>>> Checking for unistd.h... Yes. >>>>>>>>> Checking for stdarg.h... Yes. >>>>>>>>> Checking whether to use vs[n]printf() or s[n]printf()... using >>>>>>>>> vs[n]printf(). >>>>>>>>> Checking for vsnprintf() in stdio.h... No. >>>>>>>>> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >>>>>>>>> can build but will be open to possible buffer-overflow security >>>>>>>>> vulnerabilities. >>>>>>>>> Checking for return value of vsprintf()... Yes. >>>>>>>>> Checking for attribute(visibility) support... Yes. >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o example.o >>>>>>>>> test/example.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> 
-DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o minigzip.o >>>>>>>>> test/minigzip.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/adler32.o adler32.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/crc32.o crc32.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/deflate.o deflate.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/infback.o infback.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/inffast.o inffast.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/inflate.o inflate.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/inftrees.o inftrees.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/trees.o trees.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/zutil.o zutil.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/compress.o compress.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/uncompr.o uncompr.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/gzclose.o gzclose.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/gzlib.o gzlib.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC 
>>>>>>>>> -c -o objs/gzread.o gzread.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>> -c -o objs/gzwrite.o gzwrite.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>>>>>> example64.o test/example.c >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>>>>>> minigzip64.o test/minigzip.c >>>>>>>>> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o >>>>>>>>> inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o >>>>>>>>> gzread.o gzwrite.o >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >>>>>>>>> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC >>>>>>>>> -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o >>>>>>>>> libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo >>>>>>>>> inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo >>>>>>>>> gzlib.lo gzread.lo gzwrite.lo -lc >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. >>>>>>>>> libz.a >>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
>>>>>>>>> libz.a >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_signal_load_scacquire >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>> hsa_amd_memory_pool_allocate >>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>>>> >>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>> hsa_amd_agent_iterate_memory_pools >>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>>>> >>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>> hsa_iterate_agents >>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> >>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>> hsa_signal_load_scacquire >>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>>> >>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>> hsa_amd_memory_unlock >>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>>> >>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>> hsa_signal_destroy >>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> clang-11: error: linker command failed with exit code 1 
(use -v to >>>>>>>>> see invocation) >>>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>>>>> see invocation) >>>>>>>>> gmake: *** [Makefile:292: minigzip] Error 1 >>>>>>>>> gmake: *** Waiting for unfinished jobs.... >>>>>>>>> gmake: *** [Makefile:289: example] Error 1 >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_signal_load_scacquire >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>>>>> see invocation) >>>>>>>>> gmake: *** [Makefile:304: minigzip64] Error 1 >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_pool_allocate >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_signal_load_scacquire >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: 
/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v to >>>>>>>>> see invocation) >>>>>>>>> gmake: *** [Makefile:301: example64] Error 1 >>>>>>>>> rm -f libz.so libz.so.1 >>>>>>>>> ln -s libz.so.1.2.11 libz.so >>>>>>>>> ln -s libz.so.1.2.11 libz.so.1 >>>>>>>>> 11:03 2 jczhang/fix-kokkos-includes= >>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ >>>>>>>>> >>>>>>>>> On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> It is hard to see the error. I suspect it is something crazy with >>>>>>>>>> the install. Can you run the build by hand? >>>>>>>>>> >>>>>>>>>> cd >>>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I${ROCM_PATH}/include" >>>>>>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>>>>>> >>>>>>>>>> and see what happens, and what the error code is? >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> >>>>>>>>>> Matt >>>>>>>>>> >>>>>>>>>> On Wed, Jul 7, 2021 at 8:48 AM Mark Adams >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> Also, this is in jczhang/fix-kokkos-includes >>>>>>>>>>> >>>>>>>>>>> On Wed, Jul 7, 2021 at 8:46 AM Mark Adams >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>> Apparently the same error with >>>>>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>>> >>>>>>>>>>>> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > >>>>>>>>>>>>> zlib-1.2.11.tar.gz >>>>>>>>>>>>> % Total % Received % Xferd Average Speed Time Time >>>>>>>>>>>>> Time Current >>>>>>>>>>>>> Dload Upload Total >>>>>>>>>>>>> Spent Left Speed >>>>>>>>>>>>> 100 593k 100 593k 0 0 835k 0 --:--:-- >>>>>>>>>>>>> --:--:-- --:--:-- 834k >>>>>>>>>>>>> ~/Src/petsc* >>>>>>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>>>>>> $ tar -zxf zlib-1.2.11.tar.gz >>>>>>>>>>>>> ~/Src/petsc* >>>>>>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>>>>>> $ ls zlib-1.2.11 >>>>>>>>>>>>> CMakeLists.txt adler32.c deflate.c gzread.c >>>>>>>>>>>>> inflate.h os400 watcom zlib.h >>>>>>>>>>>>> ChangeLog amiga deflate.h gzwrite.c >>>>>>>>>>>>> inftrees.c qnx win32 zlib.map >>>>>>>>>>>>> FAQ compress.c doc infback.c >>>>>>>>>>>>> inftrees.h test zconf.h >>>>>>>>>>>>> zlib.pc.cmakein >>>>>>>>>>>>> INDEX configure examples inffast.c >>>>>>>>>>>>> make_vms.com treebuild.xml zconf.h.cmakein zlib.pc.in >>>>>>>>>>>>> Makefile contrib gzclose.c inffast.h >>>>>>>>>>>>> msdos trees.c zconf.h.in zlib2ansi >>>>>>>>>>>>> Makefile.in crc32.c gzguts.h inffixed.h >>>>>>>>>>>>> nintendods trees.h zlib.3 zutil.c >>>>>>>>>>>>> README crc32.h gzlib.c inflate.c >>>>>>>>>>>>> old uncompr.c zlib.3.pdf zutil.h >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Jul 6, 2021, at 7:57 PM, Mark Adams >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> 
>>>>>>>>>>>>> >>>>>>>>>>>>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Mark, >>>>>>>>>>>>>> >>>>>>>>>>>>>> You can try what the configure error message should be >>>>>>>>>>>>>> suggesting (it is not clear if that is being printed to your screen or no). >>>>>>>>>>>>>> >>>>>>>>>>>>>> ERROR: Unable to download package ZLIB from: >>>>>>>>>>>>>> http://www.zlib.net/zlib-1.2.11.tar.gz >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> My browser can not open this and I could not see a download >>>>>>>>>>>>> button on this site. >>>>>>>>>>>>> >>>>>>>>>>>>> Can you download this? >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> * If URL specified manually - perhaps there is a typo? >>>>>>>>>>>>>> * If your network is disconnected - please reconnect and >>>>>>>>>>>>>> rerun ./configure >>>>>>>>>>>>>> * Or perhaps you have a firewall blocking the download >>>>>>>>>>>>>> * You can run with --with-packages-download-dir=/adirectory >>>>>>>>>>>>>> and ./configure will instruct you what packages to download manually >>>>>>>>>>>>>> * or you can download the above URL manually, to >>>>>>>>>>>>>> /yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>>>>> and use the configure option: >>>>>>>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>>>>> >>>>>>>>>>>>>> Barry >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams >>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > I am getting some sort of error in build zlib on Spock at >>>>>>>>>>>>>> ORNL. >>>>>>>>>>>>>> > Other libraries are downloaded and I am sure the network is >>>>>>>>>>>>>> fine. >>>>>>>>>>>>>> > Any ideas? >>>>>>>>>>>>>> > Thanks, >>>>>>>>>>>>>> > Mark >>>>>>>>>>>>>> > >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>>> experiments lead. >>>>>>>>>> -- Norbert Wiener >>>>>>>>>> >>>>>>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>>>> >>>> >>> >>> >>> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Jul 7 14:45:21 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Jul 2021 15:45:21 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: I do see this: 15:43 /sw/spock/spack-envs/views/rocm-4.1.0/lib$ nm libhsa-runtime64.so | grep -n hsa_signal_load_scacquir 349:0000000000074de0 T hsa_signal_load_scacquire 1491:000000000004bef0 t _ZN4rocr3HSA25hsa_signal_load_scacquireE12hsa_signal_s 15:43 /sw/spock/spack-envs/views/rocm-4.1.0/lib$ On Wed, Jul 7, 2021 at 3:45 PM Mark Adams wrote: > No diffs. 
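The nm output just above confirms the symbol is exported ("T") by libhsa-runtime64.so in ${ROCM_PATH}/lib, so for reference here is a sketch of the corrected LDFLAGS line for a user configure script, with the extra comma removed and the '/' before lib restored, as discussed earlier in the thread. Only the '--LDFLAGS=...' option string is taken from the thread; the surrounding configure_options list is the assumed shape of such a script, not a copy of arch-olcf-spock.py.

import os

# Assumed shape of a user configure script (e.g. ~/arch-spock-dbg-cray-kokkos.py);
# only the LDFLAGS entry is from the thread. Note: plain string concatenation,
# no stray comma before the '+', and a '/' before 'lib'.
rocm = os.environ.get('ROCM_PATH', '/sw/spock/spack-envs/views/rocm-4.1.0')

configure_options = [
    # was: '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64',
    #      -> TypeError: bad operand type for unary +: 'str'
    '--LDFLAGS=-L' + rocm + '/lib -lhsa-runtime64',
]

print(configure_options)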
I added this: > > diff --git a/config/BuildSystem/config/packages/zlib.py > b/config/BuildSystem/config/packages/zlib.py > index fbf9bdf4a0..b76d362536 100644 > --- a/config/BuildSystem/config/packages/zlib.py > +++ b/config/BuildSystem/config/packages/zlib.py > @@ -25,6 +25,7 @@ class Configure(config.package.Package): > self.pushLanguage('C') > args.append('CC="'+self.getCompiler()+'"') > > args.append('CFLAGS="'+self.updatePackageCFlags(self.getCompilerFlags())+'"') > + args.append('LDFLAGS="'+self.getLinkerFlags()+'"') > args.append('prefix="'+self.installDir+'"') > self.popLanguage() > args=' '.join(args) > lines 1-12/12 (END) > > but it still fails. > > 15:20 jczhang/fix-kokkos-includes *= > /gpfs/alpine/csc314/scratch/adams/petsc$ cd > /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 > && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 > -I${ROCM_PATH}/include" > prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" > ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install > Checking for shared library support... > Building shared library libz.so.1.2.11 with cc. > Checking for size_t... Yes. > Checking for off64_t... Yes. > Checking for fseeko... Yes. > Checking for strerror... No. > Checking for unistd.h... Yes. > Checking for stdarg.h... Yes. > Checking whether to use vs[n]printf() or s[n]printf()... using > vs[n]printf(). > Checking for vsnprintf() in stdio.h... No. > WARNING: vsnprintf() not found, falling back to vsprintf(). zlib > can build but will be open to possible buffer-overflow security > vulnerabilities. > Checking for return value of vsprintf()... Yes. > Checking for attribute(visibility) support... Yes. > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o example.o > test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o minigzip.o > test/minigzip.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-D_FILE_OFFSET_BITS=64 -c -o > example64.o test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o > minigzip64.o test/minigzip.c > ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o > inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o > gzwrite.o > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a > cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC > -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o > libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo > inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo > gzlib.lo gzread.lo gzwrite.lo -lc > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. > libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. > libz.a > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_lock > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > > ld.lld: ld.lld: error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined]ld.lld > : ld.lld: error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_agents_allow_access [--no-allow-shlib-undefined] > 
/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:292: minigzip] Error 1 > gmake: *** Waiting for unfinished jobs.... > gmake: *** [Makefile:289: example] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:304: minigzip64] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] 
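Along the lines of the single-symbol nm check shown earlier, a small diagnostic sketch that lists which of the hsa_* symbols reported undefined by ld.lld are actually exported by libhsa-runtime64.so under ${ROCM_PATH}/lib. The symbol names and the library location come from this thread; the script itself (use of nm -D through subprocess, variable names) is an assumed helper for illustration, not part of PETSc or zlib.

import os
import subprocess

# Symbols ld.lld reports as undefined when linking against libmpi_gtl_hsa.so
# (taken from the log above).
symbols = [
    'hsa_amd_memory_pool_allocate', 'hsa_amd_agent_iterate_memory_pools',
    'hsa_iterate_agents', 'hsa_signal_load_scacquire',
    'hsa_amd_memory_unlock', 'hsa_signal_destroy',
    'hsa_amd_memory_pool_get_info', 'hsa_amd_memory_lock',
    'hsa_amd_memory_pool_free', 'hsa_amd_agents_allow_access',
]

rocm = os.environ.get('ROCM_PATH', '/sw/spock/spack-envs/views/rocm-4.1.0')
lib = os.path.join(rocm, 'lib', 'libhsa-runtime64.so')

# 'nm -D' prints the dynamic symbol table; 'T' entries are exported definitions,
# e.g. "0000000000074de0 T hsa_signal_load_scacquire".
out = subprocess.run(['nm', '-D', lib], capture_output=True, text=True).stdout
exported = {line.split()[-1] for line in out.splitlines() if ' T ' in line}

for name in symbols:
    status = 'exported by' if name in exported else 'MISSING from'
    print(name, status, lib)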
> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:301: example64] Error 1 > rm -f libz.so libz.so.1 > ln -s libz.so.1.2.11 libz.so > > On Wed, Jul 7, 2021 at 2:03 PM Matthew Knepley wrote: > >> On Wed, Jul 7, 2021 at 1:40 PM Mark Adams wrote: >> >>> >>> >>> On Wed, Jul 7, 2021 at 1:26 PM Barry Smith wrote: >>> >>>> >>>> You will need to pass the -L arguments appropriately to zlib's >>>> ./configure so it can link its shared library appropriately. That is, the >>>> zlib configure requires the value obtained with >>>> L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', >>>> >>> >>> It's not clear to me how to do that. I added the -L to my configure >>> script. It is not clear to me how to modify Matt's command. >>> >> >> Can you try this? >> >> knepley/feature-orientation-rethink *$:/PETSc3/petsc/petsc-dev$ git diff >> config/BuildSystem/config/packages/zlib.py >> diff --git a/config/BuildSystem/config/packages/zlib.py >> b/config/BuildSystem/config/packages/zlib.py >> index fbf9bdf4a0a..b76d3625364 100644 >> --- a/config/BuildSystem/config/packages/zlib.py >> +++ b/config/BuildSystem/config/packages/zlib.py >> @@ -25,6 +25,7 @@ class Configure(config.package.Package): >> >> self.pushLanguage('C') >> args.append('CC="'+self.getCompiler()+'"') >> >> args.append('CFLAGS="'+self.updatePackageCFlags(self.getCompilerFlags())+'"') >> + args.append('LDFLAGS="'+self.getLinkerFlags()+'"') >> args.append('prefix="'+self.installDir+'"') >> self.popLanguage() >> args=' '.join(args) >> >> Matt >> >> >>> >>>> On Jul 7, 2021, at 12:18 PM, Mark Adams wrote: >>>> >>>> Well, still getting these hsa errors: >>>> >>>> 13:07 jczhang/fix-kokkos-includes= >>>> /gpfs/alpine/csc314/scratch/adams/petsc$ !136 >>>> cd >>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I${ROCM_PATH}/include" >>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>> Checking for shared library support... >>>> Building shared library libz.so.1.2.11 with cc. >>>> Checking for size_t... Yes. >>>> Checking for off64_t... Yes. >>>> Checking for fseeko... Yes. >>>> Checking for strerror... No. >>>> Checking for unistd.h... Yes. >>>> Checking for stdarg.h... Yes. >>>> Checking whether to use vs[n]printf() or s[n]printf()... 
using >>>> vs[n]printf(). >>>> Checking for vsnprintf() in stdio.h... No. >>>> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >>>> can build but will be open to possible buffer-overflow security >>>> vulnerabilities. >>>> Checking for return value of vsprintf()... Yes. >>>> Checking for attribute(visibility) support... Yes. >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o >>>> test/example.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> 
-I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o minigzip.o >>>> test/minigzip.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/adler32.o adler32.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/crc32.o crc32.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/deflate.o deflate.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/infback.o infback.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/inffast.o inffast.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/inflate.o inflate.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/inftrees.o inftrees.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/trees.o trees.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/zutil.o zutil.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/compress.o compress.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/uncompr.o uncompr.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/gzclose.o gzclose.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/gzlib.o gzlib.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN 
-DPIC >>>> -c -o objs/gzread.o gzread.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/gzwrite.o gzwrite.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>> example64.o test/example.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>> minigzip64.o test/minigzip.c >>>> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o >>>> inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o >>>> gzwrite.o >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >>>> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC >>>> -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o >>>> libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo >>>> inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo >>>> gzlib.lo gzread.lo gzwrite.lo -lc >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. >>>> libz.a >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
>>>> libz.a >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_allocate >>>> [--no-allow-shlib-undefined] >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_memory_pool_allocate >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>> [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_agent_iterate_memory_pools >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_iterate_agents >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_load_scacquire >>>> [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_signal_load_scacquire >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_memory_unlock >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_signal_destroy >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_get_info >>>> [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_memory_pool_get_info >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_memory_lock >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_memory_pool_free >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agents_allow_access >>>> [--no-allow-shlib-undefined] >>>> >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agents_allow_access >>>> [--no-allow-shlib-undefined] >>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>> invocation) >>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>> invocation) >>>> gmake: *** [Makefile:289: example] Error 1 >>>> gmake: *** Waiting for unfinished jobs.... 
>>>> gmake: *** [Makefile:292: minigzip] Error 1 >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_allocate >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_load_scacquire >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_get_info >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agents_allow_access >>>> [--no-allow-shlib-undefined] >>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>> invocation) >>>> gmake: *** [Makefile:304: minigzip64] Error 1 >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_allocate >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_load_scacquire >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_get_info >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agents_allow_access >>>> [--no-allow-shlib-undefined] >>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>> invocation) >>>> gmake: *** [Makefile:301: example64] Error 1 >>>> rm -f libz.so libz.so.1 >>>> ln -s libz.so.1.2.11 libz.so >>>> ln -s 
libz.so.1.2.11 libz.so.1 >>>> >>>> On Wed, Jul 7, 2021 at 1:05 PM Mark Adams wrote: >>>> >>>>> Thanks, it was missing the / >>>>> >>>>> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', >>>>> >>>>> On Wed, Jul 7, 2021 at 12:48 PM Matthew Knepley >>>>> wrote: >>>>> >>>>>> Did you look in /sw/spock/spack-envs/views/rocm-4.1.0lib ? >>>>>> >>>>>> Matt >>>>>> >>>>>> On Wed, Jul 7, 2021 at 12:29 PM Mark Adams wrote: >>>>>> >>>>>>> Ok, I tried that but now I get this error. >>>>>>> >>>>>>> On Wed, Jul 7, 2021 at 12:13 PM Stefano Zampini < >>>>>>> stefano.zampini at gmail.com> wrote: >>>>>>> >>>>>>>> There's an extra comma >>>>>>>> >>>>>>>> Il Mer 7 Lug 2021, 18:08 Mark Adams ha scritto: >>>>>>>> >>>>>>>>> Humm, I get this error (I just copied your whole file into here): >>>>>>>>> >>>>>>>>> 12:06 jczhang/fix-kokkos-includes= >>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ ~/arch-spock-dbg-cray-kokkos.py >>>>>>>>> Traceback (most recent call last): >>>>>>>>> File "/ccs/home/adams/arch-spock-dbg-cray-kokkos.py", line 27, >>>>>>>>> in >>>>>>>>> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64', >>>>>>>>> TypeError: bad operand type for unary +: 'str' >>>>>>>>> >>>>>>>>> On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini < >>>>>>>>> stefano.zampini at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Mark >>>>>>>>>> >>>>>>>>>> On Spock, you can use >>>>>>>>>> https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as >>>>>>>>>> a template for your configuration. You need to add libraries as LDFLAGS to >>>>>>>>>> resolve the hsa symbols >>>>>>>>>> >>>>>>>>>> On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> >>>>>>>>>> 08:30 jczhang/fix-kokkos-includes= >>>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ cd >>>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I${ROCM_PATH}/include" >>>>>>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>>>>>> Checking for shared library support... >>>>>>>>>> Building shared library libz.so.1.2.11 with cc. >>>>>>>>>> Checking for size_t... Yes. >>>>>>>>>> Checking for off64_t... Yes. >>>>>>>>>> Checking for fseeko... Yes. >>>>>>>>>> Checking for strerror... No. >>>>>>>>>> Checking for unistd.h... Yes. >>>>>>>>>> Checking for stdarg.h... Yes. >>>>>>>>>> Checking whether to use vs[n]printf() or s[n]printf()... using >>>>>>>>>> vs[n]printf(). >>>>>>>>>> Checking for vsnprintf() in stdio.h... No. >>>>>>>>>> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >>>>>>>>>> can build but will be open to possible buffer-overflow security >>>>>>>>>> vulnerabilities. >>>>>>>>>> Checking for return value of vsprintf()... Yes. >>>>>>>>>> Checking for attribute(visibility) support... Yes. >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o example.o >>>>>>>>>> test/example.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> 
-I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o minigzip.o >>>>>>>>>> test/minigzip.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/adler32.o adler32.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/crc32.o crc32.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/deflate.o deflate.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/infback.o infback.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/inffast.o inffast.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/inflate.o inflate.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/inftrees.o inftrees.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/trees.o trees.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/zutil.o zutil.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/compress.o compress.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/uncompr.o uncompr.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/gzclose.o gzclose.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/gzlib.o gzlib.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> 
-I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/gzread.o gzread.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/gzwrite.o gzwrite.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>>>>>>> example64.o test/example.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>>>>>>> minigzip64.o test/minigzip.c >>>>>>>>>> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o >>>>>>>>>> inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o >>>>>>>>>> gzread.o gzwrite.o >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >>>>>>>>>> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC >>>>>>>>>> -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o >>>>>>>>>> libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo >>>>>>>>>> inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo >>>>>>>>>> gzlib.lo gzread.lo gzwrite.lo -lc >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. >>>>>>>>>> libz.a >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
>>>>>>>>>> libz.a >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_load_scacquire [--no-allow-shlib-undefined] >>>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_allocate >>>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>>>>> >>>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agent_iterate_memory_pools >>>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>>>>> >>>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_iterate_agents >>>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>>> >>>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_load_scacquire >>>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>>>> >>>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_unlock >>>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>>>> >>>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_destroy >>>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>>> >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agents_allow_access 
[--no-allow-shlib-undefined] >>>>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v >>>>>>>>>> to see invocation) >>>>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v >>>>>>>>>> to see invocation) >>>>>>>>>> gmake: *** [Makefile:292: minigzip] Error 1 >>>>>>>>>> gmake: *** Waiting for unfinished jobs.... >>>>>>>>>> gmake: *** [Makefile:289: example] Error 1 >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_load_scacquire [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agents_allow_access [--no-allow-shlib-undefined] >>>>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v >>>>>>>>>> to see invocation) >>>>>>>>>> gmake: *** [Makefile:304: minigzip64] Error 1 >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_load_scacquire [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_get_info 
[--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agents_allow_access [--no-allow-shlib-undefined] >>>>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v >>>>>>>>>> to see invocation) >>>>>>>>>> gmake: *** [Makefile:301: example64] Error 1 >>>>>>>>>> rm -f libz.so libz.so.1 >>>>>>>>>> ln -s libz.so.1.2.11 libz.so >>>>>>>>>> ln -s libz.so.1.2.11 libz.so.1 >>>>>>>>>> 11:03 2 jczhang/fix-kokkos-includes= >>>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ >>>>>>>>>> >>>>>>>>>> On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> It is hard to see the error. I suspect it is something crazy >>>>>>>>>>> with the install. Can you run the build by hand? >>>>>>>>>>> >>>>>>>>>>> cd >>>>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>>>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>>> -I${ROCM_PATH}/include" >>>>>>>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>>>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>>>>>>> >>>>>>>>>>> and see what happens, and what the error code is? >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> >>>>>>>>>>> Matt >>>>>>>>>>> >>>>>>>>>>> On Wed, Jul 7, 2021 at 8:48 AM Mark Adams >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>> Also, this is in jczhang/fix-kokkos-includes >>>>>>>>>>>> >>>>>>>>>>>> On Wed, Jul 7, 2021 at 8:46 AM Mark Adams >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Apparently the same error with >>>>>>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>>>> >>>>>>>>>>>>> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > >>>>>>>>>>>>>> zlib-1.2.11.tar.gz >>>>>>>>>>>>>> % Total % Received % Xferd Average Speed Time Time >>>>>>>>>>>>>> Time Current >>>>>>>>>>>>>> Dload Upload Total >>>>>>>>>>>>>> Spent Left Speed >>>>>>>>>>>>>> 100 593k 100 593k 0 0 835k 0 --:--:-- >>>>>>>>>>>>>> --:--:-- --:--:-- 834k >>>>>>>>>>>>>> ~/Src/petsc* >>>>>>>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>>>>>>> $ tar -zxf zlib-1.2.11.tar.gz >>>>>>>>>>>>>> ~/Src/petsc* >>>>>>>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>>>>>>> $ ls zlib-1.2.11 >>>>>>>>>>>>>> CMakeLists.txt adler32.c deflate.c gzread.c >>>>>>>>>>>>>> inflate.h os400 watcom zlib.h >>>>>>>>>>>>>> ChangeLog amiga deflate.h gzwrite.c >>>>>>>>>>>>>> inftrees.c qnx win32 zlib.map >>>>>>>>>>>>>> FAQ compress.c doc infback.c >>>>>>>>>>>>>> inftrees.h test zconf.h >>>>>>>>>>>>>> zlib.pc.cmakein >>>>>>>>>>>>>> INDEX configure examples inffast.c >>>>>>>>>>>>>> make_vms.com treebuild.xml zconf.h.cmakein >>>>>>>>>>>>>> zlib.pc.in >>>>>>>>>>>>>> Makefile contrib gzclose.c inffast.h >>>>>>>>>>>>>> msdos trees.c zconf.h.in zlib2ansi 
>>>>>>>>>>>>>> Makefile.in crc32.c gzguts.h inffixed.h >>>>>>>>>>>>>> nintendods trees.h zlib.3 zutil.c >>>>>>>>>>>>>> README crc32.h gzlib.c inflate.c >>>>>>>>>>>>>> old uncompr.c zlib.3.pdf zutil.h >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Jul 6, 2021, at 7:57 PM, Mark Adams >>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith >>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Mark, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> You can try what the configure error message should be >>>>>>>>>>>>>>> suggesting (it is not clear if that is being printed to your screen or no). >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ERROR: Unable to download package ZLIB from: >>>>>>>>>>>>>>> http://www.zlib.net/zlib-1.2.11.tar.gz >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> My browser can not open this and I could not see a download >>>>>>>>>>>>>> button on this site. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Can you download this? >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> * If URL specified manually - perhaps there is a typo? >>>>>>>>>>>>>>> * If your network is disconnected - please reconnect and >>>>>>>>>>>>>>> rerun ./configure >>>>>>>>>>>>>>> * Or perhaps you have a firewall blocking the download >>>>>>>>>>>>>>> * You can run with --with-packages-download-dir=/adirectory >>>>>>>>>>>>>>> and ./configure will instruct you what packages to download manually >>>>>>>>>>>>>>> * or you can download the above URL manually, to >>>>>>>>>>>>>>> /yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>>>>>> and use the configure option: >>>>>>>>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Barry >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams >>>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > I am getting some sort of error in build zlib on Spock at >>>>>>>>>>>>>>> ORNL. >>>>>>>>>>>>>>> > Other libraries are downloaded and I am sure the network >>>>>>>>>>>>>>> is fine. >>>>>>>>>>>>>>> > Any ideas? >>>>>>>>>>>>>>> > Thanks, >>>>>>>>>>>>>>> > Mark >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>>>> experiments lead. >>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>> >>>>>>>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>> >>>>>> >>>>> >>>> >>>> >>>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thibault.bridelbertomeu at gmail.com Wed Jul 7 14:49:39 2021 From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu) Date: Wed, 7 Jul 2021 21:49:39 +0200 Subject: [petsc-users] Scaling of the Petsc Binary Viewer In-Reply-To: References: Message-ID: Hi Dave, Thank you for your fast answer. 
To postprocess the files in python, I use the PetscBinaryIO package that
is provided with PETSc, yes.

I load the file like this :

import numpy as np
import meshio
import PetscBinaryIO as pio
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.cm as cm
mpl.use('Agg')

restartname = "restart_00001001.bin"
print("Reading {} ...".format(restartname))
io = pio.PetscBinaryIO()
fh = open(restartname)
objecttype = io.readObjectType(fh)
data = None
if objecttype == 'Vec':
    data = io.readVec(fh)
print("Size of data = ", data.size)
print("Size of a single variable (4 variables) = ", data.size / 4)
assert(np.isclose(data.size / 4.0, np.floor(data.size / 4.0)))

Then I load the mesh (it's from Gmsh so I use the meshio package) :

meshname = "ForwardFacing.msh"
print("Reading {} ...".format(meshname))
mesh = meshio.read(meshname)
print("Number of vertices = ", mesh.points.shape[0])
print("Number of cells = ", mesh.cells_dict['quad'].shape[0])

From the 'data' and the 'mesh' I use tricontourf from matplotlib to plot
the figure.

I removed the call to ...SetUseMPIIO... and it gives the same kind of data,
yes (I attached a figure of the data obtained with the binary viewer
without MPI I/O).

Maybe it's just a connectivity issue ? Maybe the way the Vec is written by
the PETSc viewer somehow does not match the connectivity from the original
Gmsh file but some other connectivity of the partitioned DMPlex ? If so,
is there a way to get the latter ?

I know the binary viewer does not work on DMPlex, the VTK viewer yields a
corrupted dataset and I have issues with the HDF5 viewer with MPI (see
another recent thread of mine) ...

Thanks again for your help !!

Thibault

On Wed, Jul 7, 2021 at 20:54, Dave May wrote:

>
>
> On Wed 7. Jul 2021 at 20:41, Thibault Bridel-Bertomeu <
> thibault.bridelbertomeu at gmail.com> wrote:
>> Dear all,
>>
>> I have been having issues with large Vec (based on DMPLex) and massive
>> MPI I/O ... it looks like the data that is written by the Petsc Binary
>> Viewer is gibberish for large meshes split on a high number of processes.
>> For instance, I am using a mesh that has around 50 million cells, split on
>> 1024 processors.
>> The computation seems to run fine, the timestep computed from the data
>> makes sense so I think internally everything is fine. But when I look at
>> the solution (one example attached) it's noise - at this point it should
>> show a bow shock developing on the left near the step.
>> The piece of code I use is below for the output : >> >> call DMGetOutputSequenceNumber(dm, save_seqnum, >> save_seqval, ierr); CHKERRA(ierr) >> call DMSetOutputSequenceNumber(dm, -1, 0.d0, ierr); >> CHKERRA(ierr) >> write(filename,'(A,I8.8,A)') "restart_", stepnum, ".bin" >> call PetscViewerCreate(PETSC_COMM_WORLD, binViewer, >> ierr); CHKERRA(ierr) >> call PetscViewerSetType(binViewer, PETSCVIEWERBINARY, >> ierr); CHKERRA(ierr) >> call PetscViewerFileSetMode(binViewer, FILE_MODE_WRITE, >> ierr); CHKERRA(ierr); >> call PetscViewerBinarySetUseMPIIO(binViewer, PETSC_TRUE, >> ierr); CHKERRA(ierr); >> >> > > Do you get the correct output if you don?t call the function above (or > equivalently use PETSC_FALSE) > > > call PetscViewerFileSetName(binViewer, trim(filename), ierr); CHKERRA(ierr) >> call VecView(X, binViewer, ierr); CHKERRA(ierr) >> call PetscViewerDestroy(binViewer, ierr); CHKERRA(ierr) >> call DMSetOutputSequenceNumber(dm, save_seqnum, >> save_seqval, ierr); CHKERRA(ierr) >> >> I do not think there is anything wrong with it but of course I would be >> happy to hear your feedback. >> Nonetheless my question was : how far have you tested the binary mpi i/o >> of a Vec ? Does it make some sense that for a 50 million cell mesh split on >> 1024 processes, it could somehow fail ? >> Or is it my python drawing method that is completely incapable of >> handling this dataset ? (paraview displays the same thing though so I'm not >> sure ...) >> > > Are you using the python provided tools within petsc to load the Vec from > file? > > > Thanks, > Dave > > > >> Thank you very much for your advice and help !!! >> >> Thibault >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: restart_00001001.png Type: image/png Size: 20605 bytes Desc: not available URL: From knepley at gmail.com Wed Jul 7 16:45:52 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 7 Jul 2021 17:45:52 -0400 Subject: [petsc-users] Scaling of the Petsc Binary Viewer In-Reply-To: References: Message-ID: On Wed, Jul 7, 2021 at 3:49 PM Thibault Bridel-Bertomeu < thibault.bridelbertomeu at gmail.com> wrote: > Hi Dave, > > Thank you for your fast answer. > > To postprocess the files in python, I use the PetscBinaryIO package that > is provided with PETSc, yes. > > I load the file like this : > > > > > > > > > *import numpy as npimport meshioimport PetscBinaryIO as pioimport > matplotlib as mplimport matplotlib.pyplot as pltimport matplotlib.cm > as cmmpl.use('Agg')* > > > > > > > > > > > > *restartname = "restart_00001001.bin"print("Reading {} > ...".format(restartname))io = pio.PetscBinaryIO()fh = > open(restartname)objecttype = io.readObjectType(fh)data = Noneif objecttype > == 'Vec': data = io.readVec(fh)print("Size of data = ", > data.size)print("Size of a single variable (4 variables) = ", data.size / > 4)assert(np.isclose(data.size / 4.0, np.floor(data.size / 4.0)))* > > Then I load the mesh (it's from Gmsh so I use the meshio package) : > > > > > > *meshname = "ForwardFacing.msh"print("Reading {} > ...".format(meshname))mesh = meshio.read(meshname)print("Number of vertices > = ", mesh.points.shape[0])print("Number of cells = ", > mesh.cells_dict['quad'].shape[0])* > > From the 'data' and the 'mesh' I use tricontourf from matplotlib to plot > the figure. > > I removed the call to ...SetUseMPIIO... 
and it gives the same kind of data > yes (I attached a figure of the data obtained with the binary viewer > without MPI I/O). > > Maybe it's just a connectivity issue ? Maybe the way the Vec is written by > the PETSc viewer somehow does not match the connectivity from the ori Gmsh > file but some other connectivity of the partitionned DMPlex ? > Yes, when you distribute the mesh, it gets permuted so that each piece is contiguous. This happens on all meshes (DMDA, DMStag, DMPlex, DMForest). When it is written out, it just concatenates that ordering, or what we usually call the "global order" since it is the order of a global vector. > If so, is there a way to get the latter ? > If you call https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetUseNatural.html before distribution, then a mapping back to the original ordering will be saved. You can use that mapping with a global vector and an original vector https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexGlobalToNaturalBegin.html to get a vector in the original ordering. However, you would also need to understand how you want that output. The ExodusII viewer uses this by default since people how use it (Blaise) generally want that. Most people using HDF5 (me) want the new order since it is faster. Plex ex15 and ex26 show some manipulations using this mapping. > I know the binary viewer does not work on DMPlex, > You can output the Vec in native format, which will use the GlobalToNatural reordering. It will not output the Plex, but you will have the values in the order you expect. > the VTK viewer yields a corrupted dataset > VTK is not supported. We support Paraview through the Xdmf extension to HDF5. > and I have issues with HDF5 viewer with MPI (see another recent thread of > mine) ... > I have not been able to reproduce this yet. Thanks, Matt > Thanks again for your help !! > > Thibault > > Le mer. 7 juil. 2021 ? 20:54, Dave May a ?crit : > >> >> >> On Wed 7. Jul 2021 at 20:41, Thibault Bridel-Bertomeu < >> thibault.bridelbertomeu at gmail.com> wrote: >> >>> Dear all, >>> >>> I have been having issues with large Vec (based on DMPLex) and massive >>> MPI I/O ... it looks like the data that is written by the Petsc Binary >>> Viewer is gibberish for large meshes split on a high number of processes. >>> For instance, I am using a mesh that has around 50 million cells, split on >>> 1024 processors. >>> The computation seems to run fine, the timestep computed from the data >>> makes sense so I think internally everything is fine. But when I look at >>> the solution (one example attached) it's noise - at this point it should >>> show a bow shock developing on the left near the step. 
>>> The piece of code I use is below for the output : >>> >>> call DMGetOutputSequenceNumber(dm, save_seqnum, >>> save_seqval, ierr); CHKERRA(ierr) >>> call DMSetOutputSequenceNumber(dm, -1, 0.d0, ierr); >>> CHKERRA(ierr) >>> write(filename,'(A,I8.8,A)') "restart_", stepnum, ".bin" >>> call PetscViewerCreate(PETSC_COMM_WORLD, binViewer, >>> ierr); CHKERRA(ierr) >>> call PetscViewerSetType(binViewer, PETSCVIEWERBINARY, >>> ierr); CHKERRA(ierr) >>> call PetscViewerFileSetMode(binViewer, FILE_MODE_WRITE, >>> ierr); CHKERRA(ierr); >>> call PetscViewerBinarySetUseMPIIO(binViewer, PETSC_TRUE, >>> ierr); CHKERRA(ierr); >>> >>> >> >> Do you get the correct output if you don?t call the function above (or >> equivalently use PETSC_FALSE) >> >> >> call PetscViewerFileSetName(binViewer, trim(filename), ierr); >>> CHKERRA(ierr) >>> call VecView(X, binViewer, ierr); CHKERRA(ierr) >>> call PetscViewerDestroy(binViewer, ierr); CHKERRA(ierr) >>> call DMSetOutputSequenceNumber(dm, save_seqnum, >>> save_seqval, ierr); CHKERRA(ierr) >>> >>> I do not think there is anything wrong with it but of course I would be >>> happy to hear your feedback. >>> Nonetheless my question was : how far have you tested the binary mpi i/o >>> of a Vec ? Does it make some sense that for a 50 million cell mesh split on >>> 1024 processes, it could somehow fail ? >>> Or is it my python drawing method that is completely incapable of >>> handling this dataset ? (paraview displays the same thing though so I'm not >>> sure ...) >>> >> >> Are you using the python provided tools within petsc to load the Vec from >> file? >> >> >> Thanks, >> Dave >> >> >> >>> Thank you very much for your advice and help !!! >>> >>> Thibault >>> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 7 16:49:42 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 7 Jul 2021 17:49:42 -0400 Subject: [petsc-users] download zlib error In-Reply-To: References: <28B88C0F-5927-4A86-AD7E-C20DD53F3105@petsc.dev> <9EE154E1-E603-4D54-9570-7EE21EE38FB3@petsc.dev> Message-ID: On Wed, Jul 7, 2021 at 3:45 PM Mark Adams wrote: > No diffs. I added this: > > diff --git a/config/BuildSystem/config/packages/zlib.py > b/config/BuildSystem/config/packages/zlib.py > index fbf9bdf4a0..b76d362536 100644 > --- a/config/BuildSystem/config/packages/zlib.py > +++ b/config/BuildSystem/config/packages/zlib.py > @@ -25,6 +25,7 @@ class Configure(config.package.Package): > self.pushLanguage('C') > args.append('CC="'+self.getCompiler()+'"') > > args.append('CFLAGS="'+self.updatePackageCFlags(self.getCompilerFlags())+'"') > + args.append('LDFLAGS="'+self.getLinkerFlags()+'"') > args.append('prefix="'+self.installDir+'"') > self.popLanguage() > args=' '.join(args) > lines 1-12/12 (END) > > but it still fails. > It is hard to understand how that was added. You can see that 'CC', 'CFLAGS', and 'prefix' are present on the line below, but 'LDFLAGS' is not. That is difficult to reconcile. 
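One way to take the PETSc configure machinery out of the picture would be to
rerun the zlib configure by hand with LDFLAGS exported and see whether the HSA
symbols then resolve. The following is only a sketch: the paths are copied from
your log, the -L/-l value is the one Barry and Stefano suggested earlier, and
it assumes zlib's configure picks LDFLAGS up from the environment.

# Same invocation as in your log, with LDFLAGS added by hand.
# Assumption: the HSA runtime (libhsa-runtime64) lives in ${ROCM_PATH}/lib.
cd /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 && \
  CC="cc" \
  CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 -I${ROCM_PATH}/include" \
  LDFLAGS="-L${ROCM_PATH}/lib -lhsa-runtime64" \
  prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" \
  ./configure && /usr/bin/gmake -j8 && /usr/bin/gmake install

If example and minigzip link cleanly with that, the remaining question is just
why the patched LDFLAGS argument is not showing up in the command that
configure prints.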
Thanks, Matt > 15:20 jczhang/fix-kokkos-includes *= > /gpfs/alpine/csc314/scratch/adams/petsc$ cd > /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 > && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 > -I${ROCM_PATH}/include" > prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" > ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install > Checking for shared library support... > Building shared library libz.so.1.2.11 with cc. > Checking for size_t... Yes. > Checking for off64_t... Yes. > Checking for fseeko... Yes. > Checking for strerror... No. > Checking for unistd.h... Yes. > Checking for stdarg.h... Yes. > Checking whether to use vs[n]printf() or s[n]printf()... using > vs[n]printf(). > Checking for vsnprintf() in stdio.h... No. > WARNING: vsnprintf() not found, falling back to vsprintf(). zlib > can build but will be open to possible buffer-overflow security > vulnerabilities. > Checking for return value of vsprintf()... Yes. > Checking for attribute(visibility) support... Yes. > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o example.o > test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g 
-O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o minigzip.o > test/minigzip.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/adler32.o adler32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/crc32.o crc32.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/deflate.o deflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/infback.o infback.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inffast.o inffast.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inflate.o inflate.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/inftrees.o inftrees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/trees.o trees.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/zutil.o zutil.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/compress.o compress.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/uncompr.o uncompr.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o 
objs/gzclose.o gzclose.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzlib.o gzlib.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzread.o gzread.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC > -c -o objs/gzwrite.o gzwrite.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o > example64.o test/example.c > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o > minigzip64.o test/minigzip.c > ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o > inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o > gzwrite.o > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a > cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC > -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC > -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o > libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo > inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo > gzlib.lo gzread.lo gzwrite.lo -lc > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. > libz.a > cc -fPIC -fstack-protector -Qunused-arguments -g -O0 > -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 > -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
> libz.a > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lldld.lld: : error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_lock > [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > > ld.lld: ld.lld: error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined]ld.lld > : ld.lld: error: error: > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_amd_agents_allow_access [--no-allow-shlib-undefined] > /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to > hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:292: minigzip] Error 1 > gmake: *** Waiting for unfinished jobs.... 
> gmake: *** [Makefile:289: example] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:304: minigzip64] Error 1 > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_allocate > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agent_iterate_memory_pools > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_load_scacquire > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_get_info > [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] > ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: > undefined reference to hsa_amd_agents_allow_access > [--no-allow-shlib-undefined] > clang-11: error: linker command failed with exit code 1 (use -v to see > invocation) > gmake: *** [Makefile:301: example64] Error 1 > rm -f libz.so libz.so.1 > ln -s libz.so.1.2.11 libz.so > > On Wed, Jul 7, 2021 at 2:03 PM Matthew Knepley wrote: > >> On Wed, Jul 7, 2021 at 1:40 PM Mark Adams wrote: >> >>> >>> >>> On Wed, Jul 7, 2021 at 1:26 PM Barry Smith wrote: >>> >>>> >>>> You will 
need to pass the -L arguments appropriately to zlib's >>>> ./configure so it can link its shared library appropriately. That is, the >>>> zlib configure requires the value obtained with >>>> L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', >>>> >>> >>> It's not clear to me how to do that. I added the -L to my configure >>> script. It is not clear to me how to modify Matt's command. >>> >> >> Can you try this? >> >> knepley/feature-orientation-rethink *$:/PETSc3/petsc/petsc-dev$ git diff >> config/BuildSystem/config/packages/zlib.py >> diff --git a/config/BuildSystem/config/packages/zlib.py >> b/config/BuildSystem/config/packages/zlib.py >> index fbf9bdf4a0a..b76d3625364 100644 >> --- a/config/BuildSystem/config/packages/zlib.py >> +++ b/config/BuildSystem/config/packages/zlib.py >> @@ -25,6 +25,7 @@ class Configure(config.package.Package): >> >> self.pushLanguage('C') >> args.append('CC="'+self.getCompiler()+'"') >> >> args.append('CFLAGS="'+self.updatePackageCFlags(self.getCompilerFlags())+'"') >> + args.append('LDFLAGS="'+self.getLinkerFlags()+'"') >> args.append('prefix="'+self.installDir+'"') >> self.popLanguage() >> args=' '.join(args) >> >> Matt >> >> >>> >>>> On Jul 7, 2021, at 12:18 PM, Mark Adams wrote: >>>> >>>> Well, still getting these hsa errors: >>>> >>>> 13:07 jczhang/fix-kokkos-includes= >>>> /gpfs/alpine/csc314/scratch/adams/petsc$ !136 >>>> cd >>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I${ROCM_PATH}/include" >>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>> Checking for shared library support... >>>> Building shared library libz.so.1.2.11 with cc. >>>> Checking for size_t... Yes. >>>> Checking for off64_t... Yes. >>>> Checking for fseeko... Yes. >>>> Checking for strerror... No. >>>> Checking for unistd.h... Yes. >>>> Checking for stdarg.h... Yes. >>>> Checking whether to use vs[n]printf() or s[n]printf()... using >>>> vs[n]printf(). >>>> Checking for vsnprintf() in stdio.h... No. >>>> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >>>> can build but will be open to possible buffer-overflow security >>>> vulnerabilities. >>>> Checking for return value of vsprintf()... Yes. >>>> Checking for attribute(visibility) support... Yes. >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o example.o >>>> test/example.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o minigzip.o >>>> test/minigzip.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/adler32.o adler32.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/crc32.o crc32.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/deflate.o deflate.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/infback.o infback.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/inffast.o inffast.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/inflate.o inflate.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/inftrees.o inftrees.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/trees.o trees.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/zutil.o zutil.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/compress.o compress.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/uncompr.o uncompr.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/gzclose.o gzclose.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/gzlib.o gzlib.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/gzread.o gzread.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>> -c -o objs/gzwrite.o gzwrite.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> 
-I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>> example64.o test/example.c >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>> minigzip64.o test/minigzip.c >>>> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o >>>> inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o >>>> gzwrite.o >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >>>> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC >>>> -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o >>>> libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo >>>> inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo >>>> gzlib.lo gzread.lo gzwrite.lo -lc >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. >>>> libz.a >>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
>>>> libz.a >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_allocate >>>> [--no-allow-shlib-undefined] >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_memory_pool_allocate >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>> [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_agent_iterate_memory_pools >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_iterate_agents >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_load_scacquire >>>> [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_signal_load_scacquire >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_memory_unlock >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_signal_destroy >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_get_info >>>> [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_memory_pool_get_info >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_memory_lock >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>> >>>> ld.lldld.lld: : error: error: >>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>> hsa_amd_memory_pool_free >>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agents_allow_access >>>> [--no-allow-shlib-undefined] >>>> >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agents_allow_access >>>> [--no-allow-shlib-undefined] >>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>> invocation) >>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>> invocation) >>>> gmake: *** [Makefile:289: example] Error 1 >>>> gmake: *** Waiting for unfinished jobs.... 
>>>> gmake: *** [Makefile:292: minigzip] Error 1 >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_allocate >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_load_scacquire >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_get_info >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agents_allow_access >>>> [--no-allow-shlib-undefined] >>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>> invocation) >>>> gmake: *** [Makefile:304: minigzip64] Error 1 >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_allocate >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agent_iterate_memory_pools >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_iterate_agents [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_load_scacquire >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_get_info >>>> [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>> ld.lld: error: /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>> undefined reference to hsa_amd_agents_allow_access >>>> [--no-allow-shlib-undefined] >>>> clang-11: error: linker command failed with exit code 1 (use -v to see >>>> invocation) >>>> gmake: *** [Makefile:301: example64] Error 1 >>>> rm -f libz.so libz.so.1 >>>> ln -s libz.so.1.2.11 libz.so >>>> ln -s 
libz.so.1.2.11 libz.so.1 >>>> >>>> On Wed, Jul 7, 2021 at 1:05 PM Mark Adams wrote: >>>> >>>>> Thanks, it was missing the / >>>>> >>>>> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'/lib -lhsa-runtime64', >>>>> >>>>> On Wed, Jul 7, 2021 at 12:48 PM Matthew Knepley >>>>> wrote: >>>>> >>>>>> Did you look in /sw/spock/spack-envs/views/rocm-4.1.0lib ? >>>>>> >>>>>> Matt >>>>>> >>>>>> On Wed, Jul 7, 2021 at 12:29 PM Mark Adams wrote: >>>>>> >>>>>>> Ok, I tried that but now I get this error. >>>>>>> >>>>>>> On Wed, Jul 7, 2021 at 12:13 PM Stefano Zampini < >>>>>>> stefano.zampini at gmail.com> wrote: >>>>>>> >>>>>>>> There's an extra comma >>>>>>>> >>>>>>>> Il Mer 7 Lug 2021, 18:08 Mark Adams ha scritto: >>>>>>>> >>>>>>>>> Humm, I get this error (I just copied your whole file into here): >>>>>>>>> >>>>>>>>> 12:06 jczhang/fix-kokkos-includes= >>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ ~/arch-spock-dbg-cray-kokkos.py >>>>>>>>> Traceback (most recent call last): >>>>>>>>> File "/ccs/home/adams/arch-spock-dbg-cray-kokkos.py", line 27, >>>>>>>>> in >>>>>>>>> '--LDFLAGS=-L'+os.environ['ROCM_PATH'],+'lib -lhsa-runtime64', >>>>>>>>> TypeError: bad operand type for unary +: 'str' >>>>>>>>> >>>>>>>>> On Wed, Jul 7, 2021 at 11:08 AM Stefano Zampini < >>>>>>>>> stefano.zampini at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Mark >>>>>>>>>> >>>>>>>>>> On Spock, you can use >>>>>>>>>> https://gitlab.com/petsc/petsc/-/blob/main/config/examples/arch-olcf-spock.py as >>>>>>>>>> a template for your configuration. You need to add libraries as LDFLAGS to >>>>>>>>>> resolve the hsa symbols >>>>>>>>>> >>>>>>>>>> On Jul 7, 2021, at 5:04 PM, Mark Adams wrote: >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> >>>>>>>>>> 08:30 jczhang/fix-kokkos-includes= >>>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc$ cd >>>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I${ROCM_PATH}/include" >>>>>>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>>>>>> Checking for shared library support... >>>>>>>>>> Building shared library libz.so.1.2.11 with cc. >>>>>>>>>> Checking for size_t... Yes. >>>>>>>>>> Checking for off64_t... Yes. >>>>>>>>>> Checking for fseeko... Yes. >>>>>>>>>> Checking for strerror... No. >>>>>>>>>> Checking for unistd.h... Yes. >>>>>>>>>> Checking for stdarg.h... Yes. >>>>>>>>>> Checking whether to use vs[n]printf() or s[n]printf()... using >>>>>>>>>> vs[n]printf(). >>>>>>>>>> Checking for vsnprintf() in stdio.h... No. >>>>>>>>>> WARNING: vsnprintf() not found, falling back to vsprintf(). zlib >>>>>>>>>> can build but will be open to possible buffer-overflow security >>>>>>>>>> vulnerabilities. >>>>>>>>>> Checking for return value of vsprintf()... Yes. >>>>>>>>>> Checking for attribute(visibility) support... Yes. >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. 
-c -o example.o >>>>>>>>>> test/example.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o adler32.o adler32.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o crc32.o crc32.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o deflate.o deflate.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o infback.o infback.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inffast.o inffast.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inflate.o inflate.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o inftrees.o inftrees.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o trees.o trees.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o zutil.o zutil.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o compress.o compress.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o uncompr.o uncompr.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzclose.o gzclose.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzlib.o gzlib.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzread.o gzread.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> 
-I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -c -o minigzip.o >>>>>>>>>> test/minigzip.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/adler32.o adler32.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/crc32.o crc32.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/deflate.o deflate.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/infback.o infback.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/inffast.o inffast.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/inflate.o inflate.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/inftrees.o inftrees.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/trees.o trees.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/zutil.o zutil.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/compress.o compress.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/uncompr.o uncompr.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/gzclose.o gzclose.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/gzlib.o gzlib.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> 
-I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/gzread.o gzread.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -DPIC >>>>>>>>>> -c -o objs/gzwrite.o gzwrite.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>>>>>>> example64.o test/example.c >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -I. -D_FILE_OFFSET_BITS=64 -c -o >>>>>>>>>> minigzip64.o test/minigzip.c >>>>>>>>>> ar rc libz.a adler32.o crc32.o deflate.o infback.o inffast.o >>>>>>>>>> inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o >>>>>>>>>> gzread.o gzwrite.o >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example example.o -L. libz.a >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a >>>>>>>>>> cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map -fPIC >>>>>>>>>> -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -fPIC >>>>>>>>>> -D_LARGEFILE64_SOURCE=1 -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o >>>>>>>>>> libz.so.1.2.11 adler32.lo crc32.lo deflate.lo infback.lo inffast.lo >>>>>>>>>> inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo >>>>>>>>>> gzlib.lo gzread.lo gzwrite.lo -lc >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o minigzip64 minigzip64.o -L. >>>>>>>>>> libz.a >>>>>>>>>> cc -fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>> -I/sw/spock/spack-envs/views/rocm-4.1.0/include -D_LARGEFILE64_SOURCE=1 >>>>>>>>>> -DNO_STRERROR -DNO_vsnprintf -DHAVE_HIDDEN -o example64 example64.o -L. 
>>>>>>>>>> libz.a >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_load_scacquire [--no-allow-shlib-undefined] >>>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_allocate >>>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>>> undefined reference to hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>>>>> >>>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agent_iterate_memory_pools >>>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>>> undefined reference to hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>>>>> >>>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_iterate_agents >>>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>>> undefined reference to hsa_amd_memory_pool_get_info >>>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>>> >>>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_load_scacquire >>>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>>> undefined reference to hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>>>> >>>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_unlock >>>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>>> undefined reference to hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>>>> >>>>>>>>>> ld.lldld.lld: : error: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_destroy >>>>>>>>>> [--no-allow-shlib-undefined]/opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: >>>>>>>>>> undefined reference to hsa_amd_agents_allow_access >>>>>>>>>> [--no-allow-shlib-undefined] >>>>>>>>>> >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agents_allow_access 
[--no-allow-shlib-undefined] >>>>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v >>>>>>>>>> to see invocation) >>>>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v >>>>>>>>>> to see invocation) >>>>>>>>>> gmake: *** [Makefile:292: minigzip] Error 1 >>>>>>>>>> gmake: *** Waiting for unfinished jobs.... >>>>>>>>>> gmake: *** [Makefile:289: example] Error 1 >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_load_scacquire [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_get_info [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agents_allow_access [--no-allow-shlib-undefined] >>>>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v >>>>>>>>>> to see invocation) >>>>>>>>>> gmake: *** [Makefile:304: minigzip64] Error 1 >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_allocate [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agent_iterate_memory_pools [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_iterate_agents [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_load_scacquire [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_unlock [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_signal_destroy [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_get_info 
[--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_lock [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_memory_pool_free [--no-allow-shlib-undefined] >>>>>>>>>> ld.lld: error: >>>>>>>>>> /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so: undefined reference to >>>>>>>>>> hsa_amd_agents_allow_access [--no-allow-shlib-undefined] >>>>>>>>>> clang-11: error: linker command failed with exit code 1 (use -v >>>>>>>>>> to see invocation) >>>>>>>>>> gmake: *** [Makefile:301: example64] Error 1 >>>>>>>>>> rm -f libz.so libz.so.1 >>>>>>>>>> ln -s libz.so.1.2.11 libz.so >>>>>>>>>> ln -s libz.so.1.2.11 libz.so.1 >>>>>>>>>> 11:03 2 jczhang/fix-kokkos-includes= >>>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11$ >>>>>>>>>> >>>>>>>>>> On Wed, Jul 7, 2021 at 9:18 AM Matthew Knepley >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> It is hard to see the error. I suspect it is something crazy >>>>>>>>>>> with the install. Can you run the build by hand? >>>>>>>>>>> >>>>>>>>>>> cd >>>>>>>>>>> /gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos/externalpackages/zlib-1.2.11 >>>>>>>>>>> && CC="cc" CFLAGS="-fPIC -fstack-protector -Qunused-arguments -g -O0 >>>>>>>>>>> -I${ROCM_PATH}/include" >>>>>>>>>>> prefix="/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray-kokkos" >>>>>>>>>>> ./configure && /usr/bin/gmake -j8 -l307.2 && /usr/bin/gmake install >>>>>>>>>>> >>>>>>>>>>> and see what happens, and what the error code is? >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> >>>>>>>>>>> Matt >>>>>>>>>>> >>>>>>>>>>> On Wed, Jul 7, 2021 at 8:48 AM Mark Adams >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>> Also, this is in jczhang/fix-kokkos-includes >>>>>>>>>>>> >>>>>>>>>>>> On Wed, Jul 7, 2021 at 8:46 AM Mark Adams >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Apparently the same error with >>>>>>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>>>> >>>>>>>>>>>>> On Tue, Jul 6, 2021 at 11:53 PM Barry Smith >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> $ curl http://www.zlib.net/zlib-1.2.11.tar.gz > >>>>>>>>>>>>>> zlib-1.2.11.tar.gz >>>>>>>>>>>>>> % Total % Received % Xferd Average Speed Time Time >>>>>>>>>>>>>> Time Current >>>>>>>>>>>>>> Dload Upload Total >>>>>>>>>>>>>> Spent Left Speed >>>>>>>>>>>>>> 100 593k 100 593k 0 0 835k 0 --:--:-- >>>>>>>>>>>>>> --:--:-- --:--:-- 834k >>>>>>>>>>>>>> ~/Src/petsc* >>>>>>>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>>>>>>> $ tar -zxf zlib-1.2.11.tar.gz >>>>>>>>>>>>>> ~/Src/petsc* >>>>>>>>>>>>>> (barry/2021-07-03/demonstrate-network-parallel-build=)* >>>>>>>>>>>>>> arch-demonstrate-network-parallel-build >>>>>>>>>>>>>> $ ls zlib-1.2.11 >>>>>>>>>>>>>> CMakeLists.txt adler32.c deflate.c gzread.c >>>>>>>>>>>>>> inflate.h os400 watcom zlib.h >>>>>>>>>>>>>> ChangeLog amiga deflate.h gzwrite.c >>>>>>>>>>>>>> inftrees.c qnx win32 zlib.map >>>>>>>>>>>>>> FAQ compress.c doc infback.c >>>>>>>>>>>>>> inftrees.h test zconf.h >>>>>>>>>>>>>> zlib.pc.cmakein >>>>>>>>>>>>>> INDEX configure examples inffast.c >>>>>>>>>>>>>> make_vms.com treebuild.xml zconf.h.cmakein >>>>>>>>>>>>>> zlib.pc.in >>>>>>>>>>>>>> Makefile contrib gzclose.c inffast.h >>>>>>>>>>>>>> msdos trees.c zconf.h.in zlib2ansi 
>>>>>>>>>>>>>> Makefile.in crc32.c gzguts.h inffixed.h >>>>>>>>>>>>>> nintendods trees.h zlib.3 zutil.c >>>>>>>>>>>>>> README crc32.h gzlib.c inflate.c >>>>>>>>>>>>>> old uncompr.c zlib.3.pdf zutil.h >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Jul 6, 2021, at 7:57 PM, Mark Adams >>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Tue, Jul 6, 2021 at 6:42 PM Barry Smith >>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Mark, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> You can try what the configure error message should be >>>>>>>>>>>>>>> suggesting (it is not clear if that is being printed to your screen or no). >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ERROR: Unable to download package ZLIB from: >>>>>>>>>>>>>>> http://www.zlib.net/zlib-1.2.11.tar.gz >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> My browser can not open this and I could not see a download >>>>>>>>>>>>>> button on this site. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Can you download this? >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> * If URL specified manually - perhaps there is a typo? >>>>>>>>>>>>>>> * If your network is disconnected - please reconnect and >>>>>>>>>>>>>>> rerun ./configure >>>>>>>>>>>>>>> * Or perhaps you have a firewall blocking the download >>>>>>>>>>>>>>> * You can run with --with-packages-download-dir=/adirectory >>>>>>>>>>>>>>> and ./configure will instruct you what packages to download manually >>>>>>>>>>>>>>> * or you can download the above URL manually, to >>>>>>>>>>>>>>> /yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>>>>>> and use the configure option: >>>>>>>>>>>>>>> --download-zlib=/yourselectedlocation/zlib-1.2.11.tar.gz >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Barry >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> > On Jul 6, 2021, at 4:29 PM, Mark Adams >>>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > I am getting some sort of error in build zlib on Spock at >>>>>>>>>>>>>>> ORNL. >>>>>>>>>>>>>>> > Other libraries are downloaded and I am sure the network >>>>>>>>>>>>>>> is fine. >>>>>>>>>>>>>>> > Any ideas? >>>>>>>>>>>>>>> > Thanks, >>>>>>>>>>>>>>> > Mark >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> What most experimenters take for granted before they begin their >>>>>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>>>>> experiments lead. >>>>>>>>>>> -- Norbert Wiener >>>>>>>>>>> >>>>>>>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>> >>>>>> >>>>> >>>> >>>> >>>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
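The undefined hsa_* references above appear to come from the Cray GTL layer (libmpi_gtl_hsa.so), whose symbols are normally resolved by the ROCm HSA runtime (libhsa-runtime64); the zlib example programs are presumably being linked without that runtime on the link line. Two quick checks, sketched here under the assumption of a standard ROCm layout (the paths and the grep target are illustrative, not taken from the log):

    # What does the GTL library itself depend on / leave unresolved?
    ldd /opt/cray/pe/mpich/8.1.4/gtl/lib/libmpi_gtl_hsa.so

    # Does the ROCm HSA runtime export the missing symbols?
    nm -D ${ROCM_PATH}/lib/libhsa-runtime64.so | grep hsa_amd_memory_pool_allocate

If the symbols are there, one possible workaround is to make libhsa-runtime64 visible when zlib's test programs are linked, for example through the site's craype accelerator module (if that is what normally pulls in the HSA runtime) or by adding "-L${ROCM_PATH}/lib -lhsa-runtime64" to the by-hand configure/make command quoted earlier in the thread.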
URL: From knepley at gmail.com Wed Jul 7 16:54:03 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 7 Jul 2021 17:54:03 -0400 Subject: [petsc-users] [EXTERNAL] Re: Problem with PCFIELDSPLIT In-Reply-To: References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> Message-ID: On Wed, Jul 7, 2021 at 2:33 PM Jorti, Zakariae wrote: > Hi Matt, > > > Thanks for your quick reply. > > I have not completely understood your suggestion, could you please > elaborate a bit more? > > For your convenience, here is how I am proceeding for the moment in my > code: > > > TSGetKSP(ts,&ksp); > > KSPGetPC(ksp,&pc); > > PCSetType(pc,PCFIELDSPLIT); > > PCFieldSplitSetDetectSaddlePoint(pc,PETSC_TRUE); > > PCSetUp(pc); > > PCFieldSplitGetSubKSP(pc, &n, &subksp); > > KSPGetPC(subksp[1], &(subpc[1])); > I do not like the two lines above. We should not have to do this. > KSPSetOperators(subksp[1],T,T); > In the above line, I want you to use a separate preconditioning matrix M, instead of T. That way, it will provide the preconditioning matrix for your Schur complement problem. Thanks, Matt > KSPSetUp(subksp[1]); > > PetscFree(subksp); > > TSSolve(ts,X); > > > Thank you. > > Best, > > > Zakariae > ------------------------------ > *From:* Matthew Knepley > *Sent:* Wednesday, July 7, 2021 12:11:10 PM > *To:* Jorti, Zakariae > *Cc:* petsc-users at mcs.anl.gov; Tang, Qi; Tang, Xianzhu > *Subject:* [EXTERNAL] Re: [petsc-users] Problem with PCFIELDSPLIT > > On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> Hi, >> >> >> I am trying to build a PCFIELDSPLIT preconditioner for a matrix >> >> J = [A00 A01] >> >> [A10 A11] >> >> that has the following shape: >> >> >> M_{user}^{-1} = [I -ksp(A00) A01] [ksp(A00) 0] [I >> 0] >> >> [0 I] [0 >> ksp(T)] [-A10 ksp(A00) I ] >> >> >> where T is a user-defined Schur complement approximation that replaces >> the true Schur complement S:= A11 - A10 ksp(A00) A01. >> >> >> I am trying to do something similar to this example (lines 41--45 and >> 116--121): >> https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html >> >> >> The problem I have is that I manage to replace S with T on a >> separate single linear system but not for the linear systems generated by >> my time-dependent PDE. Even if I set the preconditioner M_{user}^{-1} >> correctly, the T matrix gets replaced by S in the preconditioner once I >> call TSSolve. >> >> Do you have any suggestions how to fix this knowing that the matrix J >> does not change over time? >> >> I don't like how it is done in that example for this very reason. > > When I want to use a custom preconditioning matrix for the Schur > complement, I always give a preconditioning matrix M to the outer solve. > Then PCFIELDSPLIT automatically pulls the correct block from M, (1,1) for > the Schur complement, for that preconditioning matrix without > extra code. Can you do this? > > Thanks, > > Matt > >> Many thanks. >> >> >> Best regards, >> >> >> Zakariae >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
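To make the suggestion above concrete: rather than fetching the sub-KSP and calling KSPSetOperators() on it, build a second matrix M with the same 2x2 block layout as J but with the user-defined Schur approximation T in its (1,1) slot, and hand M to the solver as the preconditioning matrix. The rough sketch below is only an illustration, assuming the Jacobian is supplied through TSSetIJacobian() (TSSetRHSJacobian() takes the same pair of matrices) and that the two splits are defined by index sets is0/is1; FormIJacobian and those index sets stand in for the user's own code.

    #include <petscts.h>

    /* The user's Jacobian routine, with the standard TSIJacobian signature. */
    extern PetscErrorCode FormIJacobian(TS,PetscReal,Vec,Vec,PetscReal,Mat,Mat,void*);

    /* Sketch only.  J is the true Jacobian [A00 A01; A10 A11]; M has the same layout
       but carries the Schur approximation T in its (1,1) block:
           M = [A00  A01]
               [A10   T ]                                                          */
    PetscErrorCode SetupFieldSplitWithUserSchur(TS ts, Mat J, Mat M, IS is0, IS is1)
    {
      KSP            ksp;
      PC             pc;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      /* Operator J, preconditioning matrix M: this is the key change. */
      ierr = TSSetIJacobian(ts, J, M, FormIJacobian, NULL);CHKERRQ(ierr);

      ierr = TSGetKSP(ts, &ksp);CHKERRQ(ierr);
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
      ierr = PCFieldSplitSetIS(pc, "0", is0);CHKERRQ(ierr);
      ierr = PCFieldSplitSetIS(pc, "1", is1);CHKERRQ(ierr);
      ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
      /* Precondition the Schur complement with the (1,1) block of M, i.e. with T. */
      ierr = PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_A11, NULL);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

With this setup PCFIELDSPLIT pulls the (1,1) block of M, i.e. T, every time it rebuilds the Schur-complement preconditioner, so nothing has to be reset on the sub-KSP after TSSolve() starts.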
URL: From thibault.bridelbertomeu at gmail.com Thu Jul 8 06:34:48 2021 From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu) Date: Thu, 8 Jul 2021 13:34:48 +0200 Subject: [petsc-users] Scaling of the Petsc Binary Viewer In-Reply-To: References: Message-ID: Hi Matthew, Thank you for your answer ! So I tried to add those steps, and I have the same behavior as the one described in this thread : https://lists.mcs.anl.gov/pipermail/petsc-dev/2015-July/017978.html *[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------[0]PETSC ERROR: Object is in wrong state[0]PETSC ERROR: DM global to natural SF was not created.You must call DMSetUseNatural() before DMPlexDistribute().[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.[0]PETSC ERROR: Petsc Development GIT revision: v3.14.4-671-g707297fd510 GIT Date: 2021-02-24 22:50:05 +0000[0]PETSC ERROR: /ccc/work/cont001/ocre/bridelbert/EULERIAN2D/bin/eulerian2D on a named inti1401 by bridelbert Thu Jul 8 07:50:24 2021[0]PETSC ERROR: Configure options --with-clean=1 --prefix=/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti --with-make-np=8 --with-windows-graphics=0 --with-debugging=1 --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 --PETSC_ARCH=INTI_UNS3D --with-fc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpifort --with-cc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc --with-cxx=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicxx --with-openmp=0 --download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p1.tar.gz --download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz --download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz --download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz --with-cmake-dir=/ccc/products/cmake-3.13.3/system/default[0]PETSC ERROR: #1 DMPlexGlobalToNaturalBegin() line 247 in /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/dm/impls/plex/plexnatural.c[0]PETSC ERROR: #2 User provided function() line 0 in User file* The creation of my DM is as follow : * ! Read mesh from file name 'meshname' call DMPlexCreateFromFile(PETSC_COMM_WORLD, meshname, PETSC_TRUE, dm, ierr); CHKERRA(ierr) ! Distribute on processors call DMSetUseNatural(dm, PETSC_TRUE, ierr) ; CHKERRA(ierr) ! Start with connectivity call DMSetBasicAdjacency(dm, PETSC_TRUE, PETSC_FALSE, ierr) ; CHKERRA(ierr) ! Distribute on processors call DMPlexDistribute(dm, overlap, PETSC_NULL_SF, dmDist, ierr) ; CHKERRA(ierr) ! Security check if (dmDist /= PETSC_NULL_DM) then ! Destroy previous dm call DMDestroy(dm, ierr) ; CHKERRA(ierr) ! Replace with dmDist dm = dmDist end if ! Finalize setup of the object call DMSetFromOptions(dm, ierr) ; CHKERRA(ierr) ! Boundary condition with ghost cells call DMPlexConstructGhostCells(dm, PETSC_NULL_CHARACTER, PETSC_NULL_INTEGER, dmGhost, ierr); CHKERRA(ierr) ! Security check if (dmGhost /= PETSC_NULL_DM) then ! Destroy previous dm call DMDestroy(dm, ierr) ; CHKERRA(ierr) ! 
Replace with dmGhost dm = dmGhost end if* And I write my vector as follow : * call DMCreateGlobalVector(dm, Xnat, ierr); CHKERRA(ierr) call PetscObjectSetName(Xnat, "NaturalSolution", ierr); CHKERRA(ierr) call DMPlexGlobalToNaturalBegin(dm, X, Xnat, ierr); CHKERRA(ierr) call DMPlexGlobalToNaturalEnd(dm, X, Xnat, ierr); CHKERRA(ierr) call DMGetOutputSequenceNumber(dm, save_seqnum, save_seqval, ierr); CHKERRA(ierr) call DMSetOutputSequenceNumber(dm, -1, 0.d0, ierr); CHKERRA(ierr) write(filename,'(A,I8.8,A)') "restart_", stepnum, ".bin" call PetscViewerCreate(PETSC_COMM_WORLD, binViewer, ierr); CHKERRA(ierr) call PetscViewerSetType(binViewer, PETSCVIEWERBINARY, ierr); CHKERRA(ierr) call PetscViewerFileSetMode(binViewer, FILE_MODE_WRITE, ierr); CHKERRA(ierr); call PetscViewerBinarySetUseMPIIO(binViewer, PETSC_TRUE, ierr); CHKERRA(ierr); call PetscViewerFileSetName(binViewer, trim(filename), ierr); CHKERRA(ierr) ! call VecView(X, binViewer, ierr); CHKERRA(ierr) call VecView(Xnat, binViewer, ierr); CHKERRA(ierr) call PetscViewerDestroy(binViewer, ierr); CHKERRA(ierr) call DMSetOutputSequenceNumber(dm, save_seqnum, save_seqval, ierr); CHKERRA(ierr)* Did you find the time to fix the bug you are mentioning in the thread above regarding the passing of the natural property when calling DMPlexConstructGhostCells ? Thanks !! Thibault Le mer. 7 juil. 2021 ? 23:46, Matthew Knepley a ?crit : > On Wed, Jul 7, 2021 at 3:49 PM Thibault Bridel-Bertomeu < > thibault.bridelbertomeu at gmail.com> wrote: > >> Hi Dave, >> >> Thank you for your fast answer. >> >> To postprocess the files in python, I use the PetscBinaryIO package that >> is provided with PETSc, yes. >> >> I load the file like this : >> >> >> >> >> >> >> >> >> *import numpy as npimport meshioimport PetscBinaryIO as pioimport >> matplotlib as mplimport matplotlib.pyplot as pltimport matplotlib.cm >> as cmmpl.use('Agg')* >> >> >> >> >> >> >> >> >> >> >> >> *restartname = "restart_00001001.bin"print("Reading {} >> ...".format(restartname))io = pio.PetscBinaryIO()fh = >> open(restartname)objecttype = io.readObjectType(fh)data = Noneif objecttype >> == 'Vec': data = io.readVec(fh)print("Size of data = ", >> data.size)print("Size of a single variable (4 variables) = ", data.size / >> 4)assert(np.isclose(data.size / 4.0, np.floor(data.size / 4.0)))* >> >> Then I load the mesh (it's from Gmsh so I use the meshio package) : >> >> >> >> >> >> *meshname = "ForwardFacing.msh"print("Reading {} >> ...".format(meshname))mesh = meshio.read(meshname)print("Number of vertices >> = ", mesh.points.shape[0])print("Number of cells = ", >> mesh.cells_dict['quad'].shape[0])* >> >> From the 'data' and the 'mesh' I use tricontourf from matplotlib to plot >> the figure. >> >> I removed the call to ...SetUseMPIIO... and it gives the same kind of >> data yes (I attached a figure of the data obtained with the binary viewer >> without MPI I/O). >> >> Maybe it's just a connectivity issue ? Maybe the way the Vec is written >> by the PETSc viewer somehow does not match the connectivity from the ori >> Gmsh file but some other connectivity of the partitionned DMPlex ? >> > > Yes, when you distribute the mesh, it gets permuted so that each piece is > contiguous. This happens on all meshes (DMDA, DMStag, DMPlex, DMForest). > When it is written out, it just concatenates that ordering, or what we > usually call the "global order" since it is the order of a global vector. > > >> If so, is there a way to get the latter ? 
>> > > If you call > > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetUseNatural.html > > before distribution, then a mapping back to the original ordering will be > saved. You can use > that mapping with a global vector and an original vector > > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexGlobalToNaturalBegin.html > > to get a vector in the original ordering. However, you would also need to > understand how you want that output. > The ExodusII viewer uses this by default since people how use it (Blaise) > generally want that. Most people using > HDF5 (me) want the new order since it is faster. Plex ex15 and ex26 show > some manipulations using this mapping. > > >> I know the binary viewer does not work on DMPlex, >> > > You can output the Vec in native format, which will use the > GlobalToNatural reordering. It will not output the Plex, > but you will have the values in the order you expect. > > >> the VTK viewer yields a corrupted dataset >> > > VTK is not supported. We support Paraview through the Xdmf extension to > HDF5. > > >> and I have issues with HDF5 viewer with MPI (see another recent thread of >> mine) ... >> > > I have not been able to reproduce this yet. > > Thanks, > > Matt > > >> Thanks again for your help !! >> >> Thibault >> >> Le mer. 7 juil. 2021 ? 20:54, Dave May a >> ?crit : >> >>> >>> >>> On Wed 7. Jul 2021 at 20:41, Thibault Bridel-Bertomeu < >>> thibault.bridelbertomeu at gmail.com> wrote: >>> >>>> Dear all, >>>> >>>> I have been having issues with large Vec (based on DMPLex) and massive >>>> MPI I/O ... it looks like the data that is written by the Petsc Binary >>>> Viewer is gibberish for large meshes split on a high number of processes. >>>> For instance, I am using a mesh that has around 50 million cells, split on >>>> 1024 processors. >>>> The computation seems to run fine, the timestep computed from the data >>>> makes sense so I think internally everything is fine. But when I look at >>>> the solution (one example attached) it's noise - at this point it should >>>> show a bow shock developing on the left near the step. >>>> The piece of code I use is below for the output : >>>> >>>> call DMGetOutputSequenceNumber(dm, save_seqnum, >>>> save_seqval, ierr); CHKERRA(ierr) >>>> call DMSetOutputSequenceNumber(dm, -1, 0.d0, ierr); >>>> CHKERRA(ierr) >>>> write(filename,'(A,I8.8,A)') "restart_", stepnum, ".bin" >>>> call PetscViewerCreate(PETSC_COMM_WORLD, binViewer, >>>> ierr); CHKERRA(ierr) >>>> call PetscViewerSetType(binViewer, PETSCVIEWERBINARY, >>>> ierr); CHKERRA(ierr) >>>> call PetscViewerFileSetMode(binViewer, FILE_MODE_WRITE, >>>> ierr); CHKERRA(ierr); >>>> call PetscViewerBinarySetUseMPIIO(binViewer, >>>> PETSC_TRUE, ierr); CHKERRA(ierr); >>>> >>>> >>> >>> Do you get the correct output if you don?t call the function above (or >>> equivalently use PETSC_FALSE) >>> >>> >>> call PetscViewerFileSetName(binViewer, trim(filename), ierr); >>>> CHKERRA(ierr) >>>> call VecView(X, binViewer, ierr); CHKERRA(ierr) >>>> call PetscViewerDestroy(binViewer, ierr); CHKERRA(ierr) >>>> call DMSetOutputSequenceNumber(dm, save_seqnum, >>>> save_seqval, ierr); CHKERRA(ierr) >>>> >>>> I do not think there is anything wrong with it but of course I would be >>>> happy to hear your feedback. >>>> Nonetheless my question was : how far have you tested the binary mpi >>>> i/o of a Vec ? Does it make some sense that for a 50 million cell mesh >>>> split on 1024 processes, it could somehow fail ? 
>>>> Or is it my python drawing method that is completely incapable of >>>> handling this dataset ? (paraview displays the same thing though so I'm not >>>> sure ...) >>>> >>> >>> Are you using the python provided tools within petsc to load the Vec >>> from file? >>> >>> >>> Thanks, >>> Dave >>> >>> >>> >>>> Thank you very much for your advice and help !!! >>>> >>>> Thibault >>>> >>> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacob.fai at gmail.com Thu Jul 8 07:22:55 2021 From: jacob.fai at gmail.com (Jacob Faibussowitsch) Date: Thu, 8 Jul 2021 08:22:55 -0400 Subject: [petsc-users] [Ext] [SLEPc] Computing Smallest Eigenvalue+Eigenvector of Many Small Matrices In-Reply-To: References: <4051E7AF-6A72-4797-A025-03EB63875795@gmail.com> <8735srfc7b.fsf@jedbrown.org> Message-ID: <29A9F254-410D-4EBE-B579-7D0477F2A325@gmail.com> Hi All, Thanks for the suggestions, I will look into dsyevx. Unfortunately cuSolver might be out of the question since I actually need to compute these eigenvalues within a device kernel, although it might be worthwhile to compute the eigenvalues with cusolver as a pre-processing step to my own kernel. Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) > On Jul 7, 2021, at 14:13, Adam Denchfield wrote: > > syevjBatched from cuSolver is quite good once it's configured fine. It's a direct solve for all eigenpairs, works for batches of small matrices with sizes up to (I believe) 32x32. The default CUDA example using it works except if you have "too many" small matrices, in which case you'll overload the GPU memory and need to further batch the calls. I found it to be fast enough for my needs. > > Regards, > Adam Denchfield > Ph.D Student, Physics > University of Illinois in Chicago > B.S. Applied Physics (2018) > Illinois Institute of Technology > Email: adenchfi at hawk.iit.edu > > > On Wed, Jul 7, 2021 at 2:31 AM Jose E. Roman > wrote: > cuSolver has syevjBatched, which seems to fit your purpose. But I have never used it. > > Lanczos is not competitive for such small matrices. > > Jose > > > > El 6 jul 2021, a las 21:56, Jed Brown > escribi?: > > > > Have you tried just calling LAPACK directly? (You could try dsyevx to see if there's something to gain by computing less than all the eigenvalues.) I'm not aware of a batched interface at this time, but that's what you'd want for performance. > > > > Jacob Faibussowitsch > writes: > > > >> Hello PETSc/SLEPc users, > >> > >> Similar to a recent question I am looking for an algorithm to compute the smallest eigenvalue and eigenvector for a bunch of matrices however I have a few extra ?restrictions?. All matrices have the following properties: > >> > >> - All matrices are the same size > >> - All matrices are small (perhaps no larger than 12x12) > >> - All matrices are SPD > >> - I only need the smallest eigenpair > >> > >> So far my best bet seems to be Lanczos but I?m wondering if there is some wunder method I?ve overlooked. > >> > >> Best regards, > >> > >> Jacob Faibussowitsch > >> (Jacob Fai - booss - oh - vitch) > -------------- next part -------------- An HTML attachment was scrubbed... 
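Following up on the LAPACK suggestion in the thread above: dsyevx with RANGE='I' and IL=IU=1 returns only the smallest eigenpair, which matches the "12x12, SPD, smallest pair only" requirements. The sketch below is a CPU-side illustration, not code from the thread; it assumes column-major storage, n <= 12, and the common Fortran calling convention without hidden character-length arguments (link with -llapack, or switch to LAPACKE/cuSOLVER as appropriate).

    #include <stdio.h>

    /* Fortran LAPACK symbol; argument list as in the DSYEVX documentation. */
    extern void dsyevx_(const char *jobz, const char *range, const char *uplo,
                        const int *n, double *a, const int *lda,
                        const double *vl, const double *vu, const int *il, const int *iu,
                        const double *abstol, int *m, double *w, double *z, const int *ldz,
                        double *work, const int *lwork, int *iwork, int *ifail, int *info);

    /* Smallest eigenpair of an n x n SPD matrix a (column-major, n <= 12).
       a is overwritten; the eigenvector is returned in v (length n). */
    static int smallest_eigenpair(int n, double *a, double *lambda, double *v)
    {
      double vl = 0.0, vu = 0.0, abstol = 0.0; /* vl/vu ignored for RANGE='I'; abstol <= 0 uses the default */
      int    il = 1, iu = 1, m = 0, info = 0, lwork = 8 * n;
      double w[12], work[8 * 12];
      int    iwork[5 * 12], ifail[12];

      dsyevx_("V", "I", "L", &n, a, &n, &vl, &vu, &il, &iu, &abstol,
              &m, w, v, &n, work, &lwork, iwork, ifail, &info);
      if (!info && m == 1) *lambda = w[0];
      return info;
    }

    int main(void)
    {
      double a[9] = {4, 1, 0,  1, 3, 1,  0, 1, 2}; /* 3x3 SPD test matrix, column-major */
      double lambda = 0.0, v[3];
      int    info = smallest_eigenpair(3, a, &lambda, v);
      printf("info = %d  smallest eigenvalue = %g  eigenvector = (%g, %g, %g)\n",
             info, lambda, v[0], v[1], v[2]);
      return info;
    }

For many matrices at once on a GPU, the syevjBatched routine mentioned above is the natural counterpart; whether it can be used from inside another device kernel, as required here, is a separate question.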
URL: From knepley at gmail.com Thu Jul 8 16:05:48 2021 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 8 Jul 2021 17:05:48 -0400 Subject: [petsc-users] Scaling of the Petsc Binary Viewer In-Reply-To: References: Message-ID: On Thu, Jul 8, 2021 at 7:35 AM Thibault Bridel-Bertomeu < thibault.bridelbertomeu at gmail.com> wrote: > Hi Matthew, > > Thank you for your answer ! So I tried to add those steps, and I have the > same behavior as the one described in this thread : > > https://lists.mcs.anl.gov/pipermail/petsc-dev/2015-July/017978.html > > > > > > > > > > > > *[0]PETSC ERROR: --------------------- Error Message > --------------------------------------------------------------[0]PETSC > ERROR: Object is in wrong state[0]PETSC ERROR: DM global to natural SF was > not created.You must call DMSetUseNatural() before > DMPlexDistribute().[0]PETSC ERROR: See > https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble > shooting.[0]PETSC ERROR: Petsc Development GIT revision: > v3.14.4-671-g707297fd510 GIT Date: 2021-02-24 22:50:05 +0000[0]PETSC > ERROR: /ccc/work/cont001/ocre/bridelbert/EULERIAN2D/bin/eulerian2D on a > named inti1401 by bridelbert Thu Jul 8 07:50:24 2021[0]PETSC ERROR: > Configure options --with-clean=1 > --prefix=/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti > --with-make-np=8 --with-windows-graphics=0 --with-debugging=1 > --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 > --PETSC_ARCH=INTI_UNS3D > --with-fc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpifort > --with-cc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc > --with-cxx=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicxx > --with-openmp=0 > --download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p1.tar.gz > --download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz > --download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz > --download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz > --with-cmake-dir=/ccc/products/cmake-3.13.3/system/default[0]PETSC ERROR: > #1 DMPlexGlobalToNaturalBegin() line 247 in > /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/dm/impls/plex/plexnatural.c[0]PETSC > ERROR: #2 User provided function() line 0 in User file* > > The creation of my DM is as follow : > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > * ! Read mesh from file name 'meshname' call > DMPlexCreateFromFile(PETSC_COMM_WORLD, meshname, PETSC_TRUE, dm, ierr); > CHKERRA(ierr) ! Distribute on processors call > DMSetUseNatural(dm, PETSC_TRUE, ierr) ; > CHKERRA(ierr) ! Start with connectivity call > DMSetBasicAdjacency(dm, PETSC_TRUE, PETSC_FALSE, ierr) ; > CHKERRA(ierr) ! Distribute on processors call > DMPlexDistribute(dm, overlap, PETSC_NULL_SF, dmDist, ierr) ; > CHKERRA(ierr) ! Security check if (dmDist /= > PETSC_NULL_DM) then ! Destroy previous dm > call DMDestroy(dm, ierr) ; > CHKERRA(ierr) ! Replace with dmDist dm = > dmDist end if ! Finalize setup of the object > call DMSetFromOptions(dm, ierr) > ; CHKERRA(ierr) ! Boundary condition with ghost cells > call DMPlexConstructGhostCells(dm, PETSC_NULL_CHARACTER, > PETSC_NULL_INTEGER, dmGhost, ierr); CHKERRA(ierr) ! Security > check if (dmGhost /= PETSC_NULL_DM) then ! > Destroy previous dm call DMDestroy(dm, ierr) > ; CHKERRA(ierr) ! 
Replace > with dmGhost dm = dmGhost end if* > > And I write my vector as follow : > > > > > > > > > > > > > > > > > * call DMCreateGlobalVector(dm, Xnat, ierr); CHKERRA(ierr) > call PetscObjectSetName(Xnat, "NaturalSolution", ierr); > CHKERRA(ierr) call DMPlexGlobalToNaturalBegin(dm, X, Xnat, > ierr); CHKERRA(ierr) call DMPlexGlobalToNaturalEnd(dm, X, > Xnat, ierr); CHKERRA(ierr) call > DMGetOutputSequenceNumber(dm, save_seqnum, save_seqval, ierr); > CHKERRA(ierr) call DMSetOutputSequenceNumber(dm, -1, 0.d0, > ierr); CHKERRA(ierr) write(filename,'(A,I8.8,A)') > "restart_", stepnum, ".bin" call > PetscViewerCreate(PETSC_COMM_WORLD, binViewer, ierr); CHKERRA(ierr) > call PetscViewerSetType(binViewer, PETSCVIEWERBINARY, ierr); > CHKERRA(ierr) call PetscViewerFileSetMode(binViewer, > FILE_MODE_WRITE, ierr); CHKERRA(ierr); call > PetscViewerBinarySetUseMPIIO(binViewer, PETSC_TRUE, ierr); CHKERRA(ierr); > call PetscViewerFileSetName(binViewer, trim(filename), ierr); > CHKERRA(ierr) ! call VecView(X, binViewer, ierr); > CHKERRA(ierr) call VecView(Xnat, binViewer, ierr); > CHKERRA(ierr) call PetscViewerDestroy(binViewer, ierr); > CHKERRA(ierr) call DMSetOutputSequenceNumber(dm, > save_seqnum, save_seqval, ierr); CHKERRA(ierr)* > > Did you find the time to fix the bug you are mentioning in the thread > above regarding the passing of the natural property when calling > DMPlexConstructGhostCells ? > No, thanks for reminding me. I coded up what I think is a fix, but I do not have a test yet. Maybe you can help me make one. Here is the branch https://gitlab.com/petsc/petsc/-/commits/knepley/fix-plex-natural-fvghost Can you run and see if it fixes that error? Then we can figure out the best way to do your output. Thanks, Matt > Thanks !! > > Thibault > > > Le mer. 7 juil. 2021 ? 23:46, Matthew Knepley a > ?crit : > >> On Wed, Jul 7, 2021 at 3:49 PM Thibault Bridel-Bertomeu < >> thibault.bridelbertomeu at gmail.com> wrote: >> >>> Hi Dave, >>> >>> Thank you for your fast answer. >>> >>> To postprocess the files in python, I use the PetscBinaryIO package that >>> is provided with PETSc, yes. >>> >>> I load the file like this : >>> >>> >>> >>> >>> >>> >>> >>> >>> *import numpy as npimport meshioimport PetscBinaryIO as pioimport >>> matplotlib as mplimport matplotlib.pyplot as pltimport matplotlib.cm >>> as cmmpl.use('Agg')* >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> *restartname = "restart_00001001.bin"print("Reading {} >>> ...".format(restartname))io = pio.PetscBinaryIO()fh = >>> open(restartname)objecttype = io.readObjectType(fh)data = Noneif objecttype >>> == 'Vec': data = io.readVec(fh)print("Size of data = ", >>> data.size)print("Size of a single variable (4 variables) = ", data.size / >>> 4)assert(np.isclose(data.size / 4.0, np.floor(data.size / 4.0)))* >>> >>> Then I load the mesh (it's from Gmsh so I use the meshio package) : >>> >>> >>> >>> >>> >>> *meshname = "ForwardFacing.msh"print("Reading {} >>> ...".format(meshname))mesh = meshio.read(meshname)print("Number of vertices >>> = ", mesh.points.shape[0])print("Number of cells = ", >>> mesh.cells_dict['quad'].shape[0])* >>> >>> From the 'data' and the 'mesh' I use tricontourf from matplotlib to plot >>> the figure. >>> >>> I removed the call to ...SetUseMPIIO... and it gives the same kind of >>> data yes (I attached a figure of the data obtained with the binary viewer >>> without MPI I/O). >>> >>> Maybe it's just a connectivity issue ? 
Maybe the way the Vec is written >>> by the PETSc viewer somehow does not match the connectivity from the ori >>> Gmsh file but some other connectivity of the partitionned DMPlex ? >>> >> >> Yes, when you distribute the mesh, it gets permuted so that each piece is >> contiguous. This happens on all meshes (DMDA, DMStag, DMPlex, DMForest). >> When it is written out, it just concatenates that ordering, or what we >> usually call the "global order" since it is the order of a global vector. >> >> >>> If so, is there a way to get the latter ? >>> >> >> If you call >> >> >> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetUseNatural.html >> >> before distribution, then a mapping back to the original ordering will be >> saved. You can use >> that mapping with a global vector and an original vector >> >> >> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexGlobalToNaturalBegin.html >> >> to get a vector in the original ordering. However, you would also need to >> understand how you want that output. >> The ExodusII viewer uses this by default since people how use it (Blaise) >> generally want that. Most people using >> HDF5 (me) want the new order since it is faster. Plex ex15 and ex26 show >> some manipulations using this mapping. >> >> >>> I know the binary viewer does not work on DMPlex, >>> >> >> You can output the Vec in native format, which will use the >> GlobalToNatural reordering. It will not output the Plex, >> but you will have the values in the order you expect. >> >> >>> the VTK viewer yields a corrupted dataset >>> >> >> VTK is not supported. We support Paraview through the Xdmf extension to >> HDF5. >> >> >>> and I have issues with HDF5 viewer with MPI (see another recent thread >>> of mine) ... >>> >> >> I have not been able to reproduce this yet. >> >> Thanks, >> >> Matt >> >> >>> Thanks again for your help !! >>> >>> Thibault >>> >>> Le mer. 7 juil. 2021 ? 20:54, Dave May a >>> ?crit : >>> >>>> >>>> >>>> On Wed 7. Jul 2021 at 20:41, Thibault Bridel-Bertomeu < >>>> thibault.bridelbertomeu at gmail.com> wrote: >>>> >>>>> Dear all, >>>>> >>>>> I have been having issues with large Vec (based on DMPLex) and massive >>>>> MPI I/O ... it looks like the data that is written by the Petsc Binary >>>>> Viewer is gibberish for large meshes split on a high number of processes. >>>>> For instance, I am using a mesh that has around 50 million cells, split on >>>>> 1024 processors. >>>>> The computation seems to run fine, the timestep computed from the data >>>>> makes sense so I think internally everything is fine. But when I look at >>>>> the solution (one example attached) it's noise - at this point it should >>>>> show a bow shock developing on the left near the step. 
>>>>> The piece of code I use is below for the output : >>>>> >>>>> call DMGetOutputSequenceNumber(dm, save_seqnum, >>>>> save_seqval, ierr); CHKERRA(ierr) >>>>> call DMSetOutputSequenceNumber(dm, -1, 0.d0, ierr); >>>>> CHKERRA(ierr) >>>>> write(filename,'(A,I8.8,A)') "restart_", stepnum, >>>>> ".bin" >>>>> call PetscViewerCreate(PETSC_COMM_WORLD, binViewer, >>>>> ierr); CHKERRA(ierr) >>>>> call PetscViewerSetType(binViewer, PETSCVIEWERBINARY, >>>>> ierr); CHKERRA(ierr) >>>>> call PetscViewerFileSetMode(binViewer, >>>>> FILE_MODE_WRITE, ierr); CHKERRA(ierr); >>>>> call PetscViewerBinarySetUseMPIIO(binViewer, >>>>> PETSC_TRUE, ierr); CHKERRA(ierr); >>>>> >>>>> >>>> >>>> Do you get the correct output if you don?t call the function above (or >>>> equivalently use PETSC_FALSE) >>>> >>>> >>>> call PetscViewerFileSetName(binViewer, trim(filename), ierr); >>>>> CHKERRA(ierr) >>>>> call VecView(X, binViewer, ierr); CHKERRA(ierr) >>>>> call PetscViewerDestroy(binViewer, ierr); CHKERRA(ierr) >>>>> call DMSetOutputSequenceNumber(dm, save_seqnum, >>>>> save_seqval, ierr); CHKERRA(ierr) >>>>> >>>>> I do not think there is anything wrong with it but of course I would >>>>> be happy to hear your feedback. >>>>> Nonetheless my question was : how far have you tested the binary mpi >>>>> i/o of a Vec ? Does it make some sense that for a 50 million cell mesh >>>>> split on 1024 processes, it could somehow fail ? >>>>> Or is it my python drawing method that is completely incapable of >>>>> handling this dataset ? (paraview displays the same thing though so I'm not >>>>> sure ...) >>>>> >>>> >>>> Are you using the python provided tools within petsc to load the Vec >>>> from file? >>>> >>>> >>>> Thanks, >>>> Dave >>>> >>>> >>>> >>>>> Thank you very much for your advice and help !!! >>>>> >>>>> Thibault >>>>> >>>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jekozdon at nps.edu Thu Jul 8 22:17:11 2021 From: jekozdon at nps.edu (Kozdon, Jeremy (CIV)) Date: Fri, 9 Jul 2021 03:17:11 +0000 Subject: [petsc-users] Does petsc duplicate the users communicator? Message-ID: Sorry if this is clearly stated somewhere in the docs, I'm still getting familiar with the petsc codebase and was also unable to find the answer searching (nor could I determine where this would be done in the source). Does petsc duplicate MPI communicators? Or does the users program need to make sure that the communicator remains valid for the life of a petsc object? The attached little test code seems to suggest that there is some duplication of MPI communicators behind the scenes. This came up when working on Julia wrappers for petsc. (Julia has a garbage collector so we need to make sure that references are properly kept if needed.) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: try.c Type: application/octet-stream Size: 2075 bytes Desc: try.c URL: From bsmith at petsc.dev Thu Jul 8 23:21:08 2021 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 8 Jul 2021 23:21:08 -0500 Subject: [petsc-users] Does petsc duplicate the users communicator? In-Reply-To: References: Message-ID: <2BCE8D87-DD32-489D-A777-43F6FFA588F7@petsc.dev> Whenever PETSc is handed a communicator it looks for an attribute inside of the communicator that contains the "PETSc" version of that communicator. If it does not find the attribute it adds an attribute with a new communicator in, if it does find one it increases its reference count by one. The routine it uses to perform this is PetscCommDuplicate(). We do it this was so that PETSc communication will never potentially interfere with the users use of their communicators. PetscCommDestroy() decreases the reference count of the inner communicator by one. So, for example, if you use "comm" to create two PETSc objects, PETSc will create an attribute on "comm" with a new communicator, when both objects are destroy then PetscCommDestroy() will have been called twice and the inner (PETSc) communicator will be destroyed. If someone did Use MPI to create a new communicator VecCreate(comm,...) Use MPI to destroy the new communicator .... VecDestroy() I am not sure what will happen since PETSc keeps a reference to the outer communicator from its own inner communicator. And destroying the user communicator will cause an attempt to destroy the attribute containing the inner PETSc communicator. I had always just assumed the user would not be deleting any MPI communicators they made and pass to PETSc until they were done with PETSc. It may work correctly but may not. The reality is very few MPI codes have complicated life cycles for MPI communicators. Barry > On Jul 8, 2021, at 10:17 PM, Kozdon, Jeremy (CIV) wrote: > > Sorry if this is clearly stated somewhere in the docs, I'm still getting familiar with the petsc codebase and was also unable to find the answer searching (nor could I determine where this would be done in the source). > > Does petsc duplicate MPI communicators? Or does the users program need to make sure that the communicator remains valid for the life of a petsc object? > > The attached little test code seems to suggest that there is some duplication of MPI communicators behind the scenes. > > This came up when working on Julia wrappers for petsc. (Julia has a garbage collector so we need to make sure that references are properly kept if needed.) > > From thibault.bridelbertomeu at gmail.com Fri Jul 9 06:25:52 2021 From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu) Date: Fri, 9 Jul 2021 13:25:52 +0200 Subject: [petsc-users] Scaling of the Petsc Binary Viewer In-Reply-To: References: Message-ID: Hi Matt, Thank you for working that fast ! I pulled your branch and tested your solution but unfortunately it crashed (I did not change the piece of code I sent you before for the creation of the DM). I am sending you the full error listing in a private e-mail so you can see what happened exactly. Thanks !! Thibault Le jeu. 8 juil. 2021 ? 23:06, Matthew Knepley a ?crit : > On Thu, Jul 8, 2021 at 7:35 AM Thibault Bridel-Bertomeu < > thibault.bridelbertomeu at gmail.com> wrote: > >> Hi Matthew, >> >> Thank you for your answer ! 
So I tried to add those steps, and I have the >> same behavior as the one described in this thread : >> >> https://lists.mcs.anl.gov/pipermail/petsc-dev/2015-July/017978.html >> >> >> >> >> >> >> >> >> >> >> >> *[0]PETSC ERROR: --------------------- Error Message >> --------------------------------------------------------------[0]PETSC >> ERROR: Object is in wrong state[0]PETSC ERROR: DM global to natural SF was >> not created.You must call DMSetUseNatural() before >> DMPlexDistribute().[0]PETSC ERROR: See >> https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble >> shooting.[0]PETSC ERROR: Petsc Development GIT revision: >> v3.14.4-671-g707297fd510 GIT Date: 2021-02-24 22:50:05 +0000[0]PETSC >> ERROR: /ccc/work/cont001/ocre/bridelbert/EULERIAN2D/bin/eulerian2D on a >> named inti1401 by bridelbert Thu Jul 8 07:50:24 2021[0]PETSC ERROR: >> Configure options --with-clean=1 >> --prefix=/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti >> --with-make-np=8 --with-windows-graphics=0 --with-debugging=1 >> --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 >> --PETSC_ARCH=INTI_UNS3D >> --with-fc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpifort >> --with-cc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc >> --with-cxx=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicxx >> --with-openmp=0 >> --download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p1.tar.gz >> --download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz >> --download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz >> --download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz >> --with-cmake-dir=/ccc/products/cmake-3.13.3/system/default[0]PETSC ERROR: >> #1 DMPlexGlobalToNaturalBegin() line 247 in >> /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/dm/impls/plex/plexnatural.c[0]PETSC >> ERROR: #2 User provided function() line 0 in User file* >> >> The creation of my DM is as follow : >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> * ! Read mesh from file name 'meshname' call >> DMPlexCreateFromFile(PETSC_COMM_WORLD, meshname, PETSC_TRUE, dm, ierr); >> CHKERRA(ierr) ! Distribute on processors call >> DMSetUseNatural(dm, PETSC_TRUE, ierr) ; >> CHKERRA(ierr) ! Start with connectivity call >> DMSetBasicAdjacency(dm, PETSC_TRUE, PETSC_FALSE, ierr) ; >> CHKERRA(ierr) ! Distribute on processors call >> DMPlexDistribute(dm, overlap, PETSC_NULL_SF, dmDist, ierr) ; >> CHKERRA(ierr) ! Security check if (dmDist /= >> PETSC_NULL_DM) then ! Destroy previous dm >> call DMDestroy(dm, ierr) ; >> CHKERRA(ierr) ! Replace with dmDist dm = >> dmDist end if ! Finalize setup of the object >> call DMSetFromOptions(dm, ierr) >> ; CHKERRA(ierr) ! Boundary condition with ghost cells >> call DMPlexConstructGhostCells(dm, PETSC_NULL_CHARACTER, >> PETSC_NULL_INTEGER, dmGhost, ierr); CHKERRA(ierr) ! Security >> check if (dmGhost /= PETSC_NULL_DM) then ! >> Destroy previous dm call DMDestroy(dm, ierr) >> ; CHKERRA(ierr) ! 
Replace >> with dmGhost dm = dmGhost end if* >> >> And I write my vector as follow : >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> * call DMCreateGlobalVector(dm, Xnat, ierr); >> CHKERRA(ierr) call PetscObjectSetName(Xnat, >> "NaturalSolution", ierr); CHKERRA(ierr) call >> DMPlexGlobalToNaturalBegin(dm, X, Xnat, ierr); CHKERRA(ierr) >> call DMPlexGlobalToNaturalEnd(dm, X, Xnat, ierr); CHKERRA(ierr) >> call DMGetOutputSequenceNumber(dm, save_seqnum, save_seqval, ierr); >> CHKERRA(ierr) call DMSetOutputSequenceNumber(dm, -1, 0.d0, >> ierr); CHKERRA(ierr) write(filename,'(A,I8.8,A)') >> "restart_", stepnum, ".bin" call >> PetscViewerCreate(PETSC_COMM_WORLD, binViewer, ierr); CHKERRA(ierr) >> call PetscViewerSetType(binViewer, PETSCVIEWERBINARY, ierr); >> CHKERRA(ierr) call PetscViewerFileSetMode(binViewer, >> FILE_MODE_WRITE, ierr); CHKERRA(ierr); call >> PetscViewerBinarySetUseMPIIO(binViewer, PETSC_TRUE, ierr); CHKERRA(ierr); >> call PetscViewerFileSetName(binViewer, trim(filename), ierr); >> CHKERRA(ierr) ! call VecView(X, binViewer, ierr); >> CHKERRA(ierr) call VecView(Xnat, binViewer, ierr); >> CHKERRA(ierr) call PetscViewerDestroy(binViewer, ierr); >> CHKERRA(ierr) call DMSetOutputSequenceNumber(dm, >> save_seqnum, save_seqval, ierr); CHKERRA(ierr)* >> >> Did you find the time to fix the bug you are mentioning in the thread >> above regarding the passing of the natural property when calling >> DMPlexConstructGhostCells ? >> > > No, thanks for reminding me. I coded up what I think is a fix, but I do > not have a test yet. Maybe you can help me make one. Here is the branch > > > https://gitlab.com/petsc/petsc/-/commits/knepley/fix-plex-natural-fvghost > > Can you run and see if it fixes that error? Then we can figure out the > best way to do your output. > > Thanks, > > Matt > > >> Thanks !! >> >> Thibault >> >> >> Le mer. 7 juil. 2021 ? 23:46, Matthew Knepley a >> ?crit : >> >>> On Wed, Jul 7, 2021 at 3:49 PM Thibault Bridel-Bertomeu < >>> thibault.bridelbertomeu at gmail.com> wrote: >>> >>>> Hi Dave, >>>> >>>> Thank you for your fast answer. >>>> >>>> To postprocess the files in python, I use the PetscBinaryIO package >>>> that is provided with PETSc, yes. >>>> >>>> I load the file like this : >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> *import numpy as npimport meshioimport PetscBinaryIO as pioimport >>>> matplotlib as mplimport matplotlib.pyplot as pltimport matplotlib.cm >>>> as cmmpl.use('Agg')* >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> *restartname = "restart_00001001.bin"print("Reading {} >>>> ...".format(restartname))io = pio.PetscBinaryIO()fh = >>>> open(restartname)objecttype = io.readObjectType(fh)data = Noneif objecttype >>>> == 'Vec': data = io.readVec(fh)print("Size of data = ", >>>> data.size)print("Size of a single variable (4 variables) = ", data.size / >>>> 4)assert(np.isclose(data.size / 4.0, np.floor(data.size / 4.0)))* >>>> >>>> Then I load the mesh (it's from Gmsh so I use the meshio package) : >>>> >>>> >>>> >>>> >>>> >>>> *meshname = "ForwardFacing.msh"print("Reading {} >>>> ...".format(meshname))mesh = meshio.read(meshname)print("Number of vertices >>>> = ", mesh.points.shape[0])print("Number of cells = ", >>>> mesh.cells_dict['quad'].shape[0])* >>>> >>>> From the 'data' and the 'mesh' I use tricontourf from matplotlib to >>>> plot the figure. >>>> >>>> I removed the call to ...SetUseMPIIO... and it gives the same kind of >>>> data yes (I attached a figure of the data obtained with the binary viewer >>>> without MPI I/O). 
>>>> >>>> Maybe it's just a connectivity issue ? Maybe the way the Vec is written >>>> by the PETSc viewer somehow does not match the connectivity from the ori >>>> Gmsh file but some other connectivity of the partitionned DMPlex ? >>>> >>> >>> Yes, when you distribute the mesh, it gets permuted so that each piece >>> is contiguous. This happens on all meshes (DMDA, DMStag, DMPlex, DMForest). >>> When it is written out, it just concatenates that ordering, or what we >>> usually call the "global order" since it is the order of a global vector. >>> >>> >>>> If so, is there a way to get the latter ? >>>> >>> >>> If you call >>> >>> >>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetUseNatural.html >>> >>> before distribution, then a mapping back to the original ordering will >>> be saved. You can use >>> that mapping with a global vector and an original vector >>> >>> >>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexGlobalToNaturalBegin.html >>> >>> to get a vector in the original ordering. However, you would also need >>> to understand how you want that output. >>> The ExodusII viewer uses this by default since people how use it >>> (Blaise) generally want that. Most people using >>> HDF5 (me) want the new order since it is faster. Plex ex15 and ex26 show >>> some manipulations using this mapping. >>> >>> >>>> I know the binary viewer does not work on DMPlex, >>>> >>> >>> You can output the Vec in native format, which will use the >>> GlobalToNatural reordering. It will not output the Plex, >>> but you will have the values in the order you expect. >>> >>> >>>> the VTK viewer yields a corrupted dataset >>>> >>> >>> VTK is not supported. We support Paraview through the Xdmf extension to >>> HDF5. >>> >>> >>>> and I have issues with HDF5 viewer with MPI (see another recent thread >>>> of mine) ... >>>> >>> >>> I have not been able to reproduce this yet. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thanks again for your help !! >>>> >>>> Thibault >>>> >>>> Le mer. 7 juil. 2021 ? 20:54, Dave May a >>>> ?crit : >>>> >>>>> >>>>> >>>>> On Wed 7. Jul 2021 at 20:41, Thibault Bridel-Bertomeu < >>>>> thibault.bridelbertomeu at gmail.com> wrote: >>>>> >>>>>> Dear all, >>>>>> >>>>>> I have been having issues with large Vec (based on DMPLex) and >>>>>> massive MPI I/O ... it looks like the data that is written by the Petsc >>>>>> Binary Viewer is gibberish for large meshes split on a high number of >>>>>> processes. For instance, I am using a mesh that has around 50 million >>>>>> cells, split on 1024 processors. >>>>>> The computation seems to run fine, the timestep computed from the >>>>>> data makes sense so I think internally everything is fine. But when I look >>>>>> at the solution (one example attached) it's noise - at this point it should >>>>>> show a bow shock developing on the left near the step. 
>>>>>> The piece of code I use is below for the output : >>>>>> >>>>>> call DMGetOutputSequenceNumber(dm, save_seqnum, >>>>>> save_seqval, ierr); CHKERRA(ierr) >>>>>> call DMSetOutputSequenceNumber(dm, -1, 0.d0, ierr); >>>>>> CHKERRA(ierr) >>>>>> write(filename,'(A,I8.8,A)') "restart_", stepnum, >>>>>> ".bin" >>>>>> call PetscViewerCreate(PETSC_COMM_WORLD, binViewer, >>>>>> ierr); CHKERRA(ierr) >>>>>> call PetscViewerSetType(binViewer, PETSCVIEWERBINARY, >>>>>> ierr); CHKERRA(ierr) >>>>>> call PetscViewerFileSetMode(binViewer, >>>>>> FILE_MODE_WRITE, ierr); CHKERRA(ierr); >>>>>> call PetscViewerBinarySetUseMPIIO(binViewer, >>>>>> PETSC_TRUE, ierr); CHKERRA(ierr); >>>>>> >>>>>> >>>>> >>>>> Do you get the correct output if you don?t call the function above (or >>>>> equivalently use PETSC_FALSE) >>>>> >>>>> >>>>> call PetscViewerFileSetName(binViewer, trim(filename), ierr); >>>>>> CHKERRA(ierr) >>>>>> call VecView(X, binViewer, ierr); CHKERRA(ierr) >>>>>> call PetscViewerDestroy(binViewer, ierr); >>>>>> CHKERRA(ierr) >>>>>> call DMSetOutputSequenceNumber(dm, save_seqnum, >>>>>> save_seqval, ierr); CHKERRA(ierr) >>>>>> >>>>>> I do not think there is anything wrong with it but of course I would >>>>>> be happy to hear your feedback. >>>>>> Nonetheless my question was : how far have you tested the binary mpi >>>>>> i/o of a Vec ? Does it make some sense that for a 50 million cell mesh >>>>>> split on 1024 processes, it could somehow fail ? >>>>>> Or is it my python drawing method that is completely incapable of >>>>>> handling this dataset ? (paraview displays the same thing though so I'm not >>>>>> sure ...) >>>>>> >>>>> >>>>> Are you using the python provided tools within petsc to load the Vec >>>>> from file? >>>>> >>>>> >>>>> Thanks, >>>>> Dave >>>>> >>>>> >>>>> >>>>>> Thank you very much for your advice and help !!! >>>>>> >>>>>> Thibault >>>>>> >>>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Fri Jul 9 07:34:30 2021 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Fri, 9 Jul 2021 14:34:30 +0200 Subject: [petsc-users] CUDA running out of memory in PtAP In-Reply-To: References: <8A532350-E75C-46F8-AD18-A0DD0A25B6CC@petsc.dev> <3EC35263-6F44-46D2-A3CA-5C4537D3AF99@gmail.com> Message-ID: <516A005D-2510-4D42-8536-EEFFEDFC3E72@gmail.com> Mark Can you test with https://gitlab.com/petsc/petsc/-/merge_requests/4158? It is off release > On Jul 7, 2021, at 4:24 PM, Mark Adams wrote: > > I think that is a good idea. I am trying to do it myself but it is getting messy. > Thanks, > > On Wed, Jul 7, 2021 at 9:50 AM Stefano Zampini > wrote: > Do you want me to open an MR to handle the sequential case? > >> On Jul 7, 2021, at 3:39 PM, Mark Adams > wrote: >> >> OK, I found where its not protected in sequential. >> >> On Wed, Jul 7, 2021 at 9:25 AM Mark Adams > wrote: >> Thanks, but that did not work. >> >> It looks like this is just in MPIAIJ, but I am using SeqAIJ. ex2 (below) uses PETSC_COMM_SELF everywhere. 
>> >> + srun -G 1 -n 16 -c 1 --cpu-bind=cores --ntasks-per-core=2 /global/homes/m/madams/mps-wrapper.sh ../ex2 -dm_landau_device_type cuda -dm_mat_type aijcusparse -dm_vec_type cuda -log_view -pc_type gamg -ksp_type gmres -pc_gamg_reuse_interpolation -matmatmult_backend_cpu -matptap_backend_cpu -dm_landau_ion_masses .0005,1,1,1,1,1,1,1,1 -dm_landau_ion_charges 1,2,3,4,5,6,7,8,9 -dm_landau_thermal_temps 1,1,1,1,1,1,1,1,1,1 -dm_landau_n 1.000003,.5,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7 >> 0 starting nvidia-cuda-mps-control on cgpu17 >> mps ready: 2021-07-07T06:17:36-07:00 >> masses: e= 9.109e-31; ions in proton mass units: 5.000e-04 1.000e+00 ... >> charges: e=-1.602e-19; charges in elementary units: 1.000e+00 2.000e+00 >> thermal T (K): e= 1.160e+07 i= 1.160e+07 imp= 1.160e+07. v_0= 1.326e+07 n_0= 1.000e+20 t_0= 5.787e-06 domain= 5.000e+00 >> CalculateE j0=0. Ec = 0.050991 >> 0 TS dt 1. time 0. >> 0) species-0: charge density= -1.6054532569865e+01 z-momentum= -1.9059929215360e-19 energy= 2.4178543516210e+04 >> 0) species-1: charge density= 8.0258396545108e+00 z-momentum= 7.0660527288120e-20 energy= 1.2082380663859e+04 >> 0) species-2: charge density= 6.3912608577597e-05 z-momentum= -1.1513901010709e-24 energy= 3.5799558195524e-01 >> 0) species-3: charge density= 9.5868912866395e-05 z-momentum= -1.1513901010709e-24 energy= 3.5799558195524e-01 >> 0) species-4: charge density= 1.2782521715519e-04 z-momentum= -1.1513901010709e-24 energy= 3.5799558195524e-01 >> [7]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [7]PETSC ERROR: GPU resources unavailable >> [7]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. Reports alloc failed; this indicates the GPU has run out resources >> [7]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>> [7]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e GIT Date: 2021-07-06 03:22:54 -0700 >> [7]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu17 by madams Wed Jul 7 06:17:38 2021 >> [7]PETSC ERROR: Configure options --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --FFLAGS=" -g " --COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 --download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc >> [7]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/aijcusparse.cu:2622 >> [7]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1146 >> [7]PETSC ERROR: #3 MatProductSymbolic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 >> [7]PETSC ERROR: #4 MatPtAP() at /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 >> [7]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 >> [7]PETSC ERROR: #6 PCSetUp_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 >> [7]PETSC ERROR: #7 PCSetUp() at /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 >> [7]PETSC ERROR: #8 KSPSetUp() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 >> [7]PETSC ERROR: #9 KSPSolve_Private() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 >> [7]PETSC ERROR: #10 KSPSolve() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 >> [7]PETSC ERROR: #11 SNESSolve_NEWTONLS() at /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 >> [7]PETSC ERROR: #12 SNESSolve() at /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 >> [7]PETSC ERROR: #13 TSTheta_SNESSolve() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 >> [7]PETSC ERROR: #14 TSStep_Theta() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 >> [7]PETSC ERROR: #15 TSStep() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 >> [7]PETSC ERROR: #16 TSSolve() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 >> [7]PETSC ERROR: #17 main() at ex2.c:699 >> [7]PETSC ERROR: PETSc Option Table entries: >> [7]PETSC ERROR: -dm_landau_amr_levels_max 0 >> [7]PETSC ERROR: -dm_landau_amr_post_refine 5 >> [7]PETSC ERROR: -dm_landau_device_type cuda >> [7]PETSC ERROR: -dm_landau_domain_radius 5 >> [7]PETSC ERROR: -dm_landau_Ez 0 >> [7]PETSC ERROR: -dm_landau_ion_charges 1,2,3,4,5,6,7,8,9 >> [7]PETSC ERROR: -dm_landau_ion_masses .0005,1,1,1,1,1,1,1,1 >> [7]PETSC ERROR: -dm_landau_n 1.000003,.5,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7,1e-7 >> [7]PETSC ERROR: -dm_landau_thermal_temps 1,1,1,1,1,1,1,1,1,1 >> [7]PETSC ERROR: -dm_landau_type p4est >> [7]PETSC ERROR: -dm_mat_type aijcusparse >> [7]PETSC ERROR: -dm_preallocate_only >> [7]PETSC ERROR: -dm_vec_type cuda >> [7]PETSC ERROR: -ex2_connor_e_field_units >> [7]PETSC ERROR: -ex2_impurity_index 1 >> [7]PETSC ERROR: -ex2_plot_dt 200 >> [7]PETSC ERROR: -ex2_test_type none >> [7]PETSC ERROR: -ksp_type gmres >> [7]PETSC ERROR: -log_view >> [7]PETSC 
ERROR: -matmatmult_backend_cpu >> [7]PETSC ERROR: -matptap_backend_cpu >> [7]PETSC ERROR: -pc_gamg_reuse_interpolation >> [7]PETSC ERROR: -pc_type gamg >> [7]PETSC ERROR: -petscspace_degree 1 >> [7]PETSC ERROR: -snes_max_it 15 >> [7]PETSC ERROR: -snes_rtol 1.e-6 >> [7]PETSC ERROR: -snes_stol 1.e-6 >> [7]PETSC ERROR: -ts_adapt_scale_solve_failed 0.5 >> [7]PETSC ERROR: -ts_adapt_time_step_increase_delay 5 >> [7]PETSC ERROR: -ts_dt 1 >> [7]PETSC ERROR: -ts_exact_final_time stepover >> [7]PETSC ERROR: -ts_max_snes_failures -1 >> [7]PETSC ERROR: -ts_max_steps 10 >> [7]PETSC ERROR: -ts_max_time 300 >> [7]PETSC ERROR: -ts_rtol 1e-2 >> [7]PETSC ERROR: -ts_type beuler >> >> On Wed, Jul 7, 2021 at 4:07 AM Stefano Zampini > wrote: >> This will select the CPU path >> >> -matmatmult_backend_cpu -matptap_backend_cpu >> >>> On Jul 7, 2021, at 2:43 AM, Mark Adams > wrote: >>> >>> Can I turn off using cuSprarse for RAP? >>> >>> On Tue, Jul 6, 2021 at 6:25 PM Barry Smith > wrote: >>> >>> Stefano has mentioned this before. He reported cuSparse matrix-matrix vector products use a very amount of memory. >>> >>>> On Jul 6, 2021, at 4:33 PM, Mark Adams > wrote: >>>> >>>> I am running out of memory in GAMG. It looks like this is from the new cuSparse RAP. >>>> I was able to run Hypre with twice as much work on the GPU as this run. >>>> Are there parameters to tweek for this perhaps or can I disable it? >>>> >>>> Thanks, >>>> Mark >>>> >>>> 0 SNES Function norm 5.442539952302e-04 >>>> [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >>>> [2]PETSC ERROR: GPU resources unavailable >>>> [2]PETSC ERROR: CUDA error 2 (cudaErrorMemoryAllocation) : out of memory. Reports alloc failed; this indicates the GPU has run out resources >>>> [2]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>>>> [2]PETSC ERROR: Petsc Development GIT revision: v3.15.1-569-g270a066c1e GIT Date: 2021-07-06 03:22:54 -0700 >>>> [2]PETSC ERROR: ../ex2 on a arch-cori-gpu-opt-gcc named cgpu11 by madams Tue Jul 6 13:37:43 2021 >>>> [2]PETSC ERROR: Configure options --with-mpi-dir=/usr/common/software/sles15_cgpu/openmpi/4.0.3/gcc --with-cuda-dir=/usr/common/software/sles15_cgpu/cuda/11.1.1 --CFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECI >>>> ES=10 -DLANDAU_MAX_Q=4" --CXXFLAGS=" -g -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --CUDAFLAGS="-g -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4" --FFLAGS=" -g " - >>>> -COPTFLAGS=" -O3" --CXXOPTFLAGS=" -O3" --FOPTFLAGS=" -O3" --download-fblaslapack=1 --with-debugging=0 --with-mpiexec="srun -G 1" --with-cuda-gencodearch=70 --with-batch=0 --with-cuda=1 --download-p4est=1 -- >>>> download-hypre=1 --with-zlib=1 PETSC_ARCH=arch-cori-gpu-opt-gcc >>>> [2]PETSC ERROR: #1 MatProductSymbolic_SeqAIJCUSPARSE_SeqAIJCUSPARSE() at /global/u2/m/madams/petsc/src/mat/impls/aij/seq/seqcusparse/aijcusparse.cu:2622 >>>> [2]PETSC ERROR: #2 MatProductSymbolic_ABC_Basic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:1159 >>>> [2]PETSC ERROR: #3 MatProductSymbolic() at /global/u2/m/madams/petsc/src/mat/interface/matproduct.c:799 >>>> [2]PETSC ERROR: #4 MatPtAP() at /global/u2/m/madams/petsc/src/mat/interface/matrix.c:9626 >>>> [2]PETSC ERROR: #5 PCGAMGCreateLevel_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:87 >>>> [2]PETSC ERROR: #6 PCSetUp_GAMG() at /global/u2/m/madams/petsc/src/ksp/pc/impls/gamg/gamg.c:663 >>>> [2]PETSC ERROR: #7 PCSetUp() at /global/u2/m/madams/petsc/src/ksp/pc/interface/precon.c:1014 >>>> [2]PETSC ERROR: #8 KSPSetUp() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:406 >>>> [2]PETSC ERROR: #9 KSPSolve_Private() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:850 >>>> [2]PETSC ERROR: #10 KSPSolve() at /global/u2/m/madams/petsc/src/ksp/ksp/interface/itfunc.c:1084 >>>> [2]PETSC ERROR: #11 SNESSolve_NEWTONLS() at /global/u2/m/madams/petsc/src/snes/impls/ls/ls.c:225 >>>> [2]PETSC ERROR: #12 SNESSolve() at /global/u2/m/madams/petsc/src/snes/interface/snes.c:4769 >>>> [2]PETSC ERROR: #13 TSTheta_SNESSolve() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:185 >>>> [2]PETSC ERROR: #14 TSStep_Theta() at /global/u2/m/madams/petsc/src/ts/impls/implicit/theta/theta.c:223 >>>> [2]PETSC ERROR: #15 TSStep() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3571 >>>> [2]PETSC ERROR: #16 TSSolve() at /global/u2/m/madams/petsc/src/ts/interface/ts.c:3968 >>>> [2]PETSC ERROR: #17 main() at ex2.c:699 >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matteo.semplice at uninsubria.it Fri Jul 9 09:03:37 2021 From: matteo.semplice at uninsubria.it (Matteo Semplice) Date: Fri, 9 Jul 2021 16:03:37 +0200 Subject: [petsc-users] best way to output in parallel data from DMDA (levelset) finite difference simulation Message-ID: <5ab6d521-c548-5cf1-6496-7c713906f55f@uninsubria.it> Dear all, ??? it seems it should be a fairly straighforward thing to do but I am struggling with the output of my finite difference simulation. I have tried adapting my XML ascii VTK output routines that work nicely for finite volumes (but I have issues at points where 4 CPU subdomains touch), with an HDF5 PetscViewer (for which I cannot get a correct xdmf file) and few other random attempts, but neither got me a fully satisfactory result. 
Rather than correcting my attempts, I am ready to start afresh, also since this project will end up with much larger meshes than those I am used to. In your experience, what is the best way to output data - associated to a DMDA with more than one scalar fields, so that each variable can be visualized independently (dof 0 to scalar field "A", dof 1 to field "B", etc) - compatible with paraview (or visit or any other free tool on linux, if it need be) - with decent scaling, i.e. Vec data should be written directly by each CPU - binary format so that files are not incredibly huge is a plus - (bonus point) in the (not so) long run this will become a method for an arbitrary? subdomain of the Cartesian grid defined by a level-set function, so a way to distinguish the points of the DMDA that are outside the physical domain would be a bonus (for example I could easily fill the Vec with NaNs for those points as long as the output/visualization can handle this). Thanks in advance! Best ??? Matteo From junchao.zhang at gmail.com Fri Jul 9 10:52:38 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Fri, 9 Jul 2021 10:52:38 -0500 Subject: [petsc-users] Does petsc duplicate the users communicator? In-Reply-To: <2BCE8D87-DD32-489D-A777-43F6FFA588F7@petsc.dev> References: <2BCE8D87-DD32-489D-A777-43F6FFA588F7@petsc.dev> Message-ID: On Thu, Jul 8, 2021 at 11:21 PM Barry Smith wrote: > > Whenever PETSc is handed a communicator it looks for an attribute > inside of the communicator that contains the "PETSc" version of that > communicator. If it does not find the attribute it adds an attribute with a > new communicator in, if it does find one it increases its reference count > by one. The routine it uses to perform this is PetscCommDuplicate(). We do > it this was so that PETSc communication will never potentially interfere > with the users use of their communicators. PetscCommDestroy() decreases the > reference count of the inner communicator by one. So, for example, if you > use "comm" to create two PETSc objects, PETSc will create an attribute on > "comm" with a new communicator, when both objects are destroy then > PetscCommDestroy() will have been called twice and the inner (PETSc) > communicator will be destroyed. > > If someone did > > Use MPI to create a new communicator > VecCreate(comm,...) > Use MPI to destroy the new communicator > .... > VecDestroy() > The code above will work correctly. In 'Use MPI to destroy the new communicator', MPI finds out *comm* has an attribute Petsc_InnerComm_keyval, so it invokes a petsc function Petsc_InnerComm_Attr_Delete_Fn (which was given to MPI at PetscInitialize). In Petsc_InnerComm_Attr_Delete_Fn, it cuts the link between *comm* and its inner petsc comm (which is still used by vec in this example). The inner petsc comm is still valid and accessible via PetscObjectComm(). It will be destroyed when its reference count (managed by petsc) reaches zero (probably in VecDestroy). > I am not sure what will happen since PETSc keeps a reference to the outer > communicator from its own inner communicator. And destroying the user > communicator will cause an attempt to destroy the attribute containing the > inner PETSc communicator. I had always just assumed the user would not be > deleting any MPI communicators they made and pass to PETSc until they were > done with PETSc. It may work correctly but may not. > > The reality is very few MPI codes have complicated life cycles for MPI > communicators. 
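To make that life cycle concrete, here is a minimal sketch assuming the standard PETSc C API; the comments restate the reference counting described above (they are not something the program checks), and the vector size is an arbitrary placeholder.

#include <petscvec.h>

int main(int argc, char **argv)
{
  MPI_Comm       comm;
  Vec            x, y;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* a user-managed communicator handed to PETSc */
  ierr = MPI_Comm_dup(PETSC_COMM_WORLD, &comm); CHKERRQ(ierr);

  /* first object on comm: PetscCommDuplicate() attaches an inner PETSc
     communicator to comm as an attribute (reference count 1) */
  ierr = VecCreate(comm, &x); CHKERRQ(ierr);
  ierr = VecSetSizes(x, PETSC_DECIDE, 100); CHKERRQ(ierr);
  ierr = VecSetFromOptions(x); CHKERRQ(ierr);

  /* second object on comm: the existing inner communicator is found and
     its reference count becomes 2 */
  ierr = VecDuplicate(x, &y); CHKERRQ(ierr);

  /* per the discussion above, freeing the user communicator only cuts the
     link between comm and the inner communicator; both Vecs keep using the
     inner communicator, still reachable via PetscObjectComm() */
  ierr = MPI_Comm_free(&comm); CHKERRQ(ierr);

  /* each destroy decrements the inner communicator's reference count;
     it is freed when the count reaches zero */
  ierr = VecDestroy(&x); CHKERRQ(ierr);
  ierr = VecDestroy(&y); CHKERRQ(ierr);

  ierr = PetscFinalize();
  return ierr;
}
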
> > Barry > > > > On Jul 8, 2021, at 10:17 PM, Kozdon, Jeremy (CIV) > wrote: > > > > Sorry if this is clearly stated somewhere in the docs, I'm still getting > familiar with the petsc codebase and was also unable to find the answer > searching (nor could I determine where this would be done in the source). > > > > Does petsc duplicate MPI communicators? Or does the users program need > to make sure that the communicator remains valid for the life of a petsc > object? > > > > The attached little test code seems to suggest that there is some > duplication of MPI communicators behind the scenes. > > > > This came up when working on Julia wrappers for petsc. (Julia has a > garbage collector so we need to make sure that references are properly kept > if needed.) > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jekozdon at nps.edu Fri Jul 9 12:13:07 2021 From: jekozdon at nps.edu (Kozdon, Jeremy (CIV)) Date: Fri, 9 Jul 2021 17:13:07 +0000 Subject: [petsc-users] Does petsc duplicate the users communicator? In-Reply-To: References: <2BCE8D87-DD32-489D-A777-43F6FFA588F7@petsc.dev> Message-ID: This is all super helpful! Thanks. It seems to me that we do not need to carry around a reference to the communicator in Julia then. Mainly I wanted to use `PetscObjectGetComm` everywhere once `PetscObjects` were created, but someone pointed out this might run in to problems with the garbage collector. In my mind it makes sense to rely on `PetscObjectGetComm` since you don?t know if this is some object that matches the object you originally created or some derived object with a different processor distribution (such as what I believe happens with multigrid). > On Jul 9, 2021, at 8:52 AM, Junchao Zhang wrote: > > > NPS WARNING: *external sender* verify before acting. > > > > > > On Thu, Jul 8, 2021 at 11:21 PM Barry Smith wrote: > > Whenever PETSc is handed a communicator it looks for an attribute inside of the communicator that contains the "PETSc" version of that communicator. If it does not find the attribute it adds an attribute with a new communicator in, if it does find one it increases its reference count by one. The routine it uses to perform this is PetscCommDuplicate(). We do it this was so that PETSc communication will never potentially interfere with the users use of their communicators. PetscCommDestroy() decreases the reference count of the inner communicator by one. So, for example, if you use "comm" to create two PETSc objects, PETSc will create an attribute on "comm" with a new communicator, when both objects are destroy then PetscCommDestroy() will have been called twice and the inner (PETSc) communicator will be destroyed. > > If someone did > > Use MPI to create a new communicator > VecCreate(comm,...) > Use MPI to destroy the new communicator > .... > VecDestroy() > The code above will work correctly. In 'Use MPI to destroy the new communicator', MPI finds out comm has an attribute Petsc_InnerComm_keyval, so it invokes a petsc function Petsc_InnerComm_Attr_Delete_Fn (which was given to MPI at PetscInitialize). > In Petsc_InnerComm_Attr_Delete_Fn, it cuts the link between comm and its inner petsc comm (which is still used by vec in this example). The inner petsc comm is still valid and accessible via PetscObjectComm(). It will be destroyed when its reference count (managed by petsc) reaches zero (probably in VecDestroy). > > > I am not sure what will happen since PETSc keeps a reference to the outer communicator from its own inner communicator. 
And destroying the user communicator will cause an attempt to destroy the attribute containing the inner PETSc communicator. I had always just assumed the user would not be deleting any MPI communicators they made and pass to PETSc until they were done with PETSc. It may work correctly but may not. > > The reality is very few MPI codes have complicated life cycles for MPI communicators. > > Barry > > > > On Jul 8, 2021, at 10:17 PM, Kozdon, Jeremy (CIV) wrote: > > > > Sorry if this is clearly stated somewhere in the docs, I'm still getting familiar with the petsc codebase and was also unable to find the answer searching (nor could I determine where this would be done in the source). > > > > Does petsc duplicate MPI communicators? Or does the users program need to make sure that the communicator remains valid for the life of a petsc object? > > > > The attached little test code seems to suggest that there is some duplication of MPI communicators behind the scenes. > > > > This came up when working on Julia wrappers for petsc. (Julia has a garbage collector so we need to make sure that references are properly kept if needed.) > > > > > From junchao.zhang at gmail.com Fri Jul 9 12:57:09 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Fri, 9 Jul 2021 12:57:09 -0500 Subject: [petsc-users] Does petsc duplicate the users communicator? In-Reply-To: References: <2BCE8D87-DD32-489D-A777-43F6FFA588F7@petsc.dev> Message-ID: On Fri, Jul 9, 2021 at 12:13 PM Kozdon, Jeremy (CIV) wrote: > This is all super helpful! Thanks. > > It seems to me that we do not need to carry around a reference to the > communicator in Julia then. > > Mainly I wanted to use `PetscObjectGetComm` everywhere once `PetscObjects` > were created, but someone pointed out this might run in to problems with > the garbage collector. > > In my mind it makes sense to rely on `PetscObjectGetComm` since you don?t > know if this is some object that matches the object you originally created > or some derived object with a different processor distribution (such as > what I believe happens with multigrid). > I think you are right. > > > On Jul 9, 2021, at 8:52 AM, Junchao Zhang > wrote: > > > > > > NPS WARNING: *external sender* verify before acting. > > > > > > > > > > > > On Thu, Jul 8, 2021 at 11:21 PM Barry Smith wrote: > > > > Whenever PETSc is handed a communicator it looks for an attribute > inside of the communicator that contains the "PETSc" version of that > communicator. If it does not find the attribute it adds an attribute with a > new communicator in, if it does find one it increases its reference count > by one. The routine it uses to perform this is PetscCommDuplicate(). We do > it this was so that PETSc communication will never potentially interfere > with the users use of their communicators. PetscCommDestroy() decreases the > reference count of the inner communicator by one. So, for example, if you > use "comm" to create two PETSc objects, PETSc will create an attribute on > "comm" with a new communicator, when both objects are destroy then > PetscCommDestroy() will have been called twice and the inner (PETSc) > communicator will be destroyed. > > > > If someone did > > > > Use MPI to create a new communicator > > VecCreate(comm,...) > > Use MPI to destroy the new communicator > > .... > > VecDestroy() > > The code above will work correctly. 
In 'Use MPI to destroy the new > communicator', MPI finds out comm has an attribute Petsc_InnerComm_keyval, > so it invokes a petsc function Petsc_InnerComm_Attr_Delete_Fn (which was > given to MPI at PetscInitialize). > > In Petsc_InnerComm_Attr_Delete_Fn, it cuts the link between comm and its > inner petsc comm (which is still used by vec in this example). The inner > petsc comm is still valid and accessible via PetscObjectComm(). It will be > destroyed when its reference count (managed by petsc) reaches zero > (probably in VecDestroy). > > > > > > I am not sure what will happen since PETSc keeps a reference to the > outer communicator from its own inner communicator. And destroying the user > communicator will cause an attempt to destroy the attribute containing the > inner PETSc communicator. I had always just assumed the user would not be > deleting any MPI communicators they made and pass to PETSc until they were > done with PETSc. It may work correctly but may not. > > > > The reality is very few MPI codes have complicated life cycles for MPI > communicators. > > > > Barry > > > > > > > On Jul 8, 2021, at 10:17 PM, Kozdon, Jeremy (CIV) > wrote: > > > > > > Sorry if this is clearly stated somewhere in the docs, I'm still > getting familiar with the petsc codebase and was also unable to find the > answer searching (nor could I determine where this would be done in the > source). > > > > > > Does petsc duplicate MPI communicators? Or does the users program need > to make sure that the communicator remains valid for the life of a petsc > object? > > > > > > The attached little test code seems to suggest that there is some > duplication of MPI communicators behind the scenes. > > > > > > This came up when working on Julia wrappers for petsc. (Julia has a > garbage collector so we need to make sure that references are properly kept > if needed.) > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Sat Jul 10 14:11:55 2021 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 10 Jul 2021 14:11:55 -0500 Subject: [petsc-users] petsc-3.15.2 now available Message-ID: <51437ab-b133-e08d-1124-fece557da8ce@mcs.anl.gov> Dear PETSc users, The patch release petsc-3.15.2 is now available for download. https://petsc.org/release/download/ Satish From matteo.semplice at uninsubria.it Mon Jul 12 10:40:36 2021 From: matteo.semplice at uninsubria.it (Matteo Semplice) Date: Mon, 12 Jul 2021 17:40:36 +0200 Subject: [petsc-users] output DMDA to hdf5 file? Message-ID: <5ca2d4df-3f67-2951-c0b7-a2678d9df335@uninsubria.it> Dear all, ??? I am experimenting with hdf5+xdmf output. At https://www.xdmf.org/index.php/XDMF_Model_and_Format I read that "XDMF uses XML to store Light data and to describe the data Model. Either HDF5[3] or binary files can be used to store Heavy data. The data Format is stored redundantly in both XML and HDF5." However, if I call DMView(dmda,hdf5viewer) and then I run h5ls or h5stat on the resulting h5 file, I see no "geometry" section in the file. How should I write the geometry to the HDF5 file? Here below is what I have tried. Best ??? Matteo //Setup ierr = DMDACreate2d(PETSC_COMM_WORLD, ????????????????????????????? DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, ????????????????????????????? DMDA_STENCIL_STAR, ????????????????????????????? ctx.Nx,ctx.Ny, //global dim ????????????????????????????? PETSC_DECIDE,PETSC_DECIDE, //n proc on each dim ????????????????????????????? 
2,stWidth, //dof, stencil width ????????????????????????????? NULL, NULL, //n nodes per direction on each cpu ????????????????????????????? &(ctx.daAll)); ierr = DMDASetUniformCoordinates(ctx.daAll, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0); CHKERRQ(ierr); ierr = DMDASetFieldName(ctx.daAll,0,"first"); CHKERRQ(ierr); ierr = DMDASetFieldName(ctx.daAll,1,"second"); CHKERRQ(ierr); ierr = DMDAGetLocalInfo(ctx.daAll,&ctx.daInfo); CHKERRQ(ierr); ierr = DMCreateFieldDecomposition(ctx.daAll,NULL, NULL, &ctx.is, &ctx.daField); CHKERRQ(ierr); //Initial data ierr = DMCreateGlobalVector(ctx.daAll,&ctx.U0); CHKERRQ(ierr); ierr = VecISSet(ctx.U0,ctx.is[0],1.); CHKERRQ(ierr); ierr = VecISSet(ctx.U0,ctx.is[1],100.0); CHKERRQ(ierr); Write mesh and Vec to hdf5: PetscViewer viewer; ? ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"solution.h5",FILE_MODE_WRITE,&viewer); CHKERRQ(ierr); ? ierr = DMView(ctx.daAll , viewer); CHKERRQ(ierr);? //does not output anything to solution.h5?? ? ierr = VecView(ctx.U0,viewer); CHKERRQ(ierr); ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr); Attempt to save the two fields separately:: PetscViewer viewer; ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"solution.h5",FILE_MODE_WRITE,&viewer); CHKERRQ(ierr); ierr = DMView(ctx.daField[0] , viewer); CHKERRQ(ierr); //does not output anything to solution.h5?? Vec uF; ierr = VecGetSubVector(ctx.U,ctx.is[0],&uF); CHKERRQ(ierr); PetscObjectSetName((PetscObject) uF, "first"); ierr = VecView(uF,viewer); CHKERRQ(ierr); ierr = VecRestoreSubVector(ctx.U,ctx.is[0],&uF); CHKERRQ(ierr); ierr = VecGetSubVector(ctx.U,ctx.is[1],&uF); CHKERRQ(ierr); PetscObjectSetName((PetscObject) uF, "second"); ierr = VecView(uF,viewer); CHKERRQ(ierr); ierr = VecRestoreSubVector(ctx.U,ctx.is[1],&uF); CHKERRQ(ierr); ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr); -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jul 12 10:51:07 2021 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 12 Jul 2021 11:51:07 -0400 Subject: [petsc-users] output DMDA to hdf5 file? In-Reply-To: <5ca2d4df-3f67-2951-c0b7-a2678d9df335@uninsubria.it> References: <5ca2d4df-3f67-2951-c0b7-a2678d9df335@uninsubria.it> Message-ID: On Mon, Jul 12, 2021 at 11:40 AM Matteo Semplice < matteo.semplice at uninsubria.it> wrote: > Dear all, > > I am experimenting with hdf5+xdmf output. At > https://www.xdmf.org/index.php/XDMF_Model_and_Format I read that "XDMF > uses XML to store Light data and to describe the data Model. Either HDF5 > [3] or binary files can be used to store > Heavy data. The data Format is stored redundantly in both XML and HDF5." > > However, if I call DMView(dmda,hdf5viewer) and then I run h5ls or h5stat > on the resulting h5 file, I see no "geometry" section in the file. How > should I write the geometry to the HDF5 file? > > Here below is what I have tried. > > The HDF5 stuff is only implemented for DMPlex since unstructured grids need to be explicitly stored. You can usually just define the structured grid in the XML without putting anything in the HDF5. We could write metadata so that the XML could be autogenerated, but we have not done that. 
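As a concrete starting point, here is a minimal sketch assuming the standard PETSc C API and reusing the names from the snippets earlier in this thread (ctx.daAll, ctx.U0, ctx.is): only the heavy data goes into solution.h5, and the uniform grid is then declared by hand in a small XDMF light-data file that points at the datasets written below.

PetscViewer viewer;
Vec         coords, uF;

ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "solution.h5", FILE_MODE_WRITE, &viewer); CHKERRQ(ierr);

/* optional: store the DMDA coordinate vector as an ordinary dataset; for a
   uniform grid the XDMF can instead just declare the origin and spacing */
ierr = DMGetCoordinates(ctx.daAll, &coords); CHKERRQ(ierr);
ierr = PetscObjectSetName((PetscObject)coords, "coordinates"); CHKERRQ(ierr);
ierr = VecView(coords, viewer); CHKERRQ(ierr);

/* one dataset per field, named so the hand-written XDMF attributes can
   refer to solution.h5:/first and solution.h5:/second */
ierr = VecGetSubVector(ctx.U0, ctx.is[0], &uF); CHKERRQ(ierr);
ierr = PetscObjectSetName((PetscObject)uF, "first"); CHKERRQ(ierr);
ierr = VecView(uF, viewer); CHKERRQ(ierr);
ierr = VecRestoreSubVector(ctx.U0, ctx.is[0], &uF); CHKERRQ(ierr);

ierr = VecGetSubVector(ctx.U0, ctx.is[1], &uF); CHKERRQ(ierr);
ierr = PetscObjectSetName((PetscObject)uF, "second"); CHKERRQ(ierr);
ierr = VecView(uF, viewer); CHKERRQ(ierr);
ierr = VecRestoreSubVector(ctx.U0, ctx.is[1], &uF); CHKERRQ(ierr);

ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);

No DMView() call is needed for the geometry; the XDMF file carries the grid description (dimensions, origin, spacing) as light data.
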
Thanks, Matt > Best > > Matteo > > //Setup > > ierr = DMDACreate2d(PETSC_COMM_WORLD, > DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, > DMDA_STENCIL_STAR, > ctx.Nx,ctx.Ny, //global dim > PETSC_DECIDE,PETSC_DECIDE, //n proc on each > dim > 2,stWidth, //dof, stencil width > NULL, NULL, //n nodes per direction on each > cpu > &(ctx.daAll)); > > ierr = DMDASetUniformCoordinates(ctx.daAll, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0); > CHKERRQ(ierr); > ierr = DMDASetFieldName(ctx.daAll,0,"first"); CHKERRQ(ierr); > ierr = DMDASetFieldName(ctx.daAll,1,"second"); CHKERRQ(ierr); > ierr = DMDAGetLocalInfo(ctx.daAll,&ctx.daInfo); CHKERRQ(ierr); > ierr = DMCreateFieldDecomposition(ctx.daAll,NULL, NULL, &ctx.is, > &ctx.daField); CHKERRQ(ierr); > > //Initial data > ierr = DMCreateGlobalVector(ctx.daAll,&ctx.U0); CHKERRQ(ierr); > ierr = VecISSet(ctx.U0,ctx.is[0],1.); CHKERRQ(ierr); > ierr = VecISSet(ctx.U0,ctx.is[1],100.0); CHKERRQ(ierr); > > > Write mesh and Vec to hdf5: > > PetscViewer viewer; > ierr = > PetscViewerHDF5Open(PETSC_COMM_WORLD,"solution.h5",FILE_MODE_WRITE,&viewer); > CHKERRQ(ierr); > ierr = DMView(ctx.daAll , viewer); CHKERRQ(ierr); //does not output > anything to solution.h5?? > ierr = VecView(ctx.U0,viewer); CHKERRQ(ierr); > > ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr); > > Attempt to save the two fields separately:: > > PetscViewer viewer; > ierr = > PetscViewerHDF5Open(PETSC_COMM_WORLD,"solution.h5",FILE_MODE_WRITE,&viewer); > CHKERRQ(ierr); > ierr = DMView(ctx.daField[0] , viewer); CHKERRQ(ierr); //does not output > anything to solution.h5?? > > Vec uF; > ierr = VecGetSubVector(ctx.U,ctx.is[0],&uF); CHKERRQ(ierr); > PetscObjectSetName((PetscObject) uF, "first"); > ierr = VecView(uF,viewer); CHKERRQ(ierr); > ierr = VecRestoreSubVector(ctx.U,ctx.is[0],&uF); CHKERRQ(ierr); > > ierr = VecGetSubVector(ctx.U,ctx.is[1],&uF); CHKERRQ(ierr); > PetscObjectSetName((PetscObject) uF, "second"); > ierr = VecView(uF,viewer); CHKERRQ(ierr); > ierr = VecRestoreSubVector(ctx.U,ctx.is[1],&uF); CHKERRQ(ierr); > > ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr); > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Tue Jul 13 16:47:14 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 13 Jul 2021 17:47:14 -0400 Subject: [petsc-users] error on Spock Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.log Type: application/octet-stream Size: 39943 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1639335 bytes Desc: not available URL: From junchao.zhang at gmail.com Tue Jul 13 19:59:33 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 13 Jul 2021 19:59:33 -0500 Subject: [petsc-users] error on Spock In-Reply-To: References: Message-ID: Try petsc/main? --Junchao Zhang On Tue, Jul 13, 2021 at 4:48 PM Mark Adams wrote: > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mfadams at lbl.gov Tue Jul 13 20:30:41 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 13 Jul 2021 21:30:41 -0400 Subject: [petsc-users] error on Spock In-Reply-To: References: Message-ID: On Tue, Jul 13, 2021 at 8:59 PM Junchao Zhang wrote: > Try petsc/main? > This was main. I started with your Kokkos fix branch and saw this and tried with main and w/o Kokkos. > --Junchao Zhang > > > On Tue, Jul 13, 2021 at 4:48 PM Mark Adams wrote: > >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Tue Jul 13 20:37:15 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 13 Jul 2021 21:37:15 -0400 Subject: [petsc-users] error on Spock In-Reply-To: References: Message-ID: Configure is not finding mpicc: Using C compile: cc -o .o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -Qunused-arguments -fvisibility=hidden -g -O3 mpicc -show: Unavailable C compiler version: Cray clang version 11.0.4 (bc9473a12d1f2f43cde01f962a11240263bd8908) Using C++ compile: CC -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O3 -fPIC -I/gpfs/alpine/csc314/scratch/adams/petsc/include -I/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray/include -I/sw/spock/spack-envs/views/rocm-4.1.0/include mpicxx -show: Unavailable C++ compiler version: Cray clang version 11.0.4 (bc9473a12d1f2f43cde01f962a11240263bd8908) On Tue, Jul 13, 2021 at 9:30 PM Mark Adams wrote: > > > On Tue, Jul 13, 2021 at 8:59 PM Junchao Zhang > wrote: > >> Try petsc/main? >> > > This was main. > > I started with your Kokkos fix branch and saw this and tried with main and > w/o Kokkos. > > >> --Junchao Zhang >> >> >> On Tue, Jul 13, 2021 at 4:48 PM Mark Adams wrote: >> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Tue Jul 13 20:50:14 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 13 Jul 2021 21:50:14 -0400 Subject: [petsc-users] error on Spock In-Reply-To: References: Message-ID: I got it to build by adding some stuff that I thought I got rid of (and working): '--CPPFLAGS=-I${ROCM_PATH}/include', '--CC_LINKER_FLAGS=-L${ROCM_PATH}/lib -lamdhip64 -lhsa-runtime64', '--CXXPPFLAGS=-I${ROCM_PATH}/include', '--CXX_LINKER_FLAGS=-L${ROCM_PATH}/lib -lamdhip64 -lhsa-runtime64', '--COPTFLAGS=-g -O', '--CXXOPTFLAGS=-g -O', '--FOPTFLAGS=-g -O', '--HIPPPFLAGS=-I${MPICH_DIR}/include', # '--with-precision=double', On Tue, Jul 13, 2021 at 9:37 PM Mark Adams wrote: > Configure is not finding mpicc: > > Using C compile: cc -o .o -c -fPIC -Wall -Wwrite-strings > -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector > -Qunused-arguments -fvisibility=hidden -g -O3 > mpicc -show: Unavailable > C compiler version: Cray clang version 11.0.4 > (bc9473a12d1f2f43cde01f962a11240263bd8908) > Using C++ compile: CC -o .o -c -Wall -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O3 -fPIC > -I/gpfs/alpine/csc314/scratch/adams/petsc/include > -I/gpfs/alpine/csc314/scratch/adams/petsc/arch-spock-opt-cray/include > -I/sw/spock/spack-envs/views/rocm-4.1.0/include > mpicxx -show: Unavailable > C++ compiler version: Cray clang version 11.0.4 > (bc9473a12d1f2f43cde01f962a11240263bd8908) > > > On Tue, Jul 13, 2021 at 9:30 PM Mark Adams wrote: > >> >> >> On Tue, Jul 13, 2021 at 8:59 PM Junchao Zhang >> wrote: >> >>> Try petsc/main? >>> >> >> This was main. 
>> >> I started with your Kokkos fix branch and saw this and tried with main >> and w/o Kokkos. >> >> >>> --Junchao Zhang >>> >>> >>> On Tue, Jul 13, 2021 at 4:48 PM Mark Adams wrote: >>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Jul 13 21:09:15 2021 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 13 Jul 2021 21:09:15 -0500 Subject: [petsc-users] error on Spock In-Reply-To: References: Message-ID: <0F9AC17B-591C-4F74-9F7A-50543E22C027@petsc.dev> The expected behavior on Cray systems is you load "appropriate" modules and then run ./configure without needing to provide compiler and MPI information and it "just works". So one should not need to provide --with-cc=cc --with-cxx=CC --with-fc=ftn on Cray systems. Loading "appropriate" modules is suppose to define the compilers (and MPI) you want to use so on should not need to be passed manually to PETSc's configure this kind of information. The --HIPPPFLAGS=-I/opt/cray/pe/mpich/8.1.4/ofi/crayclang/9.1include is horrific. needing to have pass MPI information to the HIP compiler likely means PETSc does not have a proper separation of HIP code from "plain old C code" that is mistakenly put in hip files. This is definitely currently true with CUDA and likely carried over to the HIP interfaces. (.i.e. most of the functions in the .cu files in PETSc should just be in .c files). The various rocm information should be handled automatically by ./configure and not be required to be provided by users. As it is currently handled for the CUDA libraries such as cubBLAS and cuSparse. > On Jul 13, 2021, at 4:47 PM, Mark Adams wrote: > > > From junchao.zhang at gmail.com Tue Jul 13 21:19:05 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 13 Jul 2021 21:19:05 -0500 Subject: [petsc-users] error on Spock In-Reply-To: <0F9AC17B-591C-4F74-9F7A-50543E22C027@petsc.dev> References: <0F9AC17B-591C-4F74-9F7A-50543E22C027@petsc.dev> Message-ID: Barry, the flags Mark used are required per Spock document. This is a test platform and the software environment is immature. --Junchao Zhang On Tue, Jul 13, 2021 at 9:09 PM Barry Smith wrote: > > The expected behavior on Cray systems is you load "appropriate" modules > and then run ./configure without needing to provide compiler and MPI > information and it "just works". > > So one should not need to provide --with-cc=cc --with-cxx=CC > --with-fc=ftn on Cray systems. Loading "appropriate" modules is suppose to > define the compilers (and MPI) you want to use so on should not need to be > passed manually to PETSc's configure this kind of information. > > The --HIPPPFLAGS=-I/opt/cray/pe/mpich/8.1.4/ofi/crayclang/9.1include is > horrific. needing to have pass MPI information to the HIP compiler likely > means PETSc does not have a proper separation of HIP code from "plain old C > code" that is mistakenly put in hip files. This is definitely currently > true with CUDA and likely carried over to the HIP interfaces. (.i.e. most > of the functions in the .cu files in PETSc should just be in .c files). > > The various rocm information should be handled automatically by > ./configure and not be required to be provided by users. As it is currently > handled for the CUDA libraries such as cubBLAS and cuSparse. > > > > > > > On Jul 13, 2021, at 4:47 PM, Mark Adams wrote: > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tangqi at msu.edu Tue Jul 13 23:12:12 2021 From: tangqi at msu.edu (Tang, Qi) Date: Wed, 14 Jul 2021 04:12:12 +0000 Subject: [petsc-users] [EXTERNAL] Re: Problem with PCFIELDSPLIT In-Reply-To: References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> Message-ID: <3EC17F46-3BDE-4755-8EDA-34C8853EE378@msu.edu> Hi, During the process to experiment the suggestion Matt made, we ran into some questions regarding to TSSolve vs KSPSolve. We got different initial unpreconditioned residual using two solvers. Let?s say we solve the problem with backward Euler and there is no rhs. We guess TSSolve solves (U^{n+1}-U^n)/dt = A U^{n+1}. (We only provides IJacobian in this case and turn on TS_LINEAR.) So we guess the initial unpreconditioned residual would be ||U^n/dt||_2, which seems different from the residual we got from a backward Euler stepping we implemented by ourself through KSPSolve. Do we have some misunderstanding on TSSolve? Thanks, Qi T5 at LANL On Jul 7, 2021, at 3:54 PM, Matthew Knepley > wrote: On Wed, Jul 7, 2021 at 2:33 PM Jorti, Zakariae > wrote: Hi Matt, Thanks for your quick reply. I have not completely understood your suggestion, could you please elaborate a bit more? For your convenience, here is how I am proceeding for the moment in my code: TSGetKSP(ts,&ksp); KSPGetPC(ksp,&pc); PCSetType(pc,PCFIELDSPLIT); PCFieldSplitSetDetectSaddlePoint(pc,PETSC_TRUE); PCSetUp(pc); PCFieldSplitGetSubKSP(pc, &n, &subksp); KSPGetPC(subksp[1], &(subpc[1])); I do not like the two lines above. We should not have to do this. KSPSetOperators(subksp[1],T,T); In the above line, I want you to use a separate preconditioning matrix M, instead of T. That way, it will provide the preconditioning matrix for your Schur complement problem. Thanks, Matt KSPSetUp(subksp[1]); PetscFree(subksp); TSSolve(ts,X); Thank you. Best, Zakariae ________________________________ From: Matthew Knepley > Sent: Wednesday, July 7, 2021 12:11:10 PM To: Jorti, Zakariae Cc: petsc-users at mcs.anl.gov; Tang, Qi; Tang, Xianzhu Subject: [EXTERNAL] Re: [petsc-users] Problem with PCFIELDSPLIT On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users > wrote: Hi, I am trying to build a PCFIELDSPLIT preconditioner for a matrix J = [A00 A01] [A10 A11] that has the following shape: M_{user}^{-1} = [I -ksp(A00) A01] [ksp(A00) 0] [I 0] [0 I] [0 ksp(T)] [-A10 ksp(A00) I ] where T is a user-defined Schur complement approximation that replaces the true Schur complement S:= A11 - A10 ksp(A00) A01. I am trying to do something similar to this example (lines 41--45 and 116--121): https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html The problem I have is that I manage to replace S with T on a separate single linear system but not for the linear systems generated by my time-dependent PDE. Even if I set the preconditioner M_{user}^{-1} correctly, the T matrix gets replaced by S in the preconditioner once I call TSSolve. Do you have any suggestions how to fix this knowing that the matrix J does not change over time? I don't like how it is done in that example for this very reason. When I want to use a custom preconditioning matrix for the Schur complement, I always give a preconditioning matrix M to the outer solve. Then PCFIELDSPLIT automatically pulls the correct block from M, (1,1) for the Schur complement, for that preconditioning matrix without extra code. Can you do this? Thanks, Matt Many thanks. 
Best regards, Zakariae -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Wed Jul 14 03:43:37 2021 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 14 Jul 2021 10:43:37 +0200 Subject: [petsc-users] [EXTERNAL] Re: Problem with PCFIELDSPLIT In-Reply-To: <3EC17F46-3BDE-4755-8EDA-34C8853EE378@msu.edu> References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> <3EC17F46-3BDE-4755-8EDA-34C8853EE378@msu.edu> Message-ID: <3DCFCCF0-FFF3-4672-A1BC-85BAC5689FCC@gmail.com> Qi Backward Euler is a special case of Theta methods in PETSc (Theta=1). In src/ts/impls/implicit/theta/theta.c on top of SNESTSFormFunction_Theta you have some explanation of what is solved for at each time step (see below). SNES then solves for the Newton update dy_n and the next Newton iterate is computed as x_{n+1} = x_{n} - lambda * dy_n. Hope this helps. /* This defines the nonlinear equation that is to be solved with SNES G(U) = F[t0+Theta*dt, U, (U-U0)*shift] = 0 Note that U here is the stage argument. This means that U = U_{n+1} only if endpoint = true, otherwise U = theta U_{n+1} + (1 - theta) U0, which for the case of implicit midpoint is U = (U_{n+1} + U0)/2 */ static PetscErrorCode SNESTSFormFunction_Theta(SNES snes,Vec x,Vec y,TS ts) > On Jul 14, 2021, at 6:12 AM, Tang, Qi wrote: > > Hi, > > During the process to experiment the suggestion Matt made, we ran into some questions regarding to TSSolve vs KSPSolve. We got different initial unpreconditioned residual using two solvers. Let?s say we solve the problem with backward Euler and there is no rhs. We guess TSSolve solves > (U^{n+1}-U^n)/dt = A U^{n+1}. > (We only provides IJacobian in this case and turn on TS_LINEAR.) > So we guess the initial unpreconditioned residual would be ||U^n/dt||_2, which seems different from the residual we got from a backward Euler stepping we implemented by ourself through KSPSolve. > > Do we have some misunderstanding on TSSolve? > > Thanks, > Qi > T5 at LANL > > > >> On Jul 7, 2021, at 3:54 PM, Matthew Knepley > wrote: >> >> On Wed, Jul 7, 2021 at 2:33 PM Jorti, Zakariae > wrote: >> Hi Matt, >> >> >> >> Thanks for your quick reply. >> >> I have not completely understood your suggestion, could you please elaborate a bit more? >> >> For your convenience, here is how I am proceeding for the moment in my code: >> >> >> >> TSGetKSP(ts,&ksp); >> >> KSPGetPC(ksp,&pc); >> >> PCSetType(pc,PCFIELDSPLIT); >> >> PCFieldSplitSetDetectSaddlePoint(pc,PETSC_TRUE); >> >> PCSetUp(pc); >> >> PCFieldSplitGetSubKSP(pc, &n, &subksp); >> >> KSPGetPC(subksp[1], &(subpc[1])); >> >> I do not like the two lines above. We should not have to do this. >> KSPSetOperators(subksp[1],T,T); >> >> In the above line, I want you to use a separate preconditioning matrix M, instead of T. That way, it will provide >> the preconditioning matrix for your Schur complement problem. >> >> Thanks, >> >> Matt >> KSPSetUp(subksp[1]); >> >> PetscFree(subksp); >> >> TSSolve(ts,X); >> >> >> >> Thank you. 
>> >> Best, >> >> >> >> Zakariae >> >> From: Matthew Knepley > >> Sent: Wednesday, July 7, 2021 12:11:10 PM >> To: Jorti, Zakariae >> Cc: petsc-users at mcs.anl.gov ; Tang, Qi; Tang, Xianzhu >> Subject: [EXTERNAL] Re: [petsc-users] Problem with PCFIELDSPLIT >> >> On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users > wrote: >> Hi, >> >> >> >> I am trying to build a PCFIELDSPLIT preconditioner for a matrix >> >> J = [A00 A01] >> >> [A10 A11] >> >> that has the following shape: >> >> >> >> M_{user}^{-1} = [I -ksp(A00) A01] [ksp(A00) 0] [I 0] >> >> [0 I] [0 ksp(T)] [-A10 ksp(A00) I ] >> >> >> >> where T is a user-defined Schur complement approximation that replaces the true Schur complement S:= A11 - A10 ksp(A00) A01. >> >> >> >> I am trying to do something similar to this example (lines 41--45 and 116--121): https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html >> >> The problem I have is that I manage to replace S with T on a separate single linear system but not for the linear systems generated by my time-dependent PDE. Even if I set the preconditioner M_{user}^{-1} correctly, the T matrix gets replaced by S in the preconditioner once I call TSSolve. >> >> Do you have any suggestions how to fix this knowing that the matrix J does not change over time? >> >> >> I don't like how it is done in that example for this very reason. >> >> When I want to use a custom preconditioning matrix for the Schur complement, I always give a preconditioning matrix M to the outer solve. >> Then PCFIELDSPLIT automatically pulls the correct block from M, (1,1) for the Schur complement, for that preconditioning matrix without >> extra code. Can you do this? >> >> Thanks, >> >> Matt >> Many thanks. >> >> >> >> Best regards, >> >> >> >> Zakariae >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 14 03:56:38 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Jul 2021 04:56:38 -0400 Subject: [petsc-users] [EXTERNAL] Re: Problem with PCFIELDSPLIT In-Reply-To: <3DCFCCF0-FFF3-4672-A1BC-85BAC5689FCC@gmail.com> References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> <3EC17F46-3BDE-4755-8EDA-34C8853EE378@msu.edu> <3DCFCCF0-FFF3-4672-A1BC-85BAC5689FCC@gmail.com> Message-ID: On Wed, Jul 14, 2021 at 4:43 AM Stefano Zampini wrote: > Qi > > Backward Euler is a special case of Theta methods in PETSc (Theta=1). > In src/ts/impls/implicit/theta/theta.c on top of > SNESTSFormFunction_Theta you have some explanation of what is solved for at > each time step (see below). SNES then solves for the Newton update dy_n > and the next Newton iterate is computed as x_{n+1} = x_{n} - lambda * > dy_n. Hope this helps. > In other words, you should be able to match the initial residual to F(t + dt, 0, -Un / dt) for your IFunction. However, it is really not normal to use U = 0. The default is to use U = U0 as the initial guess I think. 
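To make this concrete, here is a minimal sketch assuming the standard PETSc C API: a linear problem with no forcing, written in implicit form F(t,U,Udot) = Udot - A U = 0 and stepped with backward Euler. The matrix A, the Jacobian placeholder J, dt and U0 are stand-ins, not code from this thread.

static PetscErrorCode IFunction(TS ts, PetscReal t, Vec U, Vec Udot, Vec F, void *ctx)
{
  Mat            A = (Mat)ctx;
  PetscErrorCode ierr;
  PetscFunctionBeginUser;
  ierr = MatMult(A, U, F); CHKERRQ(ierr);        /* F = A U        */
  ierr = VecAYPX(F, -1.0, Udot); CHKERRQ(ierr);  /* F = Udot - A U */
  PetscFunctionReturn(0);
}

static PetscErrorCode IJacobian(TS ts, PetscReal t, Vec U, Vec Udot, PetscReal shift, Mat J, Mat P, void *ctx)
{
  Mat            A = (Mat)ctx;
  PetscErrorCode ierr;
  PetscFunctionBeginUser;
  /* dF/dU + shift * dF/dUdot = -A + shift I ; for backward Euler shift = 1/dt */
  ierr = MatCopy(A, P, DIFFERENT_NONZERO_PATTERN); CHKERRQ(ierr);
  ierr = MatScale(P, -1.0); CHKERRQ(ierr);
  ierr = MatShift(P, shift); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* in main, with A assembled, J created e.g. by MatDuplicate(A, MAT_COPY_VALUES, &J),
   and U0 holding the initial state: */
ierr = TSCreate(PETSC_COMM_WORLD, &ts); CHKERRQ(ierr);
ierr = TSSetProblemType(ts, TS_LINEAR); CHKERRQ(ierr);
ierr = TSSetType(ts, TSBEULER); CHKERRQ(ierr);
ierr = TSSetIFunction(ts, NULL, IFunction, A); CHKERRQ(ierr);
ierr = TSSetIJacobian(ts, J, J, IJacobian, A); CHKERRQ(ierr);
ierr = TSSetTimeStep(ts, dt); CHKERRQ(ierr);
ierr = TSSetFromOptions(ts); CHKERRQ(ierr);
ierr = TSSolve(ts, U0); CHKERRQ(ierr);

With the stage initial guess U = U0 and G(U) = F[t0+dt, U, (U-U0)/dt] as above, the first Newton step solves J dU = -G(U0) with a zero initial guess for dU, so the initial unpreconditioned KSP residual should be ||G(U0)|| = ||F(t0+dt, U0, 0)|| = ||A U0||, not ||U0/dt||.
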
Thanks, Matt > /* > This defines the nonlinear equation that is to be solved with SNES > G(U) = F[t0+Theta*dt, U, (U-U0)*shift] = 0 > > Note that U here is the stage argument. This means that U = U_{n+1} only > if endpoint = true, > otherwise U = theta U_{n+1} + (1 - theta) U0, which for the case of > implicit midpoint is > U = (U_{n+1} + U0)/2 > */ > static PetscErrorCode SNESTSFormFunction_Theta(SNES snes,Vec x,Vec y,TS > ts) > > > On Jul 14, 2021, at 6:12 AM, Tang, Qi wrote: > > Hi, > > During the process to experiment the suggestion Matt made, we ran into > some questions regarding to TSSolve vs KSPSolve. We got different initial > unpreconditioned residual using two solvers. Let?s say we solve the problem > with backward Euler and there is no rhs. We guess TSSolve solves > (U^{n+1}-U^n)/dt = A U^{n+1}. > (We only provides IJacobian in this case and turn on TS_LINEAR.) > So we guess the initial unpreconditioned residual would be ||U^n/dt||_2, > which seems different from the residual we got from a backward Euler > stepping we implemented by ourself through KSPSolve. > > Do we have some misunderstanding on TSSolve? > > Thanks, > Qi > T5 at LANL > > > > On Jul 7, 2021, at 3:54 PM, Matthew Knepley wrote: > > On Wed, Jul 7, 2021 at 2:33 PM Jorti, Zakariae wrote: > >> Hi Matt, >> >> >> Thanks for your quick reply. >> >> I have not completely understood your suggestion, could you please >> elaborate a bit more? >> >> For your convenience, here is how I am proceeding for the moment in my >> code: >> >> >> TSGetKSP(ts,&ksp); >> >> KSPGetPC(ksp,&pc); >> >> PCSetType(pc,PCFIELDSPLIT); >> >> PCFieldSplitSetDetectSaddlePoint(pc,PETSC_TRUE); >> >> PCSetUp(pc); >> >> PCFieldSplitGetSubKSP(pc, &n, &subksp); >> >> KSPGetPC(subksp[1], &(subpc[1])); >> > I do not like the two lines above. We should not have to do this. > >> KSPSetOperators(subksp[1],T,T); >> > In the above line, I want you to use a separate preconditioning matrix M, > instead of T. That way, it will provide > the preconditioning matrix for your Schur complement problem. > > Thanks, > > Matt > >> KSPSetUp(subksp[1]); >> >> PetscFree(subksp); >> >> TSSolve(ts,X); >> >> >> Thank you. >> >> Best, >> >> >> Zakariae >> ------------------------------ >> *From:* Matthew Knepley >> *Sent:* Wednesday, July 7, 2021 12:11:10 PM >> *To:* Jorti, Zakariae >> *Cc:* petsc-users at mcs.anl.gov; Tang, Qi; Tang, Xianzhu >> *Subject:* [EXTERNAL] Re: [petsc-users] Problem with PCFIELDSPLIT >> >> On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users < >> petsc-users at mcs.anl.gov> wrote: >> >>> Hi, >>> >>> >>> I am trying to build a PCFIELDSPLIT preconditioner for a matrix >>> >>> J = [A00 A01] >>> >>> [A10 A11] >>> >>> that has the following shape: >>> >>> >>> M_{user}^{-1} = [I -ksp(A00) A01] [ksp(A00) 0] [I >>> 0] >>> >>> [0 I] [0 >>> ksp(T)] [-A10 ksp(A00) I ] >>> >>> >>> where T is a user-defined Schur complement approximation that replaces >>> the true Schur complement S:= A11 - A10 ksp(A00) A01. >>> >>> >>> I am trying to do something similar to this example (lines 41--45 and >>> 116--121): >>> https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html >>> >>> >>> >>> The problem I have is that I manage to replace S with T on a >>> separate single linear system but not for the linear systems generated by >>> my time-dependent PDE. Even if I set the preconditioner M_{user}^{-1} >>> correctly, the T matrix gets replaced by S in the preconditioner once I >>> call TSSolve. 
>>> >>> Do you have any suggestions how to fix this knowing that the matrix J >>> does not change over time? >>> >>> I don't like how it is done in that example for this very reason. >> >> When I want to use a custom preconditioning matrix for the Schur >> complement, I always give a preconditioning matrix M to the outer solve. >> Then PCFIELDSPLIT automatically pulls the correct block from M, (1,1) for >> the Schur complement, for that preconditioning matrix without >> extra code. Can you do this? >> >> Thanks, >> >> Matt >> >>> Many thanks. >>> >>> >>> Best regards, >>> >>> >>> Zakariae >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tangqi at msu.edu Wed Jul 14 10:01:09 2021 From: tangqi at msu.edu (Tang, Qi) Date: Wed, 14 Jul 2021 15:01:09 +0000 Subject: [petsc-users] [EXTERNAL] Re: Problem with PCFIELDSPLIT In-Reply-To: References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> <3EC17F46-3BDE-4755-8EDA-34C8853EE378@msu.edu> <3DCFCCF0-FFF3-4672-A1BC-85BAC5689FCC@gmail.com> Message-ID: <284DDF6B-80B8-43F3-83DE-8599A453CDBC@msu.edu> Thanks a lot for the explanation, Matt and Stefano. That helps a lot. Just to confirm, the comment in src/ts/impls/implicit/theta/theta.c seems to indicates TS solves U_{n+1} in its SNES/KSP solve, but it actually solves the update dU_n in U_{n+1} = U_n - lambda*dU_n in the solve. Right? It actually makes a lot sense, because KSPSolve in TSSolve reports it uses zero initial guess. So if what I said is true, that effectively means it uses U0 as the initial guess. Qi On Jul 14, 2021, at 2:56 AM, Matthew Knepley > wrote: On Wed, Jul 14, 2021 at 4:43 AM Stefano Zampini > wrote: Qi Backward Euler is a special case of Theta methods in PETSc (Theta=1). In src/ts/impls/implicit/theta/theta.c on top of SNESTSFormFunction_Theta you have some explanation of what is solved for at each time step (see below). SNES then solves for the Newton update dy_n and the next Newton iterate is computed as x_{n+1} = x_{n} - lambda * dy_n. Hope this helps. In other words, you should be able to match the initial residual to F(t + dt, 0, -Un / dt) for your IFunction. However, it is really not normal to use U = 0. The default is to use U = U0 as the initial guess I think. Thanks, Matt /* This defines the nonlinear equation that is to be solved with SNES G(U) = F[t0+Theta*dt, U, (U-U0)*shift] = 0 Note that U here is the stage argument. This means that U = U_{n+1} only if endpoint = true, otherwise U = theta U_{n+1} + (1 - theta) U0, which for the case of implicit midpoint is U = (U_{n+1} + U0)/2 */ static PetscErrorCode SNESTSFormFunction_Theta(SNES snes,Vec x,Vec y,TS ts) On Jul 14, 2021, at 6:12 AM, Tang, Qi > wrote: Hi, During the process to experiment the suggestion Matt made, we ran into some questions regarding to TSSolve vs KSPSolve. 
We got different initial unpreconditioned residual using two solvers. Let?s say we solve the problem with backward Euler and there is no rhs. We guess TSSolve solves (U^{n+1}-U^n)/dt = A U^{n+1}. (We only provides IJacobian in this case and turn on TS_LINEAR.) So we guess the initial unpreconditioned residual would be ||U^n/dt||_2, which seems different from the residual we got from a backward Euler stepping we implemented by ourself through KSPSolve. Do we have some misunderstanding on TSSolve? Thanks, Qi T5 at LANL On Jul 7, 2021, at 3:54 PM, Matthew Knepley > wrote: On Wed, Jul 7, 2021 at 2:33 PM Jorti, Zakariae > wrote: Hi Matt, Thanks for your quick reply. I have not completely understood your suggestion, could you please elaborate a bit more? For your convenience, here is how I am proceeding for the moment in my code: TSGetKSP(ts,&ksp); KSPGetPC(ksp,&pc); PCSetType(pc,PCFIELDSPLIT); PCFieldSplitSetDetectSaddlePoint(pc,PETSC_TRUE); PCSetUp(pc); PCFieldSplitGetSubKSP(pc, &n, &subksp); KSPGetPC(subksp[1], &(subpc[1])); I do not like the two lines above. We should not have to do this. KSPSetOperators(subksp[1],T,T); In the above line, I want you to use a separate preconditioning matrix M, instead of T. That way, it will provide the preconditioning matrix for your Schur complement problem. Thanks, Matt KSPSetUp(subksp[1]); PetscFree(subksp); TSSolve(ts,X); Thank you. Best, Zakariae ________________________________ From: Matthew Knepley > Sent: Wednesday, July 7, 2021 12:11:10 PM To: Jorti, Zakariae Cc: petsc-users at mcs.anl.gov; Tang, Qi; Tang, Xianzhu Subject: [EXTERNAL] Re: [petsc-users] Problem with PCFIELDSPLIT On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users > wrote: Hi, I am trying to build a PCFIELDSPLIT preconditioner for a matrix J = [A00 A01] [A10 A11] that has the following shape: M_{user}^{-1} = [I -ksp(A00) A01] [ksp(A00) 0] [I 0] [0 I] [0 ksp(T)] [-A10 ksp(A00) I ] where T is a user-defined Schur complement approximation that replaces the true Schur complement S:= A11 - A10 ksp(A00) A01. I am trying to do something similar to this example (lines 41--45 and 116--121): https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html The problem I have is that I manage to replace S with T on a separate single linear system but not for the linear systems generated by my time-dependent PDE. Even if I set the preconditioner M_{user}^{-1} correctly, the T matrix gets replaced by S in the preconditioner once I call TSSolve. Do you have any suggestions how to fix this knowing that the matrix J does not change over time? I don't like how it is done in that example for this very reason. When I want to use a custom preconditioning matrix for the Schur complement, I always give a preconditioning matrix M to the outer solve. Then PCFIELDSPLIT automatically pulls the correct block from M, (1,1) for the Schur complement, for that preconditioning matrix without extra code. Can you do this? Thanks, Matt Many thanks. Best regards, Zakariae -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 14 10:09:50 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Jul 2021 11:09:50 -0400 Subject: [petsc-users] [EXTERNAL] Re: Problem with PCFIELDSPLIT In-Reply-To: <284DDF6B-80B8-43F3-83DE-8599A453CDBC@msu.edu> References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> <3EC17F46-3BDE-4755-8EDA-34C8853EE378@msu.edu> <3DCFCCF0-FFF3-4672-A1BC-85BAC5689FCC@gmail.com> <284DDF6B-80B8-43F3-83DE-8599A453CDBC@msu.edu> Message-ID: On Wed, Jul 14, 2021 at 11:01 AM Tang, Qi wrote: > Thanks a lot for the explanation, Matt and Stefano. That helps a lot. > > Just to confirm, the comment in src/ts/impls/implicit/theta/theta.c seems > to indicates TS solves U_{n+1} in its SNES/KSP solve, but it actually > solves the update dU_n in U_{n+1} = U_n - lambda*dU_n in the solve. Right? > > It actually makes a lot sense, because KSPSolve in TSSolve reports it uses > zero initial guess. So if what I said is true, that effectively means it > uses U0 as the initial guess. > Yes, this makes it fit the SNES API. Thanks, Matt > Qi > > On Jul 14, 2021, at 2:56 AM, Matthew Knepley wrote: > > On Wed, Jul 14, 2021 at 4:43 AM Stefano Zampini > wrote: > >> Qi >> >> Backward Euler is a special case of Theta methods in PETSc (Theta=1). >> In src/ts/impls/implicit/theta/theta.c on top of >> SNESTSFormFunction_Theta you have some explanation of what is solved for at >> each time step (see below). SNES then solves for the Newton update dy_n >> and the next Newton iterate is computed as x_{n+1} = x_{n} - lambda * >> dy_n. Hope this helps. >> > > In other words, you should be able to match the initial residual to > > F(t + dt, 0, -Un / dt) > > for your IFunction. However, it is really not normal to use U = 0. The > default is to use U = U0 > as the initial guess I think. > > Thanks, > > Matt > > >> /* >> This defines the nonlinear equation that is to be solved with SNES >> G(U) = F[t0+Theta*dt, U, (U-U0)*shift] = 0 >> >> Note that U here is the stage argument. This means that U = U_{n+1} >> only if endpoint = true, >> otherwise U = theta U_{n+1} + (1 - theta) U0, which for the case of >> implicit midpoint is >> U = (U_{n+1} + U0)/2 >> */ >> static PetscErrorCode SNESTSFormFunction_Theta(SNES snes,Vec x,Vec y,TS >> ts) >> >> >> On Jul 14, 2021, at 6:12 AM, Tang, Qi wrote: >> >> Hi, >> >> During the process to experiment the suggestion Matt made, we ran into >> some questions regarding to TSSolve vs KSPSolve. We got different initial >> unpreconditioned residual using two solvers. Let?s say we solve the problem >> with backward Euler and there is no rhs. We guess TSSolve solves >> (U^{n+1}-U^n)/dt = A U^{n+1}. >> (We only provides IJacobian in this case and turn on TS_LINEAR.) >> So we guess the initial unpreconditioned residual would be ||U^n/dt||_2, >> which seems different from the residual we got from a backward Euler >> stepping we implemented by ourself through KSPSolve. >> >> Do we have some misunderstanding on TSSolve? >> >> Thanks, >> Qi >> T5 at LANL >> >> >> >> On Jul 7, 2021, at 3:54 PM, Matthew Knepley wrote: >> >> On Wed, Jul 7, 2021 at 2:33 PM Jorti, Zakariae wrote: >> >>> Hi Matt, >>> >>> >>> Thanks for your quick reply. 
>>> >>> I have not completely understood your suggestion, could you please >>> elaborate a bit more? >>> >>> For your convenience, here is how I am proceeding for the moment in my >>> code: >>> >>> >>> TSGetKSP(ts,&ksp); >>> >>> KSPGetPC(ksp,&pc); >>> >>> PCSetType(pc,PCFIELDSPLIT); >>> >>> PCFieldSplitSetDetectSaddlePoint(pc,PETSC_TRUE); >>> >>> PCSetUp(pc); >>> >>> PCFieldSplitGetSubKSP(pc, &n, &subksp); >>> >>> KSPGetPC(subksp[1], &(subpc[1])); >>> >> I do not like the two lines above. We should not have to do this. >> >>> KSPSetOperators(subksp[1],T,T); >>> >> In the above line, I want you to use a separate preconditioning matrix >> M, instead of T. That way, it will provide >> the preconditioning matrix for your Schur complement problem. >> >> Thanks, >> >> Matt >> >>> KSPSetUp(subksp[1]); >>> >>> PetscFree(subksp); >>> >>> TSSolve(ts,X); >>> >>> >>> Thank you. >>> >>> Best, >>> >>> >>> Zakariae >>> ------------------------------ >>> *From:* Matthew Knepley >>> *Sent:* Wednesday, July 7, 2021 12:11:10 PM >>> *To:* Jorti, Zakariae >>> *Cc:* petsc-users at mcs.anl.gov; Tang, Qi; Tang, Xianzhu >>> *Subject:* [EXTERNAL] Re: [petsc-users] Problem with PCFIELDSPLIT >>> >>> On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users < >>> petsc-users at mcs.anl.gov> wrote: >>> >>>> Hi, >>>> >>>> >>>> I am trying to build a PCFIELDSPLIT preconditioner for a matrix >>>> >>>> J = [A00 A01] >>>> >>>> [A10 A11] >>>> >>>> that has the following shape: >>>> >>>> >>>> M_{user}^{-1} = [I -ksp(A00) A01] [ksp(A00) 0] [I >>>> 0] >>>> >>>> [0 I] [0 >>>> ksp(T)] [-A10 ksp(A00) I ] >>>> >>>> >>>> where T is a user-defined Schur complement approximation that replaces >>>> the true Schur complement S:= A11 - A10 ksp(A00) A01. >>>> >>>> >>>> I am trying to do something similar to this example (lines 41--45 and >>>> 116--121): >>>> https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html >>>> >>>> >>>> >>>> The problem I have is that I manage to replace S with T on a >>>> separate single linear system but not for the linear systems generated by >>>> my time-dependent PDE. Even if I set the preconditioner M_{user}^{-1} >>>> correctly, the T matrix gets replaced by S in the preconditioner once I >>>> call TSSolve. >>>> >>>> Do you have any suggestions how to fix this knowing that the matrix J >>>> does not change over time? >>>> >>>> I don't like how it is done in that example for this very reason. >>> >>> When I want to use a custom preconditioning matrix for the Schur >>> complement, I always give a preconditioning matrix M to the outer solve. >>> Then PCFIELDSPLIT automatically pulls the correct block from M, (1,1) >>> for the Schur complement, for that preconditioning matrix without >>> extra code. Can you do this? >>> >>> Thanks, >>> >>> Matt >>> >>>> Many thanks. >>>> >>>> >>>> Best regards, >>>> >>>> >>>> Zakariae >>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. 
>> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Wed Jul 14 10:11:10 2021 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 14 Jul 2021 17:11:10 +0200 Subject: [petsc-users] [EXTERNAL] Re: Problem with PCFIELDSPLIT In-Reply-To: <284DDF6B-80B8-43F3-83DE-8599A453CDBC@msu.edu> References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> <3EC17F46-3BDE-4755-8EDA-34C8853EE378@msu.edu> <3DCFCCF0-FFF3-4672-A1BC-85BAC5689FCC@gmail.com> <284DDF6B-80B8-43F3-83DE-8599A453CDBC@msu.edu> Message-ID: <30B647FB-CAA6-4918-9081-5477F3E0E13A@gmail.com> > On Jul 14, 2021, at 5:01 PM, Tang, Qi wrote: > > Thanks a lot for the explanation, Matt and Stefano. That helps a lot. > > Just to confirm, the comment in src/ts/impls/implicit/theta/theta.c seems to indicates TS solves U_{n+1} in its SNES/KSP solve, but it actually solves the update dU_n in U_{n+1} = U_n - lambda*dU_n in the solve. Right? The SNES object solves the nonlinear equations as written in the comment of TSTHETA. F[t0+Theta*dt, U, (U-U0)*shift] = 0 In case SNES is of type SNESLS (Newton), then the linearized equations are solved. The linear system matrix is the one provided by the IJacobian function J = dF/dU + shift dF/dUdot If it is SNESKSPONLY ( as it should be for TS_LINEAR), then only one step is taken and lambda = 1. > > It actually makes a lot sense, because KSPSolve in TSSolve reports it uses zero initial guess. So if what I said is true, that effectively means it uses U0 as the initial guess. > > Qi > >> On Jul 14, 2021, at 2:56 AM, Matthew Knepley > wrote: >> >> On Wed, Jul 14, 2021 at 4:43 AM Stefano Zampini > wrote: >> Qi >> >> Backward Euler is a special case of Theta methods in PETSc (Theta=1). In src/ts/impls/implicit/theta/theta.c on top of SNESTSFormFunction_Theta you have some explanation of what is solved for at each time step (see below). SNES then solves for the Newton update dy_n and the next Newton iterate is computed as x_{n+1} = x_{n} - lambda * dy_n. Hope this helps. >> >> In other words, you should be able to match the initial residual to >> >> F(t + dt, 0, -Un / dt) >> >> for your IFunction. However, it is really not normal to use U = 0. The default is to use U = U0 >> as the initial guess I think. >> >> Thanks, >> >> Matt >> >> /* >> This defines the nonlinear equation that is to be solved with SNES >> G(U) = F[t0+Theta*dt, U, (U-U0)*shift] = 0 >> >> Note that U here is the stage argument. This means that U = U_{n+1} only if endpoint = true, >> otherwise U = theta U_{n+1} + (1 - theta) U0, which for the case of implicit midpoint is >> U = (U_{n+1} + U0)/2 >> */ >> static PetscErrorCode SNESTSFormFunction_Theta(SNES snes,Vec x,Vec y,TS ts) >> >> >>> On Jul 14, 2021, at 6:12 AM, Tang, Qi > wrote: >>> >>> Hi, >>> >>> During the process to experiment the suggestion Matt made, we ran into some questions regarding to TSSolve vs KSPSolve. We got different initial unpreconditioned residual using two solvers. 
Let?s say we solve the problem with backward Euler and there is no rhs. We guess TSSolve solves >>> (U^{n+1}-U^n)/dt = A U^{n+1}. >>> (We only provides IJacobian in this case and turn on TS_LINEAR.) >>> So we guess the initial unpreconditioned residual would be ||U^n/dt||_2, which seems different from the residual we got from a backward Euler stepping we implemented by ourself through KSPSolve. >>> >>> Do we have some misunderstanding on TSSolve? >>> >>> Thanks, >>> Qi >>> T5 at LANL >>> >>> >>> >>>> On Jul 7, 2021, at 3:54 PM, Matthew Knepley > wrote: >>>> >>>> On Wed, Jul 7, 2021 at 2:33 PM Jorti, Zakariae > wrote: >>>> Hi Matt, >>>> >>>> >>>> >>>> Thanks for your quick reply. >>>> >>>> I have not completely understood your suggestion, could you please elaborate a bit more? >>>> >>>> For your convenience, here is how I am proceeding for the moment in my code: >>>> >>>> >>>> >>>> TSGetKSP(ts,&ksp); >>>> >>>> KSPGetPC(ksp,&pc); >>>> >>>> PCSetType(pc,PCFIELDSPLIT); >>>> >>>> PCFieldSplitSetDetectSaddlePoint(pc,PETSC_TRUE); >>>> >>>> PCSetUp(pc); >>>> >>>> PCFieldSplitGetSubKSP(pc, &n, &subksp); >>>> >>>> KSPGetPC(subksp[1], &(subpc[1])); >>>> >>>> I do not like the two lines above. We should not have to do this. >>>> KSPSetOperators(subksp[1],T,T); >>>> >>>> In the above line, I want you to use a separate preconditioning matrix M, instead of T. That way, it will provide >>>> the preconditioning matrix for your Schur complement problem. >>>> >>>> Thanks, >>>> >>>> Matt >>>> KSPSetUp(subksp[1]); >>>> >>>> PetscFree(subksp); >>>> >>>> TSSolve(ts,X); >>>> >>>> >>>> >>>> Thank you. >>>> >>>> Best, >>>> >>>> >>>> >>>> Zakariae >>>> >>>> From: Matthew Knepley > >>>> Sent: Wednesday, July 7, 2021 12:11:10 PM >>>> To: Jorti, Zakariae >>>> Cc: petsc-users at mcs.anl.gov ; Tang, Qi; Tang, Xianzhu >>>> Subject: [EXTERNAL] Re: [petsc-users] Problem with PCFIELDSPLIT >>>> >>>> On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users > wrote: >>>> Hi, >>>> >>>> >>>> >>>> I am trying to build a PCFIELDSPLIT preconditioner for a matrix >>>> >>>> J = [A00 A01] >>>> >>>> [A10 A11] >>>> >>>> that has the following shape: >>>> >>>> >>>> >>>> M_{user}^{-1} = [I -ksp(A00) A01] [ksp(A00) 0] [I 0] >>>> >>>> [0 I] [0 ksp(T)] [-A10 ksp(A00) I ] >>>> >>>> >>>> >>>> where T is a user-defined Schur complement approximation that replaces the true Schur complement S:= A11 - A10 ksp(A00) A01. >>>> >>>> >>>> >>>> I am trying to do something similar to this example (lines 41--45 and 116--121): https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html >>>> >>>> The problem I have is that I manage to replace S with T on a separate single linear system but not for the linear systems generated by my time-dependent PDE. Even if I set the preconditioner M_{user}^{-1} correctly, the T matrix gets replaced by S in the preconditioner once I call TSSolve. >>>> >>>> Do you have any suggestions how to fix this knowing that the matrix J does not change over time? >>>> >>>> >>>> I don't like how it is done in that example for this very reason. >>>> >>>> When I want to use a custom preconditioning matrix for the Schur complement, I always give a preconditioning matrix M to the outer solve. >>>> Then PCFIELDSPLIT automatically pulls the correct block from M, (1,1) for the Schur complement, for that preconditioning matrix without >>>> extra code. Can you do this? >>>> >>>> Thanks, >>>> >>>> Matt >>>> Many thanks. 
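An alternative to reaching into the sub-KSPs, as the code earlier in this thread does, is to hand T to the fieldsplit PC itself through PCFieldSplitSetSchurPre(). This is only a sketch, with ts, ksp and T taken from the surrounding discussion, and it does not by itself answer whether the setting survives the particular TSSolve setup being debugged here.

  PC pc;
  ierr = TSGetKSP(ts,&ksp);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetType(pc,PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
  /* Build the Schur-complement preconditioner from the user matrix T.
     The command-line counterpart of this type is
     -pc_fieldsplit_schur_precondition user; the matrix itself still has
     to be passed through this call.                                      */
  ierr = PCFieldSplitSetSchurPre(pc,PC_FIELDSPLIT_SCHUR_PRE_USER,T);CHKERRQ(ierr);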
>>>> >>>> >>>> >>>> Best regards, >>>> >>>> >>>> >>>> Zakariae >>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Wed Jul 14 12:18:01 2021 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Wed, 14 Jul 2021 13:18:01 -0400 Subject: [petsc-users] Is it possible to keep track of original elements # after a call to DMPlexDistribute ? Message-ID: <7236c736-6066-1ba3-55b1-60782d8e754f@giref.ulaval.ca> Hi, I want to use DMPlexDistribute from PETSc for computing overlapping and play with the different partitioners supported. However, after calling DMPlexDistribute, I noticed the elements are renumbered and then the original number is lost. What would be the best way to keep track of the element renumbering? a) Adding an optional parameter to let the user retrieve a vector or "IS" giving the old number? b) Adding a DMLabel (seems a wrong good solution) c) Other idea? Of course, I don't want to loose performances with the need of this "mapping"... Thanks, Eric -- Eric Chamberland, ing., M. Ing Professionnel de recherche GIREF/Universit? Laval (418) 656-2131 poste 41 22 42 From Eric.Chamberland at giref.ulaval.ca Wed Jul 14 12:25:14 2021 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Wed, 14 Jul 2021 13:25:14 -0400 Subject: [petsc-users] How to combine different element types into a single DMPlex? Message-ID: Hi, while playing with DMPlexBuildFromCellListParallel, I noticed we have to specify "numCorners" which is a fixed value, then gives a fixed number of nodes for a series of elements. How can I then add, for example, triangles and quadrangles into a DMPlex? Thanks, Eric -- Eric Chamberland, ing., M. Ing Professionnel de recherche GIREF/Universit? Laval (418) 656-2131 poste 41 22 42 From knepley at gmail.com Wed Jul 14 14:09:40 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Jul 2021 15:09:40 -0400 Subject: [petsc-users] Is it possible to keep track of original elements # after a call to DMPlexDistribute ? In-Reply-To: <7236c736-6066-1ba3-55b1-60782d8e754f@giref.ulaval.ca> References: <7236c736-6066-1ba3-55b1-60782d8e754f@giref.ulaval.ca> Message-ID: On Wed, Jul 14, 2021 at 1:18 PM Eric Chamberland < Eric.Chamberland at giref.ulaval.ca> wrote: > Hi, > > I want to use DMPlexDistribute from PETSc for computing overlapping and > play with the different partitioners supported. > > However, after calling DMPlexDistribute, I noticed the elements are > renumbered and then the original number is lost. > > What would be the best way to keep track of the element renumbering? > > a) Adding an optional parameter to let the user retrieve a vector or > "IS" giving the old number? > > b) Adding a DMLabel (seems a wrong good solution) > > c) Other idea? 
> > Of course, I don't want to loose performances with the need of this > "mapping"... > You need to call https://petsc.org/release/docs/manualpages/DM/DMSetUseNatural.html before call DMPlexDistribute(). Then you can call https://petsc.org/release/docs/manualpages/DMPLEX/DMPlexGlobalToNaturalBegin.html to map back to the original numbering if you want. This is the same thing that DMDA is doing. Thanks, Matt > Thanks, > > Eric > > -- > Eric Chamberland, ing., M. Ing > Professionnel de recherche > GIREF/Universit? Laval > (418) 656-2131 poste 41 22 42 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 14 14:14:14 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Jul 2021 15:14:14 -0400 Subject: [petsc-users] How to combine different element types into a single DMPlex? In-Reply-To: References: Message-ID: On Wed, Jul 14, 2021 at 1:25 PM Eric Chamberland < Eric.Chamberland at giref.ulaval.ca> wrote: > Hi, > > while playing with DMPlexBuildFromCellListParallel, I noticed we have to > specify "numCorners" which is a fixed value, then gives a fixed number > of nodes for a series of elements. > > How can I then add, for example, triangles and quadrangles into a DMPlex? > You can't with that function. It would be much mich more complicated if you could, and I am not sure it is worth it for that function. The reason is that you would need index information to offset into the connectivity list, and that would need to be replicated to some extent so that all processes know what the others are doing. Possible, but complicated. Maybe I can help suggest something for what you are trying to do? Thanks, Matt > Thanks, > > Eric > > -- > Eric Chamberland, ing., M. Ing > Professionnel de recherche > GIREF/Universit? Laval > (418) 656-2131 poste 41 22 42 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongzhang at anl.gov Wed Jul 14 14:45:26 2021 From: hongzhang at anl.gov (Zhang, Hong) Date: Wed, 14 Jul 2021 19:45:26 +0000 Subject: [petsc-users] [EXTERNAL] Re: Problem with PCFIELDSPLIT In-Reply-To: <284DDF6B-80B8-43F3-83DE-8599A453CDBC@msu.edu> References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> <3EC17F46-3BDE-4755-8EDA-34C8853EE378@msu.edu> <3DCFCCF0-FFF3-4672-A1BC-85BAC5689FCC@gmail.com> <284DDF6B-80B8-43F3-83DE-8599A453CDBC@msu.edu> Message-ID: <7A22FBCF-11A7-4480-99E8-4BF522B6B445@anl.gov> On Jul 14, 2021, at 10:01 AM, Tang, Qi > wrote: Thanks a lot for the explanation, Matt and Stefano. That helps a lot. Just to confirm, the comment in src/ts/impls/implicit/theta/theta.c seems to indicates TS solves U_{n+1} in its SNES/KSP solve, but it actually solves the update dU_n in U_{n+1} = U_n - lambda*dU_n in the solve. Right? SNESSolve yields U_{n+1}. But KSPSolve yields the Newton direction dU_n at each SNES iteration. It actually makes a lot sense, because KSPSolve in TSSolve reports it uses zero initial guess. So if what I said is true, that effectively means it uses U0 as the initial guess. Correct. 
TSSolve uses a warm start SNES so the previous solution is used as the initial guess for the next SNESSolve. Note that TSSolve calls SNESSolve instead of calling KSPSolve directly even when you are solving a linear problem. Hong (Mr.) Qi On Jul 14, 2021, at 2:56 AM, Matthew Knepley > wrote: On Wed, Jul 14, 2021 at 4:43 AM Stefano Zampini > wrote: Qi Backward Euler is a special case of Theta methods in PETSc (Theta=1). In src/ts/impls/implicit/theta/theta.c on top of SNESTSFormFunction_Theta you have some explanation of what is solved for at each time step (see below). SNES then solves for the Newton update dy_n and the next Newton iterate is computed as x_{n+1} = x_{n} - lambda * dy_n. Hope this helps. In other words, you should be able to match the initial residual to F(t + dt, 0, -Un / dt) for your IFunction. However, it is really not normal to use U = 0. The default is to use U = U0 as the initial guess I think. Thanks, Matt /* This defines the nonlinear equation that is to be solved with SNES G(U) = F[t0+Theta*dt, U, (U-U0)*shift] = 0 Note that U here is the stage argument. This means that U = U_{n+1} only if endpoint = true, otherwise U = theta U_{n+1} + (1 - theta) U0, which for the case of implicit midpoint is U = (U_{n+1} + U0)/2 */ static PetscErrorCode SNESTSFormFunction_Theta(SNES snes,Vec x,Vec y,TS ts) On Jul 14, 2021, at 6:12 AM, Tang, Qi > wrote: Hi, During the process to experiment the suggestion Matt made, we ran into some questions regarding to TSSolve vs KSPSolve. We got different initial unpreconditioned residual using two solvers. Let?s say we solve the problem with backward Euler and there is no rhs. We guess TSSolve solves (U^{n+1}-U^n)/dt = A U^{n+1}. (We only provides IJacobian in this case and turn on TS_LINEAR.) So we guess the initial unpreconditioned residual would be ||U^n/dt||_2, which seems different from the residual we got from a backward Euler stepping we implemented by ourself through KSPSolve. Do we have some misunderstanding on TSSolve? Thanks, Qi T5 at LANL On Jul 7, 2021, at 3:54 PM, Matthew Knepley > wrote: On Wed, Jul 7, 2021 at 2:33 PM Jorti, Zakariae > wrote: Hi Matt, Thanks for your quick reply. I have not completely understood your suggestion, could you please elaborate a bit more? For your convenience, here is how I am proceeding for the moment in my code: TSGetKSP(ts,&ksp); KSPGetPC(ksp,&pc); PCSetType(pc,PCFIELDSPLIT); PCFieldSplitSetDetectSaddlePoint(pc,PETSC_TRUE); PCSetUp(pc); PCFieldSplitGetSubKSP(pc, &n, &subksp); KSPGetPC(subksp[1], &(subpc[1])); I do not like the two lines above. We should not have to do this. KSPSetOperators(subksp[1],T,T); In the above line, I want you to use a separate preconditioning matrix M, instead of T. That way, it will provide the preconditioning matrix for your Schur complement problem. Thanks, Matt KSPSetUp(subksp[1]); PetscFree(subksp); TSSolve(ts,X); Thank you. 
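To make the above concrete for the linear case discussed in this thread: if the IFunction has the form F(t,U,Udot) = Udot - A*U (an assumption used only for this sketch), then the stage equation G(U) = F[t0+Theta*dt, U, (U-U0)*shift] = 0 with Theta=1 and shift=1/dt is exactly backward Euler, and the matrix the KSP sees is J = dF/dU + shift*dF/dUdot = shift*I - A. With the warm-started stage guess U = U0 the first nonlinear residual is F(t0+dt, U0, 0) = -A*U0, while a zero stage guess would give F(t0+dt, 0, -U0/dt) = -U0/dt, the expression quoted earlier, whose norm is ||U0||/dt. A sketch of the matching callbacks; the AppCtx holding A and the function names are illustrative, and P is assumed to have been created with MatDuplicate from A:

  typedef struct { Mat A; } AppCtx;

  PetscErrorCode FormIFunction(TS ts,PetscReal t,Vec U,Vec Udot,Vec F,void *ctx)
  {
    AppCtx        *user = (AppCtx*)ctx;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = MatMult(user->A,U,F);CHKERRQ(ierr);   /* F = A U         */
    ierr = VecAYPX(F,-1.0,Udot);CHKERRQ(ierr);   /* F = Udot - A U  */
    PetscFunctionReturn(0);
  }

  PetscErrorCode FormIJacobian(TS ts,PetscReal t,Vec U,Vec Udot,PetscReal shift,Mat J,Mat P,void *ctx)
  {
    AppCtx        *user = (AppCtx*)ctx;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = MatCopy(user->A,P,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = MatScale(P,-1.0);CHKERRQ(ierr);       /* P = -A           */
    ierr = MatShift(P,shift);CHKERRQ(ierr);      /* P = shift*I - A  */
    if (J != P) {
      ierr = MatAssemblyBegin(J,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(J,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    }
    PetscFunctionReturn(0);
  }

These would be registered with TSSetIFunction(ts,NULL,FormIFunction,&user) and TSSetIJacobian(ts,J,P,FormIJacobian,&user).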
Best, Zakariae ________________________________ From: Matthew Knepley > Sent: Wednesday, July 7, 2021 12:11:10 PM To: Jorti, Zakariae Cc: petsc-users at mcs.anl.gov; Tang, Qi; Tang, Xianzhu Subject: [EXTERNAL] Re: [petsc-users] Problem with PCFIELDSPLIT On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users > wrote: Hi, I am trying to build a PCFIELDSPLIT preconditioner for a matrix J = [A00 A01] [A10 A11] that has the following shape: M_{user}^{-1} = [I -ksp(A00) A01] [ksp(A00) 0] [I 0] [0 I] [0 ksp(T)] [-A10 ksp(A00) I ] where T is a user-defined Schur complement approximation that replaces the true Schur complement S:= A11 - A10 ksp(A00) A01. I am trying to do something similar to this example (lines 41--45 and 116--121): https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html The problem I have is that I manage to replace S with T on a separate single linear system but not for the linear systems generated by my time-dependent PDE. Even if I set the preconditioner M_{user}^{-1} correctly, the T matrix gets replaced by S in the preconditioner once I call TSSolve. Do you have any suggestions how to fix this knowing that the matrix J does not change over time? I don't like how it is done in that example for this very reason. When I want to use a custom preconditioning matrix for the Schur complement, I always give a preconditioning matrix M to the outer solve. Then PCFIELDSPLIT automatically pulls the correct block from M, (1,1) for the Schur complement, for that preconditioning matrix without extra code. Can you do this? Thanks, Matt Many thanks. Best regards, Zakariae -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aduarteg at utexas.edu Wed Jul 14 14:47:04 2021 From: aduarteg at utexas.edu (Alfredo J Duarte Gomez) Date: Wed, 14 Jul 2021 14:47:04 -0500 Subject: [petsc-users] Loading PETSC Grid Message-ID: Good morning, We are currently developing a PETSC application on a structured grid, however, we are applying a curvilinear transformation so that the physical domain is not a rectangle (the computational domain is). As a result we are trying to load in a mesh into the PETSC dmda object to create easy visualizations. I have successfully done this through the following code: ierr=VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,6400,&co);CHKERRQ(ierr); ierr=PetscObjectSetName((PetscObject) co, "coordinates");CHKERRQ(ierr); ierr=PetscViewerHDF5Open(PETSC_COMM_WORLD,"coord2d.h5",FILE_MODE_READ,&viewer);CHKERRQ(ierr); ierr=VecLoad(co,viewer);CHKERRQ(ierr); ierr=PetscViewerDestroy(&viewer);CHKERRQ(ierr); ierr=DMSetCoordinates(da,co);CHKERRQ(ierr); Where my coordinate vector in HDF5 format is a single column vector in the order x1,y1,x2,y2,...xN,yN,etc. where 1 is the lower left corner and N. 
However this seems like a rather crude way of doing it, it seems like using the DMGetCoordinateDM() and DMCreateGlobalVector() is a more sophisticated way of leveraging the dmda object, however when I use: DMGetCoordinateDM(da, &cda); DMCreateGlobalVector(cda, &co); ierr=PetscObjectSetName((PetscObject) co, "coordinates");CHKERRQ(ierr); ierr=PetscViewerHDF5Open(PETSC_COMM_WORLD,"coord2d.h5",FILE_MODE_READ,&viewer);CHKERRQ(ierr); ierr=VecLoad(co,viewer);CHKERRQ(ierr); ierr=PetscViewerDestroy(&viewer);CHKERRQ(ierr); ierr=DMSetCoordinates(da,co);CHKERRQ(ierr); The loading function complains about the structure/format of my data. I haven't been able to find the format that this coordinate dmda expects from an HDF5 file in any of the documentation. Is there anywhere I could find this information? Thank you, -Alfredo -- Alfredo Duarte Graduate Research Assistant The University of Texas at Austin -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Wed Jul 14 15:58:14 2021 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Wed, 14 Jul 2021 16:58:14 -0400 Subject: [petsc-users] Is it possible to keep track of original elements # after a call to DMPlexDistribute ? In-Reply-To: References: <7236c736-6066-1ba3-55b1-60782d8e754f@giref.ulaval.ca> Message-ID: Hi Matthew, Ok, I did that but it segfault now.? Here are the order of the calls: DMPlexCreate DMSetDimension DMPlexBuildFromCellListParallel(...) DMPlexInterpolate PetscPartitioner lPart; DMPlexGetPartitioner(lDMSansOverlap, &lPart); PetscPartitionerSetFromOptions(lPart); DMSetUseNatural(lDMSansOverlap, PETSC_TRUE) DMPlexDistribute DMPlexGlobalToNaturalBegin DMPlexGlobalToNaturalEnd But it gives me the following error: 0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Petsc has generated inconsistent data [0]PETSC ERROR: DM global to natural SF not present. If DMPlexDistribute() was called and a section was defined, report to petsc-maint at mcs.anl.gov. [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.15.0, Mar 30, 2021 [0]PETSC ERROR: MEF++.dev on a? named rohan by ericc Wed Jul 14 16:57:48 2021 [0]PETSC ERROR: Configure options --prefix=/opt/petsc-3.15.0_debug_openmpi-4.1.0_gcc7 --with-mpi-compilers=1 --with-mpi-dir=/opt/openmpi-4.1.0_gcc7 --with-cxx-dialect=C++14 --with-make-np=12 --with-shared-libraries=1 --with-debugging=yes --with-memalign=64 --with-visibility=0 --with-64-bit-indices=0 --download-ml=yes --download-mumps=yes --download-superlu=yes --download-hpddm=yes --download-slepc=yes --download-superlu_dist=yes --download-parmetis=yes --download-ptscotch=yes --download-metis=yes --download-strumpack=yes --download-suitesparse=yes --download-hypre=yes --with-blaslapack-dir=/opt/intel/oneapi/mkl/2021.1.1/env/../lib/intel64 --with-mkl_pardiso-dir=/opt/intel/oneapi/mkl/2021.1.1/env/.. --with-mkl_cpardiso-dir=/opt/intel/oneapi/mkl/2021.1.1/env/.. --with-scalapack=1 --with-scalapack-include=/opt/intel/oneapi/mkl/2021.1.1/env/../include --with-scalapack-lib="-L/opt/intel/oneapi/mkl/2021.1.1/env/../lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64" [0]PETSC ERROR: #1 DMPlexGlobalToNaturalBegin() at /tmp/ompi-opt/petsc-3.15.0-debug/src/dm/impls/plex/plexnatural.c:245 What did I missed? Thanks a lot! 
Eric On 2021-07-14 3:09 p.m., Matthew Knepley wrote: > On Wed, Jul 14, 2021 at 1:18 PM Eric Chamberland > > wrote: > > Hi, > > I want to use DMPlexDistribute from PETSc for computing > overlapping and > play with the different partitioners supported. > > However, after calling DMPlexDistribute, I noticed the elements are > renumbered and then the original number is lost. > > What would be the best way to keep track of the element renumbering? > > a) Adding an optional parameter to let the user retrieve a vector or > "IS" giving the old number? > > b) Adding a DMLabel (seems a wrong good solution) > > c) Other idea? > > Of course, I don't want to loose performances with the need of this > "mapping"... > > > You need to call > > https://petsc.org/release/docs/manualpages/DM/DMSetUseNatural.html > > > before call DMPlexDistribute(). Then you can call > > https://petsc.org/release/docs/manualpages/DMPLEX/DMPlexGlobalToNaturalBegin.html > > > to map back to the original numbering if you want. This is the same > thing that DMDA is doing. > > ? Thanks, > > ? ? ?Matt > > Thanks, > > Eric > > -- > Eric Chamberland, ing., M. Ing > Professionnel de recherche > GIREF/Universit? Laval > (418) 656-2131 poste 41 22 42 > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -- Eric Chamberland, ing., M. Ing Professionnel de recherche GIREF/Universit? Laval (418) 656-2131 poste 41 22 42 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tangqi at msu.edu Wed Jul 14 16:29:52 2021 From: tangqi at msu.edu (Tang, Qi) Date: Wed, 14 Jul 2021 21:29:52 +0000 Subject: [petsc-users] [EXTERNAL] Re: Problem with PCFIELDSPLIT In-Reply-To: <7A22FBCF-11A7-4480-99E8-4BF522B6B445@anl.gov> References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> <3EC17F46-3BDE-4755-8EDA-34C8853EE378@msu.edu> <3DCFCCF0-FFF3-4672-A1BC-85BAC5689FCC@gmail.com> <284DDF6B-80B8-43F3-83DE-8599A453CDBC@msu.edu> <7A22FBCF-11A7-4480-99E8-4BF522B6B445@anl.gov> Message-ID: <889C3708-D348-46E3-B708-34EE27C8A206@msu.edu> Thanks a lot, Hong and everyone. It makes a lot of sense now. We will continue with the fieldsplit business. Qi On Jul 14, 2021, at 1:45 PM, Zhang, Hong > wrote: On Jul 14, 2021, at 10:01 AM, Tang, Qi > wrote: Thanks a lot for the explanation, Matt and Stefano. That helps a lot. Just to confirm, the comment in src/ts/impls/implicit/theta/theta.c seems to indicates TS solves U_{n+1} in its SNES/KSP solve, but it actually solves the update dU_n in U_{n+1} = U_n - lambda*dU_n in the solve. Right? SNESSolve yields U_{n+1}. But KSPSolve yields the Newton direction dU_n at each SNES iteration. It actually makes a lot sense, because KSPSolve in TSSolve reports it uses zero initial guess. So if what I said is true, that effectively means it uses U0 as the initial guess. Correct. TSSolve uses a warm start SNES so the previous solution is used as the initial guess for the next SNESSolve. Note that TSSolve calls SNESSolve instead of calling KSPSolve directly even when you are solving a linear problem. Hong (Mr.) Qi On Jul 14, 2021, at 2:56 AM, Matthew Knepley > wrote: On Wed, Jul 14, 2021 at 4:43 AM Stefano Zampini > wrote: Qi Backward Euler is a special case of Theta methods in PETSc (Theta=1). 
In src/ts/impls/implicit/theta/theta.c on top of SNESTSFormFunction_Theta you have some explanation of what is solved for at each time step (see below). SNES then solves for the Newton update dy_n and the next Newton iterate is computed as x_{n+1} = x_{n} - lambda * dy_n. Hope this helps. In other words, you should be able to match the initial residual to F(t + dt, 0, -Un / dt) for your IFunction. However, it is really not normal to use U = 0. The default is to use U = U0 as the initial guess I think. Thanks, Matt /* This defines the nonlinear equation that is to be solved with SNES G(U) = F[t0+Theta*dt, U, (U-U0)*shift] = 0 Note that U here is the stage argument. This means that U = U_{n+1} only if endpoint = true, otherwise U = theta U_{n+1} + (1 - theta) U0, which for the case of implicit midpoint is U = (U_{n+1} + U0)/2 */ static PetscErrorCode SNESTSFormFunction_Theta(SNES snes,Vec x,Vec y,TS ts) On Jul 14, 2021, at 6:12 AM, Tang, Qi > wrote: Hi, During the process to experiment the suggestion Matt made, we ran into some questions regarding to TSSolve vs KSPSolve. We got different initial unpreconditioned residual using two solvers. Let?s say we solve the problem with backward Euler and there is no rhs. We guess TSSolve solves (U^{n+1}-U^n)/dt = A U^{n+1}. (We only provides IJacobian in this case and turn on TS_LINEAR.) So we guess the initial unpreconditioned residual would be ||U^n/dt||_2, which seems different from the residual we got from a backward Euler stepping we implemented by ourself through KSPSolve. Do we have some misunderstanding on TSSolve? Thanks, Qi T5 at LANL On Jul 7, 2021, at 3:54 PM, Matthew Knepley > wrote: On Wed, Jul 7, 2021 at 2:33 PM Jorti, Zakariae > wrote: Hi Matt, Thanks for your quick reply. I have not completely understood your suggestion, could you please elaborate a bit more? For your convenience, here is how I am proceeding for the moment in my code: TSGetKSP(ts,&ksp); KSPGetPC(ksp,&pc); PCSetType(pc,PCFIELDSPLIT); PCFieldSplitSetDetectSaddlePoint(pc,PETSC_TRUE); PCSetUp(pc); PCFieldSplitGetSubKSP(pc, &n, &subksp); KSPGetPC(subksp[1], &(subpc[1])); I do not like the two lines above. We should not have to do this. KSPSetOperators(subksp[1],T,T); In the above line, I want you to use a separate preconditioning matrix M, instead of T. That way, it will provide the preconditioning matrix for your Schur complement problem. Thanks, Matt KSPSetUp(subksp[1]); PetscFree(subksp); TSSolve(ts,X); Thank you. Best, Zakariae ________________________________ From: Matthew Knepley > Sent: Wednesday, July 7, 2021 12:11:10 PM To: Jorti, Zakariae Cc: petsc-users at mcs.anl.gov; Tang, Qi; Tang, Xianzhu Subject: [EXTERNAL] Re: [petsc-users] Problem with PCFIELDSPLIT On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users > wrote: Hi, I am trying to build a PCFIELDSPLIT preconditioner for a matrix J = [A00 A01] [A10 A11] that has the following shape: M_{user}^{-1} = [I -ksp(A00) A01] [ksp(A00) 0] [I 0] [0 I] [0 ksp(T)] [-A10 ksp(A00) I ] where T is a user-defined Schur complement approximation that replaces the true Schur complement S:= A11 - A10 ksp(A00) A01. I am trying to do something similar to this example (lines 41--45 and 116--121): https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html The problem I have is that I manage to replace S with T on a separate single linear system but not for the linear systems generated by my time-dependent PDE. 
Even if I set the preconditioner M_{user}^{-1} correctly, the T matrix gets replaced by S in the preconditioner once I call TSSolve. Do you have any suggestions how to fix this knowing that the matrix J does not change over time? I don't like how it is done in that example for this very reason. When I want to use a custom preconditioning matrix for the Schur complement, I always give a preconditioning matrix M to the outer solve. Then PCFIELDSPLIT automatically pulls the correct block from M, (1,1) for the Schur complement, for that preconditioning matrix without extra code. Can you do this? Thanks, Matt Many thanks. Best regards, Zakariae -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 14 17:42:52 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Jul 2021 18:42:52 -0400 Subject: [petsc-users] Is it possible to keep track of original elements # after a call to DMPlexDistribute ? In-Reply-To: References: <7236c736-6066-1ba3-55b1-60782d8e754f@giref.ulaval.ca> Message-ID: On Wed, Jul 14, 2021 at 4:58 PM Eric Chamberland < Eric.Chamberland at giref.ulaval.ca> wrote: > Hi Matthew, > > Ok, I did that but it segfault now. Here are the order of the calls: > > DMPlexCreate > > DMSetDimension > > DMPlexBuildFromCellListParallel(...) > > DMPlexInterpolate > > PetscPartitioner lPart; > DMPlexGetPartitioner(lDMSansOverlap, &lPart); > PetscPartitionerSetFromOptions(lPart); > > DMSetUseNatural(lDMSansOverlap, PETSC_TRUE) > > DMPlexDistribute > > DMPlexGlobalToNaturalBegin > > DMPlexGlobalToNaturalEnd > > > But it gives me the following error: > > 0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Petsc has generated inconsistent data > [0]PETSC ERROR: DM global to natural SF not present. > If DMPlexDistribute() was called and a section was defined, report to > petsc-maint at mcs.anl.gov. > > [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.15.0, Mar 30, 2021 > [0]PETSC ERROR: MEF++.dev on a named rohan by ericc Wed Jul 14 16:57:48 > 2021 > [0]PETSC ERROR: Configure options > --prefix=/opt/petsc-3.15.0_debug_openmpi-4.1.0_gcc7 --with-mpi-compilers=1 > --with-mpi-dir=/opt/openmpi-4.1.0_gcc7 --with-cxx-dialect=C++14 > --with-make-np=12 --with-shared-libraries=1 --with-debugging=yes > --with-memalign=64 --with-visibility=0 --with-64-bit-indices=0 > --download-ml=yes --download-mumps=yes --download-superlu=yes > --download-hpddm=yes --download-slepc=yes --download-superlu_dist=yes > --download-parmetis=yes --download-ptscotch=yes --download-metis=yes > --download-strumpack=yes --download-suitesparse=yes --download-hypre=yes > --with-blaslapack-dir=/opt/intel/oneapi/mkl/2021.1.1/env/../lib/intel64 > --with-mkl_pardiso-dir=/opt/intel/oneapi/mkl/2021.1.1/env/.. > --with-mkl_cpardiso-dir=/opt/intel/oneapi/mkl/2021.1.1/env/.. > --with-scalapack=1 > --with-scalapack-include=/opt/intel/oneapi/mkl/2021.1.1/env/../include > --with-scalapack-lib="-L/opt/intel/oneapi/mkl/2021.1.1/env/../lib/intel64 > -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64" > [0]PETSC ERROR: #1 DMPlexGlobalToNaturalBegin() at > /tmp/ompi-opt/petsc-3.15.0-debug/src/dm/impls/plex/plexnatural.c:245 > > What did I missed? > > Ah, there was a confusion of intent. GlobalToNatural() is for people that want data transformed back into the original order. I thought that was what you wanted. If you just want mesh points in the original order, we give you the transformation as part of the output of DMPlexDistribute(). The migrationSF that is output maps the original point to the distributed point. You run it backwards to get the original ordering. Thanks, Matt > Thanks a lot! > > Eric > On 2021-07-14 3:09 p.m., Matthew Knepley wrote: > > On Wed, Jul 14, 2021 at 1:18 PM Eric Chamberland < > Eric.Chamberland at giref.ulaval.ca> wrote: > >> Hi, >> >> I want to use DMPlexDistribute from PETSc for computing overlapping and >> play with the different partitioners supported. >> >> However, after calling DMPlexDistribute, I noticed the elements are >> renumbered and then the original number is lost. >> >> What would be the best way to keep track of the element renumbering? >> >> a) Adding an optional parameter to let the user retrieve a vector or >> "IS" giving the old number? >> >> b) Adding a DMLabel (seems a wrong good solution) >> >> c) Other idea? >> >> Of course, I don't want to loose performances with the need of this >> "mapping"... >> > > You need to call > > https://petsc.org/release/docs/manualpages/DM/DMSetUseNatural.html > > before call DMPlexDistribute(). Then you can call > > > https://petsc.org/release/docs/manualpages/DMPLEX/DMPlexGlobalToNaturalBegin.html > > to map back to the original numbering if you want. This is the same thing > that DMDA is doing. > > Thanks, > > Matt > > >> Thanks, >> >> Eric >> >> -- >> Eric Chamberland, ing., M. Ing >> Professionnel de recherche >> GIREF/Universit? Laval >> (418) 656-2131 poste 41 22 42 >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- > Eric Chamberland, ing., M. Ing > Professionnel de recherche > GIREF/Universit? 
Laval > (418) 656-2131 poste 41 22 42 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 14 18:14:04 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Jul 2021 19:14:04 -0400 Subject: [petsc-users] Loading PETSC Grid In-Reply-To: References: Message-ID: On Wed, Jul 14, 2021 at 3:47 PM Alfredo J Duarte Gomez wrote: > Good morning, > > We are currently developing a PETSC application on a structured grid, > however, we are applying a curvilinear transformation so that the physical > domain is not a rectangle (the computational domain is). > > As a result we are trying to load in a mesh into the PETSC dmda object to > create easy visualizations. I have successfully done this through the > following code: > > > ierr=VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,6400,&co);CHKERRQ(ierr); > ierr=PetscObjectSetName((PetscObject) co, > "coordinates");CHKERRQ(ierr); > > ierr=PetscViewerHDF5Open(PETSC_COMM_WORLD,"coord2d.h5",FILE_MODE_READ,&viewer);CHKERRQ(ierr); > ierr=VecLoad(co,viewer);CHKERRQ(ierr); > ierr=PetscViewerDestroy(&viewer);CHKERRQ(ierr); > ierr=DMSetCoordinates(da,co);CHKERRQ(ierr); > > Where my coordinate vector in HDF5 format is a single column vector in the > order x1,y1,x2,y2,...xN,yN,etc. where 1 is the lower left corner and N. > However this seems like a rather crude way of doing it, it seems like using > the DMGetCoordinateDM() and DMCreateGlobalVector() is a more sophisticated > way of leveraging the dmda object, however when I use: > > DMGetCoordinateDM(da, &cda); > DMCreateGlobalVector(cda, &co); > ierr=PetscObjectSetName((PetscObject) co, "coordinates");CHKERRQ(ierr); > > ierr=PetscViewerHDF5Open(PETSC_COMM_WORLD,"coord2d.h5",FILE_MODE_READ,&viewer);CHKERRQ(ierr); > ierr=VecLoad(co,viewer);CHKERRQ(ierr); > ierr=PetscViewerDestroy(&viewer);CHKERRQ(ierr); > ierr=DMSetCoordinates(da,co);CHKERRQ(ierr); > > The loading function complains about the structure/format of my data. I > haven't been able to find the format that this coordinate dmda expects from > an HDF5 file in any of the documentation. Is there anywhere I could find > this information? > Yes, I agree that it should work that way. You have to show me the complaint, or a small example that I can run. Perhaps it is the fact that the coordinate vector will have a block size (the spatial dimension), which your HDF5 data may be missing. Thanks, Matt > Thank you, > > -Alfredo > > > > > > -- > Alfredo Duarte > Graduate Research Assistant > The University of Texas at Austin > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From matteo.semplice at uninsubria.it Thu Jul 15 05:38:59 2021 From: matteo.semplice at uninsubria.it (Matteo Semplice) Date: Thu, 15 Jul 2021 12:38:59 +0200 Subject: [petsc-users] output DMDA to hdf5 file? In-Reply-To: References: <5ca2d4df-3f67-2951-c0b7-a2678d9df335@uninsubria.it> Message-ID: Il 12/07/21 17:51, Matthew Knepley ha scritto: > On Mon, Jul 12, 2021 at 11:40 AM Matteo Semplice > > > wrote: > > Dear all, > > ??? I am experimenting with hdf5+xdmf output. 
At > https://www.xdmf.org/index.php/XDMF_Model_and_Format > > I read that "XDMF uses XML to store Light data and to describe the > data Model. Either HDF5[3] > > or binary files can be used to store Heavy data. The data Format > is stored redundantly in both XML and HDF5." > > However, if I call DMView(dmda,hdf5viewer) and then I run h5ls or > h5stat on the resulting h5 file, I see no "geometry" section in > the file. How should I write the geometry to the HDF5 file? > > Here below is what I have tried. > > The HDF5 stuff is only implemented for DMPlex since unstructured?grids > need to be explicitly stored. You can usually just define the > structured grid in the XML > without putting anything in the HDF5. We could write metadata so that > the XML could be autogenerated, but we have not done that. Thanks for the clarification. It shouldn't be hard to produce the XML from my code. Just another related question: if I call VecView in parallel with the HDF5 viewer, I get a single output file. Does this mean that data are gathered by one process and written or it handles it smartly by coordinating the output of all processes to a single file? Matteo -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jul 15 07:15:47 2021 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 15 Jul 2021 08:15:47 -0400 Subject: [petsc-users] output DMDA to hdf5 file? In-Reply-To: References: <5ca2d4df-3f67-2951-c0b7-a2678d9df335@uninsubria.it> Message-ID: On Thu, Jul 15, 2021 at 6:39 AM Matteo Semplice < matteo.semplice at uninsubria.it> wrote: > > Il 12/07/21 17:51, Matthew Knepley ha scritto: > > On Mon, Jul 12, 2021 at 11:40 AM Matteo Semplice < > matteo.semplice at uninsubria.it> wrote: > >> Dear all, >> >> I am experimenting with hdf5+xdmf output. At >> https://www.xdmf.org/index.php/XDMF_Model_and_Format >> >> I read that "XDMF uses XML to store Light data and to describe the data >> Model. Either HDF5[3] >> >> or binary files can be used to store Heavy data. The data Format is stored >> redundantly in both XML and HDF5." >> >> However, if I call DMView(dmda,hdf5viewer) and then I run h5ls or h5stat >> on the resulting h5 file, I see no "geometry" section in the file. How >> should I write the geometry to the HDF5 file? >> >> Here below is what I have tried. >> > The HDF5 stuff is only implemented for DMPlex since unstructured grids > need to be explicitly stored. You can usually just define the structured > grid in the XML > without putting anything in the HDF5. We could write metadata so that the > XML could be autogenerated, but we have not done that. > > Thanks for the clarification. It shouldn't be hard to produce the XML from > my code. > > Just another related question: if I call VecView in parallel with the HDF5 > viewer, I get a single output file. Does this mean that data are gathered > by one process and written or it handles it smartly by coordinating the > output of all processes to a single file? > This is slightly more complicated than you would expect. We have two implementations, one which uses MPI-IO, and one which sends data from each process to 0, which writes it out. It turns out that MPI-IO is sometimes poorly supported or badly implemented, so you need the fallback. Thanks, Matt > Matteo > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
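Related to hand-writing the XDMF: if the DMDA holds more than one field and the XML is to reference each field as its own attribute, one option is to gather every component into a dof-1 vector before VecView, so that each one becomes a separately named dataset in the HDF5 file. A sketch, assuming a dof=2 DMDA da and a global vector U; the names "s", "c" and "solution.h5" are placeholders.

  DM          da1;
  Vec         vS,vC;
  PetscViewer viewer;

  ierr = DMDACreateCompatibleDMDA(da,1,&da1);CHKERRQ(ierr);   /* same layout, dof=1 */
  ierr = DMCreateGlobalVector(da1,&vS);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(da1,&vC);CHKERRQ(ierr);
  ierr = VecStrideGather(U,0,vS,INSERT_VALUES);CHKERRQ(ierr); /* component 0 */
  ierr = VecStrideGather(U,1,vC,INSERT_VALUES);CHKERRQ(ierr); /* component 1 */
  ierr = PetscObjectSetName((PetscObject)vS,"s");CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)vC,"c");CHKERRQ(ierr);
  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"solution.h5",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
  ierr = VecView(vS,viewer);CHKERRQ(ierr);   /* dataset /s */
  ierr = VecView(vC,viewer);CHKERRQ(ierr);   /* dataset /c */
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  ierr = VecDestroy(&vS);CHKERRQ(ierr);
  ierr = VecDestroy(&vC);CHKERRQ(ierr);
  ierr = DMDestroy(&da1);CHKERRQ(ierr);

The geometry itself then lives only in the hand-written XML, as noted above.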
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From matteo.semplice at uninsubria.it Thu Jul 15 07:20:18 2021 From: matteo.semplice at uninsubria.it (Matteo Semplice) Date: Thu, 15 Jul 2021 14:20:18 +0200 Subject: [petsc-users] output DMDA to hdf5 file? In-Reply-To: References: <5ca2d4df-3f67-2951-c0b7-a2678d9df335@uninsubria.it> Message-ID: Il 15/07/21 14:15, Matthew Knepley ha scritto: > On Thu, Jul 15, 2021 at 6:39 AM Matteo Semplice > > > wrote: > > > Il 12/07/21 17:51, Matthew Knepley ha scritto: >> On Mon, Jul 12, 2021 at 11:40 AM Matteo Semplice >> > > wrote: >> >> Dear all, >> >> ??? I am experimenting with hdf5+xdmf output. At >> https://www.xdmf.org/index.php/XDMF_Model_and_Format >> >> I read that "XDMF uses XML to store Light data and to >> describe the data Model. Either HDF5[3] >> >> or binary files can be used to store Heavy data. The data >> Format is stored redundantly in both XML and HDF5." >> >> However, if I call DMView(dmda,hdf5viewer) and then I run >> h5ls or h5stat on the resulting h5 file, I see no "geometry" >> section in the file. How should I write the geometry to the >> HDF5 file? >> >> Here below is what I have tried. >> >> The HDF5 stuff is only implemented for DMPlex since >> unstructured?grids need to be explicitly stored. You can usually >> just define the structured grid in the XML >> without putting anything in the HDF5. We could write metadata so >> that the XML could be autogenerated, but we have not done that. > > Thanks for the clarification. It shouldn't be hard to produce the > XML from my code. > > Just another related question: if I call VecView in parallel with > the HDF5 viewer, I get a single output file. Does this mean that > data are gathered by one process and written or it handles it > smartly by coordinating the output of all processes to a single file? > > This is slightly more complicated than you would expect. We have two > implementations, one which uses MPI-IO, and one which sends > data from each process to 0, which writes it out. It turns out that > MPI-IO is sometimes poorly?supported or badly implemented, so you need > the fallback. Thanks! On my machine I am compiling from the git repo with --download-hdf5, so I have some control, but on clusters I prefer to use the available petsc. Is there a simple way to check which implementation is begin used in a run? Matteo -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jul 15 07:26:34 2021 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 15 Jul 2021 08:26:34 -0400 Subject: [petsc-users] output DMDA to hdf5 file? In-Reply-To: References: <5ca2d4df-3f67-2951-c0b7-a2678d9df335@uninsubria.it> Message-ID: On Thu, Jul 15, 2021 at 8:20 AM Matteo Semplice < matteo.semplice at uninsubria.it> wrote: > > Il 15/07/21 14:15, Matthew Knepley ha scritto: > > On Thu, Jul 15, 2021 at 6:39 AM Matteo Semplice < > matteo.semplice at uninsubria.it> wrote: > >> >> Il 12/07/21 17:51, Matthew Knepley ha scritto: >> >> On Mon, Jul 12, 2021 at 11:40 AM Matteo Semplice < >> matteo.semplice at uninsubria.it> wrote: >> >>> Dear all, >>> >>> I am experimenting with hdf5+xdmf output. At >>> https://www.xdmf.org/index.php/XDMF_Model_and_Format >>> >>> I read that "XDMF uses XML to store Light data and to describe the data >>> Model. Either HDF5[3] >>> >>> or binary files can be used to store Heavy data. 
The data Format is stored >>> redundantly in both XML and HDF5." >>> >>> However, if I call DMView(dmda,hdf5viewer) and then I run h5ls or h5stat >>> on the resulting h5 file, I see no "geometry" section in the file. How >>> should I write the geometry to the HDF5 file? >>> >>> Here below is what I have tried. >>> >> The HDF5 stuff is only implemented for DMPlex since unstructured grids >> need to be explicitly stored. You can usually just define the structured >> grid in the XML >> without putting anything in the HDF5. We could write metadata so that the >> XML could be autogenerated, but we have not done that. >> >> Thanks for the clarification. It shouldn't be hard to produce the XML >> from my code. >> >> Just another related question: if I call VecView in parallel with the >> HDF5 viewer, I get a single output file. Does this mean that data are >> gathered by one process and written or it handles it smartly by >> coordinating the output of all processes to a single file? >> > This is slightly more complicated than you would expect. We have two > implementations, one which uses MPI-IO, and one which sends > data from each process to 0, which writes it out. It turns out that MPI-IO > is sometimes poorly supported or badly implemented, so you need > the fallback. > > Thanks! > > On my machine I am compiling from the git repo with --download-hdf5, so I > have some control, but on clusters I prefer to use the available petsc. > Is there a simple way to check which implementation is begin used in a run? > You have to check the configure output. We never gather everything to one process, so you should not have to worry about it. Thanks, Matt Matteo > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From matteo.semplice at uninsubria.it Thu Jul 15 07:53:43 2021 From: matteo.semplice at uninsubria.it (Matteo Semplice) Date: Thu, 15 Jul 2021 14:53:43 +0200 Subject: [petsc-users] output DMDA to hdf5 file? In-Reply-To: References: <5ca2d4df-3f67-2951-c0b7-a2678d9df335@uninsubria.it> Message-ID: <148effeb-cf30-bd40-ac1d-f740e795799a@uninsubria.it> Il 15/07/21 14:26, Matthew Knepley ha scritto: > On Thu, Jul 15, 2021 at 8:20 AM Matteo Semplice > > > wrote: > > > Il 15/07/21 14:15, Matthew Knepley ha scritto: >> On Thu, Jul 15, 2021 at 6:39 AM Matteo Semplice >> > > wrote: >> >> >> Il 12/07/21 17:51, Matthew Knepley ha scritto: >>> On Mon, Jul 12, 2021 at 11:40 AM Matteo Semplice >>> >> > wrote: >>> >>> Dear all, >>> >>> ??? I am experimenting with hdf5+xdmf output. At >>> https://www.xdmf.org/index.php/XDMF_Model_and_Format >>> >>> I read that "XDMF uses XML to store Light data and to >>> describe the data Model. Either HDF5[3] >>> >>> or binary files can be used to store Heavy data. The >>> data Format is stored redundantly in both XML and HDF5." >>> >>> However, if I call DMView(dmda,hdf5viewer) and then I >>> run h5ls or h5stat on the resulting h5 file, I see no >>> "geometry" section in the file. How should I write the >>> geometry to the HDF5 file? >>> >>> Here below is what I have tried. >>> >>> The HDF5 stuff is only implemented for DMPlex since >>> unstructured?grids need to be explicitly stored. You can >>> usually just define the structured grid in the XML >>> without putting anything in the HDF5. 
We could write >>> metadata so that the XML could be autogenerated, but we have >>> not done that. >> >> Thanks for the clarification. It shouldn't be hard to produce >> the XML from my code. >> >> Just another related question: if I call VecView in parallel >> with the HDF5 viewer, I get a single output file. Does this >> mean that data are gathered by one process and written or it >> handles it smartly by coordinating the output of all >> processes to a single file? >> >> This is slightly more complicated than you would expect. We have >> two implementations, one which uses MPI-IO, and one which sends >> data from each process to 0, which writes it out. It turns out >> that MPI-IO is sometimes poorly?supported or badly implemented, >> so you need >> the fallback. > > Thanks! > > On my machine I am compiling from the git repo with > --download-hdf5, so I have some control, but on clusters I prefer > to use the available petsc. > > Is there a simple way to check which implementation is begin used > in a run? > > You have to check the configure output. We never gather everything to > one process, so you should not have to worry about it. Thanks a lot! ??? Matteo > > ? ?Thanks, > > ? ? ? Matt > > Matteo > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -- --- Professore Associato in Analisi Numerica Dipartimento di Scienza e Alta Tecnologia Universit? degli Studi dell'Insubria Via Valleggio, 11 - Como -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Thu Jul 15 09:41:35 2021 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Thu, 15 Jul 2021 07:41:35 -0700 Subject: [petsc-users] MatZeroRows changes my sparsity pattern Message-ID: My interpretation of the documentation page of MatZeroRows is that if I've set MAT_KEEP_NONZERO_PATTERN to true, then my sparsity pattern shouldn't be changed by a call to it, e.g. a->imax should not change. However, at least for sequential matrices, MatAssemblyEnd is called with MAT_FINAL_ASSEMBLY at the end of MatZeroRows_SeqAIJ and that does indeed change my sparsity pattern. Is my interpretation of the documentation page wrong? Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Thu Jul 15 10:30:00 2021 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Thu, 15 Jul 2021 17:30:00 +0200 Subject: [petsc-users] MatZeroRows changes my sparsity pattern In-Reply-To: References: Message-ID: <38092983-3FE9-4780-A2E8-4C3AA897652A@gmail.com> Alexander Do you have a small code to reproduce the issue? Below is the output using a PETSc example (src/mat/tests/ex11). The pattern is kept. kl-18448:tests szampini$ ./ex11 Mat Object: 1 MPI processes type: seqaij row 0: (0, 5.) row 1: (0, -1.) (1, 4.) (2, -1.) (6, -1.) row 2: (2, 5.) row 3: (2, -1.) (3, 4.) (4, -1.) (8, -1.) row 4: (4, 5.) row 5: (0, -1.) (5, 4.) (6, -1.) (10, -1.) row 6: (6, 5.) row 7: (2, -1.) (6, -1.) (7, 4.) (8, -1.) (12, -1.) row 8: (8, 5.) row 9: (4, -1.) (8, -1.) (9, 4.) (14, -1.) row 10: (10, 5.) row 11: (6, -1.) (10, -1.) (11, 4.) (12, -1.) (16, -1.) row 12: (12, 5.) row 13: (8, -1.) (12, -1.) (13, 4.) (14, -1.) (18, -1.) row 14: (14, 5.) row 15: (10, -1.) (15, 4.) (16, -1.) (20, -1.) row 16: (16, 5.) row 17: (12, -1.) (16, -1.) (17, 4.) (18, -1.) (22, -1.) row 18: (18, 5.) 
row 19: (14, -1.) (18, -1.) (19, 4.) (24, -1.) row 20: (20, 5.) row 21: (16, -1.) (20, -1.) (21, 4.) (22, -1.) row 22: (22, 5.) row 23: (18, -1.) (22, -1.) (23, 4.) (24, -1.) row 24: (19, -1.) (23, -1.) (24, 4.) kl-18448:tests szampini$ ./ex11 -keep_nonzero_pattern Mat Object: 1 MPI processes type: seqaij row 0: (0, 5.) (1, 0.) (5, 0.) row 1: (0, -1.) (1, 4.) (2, -1.) (6, -1.) row 2: (1, 0.) (2, 5.) (3, 0.) (7, 0.) row 3: (2, -1.) (3, 4.) (4, -1.) (8, -1.) row 4: (3, 0.) (4, 5.) (9, 0.) row 5: (0, -1.) (5, 4.) (6, -1.) (10, -1.) row 6: (1, 0.) (5, 0.) (6, 5.) (7, 0.) (11, 0.) row 7: (2, -1.) (6, -1.) (7, 4.) (8, -1.) (12, -1.) row 8: (3, 0.) (7, 0.) (8, 5.) (9, 0.) (13, 0.) row 9: (4, -1.) (8, -1.) (9, 4.) (14, -1.) row 10: (5, 0.) (10, 5.) (11, 0.) (15, 0.) row 11: (6, -1.) (10, -1.) (11, 4.) (12, -1.) (16, -1.) row 12: (7, 0.) (11, 0.) (12, 5.) (13, 0.) (17, 0.) row 13: (8, -1.) (12, -1.) (13, 4.) (14, -1.) (18, -1.) row 14: (9, 0.) (13, 0.) (14, 5.) (19, 0.) row 15: (10, -1.) (15, 4.) (16, -1.) (20, -1.) row 16: (11, 0.) (15, 0.) (16, 5.) (17, 0.) (21, 0.) row 17: (12, -1.) (16, -1.) (17, 4.) (18, -1.) (22, -1.) row 18: (13, 0.) (17, 0.) (18, 5.) (19, 0.) (23, 0.) row 19: (14, -1.) (18, -1.) (19, 4.) (24, -1.) row 20: (15, 0.) (20, 5.) (21, 0.) row 21: (16, -1.) (20, -1.) (21, 4.) (22, -1.) row 22: (17, 0.) (21, 0.) (22, 5.) (23, 0.) row 23: (18, -1.) (22, -1.) (23, 4.) (24, -1.) row 24: (19, -1.) (23, -1.) (24, 4.) > On Jul 15, 2021, at 4:41 PM, Alexander Lindsay wrote: > > My interpretation of the documentation page of MatZeroRows is that if I've set MAT_KEEP_NONZERO_PATTERN to true, then my sparsity pattern shouldn't be changed by a call to it, e.g. a->imax should not change. However, at least for sequential matrices, MatAssemblyEnd is called with MAT_FINAL_ASSEMBLY at the end of MatZeroRows_SeqAIJ and that does indeed change my sparsity pattern. Is my interpretation of the documentation page wrong? > > Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From matteo.semplice at uninsubria.it Thu Jul 15 10:44:06 2021 From: matteo.semplice at uninsubria.it (Matteo Semplice) Date: Thu, 15 Jul 2021 17:44:06 +0200 Subject: [petsc-users] parallel HDF5 output of DMDA data with dof>1 Message-ID: <69d928b7-09c4-cc73-6c7e-dac4ee98f84a@uninsubria.it> Hi. When I write (HDF5 viewer) a vector associated to a DMDA with 1 dof, the output is independent of the number of cpus used. However, for a DMDA with dof=2, the output seems to be correct when I run on 1 or 2 cpus, but is scrambled when I run with 4 cpus. Judging from the ranges of the data, each field gets written to the correct part, and its the data witin the field that is scrambled. Here's my MWE: #include #include #include #include #include int main(int argc, char **argv) { ? PetscErrorCode ierr; ? ierr = PetscInitialize(&argc,&argv,(char*)0,help); CHKERRQ(ierr); ? PetscInt Nx=11; ? PetscInt Ny=11; ? PetscScalar dx = 1.0 / (Nx-1); ? PetscScalar dy = 1.0 / (Ny-1); ? DM dmda; ? ierr = DMDACreate2d(PETSC_COMM_WORLD, ????????????????????? DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, ????????????????????? DMDA_STENCIL_STAR, ????????????????????? Nx,Ny, //global dim ????????????????????? PETSC_DECIDE,PETSC_DECIDE, //n proc on each dim ????????????????????? 2,1, //dof, stencil width ????????????????????? NULL, NULL, //n nodes per direction on each cpu ????????????????????? &dmda);????? CHKERRQ(ierr); ? ierr = DMSetFromOptions(dmda); CHKERRQ(ierr); ? ierr = DMSetUp(dmda); CHKERRQ(ierr); CHKERRQ(ierr); ? 
ierr = DMDASetUniformCoordinates(dmda, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0); CHKERRQ(ierr); ? ierr = DMDASetFieldName(dmda,0,"s"); CHKERRQ(ierr); ? ierr = DMDASetFieldName(dmda,1,"c"); CHKERRQ(ierr); ? DMDALocalInfo daInfo; ? ierr = DMDAGetLocalInfo(dmda,&daInfo); CHKERRQ(ierr); ? IS *is; ? DM *daField; ? ierr = DMCreateFieldDecomposition(dmda,NULL, NULL, &is, &daField); CHKERRQ(ierr); ? Vec U0; ? ierr = DMCreateGlobalVector(dmda,&U0); CHKERRQ(ierr); ? //Initial data ? typedef struct{ PetscScalar s,c;} data_type; ? data_type **u; ? ierr = DMDAVecGetArray(dmda,U0,&u); CHKERRQ(ierr); ? for (PetscInt j=daInfo.ys; j ? ??? ????? ????? ??????? ??????? ????????? 0.0 0.0 ????????? 0.1 0.1 ??????? ??????? ????????? solutionSC.hdf5:/S ??????? ??????? ????????? solutionSC.hdf5:/C ??????? ????? ??? ? Steps to reprduce: run code and open the xdmf with paraview. If the code was run with 1,2 or 3 cpus, the data are correct (except the plane xy has become the plane yz), but with 4 cpus the data are scrambled. Does anyone have any insight? (I am using Petsc Release Version 3.14.2, but I can compile a newer one if you think it's important.) Best ??? Matteo From fdkong.jd at gmail.com Thu Jul 15 10:45:48 2021 From: fdkong.jd at gmail.com (Fande Kong) Date: Thu, 15 Jul 2021 09:45:48 -0600 Subject: [petsc-users] MatZeroRows changes my sparsity pattern In-Reply-To: <38092983-3FE9-4780-A2E8-4C3AA897652A@gmail.com> References: <38092983-3FE9-4780-A2E8-4C3AA897652A@gmail.com> Message-ID: "if (a->keepnonzeropattern)" branch does not change ilen so that A->ops->assemblyend will be fine. It would help if you made sure that elements have been inserted for these rows before you call MatZeroRows. However, I am not sure it is necessary to call A->ops->assemblyend if we already require a->keepnonzeropattern. That being said, we might have something like this *diff --git a/src/mat/impls/aij/seq/aij.c b/src/mat/impls/aij/seq/aij.c* *index 42c93a82b1..3f20a599d6 100644* *--- a/src/mat/impls/aij/seq/aij.c* *+++ b/src/mat/impls/aij/seq/aij.c* @@ -2203,7 +2203,9 @@ PetscErrorCode MatZeroRows_SeqAIJ(Mat A,PetscInt N,const PetscInt rows[],PetscSc #if defined(PETSC_HAVE_DEVICE) if (A->offloadmask != PETSC_OFFLOAD_UNALLOCATED) A->offloadmask = PETSC_OFFLOAD_CPU; #endif - ierr = (*A->ops->assemblyend)(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); + if (!a->keepnonzeropattern) { + ierr = (*A->ops->assemblyend)(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); + } PetscFunctionReturn(0); } Fande On Thu, Jul 15, 2021 at 9:30 AM Stefano Zampini wrote: > Alexander > > Do you have a small code to reproduce the issue? > > Below is the output using a PETSc example (src/mat/tests/ex11). The > pattern is kept. > > kl-18448:tests szampini$ ./ex11 > Mat Object: 1 MPI processes > type: seqaij > row 0: (0, 5.) > row 1: (0, -1.) (1, 4.) (2, -1.) (6, -1.) > row 2: (2, 5.) > row 3: (2, -1.) (3, 4.) (4, -1.) (8, -1.) > row 4: (4, 5.) > row 5: (0, -1.) (5, 4.) (6, -1.) (10, -1.) > row 6: (6, 5.) > row 7: (2, -1.) (6, -1.) (7, 4.) (8, -1.) (12, -1.) > row 8: (8, 5.) > row 9: (4, -1.) (8, -1.) (9, 4.) (14, -1.) > row 10: (10, 5.) > row 11: (6, -1.) (10, -1.) (11, 4.) (12, -1.) (16, -1.) > row 12: (12, 5.) > row 13: (8, -1.) (12, -1.) (13, 4.) (14, -1.) (18, -1.) > row 14: (14, 5.) > row 15: (10, -1.) (15, 4.) (16, -1.) (20, -1.) > row 16: (16, 5.) > row 17: (12, -1.) (16, -1.) (17, 4.) (18, -1.) (22, -1.) > row 18: (18, 5.) > row 19: (14, -1.) (18, -1.) (19, 4.) (24, -1.) > row 20: (20, 5.) > row 21: (16, -1.) (20, -1.) (21, 4.) (22, -1.) > row 22: (22, 5.) > row 23: (18, -1.) 
(22, -1.) (23, 4.) (24, -1.) > row 24: (19, -1.) (23, -1.) (24, 4.) > kl-18448:tests szampini$ ./ex11 -keep_nonzero_pattern > Mat Object: 1 MPI processes > type: seqaij > row 0: (0, 5.) (1, 0.) (5, 0.) > row 1: (0, -1.) (1, 4.) (2, -1.) (6, -1.) > row 2: (1, 0.) (2, 5.) (3, 0.) (7, 0.) > row 3: (2, -1.) (3, 4.) (4, -1.) (8, -1.) > row 4: (3, 0.) (4, 5.) (9, 0.) > row 5: (0, -1.) (5, 4.) (6, -1.) (10, -1.) > row 6: (1, 0.) (5, 0.) (6, 5.) (7, 0.) (11, 0.) > row 7: (2, -1.) (6, -1.) (7, 4.) (8, -1.) (12, -1.) > row 8: (3, 0.) (7, 0.) (8, 5.) (9, 0.) (13, 0.) > row 9: (4, -1.) (8, -1.) (9, 4.) (14, -1.) > row 10: (5, 0.) (10, 5.) (11, 0.) (15, 0.) > row 11: (6, -1.) (10, -1.) (11, 4.) (12, -1.) (16, -1.) > row 12: (7, 0.) (11, 0.) (12, 5.) (13, 0.) (17, 0.) > row 13: (8, -1.) (12, -1.) (13, 4.) (14, -1.) (18, -1.) > row 14: (9, 0.) (13, 0.) (14, 5.) (19, 0.) > row 15: (10, -1.) (15, 4.) (16, -1.) (20, -1.) > row 16: (11, 0.) (15, 0.) (16, 5.) (17, 0.) (21, 0.) > row 17: (12, -1.) (16, -1.) (17, 4.) (18, -1.) (22, -1.) > row 18: (13, 0.) (17, 0.) (18, 5.) (19, 0.) (23, 0.) > row 19: (14, -1.) (18, -1.) (19, 4.) (24, -1.) > row 20: (15, 0.) (20, 5.) (21, 0.) > row 21: (16, -1.) (20, -1.) (21, 4.) (22, -1.) > row 22: (17, 0.) (21, 0.) (22, 5.) (23, 0.) > row 23: (18, -1.) (22, -1.) (23, 4.) (24, -1.) > row 24: (19, -1.) (23, -1.) (24, 4.) > > On Jul 15, 2021, at 4:41 PM, Alexander Lindsay > wrote: > > My interpretation of the documentation page of MatZeroRows is that if I've > set MAT_KEEP_NONZERO_PATTERN to true, then my sparsity pattern shouldn't be > changed by a call to it, e.g. a->imax should not change. However, at least > for sequential matrices, MatAssemblyEnd is called with MAT_FINAL_ASSEMBLY > at the end of MatZeroRows_SeqAIJ and that does indeed change my sparsity > pattern. Is my interpretation of the documentation page wrong? > > Alex > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Thu Jul 15 10:51:29 2021 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Thu, 15 Jul 2021 08:51:29 -0700 Subject: [petsc-users] MatZeroRows changes my sparsity pattern In-Reply-To: References: <38092983-3FE9-4780-A2E8-4C3AA897652A@gmail.com> Message-ID: On Thu, Jul 15, 2021 at 8:46 AM Fande Kong wrote: > "if (a->keepnonzeropattern)" branch does not change ilen so that > A->ops->assemblyend will be fine. It would help if you made sure that > elements have been inserted for these rows before you call MatZeroRows. > So this is the crux of the problem. In ex11 let's say that I had not insert a value at (0,5) but I know I'm going to later and I've preallocated the space for it. MatZeroValues will erase that preallocation with its call to MatAssemblyEnd with MAT_FINAL_ASSEMBLY regardless of the value of keepnonzeropattern. > However, I am not sure it is necessary to call A->ops->assemblyend if we > already require a->keepnonzeropattern. 
That being said, we might have > something like this > > > *diff --git a/src/mat/impls/aij/seq/aij.c b/src/mat/impls/aij/seq/aij.c* > > *index 42c93a82b1..3f20a599d6 100644* > > *--- a/src/mat/impls/aij/seq/aij.c* > > *+++ b/src/mat/impls/aij/seq/aij.c* > > @@ -2203,7 +2203,9 @@ PetscErrorCode MatZeroRows_SeqAIJ(Mat A,PetscInt > N,const PetscInt rows[],PetscSc > > #if defined(PETSC_HAVE_DEVICE) > > if (A->offloadmask != PETSC_OFFLOAD_UNALLOCATED) A->offloadmask = > PETSC_OFFLOAD_CPU; > > #endif > > - ierr = (*A->ops->assemblyend)(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > > + if (!a->keepnonzeropattern) { > > + ierr = (*A->ops->assemblyend)(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > > + } > > PetscFunctionReturn(0); > > } > > > Fande > > On Thu, Jul 15, 2021 at 9:30 AM Stefano Zampini > wrote: > >> Alexander >> >> Do you have a small code to reproduce the issue? >> >> Below is the output using a PETSc example (src/mat/tests/ex11). The >> pattern is kept. >> >> kl-18448:tests szampini$ ./ex11 >> Mat Object: 1 MPI processes >> type: seqaij >> row 0: (0, 5.) >> row 1: (0, -1.) (1, 4.) (2, -1.) (6, -1.) >> row 2: (2, 5.) >> row 3: (2, -1.) (3, 4.) (4, -1.) (8, -1.) >> row 4: (4, 5.) >> row 5: (0, -1.) (5, 4.) (6, -1.) (10, -1.) >> row 6: (6, 5.) >> row 7: (2, -1.) (6, -1.) (7, 4.) (8, -1.) (12, -1.) >> row 8: (8, 5.) >> row 9: (4, -1.) (8, -1.) (9, 4.) (14, -1.) >> row 10: (10, 5.) >> row 11: (6, -1.) (10, -1.) (11, 4.) (12, -1.) (16, -1.) >> row 12: (12, 5.) >> row 13: (8, -1.) (12, -1.) (13, 4.) (14, -1.) (18, -1.) >> row 14: (14, 5.) >> row 15: (10, -1.) (15, 4.) (16, -1.) (20, -1.) >> row 16: (16, 5.) >> row 17: (12, -1.) (16, -1.) (17, 4.) (18, -1.) (22, -1.) >> row 18: (18, 5.) >> row 19: (14, -1.) (18, -1.) (19, 4.) (24, -1.) >> row 20: (20, 5.) >> row 21: (16, -1.) (20, -1.) (21, 4.) (22, -1.) >> row 22: (22, 5.) >> row 23: (18, -1.) (22, -1.) (23, 4.) (24, -1.) >> row 24: (19, -1.) (23, -1.) (24, 4.) >> kl-18448:tests szampini$ ./ex11 -keep_nonzero_pattern >> Mat Object: 1 MPI processes >> type: seqaij >> row 0: (0, 5.) (1, 0.) (5, 0.) >> row 1: (0, -1.) (1, 4.) (2, -1.) (6, -1.) >> row 2: (1, 0.) (2, 5.) (3, 0.) (7, 0.) >> row 3: (2, -1.) (3, 4.) (4, -1.) (8, -1.) >> row 4: (3, 0.) (4, 5.) (9, 0.) >> row 5: (0, -1.) (5, 4.) (6, -1.) (10, -1.) >> row 6: (1, 0.) (5, 0.) (6, 5.) (7, 0.) (11, 0.) >> row 7: (2, -1.) (6, -1.) (7, 4.) (8, -1.) (12, -1.) >> row 8: (3, 0.) (7, 0.) (8, 5.) (9, 0.) (13, 0.) >> row 9: (4, -1.) (8, -1.) (9, 4.) (14, -1.) >> row 10: (5, 0.) (10, 5.) (11, 0.) (15, 0.) >> row 11: (6, -1.) (10, -1.) (11, 4.) (12, -1.) (16, -1.) >> row 12: (7, 0.) (11, 0.) (12, 5.) (13, 0.) (17, 0.) >> row 13: (8, -1.) (12, -1.) (13, 4.) (14, -1.) (18, -1.) >> row 14: (9, 0.) (13, 0.) (14, 5.) (19, 0.) >> row 15: (10, -1.) (15, 4.) (16, -1.) (20, -1.) >> row 16: (11, 0.) (15, 0.) (16, 5.) (17, 0.) (21, 0.) >> row 17: (12, -1.) (16, -1.) (17, 4.) (18, -1.) (22, -1.) >> row 18: (13, 0.) (17, 0.) (18, 5.) (19, 0.) (23, 0.) >> row 19: (14, -1.) (18, -1.) (19, 4.) (24, -1.) >> row 20: (15, 0.) (20, 5.) (21, 0.) >> row 21: (16, -1.) (20, -1.) (21, 4.) (22, -1.) >> row 22: (17, 0.) (21, 0.) (22, 5.) (23, 0.) >> row 23: (18, -1.) (22, -1.) (23, 4.) (24, -1.) >> row 24: (19, -1.) (23, -1.) (24, 4.) >> >> On Jul 15, 2021, at 4:41 PM, Alexander Lindsay >> wrote: >> >> My interpretation of the documentation page of MatZeroRows is that if >> I've set MAT_KEEP_NONZERO_PATTERN to true, then my sparsity pattern >> shouldn't be changed by a call to it, e.g. a->imax should not change. 
>> However, at least for sequential matrices, MatAssemblyEnd is called with >> MAT_FINAL_ASSEMBLY at the end of MatZeroRows_SeqAIJ and that does indeed >> change my sparsity pattern. Is my interpretation of the documentation page >> wrong? >> >> Alex >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Thu Jul 15 11:33:31 2021 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Thu, 15 Jul 2021 09:33:31 -0700 Subject: [petsc-users] MatZeroRows changes my sparsity pattern In-Reply-To: References: <38092983-3FE9-4780-A2E8-4C3AA897652A@gmail.com> Message-ID: Especially if the user has requested to keep their nonzero pattern, is there any harm in calling MatAssembly with FLUSH instead of FINAL? Are there users relying on MatZeroValues being their final assembly? On Thu, Jul 15, 2021 at 8:51 AM Alexander Lindsay wrote: > On Thu, Jul 15, 2021 at 8:46 AM Fande Kong wrote: > >> "if (a->keepnonzeropattern)" branch does not change ilen so that >> A->ops->assemblyend will be fine. It would help if you made sure that >> elements have been inserted for these rows before you call MatZeroRows. >> > > So this is the crux of the problem. In ex11 let's say that I had not > insert a value at (0,5) but I know I'm going to later and I've preallocated > the space for it. MatZeroValues will erase that preallocation with its call > to MatAssemblyEnd with MAT_FINAL_ASSEMBLY regardless of the value of > keepnonzeropattern. > > >> However, I am not sure it is necessary to call A->ops->assemblyend if we >> already require a->keepnonzeropattern. That being said, we might have >> something like this >> >> >> *diff --git a/src/mat/impls/aij/seq/aij.c b/src/mat/impls/aij/seq/aij.c* >> >> *index 42c93a82b1..3f20a599d6 100644* >> >> *--- a/src/mat/impls/aij/seq/aij.c* >> >> *+++ b/src/mat/impls/aij/seq/aij.c* >> >> @@ -2203,7 +2203,9 @@ PetscErrorCode MatZeroRows_SeqAIJ(Mat A,PetscInt >> N,const PetscInt rows[],PetscSc >> >> #if defined(PETSC_HAVE_DEVICE) >> >> if (A->offloadmask != PETSC_OFFLOAD_UNALLOCATED) A->offloadmask = >> PETSC_OFFLOAD_CPU; >> >> #endif >> >> - ierr = (*A->ops->assemblyend)(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); >> >> + if (!a->keepnonzeropattern) { >> >> + ierr = (*A->ops->assemblyend)(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); >> >> + } >> >> PetscFunctionReturn(0); >> >> } >> >> >> Fande >> >> On Thu, Jul 15, 2021 at 9:30 AM Stefano Zampini < >> stefano.zampini at gmail.com> wrote: >> >>> Alexander >>> >>> Do you have a small code to reproduce the issue? >>> >>> Below is the output using a PETSc example (src/mat/tests/ex11). The >>> pattern is kept. >>> >>> kl-18448:tests szampini$ ./ex11 >>> Mat Object: 1 MPI processes >>> type: seqaij >>> row 0: (0, 5.) >>> row 1: (0, -1.) (1, 4.) (2, -1.) (6, -1.) >>> row 2: (2, 5.) >>> row 3: (2, -1.) (3, 4.) (4, -1.) (8, -1.) >>> row 4: (4, 5.) >>> row 5: (0, -1.) (5, 4.) (6, -1.) (10, -1.) >>> row 6: (6, 5.) >>> row 7: (2, -1.) (6, -1.) (7, 4.) (8, -1.) (12, -1.) >>> row 8: (8, 5.) >>> row 9: (4, -1.) (8, -1.) (9, 4.) (14, -1.) >>> row 10: (10, 5.) >>> row 11: (6, -1.) (10, -1.) (11, 4.) (12, -1.) (16, -1.) >>> row 12: (12, 5.) >>> row 13: (8, -1.) (12, -1.) (13, 4.) (14, -1.) (18, -1.) >>> row 14: (14, 5.) >>> row 15: (10, -1.) (15, 4.) (16, -1.) (20, -1.) >>> row 16: (16, 5.) >>> row 17: (12, -1.) (16, -1.) (17, 4.) (18, -1.) (22, -1.) >>> row 18: (18, 5.) >>> row 19: (14, -1.) (18, -1.) (19, 4.) (24, -1.) >>> row 20: (20, 5.) >>> row 21: (16, -1.) (20, -1.) (21, 4.) (22, -1.) 
>>> row 22: (22, 5.) >>> row 23: (18, -1.) (22, -1.) (23, 4.) (24, -1.) >>> row 24: (19, -1.) (23, -1.) (24, 4.) >>> kl-18448:tests szampini$ ./ex11 -keep_nonzero_pattern >>> Mat Object: 1 MPI processes >>> type: seqaij >>> row 0: (0, 5.) (1, 0.) (5, 0.) >>> row 1: (0, -1.) (1, 4.) (2, -1.) (6, -1.) >>> row 2: (1, 0.) (2, 5.) (3, 0.) (7, 0.) >>> row 3: (2, -1.) (3, 4.) (4, -1.) (8, -1.) >>> row 4: (3, 0.) (4, 5.) (9, 0.) >>> row 5: (0, -1.) (5, 4.) (6, -1.) (10, -1.) >>> row 6: (1, 0.) (5, 0.) (6, 5.) (7, 0.) (11, 0.) >>> row 7: (2, -1.) (6, -1.) (7, 4.) (8, -1.) (12, -1.) >>> row 8: (3, 0.) (7, 0.) (8, 5.) (9, 0.) (13, 0.) >>> row 9: (4, -1.) (8, -1.) (9, 4.) (14, -1.) >>> row 10: (5, 0.) (10, 5.) (11, 0.) (15, 0.) >>> row 11: (6, -1.) (10, -1.) (11, 4.) (12, -1.) (16, -1.) >>> row 12: (7, 0.) (11, 0.) (12, 5.) (13, 0.) (17, 0.) >>> row 13: (8, -1.) (12, -1.) (13, 4.) (14, -1.) (18, -1.) >>> row 14: (9, 0.) (13, 0.) (14, 5.) (19, 0.) >>> row 15: (10, -1.) (15, 4.) (16, -1.) (20, -1.) >>> row 16: (11, 0.) (15, 0.) (16, 5.) (17, 0.) (21, 0.) >>> row 17: (12, -1.) (16, -1.) (17, 4.) (18, -1.) (22, -1.) >>> row 18: (13, 0.) (17, 0.) (18, 5.) (19, 0.) (23, 0.) >>> row 19: (14, -1.) (18, -1.) (19, 4.) (24, -1.) >>> row 20: (15, 0.) (20, 5.) (21, 0.) >>> row 21: (16, -1.) (20, -1.) (21, 4.) (22, -1.) >>> row 22: (17, 0.) (21, 0.) (22, 5.) (23, 0.) >>> row 23: (18, -1.) (22, -1.) (23, 4.) (24, -1.) >>> row 24: (19, -1.) (23, -1.) (24, 4.) >>> >>> On Jul 15, 2021, at 4:41 PM, Alexander Lindsay >>> wrote: >>> >>> My interpretation of the documentation page of MatZeroRows is that if >>> I've set MAT_KEEP_NONZERO_PATTERN to true, then my sparsity pattern >>> shouldn't be changed by a call to it, e.g. a->imax should not change. >>> However, at least for sequential matrices, MatAssemblyEnd is called with >>> MAT_FINAL_ASSEMBLY at the end of MatZeroRows_SeqAIJ and that does indeed >>> change my sparsity pattern. Is my interpretation of the documentation page >>> wrong? >>> >>> Alex >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Thu Jul 15 12:33:43 2021 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Thu, 15 Jul 2021 10:33:43 -0700 Subject: [petsc-users] MatZeroRows changes my sparsity pattern In-Reply-To: References: <38092983-3FE9-4780-A2E8-4C3AA897652A@gmail.com> Message-ID: After talking with Fande, I don't think my proposal is a good one. Whereas MatSetValues makes it clear that you must call through to MatAssemblyBegin/MatAssemblyEnd after use, there is no such indication for MatZeroRows. Probably most users expect to be able to go onto preconditioning after MatZeroRows, no matter the value of MAT_KEEP_NONZERO_PATTERN. For MOOSE's use, Fande is proposing an option to not shrink memory that we would toggle on at the beginning of Jacobian evaluation and then toggle off before our final call to MatAssemblyBegin/End. With that in mind, I consider this thread to be resolved... On Thu, Jul 15, 2021 at 9:33 AM Alexander Lindsay wrote: > Especially if the user has requested to keep their nonzero pattern, is > there any harm in calling MatAssembly with FLUSH instead of FINAL? Are > there users relying on MatZeroValues being their final assembly? > > On Thu, Jul 15, 2021 at 8:51 AM Alexander Lindsay < > alexlindsay239 at gmail.com> wrote: > >> On Thu, Jul 15, 2021 at 8:46 AM Fande Kong wrote: >> >>> "if (a->keepnonzeropattern)" branch does not change ilen so that >>> A->ops->assemblyend will be fine. 
It would help if you made sure that >>> elements have been inserted for these rows before you call MatZeroRows. >>> >> >> So this is the crux of the problem. In ex11 let's say that I had not >> insert a value at (0,5) but I know I'm going to later and I've preallocated >> the space for it. MatZeroValues will erase that preallocation with its call >> to MatAssemblyEnd with MAT_FINAL_ASSEMBLY regardless of the value of >> keepnonzeropattern. >> >> >>> However, I am not sure it is necessary to call A->ops->assemblyend if >>> we already require a->keepnonzeropattern. That being said, we might have >>> something like this >>> >>> >>> *diff --git a/src/mat/impls/aij/seq/aij.c b/src/mat/impls/aij/seq/aij.c* >>> >>> *index 42c93a82b1..3f20a599d6 100644* >>> >>> *--- a/src/mat/impls/aij/seq/aij.c* >>> >>> *+++ b/src/mat/impls/aij/seq/aij.c* >>> >>> @@ -2203,7 +2203,9 @@ PetscErrorCode MatZeroRows_SeqAIJ(Mat A,PetscInt >>> N,const PetscInt rows[],PetscSc >>> >>> #if defined(PETSC_HAVE_DEVICE) >>> >>> if (A->offloadmask != PETSC_OFFLOAD_UNALLOCATED) A->offloadmask = >>> PETSC_OFFLOAD_CPU; >>> >>> #endif >>> >>> - ierr = (*A->ops->assemblyend)(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); >>> >>> + if (!a->keepnonzeropattern) { >>> >>> + ierr = (*A->ops->assemblyend)(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); >>> >>> + } >>> >>> PetscFunctionReturn(0); >>> >>> } >>> >>> >>> Fande >>> >>> On Thu, Jul 15, 2021 at 9:30 AM Stefano Zampini < >>> stefano.zampini at gmail.com> wrote: >>> >>>> Alexander >>>> >>>> Do you have a small code to reproduce the issue? >>>> >>>> Below is the output using a PETSc example (src/mat/tests/ex11). The >>>> pattern is kept. >>>> >>>> kl-18448:tests szampini$ ./ex11 >>>> Mat Object: 1 MPI processes >>>> type: seqaij >>>> row 0: (0, 5.) >>>> row 1: (0, -1.) (1, 4.) (2, -1.) (6, -1.) >>>> row 2: (2, 5.) >>>> row 3: (2, -1.) (3, 4.) (4, -1.) (8, -1.) >>>> row 4: (4, 5.) >>>> row 5: (0, -1.) (5, 4.) (6, -1.) (10, -1.) >>>> row 6: (6, 5.) >>>> row 7: (2, -1.) (6, -1.) (7, 4.) (8, -1.) (12, -1.) >>>> row 8: (8, 5.) >>>> row 9: (4, -1.) (8, -1.) (9, 4.) (14, -1.) >>>> row 10: (10, 5.) >>>> row 11: (6, -1.) (10, -1.) (11, 4.) (12, -1.) (16, -1.) >>>> row 12: (12, 5.) >>>> row 13: (8, -1.) (12, -1.) (13, 4.) (14, -1.) (18, -1.) >>>> row 14: (14, 5.) >>>> row 15: (10, -1.) (15, 4.) (16, -1.) (20, -1.) >>>> row 16: (16, 5.) >>>> row 17: (12, -1.) (16, -1.) (17, 4.) (18, -1.) (22, -1.) >>>> row 18: (18, 5.) >>>> row 19: (14, -1.) (18, -1.) (19, 4.) (24, -1.) >>>> row 20: (20, 5.) >>>> row 21: (16, -1.) (20, -1.) (21, 4.) (22, -1.) >>>> row 22: (22, 5.) >>>> row 23: (18, -1.) (22, -1.) (23, 4.) (24, -1.) >>>> row 24: (19, -1.) (23, -1.) (24, 4.) >>>> kl-18448:tests szampini$ ./ex11 -keep_nonzero_pattern >>>> Mat Object: 1 MPI processes >>>> type: seqaij >>>> row 0: (0, 5.) (1, 0.) (5, 0.) >>>> row 1: (0, -1.) (1, 4.) (2, -1.) (6, -1.) >>>> row 2: (1, 0.) (2, 5.) (3, 0.) (7, 0.) >>>> row 3: (2, -1.) (3, 4.) (4, -1.) (8, -1.) >>>> row 4: (3, 0.) (4, 5.) (9, 0.) >>>> row 5: (0, -1.) (5, 4.) (6, -1.) (10, -1.) >>>> row 6: (1, 0.) (5, 0.) (6, 5.) (7, 0.) (11, 0.) >>>> row 7: (2, -1.) (6, -1.) (7, 4.) (8, -1.) (12, -1.) >>>> row 8: (3, 0.) (7, 0.) (8, 5.) (9, 0.) (13, 0.) >>>> row 9: (4, -1.) (8, -1.) (9, 4.) (14, -1.) >>>> row 10: (5, 0.) (10, 5.) (11, 0.) (15, 0.) >>>> row 11: (6, -1.) (10, -1.) (11, 4.) (12, -1.) (16, -1.) >>>> row 12: (7, 0.) (11, 0.) (12, 5.) (13, 0.) (17, 0.) >>>> row 13: (8, -1.) (12, -1.) (13, 4.) (14, -1.) (18, -1.) >>>> row 14: (9, 0.) (13, 0.) (14, 5.) (19, 0.) 
>>>> row 15: (10, -1.) (15, 4.) (16, -1.) (20, -1.) >>>> row 16: (11, 0.) (15, 0.) (16, 5.) (17, 0.) (21, 0.) >>>> row 17: (12, -1.) (16, -1.) (17, 4.) (18, -1.) (22, -1.) >>>> row 18: (13, 0.) (17, 0.) (18, 5.) (19, 0.) (23, 0.) >>>> row 19: (14, -1.) (18, -1.) (19, 4.) (24, -1.) >>>> row 20: (15, 0.) (20, 5.) (21, 0.) >>>> row 21: (16, -1.) (20, -1.) (21, 4.) (22, -1.) >>>> row 22: (17, 0.) (21, 0.) (22, 5.) (23, 0.) >>>> row 23: (18, -1.) (22, -1.) (23, 4.) (24, -1.) >>>> row 24: (19, -1.) (23, -1.) (24, 4.) >>>> >>>> On Jul 15, 2021, at 4:41 PM, Alexander Lindsay < >>>> alexlindsay239 at gmail.com> wrote: >>>> >>>> My interpretation of the documentation page of MatZeroRows is that if >>>> I've set MAT_KEEP_NONZERO_PATTERN to true, then my sparsity pattern >>>> shouldn't be changed by a call to it, e.g. a->imax should not change. >>>> However, at least for sequential matrices, MatAssemblyEnd is called with >>>> MAT_FINAL_ASSEMBLY at the end of MatZeroRows_SeqAIJ and that does indeed >>>> change my sparsity pattern. Is my interpretation of the documentation page >>>> wrong? >>>> >>>> Alex >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From matteo.semplice at uninsubria.it Fri Jul 16 05:27:09 2021 From: matteo.semplice at uninsubria.it (Matteo Semplice) Date: Fri, 16 Jul 2021 12:27:09 +0200 Subject: [petsc-users] parallel HDF5 output of DMDA data with dof>1 In-Reply-To: <69d928b7-09c4-cc73-6c7e-dac4ee98f84a@uninsubria.it> References: <69d928b7-09c4-cc73-6c7e-dac4ee98f84a@uninsubria.it> Message-ID: <6c443aac-e9c6-0704-beb5-05afa8c38798@uninsubria.it> Il 15/07/21 17:44, Matteo Semplice ha scritto: > Hi. > > When I write (HDF5 viewer) a vector associated to a DMDA with 1 dof, > the output is independent of the number of cpus used. > > However, for a DMDA with dof=2, the output seems to be correct when I > run on 1 or 2 cpus, but is scrambled when I run with 4 cpus. Judging > from the ranges of the data, each field gets written to the correct > part, and its the data witin the field that is scrambled. Here's my MWE: > > #include > #include > #include > #include > #include > > int main(int argc, char **argv) { > > ? PetscErrorCode ierr; > ? ierr = PetscInitialize(&argc,&argv,(char*)0,help); CHKERRQ(ierr); > ? PetscInt Nx=11; > ? PetscInt Ny=11; > ? PetscScalar dx = 1.0 / (Nx-1); > ? PetscScalar dy = 1.0 / (Ny-1); > ? DM dmda; > ? ierr = DMDACreate2d(PETSC_COMM_WORLD, > ????????????????????? DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, > ????????????????????? DMDA_STENCIL_STAR, > ????????????????????? Nx,Ny, //global dim > ????????????????????? PETSC_DECIDE,PETSC_DECIDE, //n proc on each dim > ????????????????????? 2,1, //dof, stencil width > ????????????????????? NULL, NULL, //n nodes per direction on each cpu > ????????????????????? &dmda);????? CHKERRQ(ierr); > ? ierr = DMSetFromOptions(dmda); CHKERRQ(ierr); > ? ierr = DMSetUp(dmda); CHKERRQ(ierr); CHKERRQ(ierr); > ? ierr = DMDASetUniformCoordinates(dmda, 0.0, 1.0, 0.0, 1.0, 0.0, > 1.0); CHKERRQ(ierr); > ? ierr = DMDASetFieldName(dmda,0,"s"); CHKERRQ(ierr); > ? ierr = DMDASetFieldName(dmda,1,"c"); CHKERRQ(ierr); > ? DMDALocalInfo daInfo; > ? ierr = DMDAGetLocalInfo(dmda,&daInfo); CHKERRQ(ierr); > ? IS *is; > ? DM *daField; > ? ierr = DMCreateFieldDecomposition(dmda,NULL, NULL, &is, &daField); > CHKERRQ(ierr); > ? Vec U0; > ? ierr = DMCreateGlobalVector(dmda,&U0); CHKERRQ(ierr); > > ? //Initial data > ? typedef struct{ PetscScalar s,c;} data_type; > ? data_type **u; > ? 
ierr = DMDAVecGetArray(dmda,U0,&u); CHKERRQ(ierr); > ? for (PetscInt j=daInfo.ys; j ??? PetscScalar y = j*dy; > ??? for (PetscInt i=daInfo.xs; i ????? PetscScalar x = i*dx; > ????? u[j][i].s = x+2.*y; > ????? u[j][i].c = 10. + 2.*x*x+y*y; > ??? } > ? } > ? ierr = DMDAVecRestoreArray(dmda,U0,&u); CHKERRQ(ierr); > > ? PetscViewer viewer; > ? ierr = > PetscViewerHDF5Open(PETSC_COMM_WORLD,"solutionSC.hdf5",FILE_MODE_WRITE,&viewer); > CHKERRQ(ierr); > ? Vec uField; > ? ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr); > ? PetscObjectSetName((PetscObject) uField, "S"); > ? ierr = VecView(uField,viewer); CHKERRQ(ierr); > ? ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr); > ? ierr = VecGetSubVector(U0,is[1],&uField); CHKERRQ(ierr); > ? PetscObjectSetName((PetscObject) uField, "C"); > ? ierr = VecView(uField,viewer); CHKERRQ(ierr); > ? ierr = VecRestoreSubVector(U0,is[1],&uField); CHKERRQ(ierr); > ? ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr); > > ? ierr = PetscFinalize(); > ? return ierr; > } > > and my xdmf file > > > xmlns:xi="https://eur01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.w3.org%2F2001%2FXInclude&data=04%7C01%7Cmatteo.semplice%40uninsubria.it%7C5207ada4b58b4312880108d947a769e8%7C9252ed8bdffc401c86ca6237da9991fa%7C0%7C0%7C637619607056182657%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=ksp%2B0oGApgERb%2FIokBrF4q0HzxDoqstxUxl%2FB%2B4Fn3U%3D&reserved=0" > Version="2.0"> > ? > ??? > ????? > ????? > ??????? > ??????? > ????????? 0.0 > 0.0 > ????????? 0.1 > 0.1 > ??????? > ??????? > ????????? solutionSC.hdf5:/S > ??????? > ??????? > ????????? solutionSC.hdf5:/C > ??????? > ????? > ??? > ? > > > Steps to reprduce: run code and open the xdmf with paraview. If the > code was run with 1,2 or 3 cpus, the data are correct (except the > plane xy has become the plane yz), but with 4 cpus the data are > scrambled. > > Does anyone have any insight? > > (I am using Petsc Release Version 3.14.2, but I can compile a newer > one if you think it's important.) Hi, ??? I have a small update on this issue. First, it is still here with version 3.15.2. Secondly, I have run the code under valgrind and - for 1 or 2 processes, I get no errors - for 4 processes, 3 out of 4, trigger the following ==25921== Conditional jump or move depends on uninitialised value(s) ==25921==??? at 0xB3D6259: ??? (in /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so) ==25921==??? by 0xB3D85C8: mca_fcoll_two_phase_file_write_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so) ==25921==??? by 0xAAEB29B: mca_common_ompio_file_write_at_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/libmca_common_ompio.so.41.9.0) ==25921==??? by 0xB316605: mca_io_ompio_file_write_at_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_io_ompio.so) ==25921==??? by 0x73C7FE7: PMPI_File_write_at_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.10.3) ==25921==??? by 0x69E8700: H5FD__mpio_write (H5FDmpio.c:1466) ==25921==??? by 0x670D6EB: H5FD_write (H5FDint.c:248) ==25921==??? by 0x66DA0D3: H5F__accum_write (H5Faccum.c:826) ==25921==??? by 0x684F091: H5PB_write (H5PB.c:1031) ==25921==??? by 0x66E8055: H5F_shared_block_write (H5Fio.c:205) ==25921==??? by 0x6674538: H5D__chunk_collective_fill (H5Dchunk.c:5064) ==25921==??? by 0x6674538: H5D__chunk_allocate (H5Dchunk.c:4736) ==25921==??? by 0x668C839: H5D__init_storage (H5Dint.c:2473) ==25921==? 
Uninitialised value was created by a heap allocation ==25921==??? at 0x483577F: malloc (vg_replace_malloc.c:299) ==25921==??? by 0xB3D6155: ??? (in /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so) ==25921==??? by 0xB3D85C8: mca_fcoll_two_phase_file_write_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so) ==25921==??? by 0xAAEB29B: mca_common_ompio_file_write_at_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/libmca_common_ompio.so.41.9.0) ==25921==??? by 0xB316605: mca_io_ompio_file_write_at_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_io_ompio.so) ==25921==??? by 0x73C7FE7: PMPI_File_write_at_all (in /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.10.3) ==25921==??? by 0x69E8700: H5FD__mpio_write (H5FDmpio.c:1466) ==25921==??? by 0x670D6EB: H5FD_write (H5FDint.c:248) ==25921==??? by 0x66DA0D3: H5F__accum_write (H5Faccum.c:826) ==25921==??? by 0x684F091: H5PB_write (H5PB.c:1031) ==25921==??? by 0x66E8055: H5F_shared_block_write (H5Fio.c:205) ==25921==??? by 0x6674538: H5D__chunk_collective_fill (H5Dchunk.c:5064) ==25921==??? by 0x6674538: H5D__chunk_allocate (H5Dchunk.c:4736) Does anyone have any hint on what might be causing this? Is this the "buggy MPI-IO" that Matt was mentioning in https://lists.mcs.anl.gov/pipermail/petsc-users/2021-July/044138.html? I am using the release branch (commit c548142fde) and I have configured with --download-hdf5; configure finds the installed openmpi 3.1.3 from Debian buster. The relevant lines from configure.log are MPI: ? Version:? 3 ? Mpiexec: mpiexec --oversubscribe ? OMPI_VERSION: 3.1.3 hdf5: ? Version:? 1.12.0 ? Includes: -I/home/matteo/software/petsc/opt/include ? Library:? -Wl,-rpath,/home/matteo/software/petsc/opt/lib -L/home/matteo/software/petsc/opt/lib -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 Best ??? Matteo From mfadams at lbl.gov Fri Jul 16 08:47:40 2021 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 16 Jul 2021 09:47:40 -0400 Subject: [petsc-users] Spock error Message-ID: I seem to have some missing autoconf stuff on Spock for p4est. This machine does not have much loaded bu default (eg, emacs) and I did load the autoconf module. Any ideas? Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1587055 bytes Desc: not available URL: From f.levrero-florencio at onscale.com Fri Jul 16 04:50:43 2021 From: f.levrero-florencio at onscale.com (Francesc Levrero-Florencio) Date: Fri, 16 Jul 2021 10:50:43 +0100 Subject: [petsc-users] Fwd: Some guidance on setting an arc-length solver up in PETSc with TS/SNES In-Reply-To: References: Message-ID: Dear PETSc team and users, I am trying to implement a ?non-consistent arc-length method? (i.e. non-consistent as in the Jacobian from a traditional load-controlled method is used instead of the ?augmented one?, the latter would need an extra/row column for the constraint terms; the non-consistent method requires two linear solves, with different RHS, at every nonlinear iteration). I am trying to do this in the context of TS/SNES. Computation of the Jacobian remains the same, but I am thinking of doing most of the work within the function to compute the residual. 
The rough non-consistent arc-length algorithm is (delta here refers to the correction per nonlinear iteration): - Solve K * delta u^k_1 = - R - Solve K * delta u^k_2 = - F_ext - Find the correct root of delta lambda - lambda^k = lambda^{k-1} + delta lambda^k - delta u^k = delta u^k_1 + delta lambda^k * delta u^k_2 - u^{k} = u^{k-1} + delta u^k, u here is the solution to the mechanical problem (displacement). The rest is the same as a load-controlled scheme (for a more complete arc-length algorithm, please see ?A simple extrapolated predictor for overcoming the starting and tracking issues in the arc-length method for nonlinear structural mechanics?, Kadapa, C. (2021), *Eng Struct*). I see two potential issues with an arc-length implementation by using TS/SNES: - Needing to modify the solution vector within each nonlinear iteration. I believe this could be sorted out by allowing TS/SNES to obtain the solution to the traditional load-controlled problem (u_1 above), and only keep track of the correction per nonlinear iteration, which would correspond to delta u^k_1 = u^k_1 (given by the TS at the current iteration) ? u^{k-1}_1 (given by the TS at the previous iteration). I imagine the nonlinear iterative correction would be correct because the residual (and jacobian) would be calculated by using an internally stored u (u above, stored outside the TS, which is not the same as the TS internally stored u_1). I believe this would be okay in quasistatic simulations since solution derivatives are not required. How could this be amended for transient simulations? Is there any other simpler way to achieve this? Can the solution vector be set to a predictor value at the beginning of a time step (maybe prestep or poststep related functions)? - Solving two linear systems per iteration. I imagine the way around this is to do TSGetKSP(ts, ksp), and then KSPSolve(ksp, - F_ext, delta u^k_2) within the TS, since the TS would be using the correctly calculated Jacobian K above. Would this an effective way to achieve this? Thanks for your help in advance and please keep up the good work! Regards, Francesc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Fri Jul 16 09:10:27 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Fri, 16 Jul 2021 09:10:27 -0500 Subject: [petsc-users] Spock error In-Reply-To: References: Message-ID: module load autoconf automake libtool --Junchao Zhang On Fri, Jul 16, 2021 at 8:48 AM Mark Adams wrote: > I seem to have some missing autoconf stuff on Spock for p4est. > > This machine does not have much loaded bu default (eg, emacs) and I did > load the autoconf module. > > Any ideas? > Thanks, > Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Fri Jul 16 13:43:07 2021 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 16 Jul 2021 14:43:07 -0400 Subject: [petsc-users] make check error on Spock Message-ID: I get this error on Spock. Any ideas? 
14:36 main *= /gpfs/alpine/csc314/scratch/adams/petsc$ make PETSC_DIR=/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray PETSC_ARCH="" check Running check examples to verify correct installation Using PETSC_DIR=/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray and PETSC_ARCH= gmake[3]: [/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/lib/petsc/conf/rules:301: ex19.PETSc] Error 2 (ignored) *******************Error detected during compile or link!******************* See http://www.mcs.anl.gov/petsc/documentation/faq.html /gpfs/alpine/csc314/scratch/adams/petsc/src/snes/tutorials ex19 ********************************************************************************* cc -L/sw/spock/spack-envs/views/rocm-4.1.0/lib -lhsa-runtime64 -L/sw/spock/spack-envs/views/rocm-4.1.0/lib -lamdhip64 -lhsa-runtime64 -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -Qunused-arguments -fvisibility=hidden -g -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -Qunused-arguments -fvisibility=hidden -g -I/sw/spock/spack-envs/views/rocm-4.1.0/include -I/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/include -I/sw/spock/spack-envs/base/opt/linux-sles15-x86_64/gcc-7.5.0/zlib-1.2.11-xujqx3ri73tbwmcwoolo3jnn6ti6vwsh/include -I/sw/spock/spack-envs/views/rocm-4.1.0/include -I/sw/spock/spack-envs/views/rocm-4.1.0/include ex19.c -Wl,-rpath,/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/lib -L/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/lib -Wl,-rpath,/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/lib -L/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/lib -Wl,-rpath,/sw/spock/spack-envs/base/opt/linux-sles15-x86_64/gcc-7.5.0/zlib-1.2.11-xujqx3ri73tbwmcwoolo3jnn6ti6vwsh/lib -L/sw/spock/spack-envs/base/opt/linux-sles15-x86_64/gcc-7.5.0/zlib-1.2.11-xujqx3ri73tbwmcwoolo3jnn6ti6vwsh/lib -Wl,-rpath,/sw/spock/spack-envs/views/rocm-4.1.0/lib -L/sw/spock/spack-envs/views/rocm-4.1.0/lib -Wl,-rpath,/opt/gcc/8.1.0/snos/lib64 -L/opt/gcc/8.1.0/snos/lib64 -Wl,-rpath,/opt/cray/pe/libsci/21.04.1.1/CRAY/9.0/x86_64/lib -L/opt/cray/pe/libsci/21.04.1.1/CRAY/9.0/x86_64/lib -Wl,-rpath,/opt/cray/pe/mpich/8.1.4/ofi/cray/9.1/lib -L/opt/cray/pe/mpich/8.1.4/ofi/cray/9.1/lib -Wl,-rpath,/opt/cray/pe/mpich/8.1.4/gtl/lib -L/opt/cray/pe/mpich/8.1.4/gtl/lib -Wl,-rpath,/opt/cray/pe/pmi/6.0.10/lib -L/opt/cray/pe/pmi/6.0.10/lib -Wl,-rpath,/opt/cray/pe/dsmml/0.1.4/dsmml/lib -L/opt/cray/pe/dsmml/0.1.4/dsmml/lib -Wl,-rpath,/opt/cray/pe/cce/11.0.4/cce/x86_64/lib -L/opt/cray/pe/cce/11.0.4/cce/x86_64/lib -Wl,-rpath,/opt/cray/xpmem/2.2.40-2.1_2.7__g3cf3325.shasta/lib64 -L/opt/cray/xpmem/2.2.40-2.1_2.7__g3cf3325.shasta/lib64 -Wl,-rpath,/opt/cray/pe/cce/11.0.4/cce-clang/x86_64/lib/clang/11.0.0/lib/linux -L/opt/cray/pe/cce/11.0.4/cce-clang/x86_64/lib/clang/11.0.0/lib/linux -Wl,-rpath,/opt/gcc/8.1.0/snos/lib/gcc/x86_64-suse-linux/8.1.0 -L/opt/gcc/8.1.0/snos/lib/gcc/x86_64-suse-linux/8.1.0 -Wl,-rpath,/opt/cray/pe/cce/11.0.4/binutils/x86_64/x86_64-pc-linux-gnu/..//x86_64-unknown-linux-gnu/lib -L/opt/cray/pe/cce/11.0.4/binutils/x86_64/x86_64-pc-linux-gnu/..//x86_64-unknown-linux-gnu/lib -lpetsc -lp4est -lsc -lz -lhipsparse -lhipblas -lrocsparse -lrocsolver -lrocblas -lamdhip64 -lhsa-runtime64 -lstdc++ -ldl -lpmi -lsci_cray_mpi -lsci_cray -lmpifort_cray -lmpi_cray -lmpi_gtl_hsa -lxpmem -ldsmml -lpgas-shmem -lquadmath -lcrayacc_amdgpu -lopenacc -lmodules -lfi 
-lcraymath -lf -lu -lcsup -lgfortran -lpthread -lgcc_eh -lm -lclang_rt.craypgo-x86_64 -lclang_rt.builtins-x86_64 -lquadmath -lstdc++ -ldl -o ex19 ld.lld: error: /gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/lib/libpetsc.so: undefined reference to .omp_offloading.img_start.cray_amdgcn-amd-amdhsa [--no-allow-shlib-undefined] ld.lld: error: /gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/lib/libpetsc.so: undefined reference to .omp_offloading.img_size.cray_amdgcn-amd-amdhsa [--no-allow-shlib-undefined] ld.lld: error: /gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/lib/libpetsc.so: undefined reference to .omp_offloading.img_cache.cray_amdgcn-amd-amdhsa [--no-allow-shlib-undefined] clang-11: error: linker command failed with exit code 1 (use -v to see invocation) gmake[4]: *** [: ex19] Error 1 *******************Error detected during compile or link!******************* See http://www.mcs.anl.gov/petsc/documentation/faq.html /gpfs/alpine/csc314/scratch/adams/petsc/src/snes/tutorials ex5f ********************************************************* ftn -L/sw/spock/spack-envs/views/rocm-4.1.0/lib -lhsa-runtime64 -fPIC -g -fPIC -g -I/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/include -I/sw/spock/spack-envs/base/opt/linux-sles15-x86_64/gcc-7.5.0/zlib-1.2.11-xujqx3ri73tbwmcwoolo3jnn6ti6vwsh/include -I/sw/spock/spack-envs/views/rocm-4.1.0/include ex5f.F90 -Wl,-rpath,/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/lib -L/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/lib -Wl,-rpath,/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/lib -L/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray/lib -Wl,-rpath,/sw/spock/spack-envs/base/opt/linux-sles15-x86_64/gcc-7.5.0/zlib-1.2.11-xujqx3ri73tbwmcwoolo3jnn6ti6vwsh/lib -L/sw/spock/spack-envs/base/opt/linux-sles15-x86_64/gcc-7.5.0/zlib-1.2.11-xujqx3ri73tbwmcwoolo3jnn6ti6vwsh/lib -Wl,-rpath,/sw/spock/spack-envs/views/rocm-4.1.0/lib -L/sw/spock/spack-envs/views/rocm-4.1.0/lib -Wl,-rpath,/opt/gcc/8.1.0/snos/lib64 -L/opt/gcc/8.1.0/snos/lib64 -Wl,-rpath,/opt/cray/pe/libsci/21.04.1.1/CRAY/9.0/x86_64/lib -L/opt/cray/pe/libsci/21.04.1.1/CRAY/9.0/x86_64/lib -Wl,-rpath,/opt/cray/pe/mpich/8.1.4/ofi/cray/9.1/lib -L/opt/cray/pe/mpich/8.1.4/ofi/cray/9.1/lib -Wl,-rpath,/opt/cray/pe/mpich/8.1.4/gtl/lib -L/opt/cray/pe/mpich/8.1.4/gtl/lib -Wl,-rpath,/opt/cray/pe/pmi/6.0.10/lib -L/opt/cray/pe/pmi/6.0.10/lib -Wl,-rpath,/opt/cray/pe/dsmml/0.1.4/dsmml/lib -L/opt/cray/pe/dsmml/0.1.4/dsmml/lib -Wl,-rpath,/opt/cray/pe/cce/11.0.4/cce/x86_64/lib -L/opt/cray/pe/cce/11.0.4/cce/x86_64/lib -Wl,-rpath,/opt/cray/xpmem/2.2.40-2.1_2.7__g3cf3325.shasta/lib64 -L/opt/cray/xpmem/2.2.40-2.1_2.7__g3cf3325.shasta/lib64 -Wl,-rpath,/opt/cray/pe/cce/11.0.4/cce-clang/x86_64/lib/clang/11.0.0/lib/linux -L/opt/cray/pe/cce/11.0.4/cce-clang/x86_64/lib/clang/11.0.0/lib/linux -Wl,-rpath,/opt/gcc/8.1.0/snos/lib/gcc/x86_64-suse-linux/8.1.0 -L/opt/gcc/8.1.0/snos/lib/gcc/x86_64-suse-linux/8.1.0 -Wl,-rpath,/opt/cray/pe/cce/11.0.4/binutils/x86_64/x86_64-pc-linux-gnu/..//x86_64-unknown-linux-gnu/lib -L/opt/cray/pe/cce/11.0.4/binutils/x86_64/x86_64-pc-linux-gnu/..//x86_64-unknown-linux-gnu/lib -lpetsc -lp4est -lsc -lz -lhipsparse -lhipblas -lrocsparse -lrocsolver -lrocblas -lamdhip64 -lhsa-runtime64 -lstdc++ -ldl -lpmi -lsci_cray_mpi -lsci_cray -lmpifort_cray -lmpi_cray -lmpi_gtl_hsa -lxpmem -ldsmml -lpgas-shmem -lquadmath -lcrayacc_amdgpu -lopenacc -lmodules -lfi -lcraymath -lf -lu 
-lcsup -lgfortran -lpthread -lgcc_eh -lm -lclang_rt.craypgo-x86_64 -lclang_rt.builtins-x86_64 -lquadmath -lstdc++ -ldl -o ex5f /opt/cray/pe/cce/11.0.4/binutils/x86_64/x86_64-pc-linux-gnu/bin/ld: warning: alignment 128 of symbol `$host_init$$runtime_init_for_iso_c_binding$iso_c_binding_' in /opt/cray/pe/cce/11.0.4/cce/x86_64/lib/libmodules.so is smaller than 256 in /tmp/pe_44489/ex5f_1.o /opt/cray/pe/cce/11.0.4/binutils/x86_64/x86_64-pc-linux-gnu/bin/ld: warning: alignment 64 of symbol `$data_init$iso_c_binding_' in /opt/cray/pe/cce/11.0.4/cce/x86_64/lib/libmodules.so is smaller than 256 in /tmp/pe_44489/ex5f_1.o -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.log Type: application/octet-stream Size: 113166 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 2695125 bytes Desc: not available URL: From tangqi at msu.edu Fri Jul 16 14:33:08 2021 From: tangqi at msu.edu (Tang, Qi) Date: Fri, 16 Jul 2021 19:33:08 +0000 Subject: [petsc-users] [EXTERNAL] Re: Problem with PCFIELDSPLIT In-Reply-To: References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> Message-ID: Matt, We are confident that the modified Schur complement works well based on our subksp and ksp tests. Let me summarize what you suggest us to do regarding to our original question of TSSolve. Instead of calling subksp, you suggest we should provide two matrices J and Jpre through TSSetIJacobian, where J is the original Jacobian and Jpre is the one we modified and both are 2x2 blocks. Then if we call fieldsplit, petsc will automatically use Jpre to construct its Schur complement in the preconditioner stage. Is that what you suggested? Thanks a lot! Qi On Jul 7, 2021, at 3:54 PM, Matthew Knepley > wrote: On Wed, Jul 7, 2021 at 2:33 PM Jorti, Zakariae > wrote: Hi Matt, Thanks for your quick reply. I have not completely understood your suggestion, could you please elaborate a bit more? For your convenience, here is how I am proceeding for the moment in my code: TSGetKSP(ts,&ksp); KSPGetPC(ksp,&pc); PCSetType(pc,PCFIELDSPLIT); PCFieldSplitSetDetectSaddlePoint(pc,PETSC_TRUE); PCSetUp(pc); PCFieldSplitGetSubKSP(pc, &n, &subksp); KSPGetPC(subksp[1], &(subpc[1])); I do not like the two lines above. We should not have to do this. KSPSetOperators(subksp[1],T,T); In the above line, I want you to use a separate preconditioning matrix M, instead of T. That way, it will provide the preconditioning matrix for your Schur complement problem. Thanks, Matt KSPSetUp(subksp[1]); PetscFree(subksp); TSSolve(ts,X); Thank you. Best, Zakariae ________________________________ From: Matthew Knepley > Sent: Wednesday, July 7, 2021 12:11:10 PM To: Jorti, Zakariae Cc: petsc-users at mcs.anl.gov; Tang, Qi; Tang, Xianzhu Subject: [EXTERNAL] Re: [petsc-users] Problem with PCFIELDSPLIT On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users > wrote: Hi, I am trying to build a PCFIELDSPLIT preconditioner for a matrix J = [A00 A01] [A10 A11] that has the following shape: M_{user}^{-1} = [I -ksp(A00) A01] [ksp(A00) 0] [I 0] [0 I] [0 ksp(T)] [-A10 ksp(A00) I ] where T is a user-defined Schur complement approximation that replaces the true Schur complement S:= A11 - A10 ksp(A00) A01. 
I am trying to do something similar to this example (lines 41--45 and 116--121): https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html The problem I have is that I manage to replace S with T on a separate single linear system but not for the linear systems generated by my time-dependent PDE. Even if I set the preconditioner M_{user}^{-1} correctly, the T matrix gets replaced by S in the preconditioner once I call TSSolve. Do you have any suggestions how to fix this knowing that the matrix J does not change over time? I don't like how it is done in that example for this very reason. When I want to use a custom preconditioning matrix for the Schur complement, I always give a preconditioning matrix M to the outer solve. Then PCFIELDSPLIT automatically pulls the correct block from M, (1,1) for the Schur complement, for that preconditioning matrix without extra code. Can you do this? Thanks, Matt Many thanks. Best regards, Zakariae -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 16 14:38:14 2021 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 16 Jul 2021 15:38:14 -0400 Subject: [petsc-users] [EXTERNAL] Re: Problem with PCFIELDSPLIT In-Reply-To: References: <415b50d703ea443b86c86b117ffd23e8@lanl.gov> Message-ID: On Fri, Jul 16, 2021 at 3:33 PM Tang, Qi wrote: > Matt, > > We are confident that the modified Schur complement works well based on > our subksp and ksp tests. Let me summarize what you suggest us to do > regarding to our original question of TSSolve. Instead of calling subksp, > you suggest we should provide two matrices J and Jpre through > TSSetIJacobian, where J is the original Jacobian and Jpre is the one we > modified and both are 2x2 blocks. Then if we call fieldsplit, petsc will > automatically use Jpre to construct its Schur complement in the > preconditioner stage. Is that what you suggested? > Yes, exactly. We should be able to reproduce what you have done, but now completely from the command line. Thanks, Matt > Thanks a lot! > > Qi > > > > > > On Jul 7, 2021, at 3:54 PM, Matthew Knepley wrote: > > On Wed, Jul 7, 2021 at 2:33 PM Jorti, Zakariae wrote: > >> Hi Matt, >> >> >> Thanks for your quick reply. >> >> I have not completely understood your suggestion, could you please >> elaborate a bit more? >> >> For your convenience, here is how I am proceeding for the moment in my >> code: >> >> >> TSGetKSP(ts,&ksp); >> >> KSPGetPC(ksp,&pc); >> >> PCSetType(pc,PCFIELDSPLIT); >> >> PCFieldSplitSetDetectSaddlePoint(pc,PETSC_TRUE); >> >> PCSetUp(pc); >> >> PCFieldSplitGetSubKSP(pc, &n, &subksp); >> >> KSPGetPC(subksp[1], &(subpc[1])); >> > I do not like the two lines above. We should not have to do this. > >> KSPSetOperators(subksp[1],T,T); >> > In the above line, I want you to use a separate preconditioning matrix M, > instead of T. That way, it will provide > the preconditioning matrix for your Schur complement problem. > > Thanks, > > Matt > >> KSPSetUp(subksp[1]); >> >> PetscFree(subksp); >> >> TSSolve(ts,X); >> >> >> Thank you. 
>> >> Best, >> >> >> Zakariae >> ------------------------------ >> *From:* Matthew Knepley >> *Sent:* Wednesday, July 7, 2021 12:11:10 PM >> *To:* Jorti, Zakariae >> *Cc:* petsc-users at mcs.anl.gov; Tang, Qi; Tang, Xianzhu >> *Subject:* [EXTERNAL] Re: [petsc-users] Problem with PCFIELDSPLIT >> >> On Wed, Jul 7, 2021 at 1:51 PM Jorti, Zakariae via petsc-users < >> petsc-users at mcs.anl.gov> wrote: >> >>> Hi, >>> >>> >>> I am trying to build a PCFIELDSPLIT preconditioner for a matrix >>> >>> J = [A00 A01] >>> >>> [A10 A11] >>> >>> that has the following shape: >>> >>> >>> M_{user}^{-1} = [I -ksp(A00) A01] [ksp(A00) 0] [I >>> 0] >>> >>> [0 I] [0 >>> ksp(T)] [-A10 ksp(A00) I ] >>> >>> >>> where T is a user-defined Schur complement approximation that replaces >>> the true Schur complement S:= A11 - A10 ksp(A00) A01. >>> >>> >>> I am trying to do something similar to this example (lines 41--45 and >>> 116--121): >>> https://www.mcs.anl.gov/petsc/petsc-current/src/snes/tutorials/ex70.c.html >>> >>> >>> >>> The problem I have is that I manage to replace S with T on a >>> separate single linear system but not for the linear systems generated by >>> my time-dependent PDE. Even if I set the preconditioner M_{user}^{-1} >>> correctly, the T matrix gets replaced by S in the preconditioner once I >>> call TSSolve. >>> >>> Do you have any suggestions how to fix this knowing that the matrix J >>> does not change over time? >>> >>> I don't like how it is done in that example for this very reason. >> >> When I want to use a custom preconditioning matrix for the Schur >> complement, I always give a preconditioning matrix M to the outer solve. >> Then PCFIELDSPLIT automatically pulls the correct block from M, (1,1) for >> the Schur complement, for that preconditioning matrix without >> extra code. Can you do this? >> >> Thanks, >> >> Matt >> >>> Many thanks. >>> >>> >>> Best regards, >>> >>> >>> Zakariae >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zjorti at lanl.gov Fri Jul 16 19:45:56 2021 From: zjorti at lanl.gov (Jorti, Zakariae) Date: Sat, 17 Jul 2021 00:45:56 +0000 Subject: [petsc-users] Question about MatGetSubMatrix Message-ID: <2ce221c70e95442d8092b0bdd9140e7f@lanl.gov> Hello, I have a matrix A = [A00 , A01 ; A10, A11]. I extract the submatrix A11 with MatGetSubMatrix. I only know the global IS is1 and is2, so to get A11 I call: MatGetSubMatrix(A,is2,is2,MAT_INITIAL_MATRIX,&A11); I want to modify A11 and update the changes on the global matrix A but I could not find any MatRestoreSubMatrix routine. Is there something similar to VecGetSubVector and VecRestoreSubVector for matrices that uses only global indices? Many thanks. Best regards, Zakariae -------------- next part -------------- An HTML attachment was scrubbed... 
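A minimal sketch of the MatCreateNest() route that the reply below suggests, assuming four pre-built square AIJ blocks; the sizes, preallocation and inserted values are placeholders, not anything from the thread. Because a MATNEST stores references to its blocks rather than copies, values written later through the A11 handle remain visible to anything that uses A, whereas MatCreateSubMatrix() returns a copy that cannot be pushed back.

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            B[4], A, A11;
  PetscInt       i, r, rstart, rend, n = 8;
  PetscScalar    one = 1.0, ten = 10.0;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  /* Four square AIJ blocks with one diagonal entry per row (placeholder data). */
  for (i = 0; i < 4; i++) {
    ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n, 1, NULL, 1, NULL, &B[i]);CHKERRQ(ierr);
    ierr = MatGetOwnershipRange(B[i], &rstart, &rend);CHKERRQ(ierr);
    for (r = rstart; r < rend; r++) {
      ierr = MatSetValue(B[i], r, r, one, INSERT_VALUES);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(B[i], MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(B[i], MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  }
  /* Wrap the blocks as A = [A00 A01; A10 A11]; NULL index sets let PETSc
     build a contiguous row/column layout for the nest. */
  ierr = MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, B, &A);CHKERRQ(ierr);
  /* Modify the (1,1) block through its own handle; the nest keeps a
     reference to B[3], not a copy, so A sees the new value. */
  A11 = B[3];
  ierr = MatGetOwnershipRange(A11, &rstart, &rend);CHKERRQ(ierr);
  if (rstart == 0) {
    ierr = MatSetValue(A11, 0, 0, ten, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A11, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A11, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  for (i = 0; i < 4; i++) { ierr = MatDestroy(&B[i]);CHKERRQ(ierr); }
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}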
URL: From bsmith at petsc.dev Fri Jul 16 21:17:33 2021 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 16 Jul 2021 21:17:33 -0500 Subject: [petsc-users] Question about MatGetSubMatrix In-Reply-To: <2ce221c70e95442d8092b0bdd9140e7f@lanl.gov> References: <2ce221c70e95442d8092b0bdd9140e7f@lanl.gov> Message-ID: <00F666F7-A854-4D5A-A0F3-3C00F4B815EF@petsc.dev> Zakariae, MatGetSubMatrix() was removed a long time ago, the routine is now MatCreateSubMatrix() but it does not work in way you had hoped. There is currently no mechanism to move values you put into the sub matrix back into the original larger matrix (though perhaps there should be?). Please look at MatCreateSubMatrixVirtual() and also MatCreateNest() to see if either of those approaches satisfy your needs. Please let us know if there are extensions that would be useful for you to accomplish what you need. Barry > On Jul 16, 2021, at 7:45 PM, Jorti, Zakariae via petsc-users wrote: > > Hello, > > I have a matrix A = [A00 , A01 ; A10, A11]. > I extract the submatrix A11 with MatGetSubMatrix. > I only know the global IS is1 and is2, so to get A11 I call: > MatGetSubMatrix(A,is2,is2,MAT_INITIAL_MATRIX,&A11); > I want to modify A11 and update the changes on the global matrix A but I could not find any MatRestoreSubMatrix routine. > Is there something similar to VecGetSubVector and VecRestoreSubVector for matrices that uses only global indices? > Many thanks. > Best regards, > > Zakariae -------------- next part -------------- An HTML attachment was scrubbed... URL: From aduarteg at utexas.edu Mon Jul 19 15:32:08 2021 From: aduarteg at utexas.edu (Alfredo J Duarte Gomez) Date: Mon, 19 Jul 2021 15:32:08 -0500 Subject: [petsc-users] PETSC DMDA fields matrix operations Message-ID: Good morning, I am developing an application for PETSC using the DMDA object. For my purposes, I need to create a few matrix operators and then apply them to the fields in my DMDA object. I have so far successfully created and validated these matrices for a DMDA with one field using the DMCreateMatrix() and MatSetValuesStencil() very efficiently. However it is unclear to me what is the best way to use these matrices in a context with more than one field. It would be easy for me to modify my routine slightly to include all fields on the stencil for when I call MatSetValuesStencil() since most of these operators will act on all fields. I do wonder what to do when I have an operator acting on one field only. Is it reasonable to only assign the values corresponding to that field and do the matrix-vector multiply? Or is that very wasteful in terms of efficiency? On that note and for other applications (intermediate linear system solutions of lower order) what is the cleanest way to pull out a vector that has only one field from the dmda? The resulting vector from operations with this field vector can hopefully still maintain the i,j structure of the dmda. Is DMCreateFieldDecomposition() what I am looking for? Thank you, -Alfredo -- Alfredo Duarte Graduate Research Assistant The University of Texas at Austin -------------- next part -------------- An HTML attachment was scrubbed... 
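A minimal sketch of the two pieces asked about above, assuming a 2-dof DMDA (the grid size and unit coefficient are placeholders): an operator that acts on one field only is filled by setting the .c (component) member of MatStencil, so entries coupling to the other field are simply never inserted, and DMCreateFieldDecomposition() plus VecGetSubVector() pulls a single field out of a global vector while keeping its parallel layout.

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM             da;
  Mat            L;
  Vec            U, u0;
  IS            *is;
  DM            *subdm;
  DMDALocalInfo  info;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  /* 11x11 grid, 2 fields, stencil width 1. */
  ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR, 11, 11, PETSC_DECIDE, PETSC_DECIDE,
                      2, 1, NULL, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);
  /* Operator touching field 0 only: address rows and columns through the
     component member of MatStencil; the value is a placeholder. */
  ierr = DMCreateMatrix(da, &L);CHKERRQ(ierr);
  ierr = DMDAGetLocalInfo(da, &info);CHKERRQ(ierr);
  for (PetscInt j = info.ys; j < info.ys + info.ym; j++) {
    for (PetscInt i = info.xs; i < info.xs + info.xm; i++) {
      MatStencil  row = {0}, col = {0};
      PetscScalar v   = 1.0;
      row.i = i; row.j = j; row.c = 0;
      col.i = i; col.j = j; col.c = 0;
      ierr = MatSetValuesStencil(L, 1, &row, 1, &col, &v, INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(L, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(L, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  /* Extract the dofs of field 0 from a global vector; is[0]/is[1] index the
     two fields and subdm[0]/subdm[1] are single-field DMs on the same grid. */
  ierr = DMCreateGlobalVector(da, &U);CHKERRQ(ierr);
  ierr = DMCreateFieldDecomposition(da, NULL, NULL, &is, &subdm);CHKERRQ(ierr);
  ierr = VecGetSubVector(U, is[0], &u0);CHKERRQ(ierr);
  /* ... apply single-field operators to u0 here ... */
  ierr = VecRestoreSubVector(U, is[0], &u0);CHKERRQ(ierr);
  ierr = ISDestroy(&is[0]);CHKERRQ(ierr);
  ierr = ISDestroy(&is[1]);CHKERRQ(ierr);
  ierr = DMDestroy(&subdm[0]);CHKERRQ(ierr);
  ierr = DMDestroy(&subdm[1]);CHKERRQ(ierr);
  ierr = PetscFree(is);CHKERRQ(ierr);
  ierr = PetscFree(subdm);CHKERRQ(ierr);
  ierr = VecDestroy(&U);CHKERRQ(ierr);
  ierr = MatDestroy(&L);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}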
URL: From tangqi at msu.edu Mon Jul 19 19:47:34 2021 From: tangqi at msu.edu (Tang, Qi) Date: Tue, 20 Jul 2021 00:47:34 +0000 Subject: [petsc-users] Question about MatGetSubMatrix In-Reply-To: <00F666F7-A854-4D5A-A0F3-3C00F4B815EF@petsc.dev> References: <2ce221c70e95442d8092b0bdd9140e7f@lanl.gov> <00F666F7-A854-4D5A-A0F3-3C00F4B815EF@petsc.dev> Message-ID: <76EF3B4F-7E35-4A9B-9BCA-264167E56538@msu.edu> Hi, As a part of implementing this process by ourself, we would like to eliminate boundary dofs. By reading DMStag code, we guess we can use DMStagStencilToIndexLocal MatZeroRowsLocal We note that DMStagStencilToIndexLocal is not explicitly defined in the header file. Is this function ready to use? And will we be able to eliminate the dofs using the above functions? Thanks, Qi On Jul 16, 2021, at 8:17 PM, Barry Smith > wrote: Zakariae, MatGetSubMatrix() was removed a long time ago, the routine is now MatCreateSubMatrix() but it does not work in way you had hoped. There is currently no mechanism to move values you put into the sub matrix back into the original larger matrix (though perhaps there should be?). Please look at MatCreateSubMatrixVirtual() and also MatCreateNest() to see if either of those approaches satisfy your needs. Please let us know if there are extensions that would be useful for you to accomplish what you need. Barry On Jul 16, 2021, at 7:45 PM, Jorti, Zakariae via petsc-users > wrote: Hello, I have a matrix A = [A00 , A01 ; A10, A11]. I extract the submatrix A11 with MatGetSubMatrix. I only know the global IS is1 and is2, so to get A11 I call: MatGetSubMatrix(A,is2,is2,MAT_INITIAL_MATRIX,&A11); I want to modify A11 and update the changes on the global matrix A but I could not find any MatRestoreSubMatrix routine. Is there something similar to VecGetSubVector and VecRestoreSubVector for matrices that uses only global indices? Many thanks. Best regards, Zakariae -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Tue Jul 20 03:18:54 2021 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Tue, 20 Jul 2021 10:18:54 +0200 Subject: [petsc-users] Question about MatGetSubMatrix In-Reply-To: <76EF3B4F-7E35-4A9B-9BCA-264167E56538@msu.edu> References: <2ce221c70e95442d8092b0bdd9140e7f@lanl.gov> <00F666F7-A854-4D5A-A0F3-3C00F4B815EF@petsc.dev> <76EF3B4F-7E35-4A9B-9BCA-264167E56538@msu.edu> Message-ID: Hi Qi - I just opened a PR to make DMStagStencilToIndexLocal() public https://gitlab.com/petsc/petsc/-/merge_requests/4180 (Sorry for my inattention - I think I may have missed some communications in processing the flood of PETSc emails too quickly - I still plan to get some more automatic DMStag fieldsplit capabilities into main, if it's not too late). > Am 20.07.2021 um 02:47 schrieb Tang, Qi : > > Hi, > As a part of implementing this process by ourself, we would like to eliminate boundary dofs. By reading DMStag code, we guess we can use > DMStagStencilToIndexLocal > MatZeroRowsLocal > > We note that DMStagStencilToIndexLocal is not explicitly defined in the header file. Is this function ready to use? And will we be able to eliminate the dofs using the above functions? > > Thanks, > Qi > > > >> On Jul 16, 2021, at 8:17 PM, Barry Smith > wrote: >> >> >> Zakariae, >> >> MatGetSubMatrix() was removed a long time ago, the routine is now MatCreateSubMatrix() but it does not work in way you had hoped. 
There is currently no mechanism to move values you put into the sub matrix back into the original larger matrix (though perhaps there should be?). >> >> Please look at MatCreateSubMatrixVirtual() and also MatCreateNest() to see if either of those approaches satisfy your needs. >> >> Please let us know if there are extensions that would be useful for you to accomplish what you need. >> >> Barry >> >> >>> On Jul 16, 2021, at 7:45 PM, Jorti, Zakariae via petsc-users > wrote: >>> >>> Hello, >>> >>> I have a matrix A = [A00 , A01 ; A10, A11]. >>> I extract the submatrix A11 with MatGetSubMatrix. >>> I only know the global IS is1 and is2, so to get A11 I call: >>> MatGetSubMatrix(A,is2,is2,MAT_INITIAL_MATRIX,&A11); >>> I want to modify A11 and update the changes on the global matrix A but I could not find any MatRestoreSubMatrix routine. >>> Is there something similar to VecGetSubVector and VecRestoreSubVector for matrices that uses only global indices? >>> Many thanks. >>> Best regards, >>> >>> Zakariae >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Tue Jul 20 08:30:11 2021 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 20 Jul 2021 09:30:11 -0400 Subject: [petsc-users] Kokkos/GNU/HIP error Message-ID: I have a Kokkos arch flag set, but it does not seem to satisfy Kokkos with HIP and GNU ... -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1363988 bytes Desc: not available URL: From junchao.zhang at gmail.com Tue Jul 20 09:18:07 2021 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 20 Jul 2021 09:18:07 -0500 Subject: [petsc-users] Kokkos/GNU/HIP error In-Reply-To: References: Message-ID: --with-kokkos-hip-arch=VEGA908--prefix=/gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-gnu-kokkos You need a space before --prefix --Junchao Zhang On Tue, Jul 20, 2021 at 8:30 AM Mark Adams wrote: > I have a Kokkos arch flag set, but it does not seem to satisfy Kokkos with > HIP and GNU ... > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tangqi at msu.edu Tue Jul 20 09:34:49 2021 From: tangqi at msu.edu (Tang, Qi) Date: Tue, 20 Jul 2021 14:34:49 +0000 Subject: [petsc-users] Question about MatGetSubMatrix In-Reply-To: References: <2ce221c70e95442d8092b0bdd9140e7f@lanl.gov> <00F666F7-A854-4D5A-A0F3-3C00F4B815EF@petsc.dev> <76EF3B4F-7E35-4A9B-9BCA-264167E56538@msu.edu> Message-ID: <9F2AF1BF-03FF-4554-8CB0-7197A949AEAF@msu.edu> Thanks a lot for the quick fix, Patrick! Yes, automatic DMStag fieldsplit is definitely useful. We still look forward to that. Qi On Jul 20, 2021, at 2:18 AM, Patrick Sanan > wrote: Hi Qi - I just opened a PR to make DMStagStencilToIndexLocal() public https://gitlab.com/petsc/petsc/-/merge_requests/4180 (Sorry for my inattention - I think I may have missed some communications in processing the flood of PETSc emails too quickly - I still plan to get some more automatic DMStag fieldsplit capabilities into main, if it's not too late). Am 20.07.2021 um 02:47 schrieb Tang, Qi >: Hi, As a part of implementing this process by ourself, we would like to eliminate boundary dofs. By reading DMStag code, we guess we can use DMStagStencilToIndexLocal MatZeroRowsLocal We note that DMStagStencilToIndexLocal is not explicitly defined in the header file. Is this function ready to use? 
And will we be able to eliminate the dofs using the above functions? Thanks, Qi On Jul 16, 2021, at 8:17 PM, Barry Smith > wrote: Zakariae, MatGetSubMatrix() was removed a long time ago, the routine is now MatCreateSubMatrix() but it does not work in way you had hoped. There is currently no mechanism to move values you put into the sub matrix back into the original larger matrix (though perhaps there should be?). Please look at MatCreateSubMatrixVirtual() and also MatCreateNest() to see if either of those approaches satisfy your needs. Please let us know if there are extensions that would be useful for you to accomplish what you need. Barry On Jul 16, 2021, at 7:45 PM, Jorti, Zakariae via petsc-users > wrote: Hello, I have a matrix A = [A00 , A01 ; A10, A11]. I extract the submatrix A11 with MatGetSubMatrix. I only know the global IS is1 and is2, so to get A11 I call: MatGetSubMatrix(A,is2,is2,MAT_INITIAL_MATRIX,&A11); I want to modify A11 and update the changes on the global matrix A but I could not find any MatRestoreSubMatrix routine. Is there something similar to VecGetSubVector and VecRestoreSubVector for matrices that uses only global indices? Many thanks. Best regards, Zakariae -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Tue Jul 20 21:25:14 2021 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Tue, 20 Jul 2021 22:25:14 -0400 Subject: [petsc-users] How to combine different element types into a single DMPlex? In-Reply-To: References: Message-ID: Hi, On 2021-07-14 3:14 p.m., Matthew Knepley wrote: > On Wed, Jul 14, 2021 at 1:25 PM Eric Chamberland > > wrote: > > Hi, > > while playing with DMPlexBuildFromCellListParallel, I noticed we > have to > specify "numCorners" which is a fixed value, then gives a fixed > number > of nodes for a series of elements. > > How can I then add, for example, triangles and quadrangles into a > DMPlex? > > > You can't with that function. It would be much mich more complicated > if you could, and I am not sure > it is worth it for that function. The reason is that you would need > index information to offset?into the > connectivity list, and that would need to be replicated to some extent > so that all processes know what > the others are doing. Possible, but complicated. > > Maybe I can help suggest something for what you are trying?to do? Yes: we are trying to partition our parallel mesh with PETSc functions.? The mesh has been read in parallel so each process owns a part of it, but we have to manage mixed elements types. When we directly use ParMETIS_V3_PartMeshKway, we give two arrays to describe the elements which allows mixed elements. So, how would I read my mixed mesh in parallel and give it to PETSc DMPlex so I can use a PetscPartitioner with DMPlexDistribute ? A second goal we have is to use PETSc to compute the overlap, which is something I can't find in PARMetis (and any other partitionning library?) Thanks, Eric > > ? Thanks, > > ? ? ? Matt > > Thanks, > > Eric > > -- > Eric Chamberland, ing., M. Ing > Professionnel de recherche > GIREF/Universit? Laval > (418) 656-2131 poste 41 22 42 > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -- Eric Chamberland, ing., M. Ing Professionnel de recherche GIREF/Universit? 
Laval (418) 656-2131 poste 41 22 42 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Tue Jul 20 21:39:46 2021 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Tue, 20 Jul 2021 22:39:46 -0400 Subject: [petsc-users] Is it possible to keep track of original elements # after a call to DMPlexDistribute ? In-Reply-To: References: <7236c736-6066-1ba3-55b1-60782d8e754f@giref.ulaval.ca> Message-ID: <7b11445b-4d50-20d4-7c25-6cb2eab043b6@giref.ulaval.ca> On 2021-07-14 6:42 p.m., Matthew Knepley wrote: > > Ah, there was a confusion of intent. GlobalToNatural() is for people > that want data transformed back into the original > order. I thought that was what you wanted. If you just want mesh > points in the original order, we give you the > transformation as part of the output of DMPlexDistribute(). The > migrationSF that is output maps the original point to > the distributed point. You run it backwards to get the original ordering. > > ? Thanks, > > ? ? ?Matt Hi, that seems to work better!? However, if I understand well the migrationSF is giving information on the originating process where the elements have been migrated from. Is there a PETSc way to either: 1) send back the information to the originating process (somewhat "inverting" the migrationSF) ?? So I can retrieve the "partitioning array"? (just like the "part" parameter in ParMETIS_V3_PartMeshKway) on the sender process. or 2) Have the pre-migrationSF: I mean I would like to extract the "where are the elements going to be sent?" (again like "part" parameter) If not, I can always build the communication myself... Thanks, Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From tangqi at msu.edu Tue Jul 20 22:30:19 2021 From: tangqi at msu.edu (Tang, Qi) Date: Wed, 21 Jul 2021 03:30:19 +0000 Subject: [petsc-users] Question about MatGetSubMatrix In-Reply-To: References: <2ce221c70e95442d8092b0bdd9140e7f@lanl.gov> <00F666F7-A854-4D5A-A0F3-3C00F4B815EF@petsc.dev> <76EF3B4F-7E35-4A9B-9BCA-264167E56538@msu.edu> Message-ID: <8428EC8F-39C7-4C41-A82D-8FBD14F0D650@msu.edu> Hi, Now I think the DMStagStencilToIndexLocal provides the local index for given (stencil) positions. How can we use that local index information to eliminate the rows? Is the following code possible: MatSetLocalToGlobalMapping(?); If (is_boundary){ PetscInt ix; DMStagStencilToIndexLocal(?, &ix); MatZeroRowsLocal(? &ix, ?); } The comment of MatZeroRowsLocal said "rows - the global row indices?. But this seems inconsistent with its name, so I am confused. Thanks, Qi On Jul 20, 2021, at 2:18 AM, Patrick Sanan > wrote: Hi Qi - I just opened a PR to make DMStagStencilToIndexLocal() public https://gitlab.com/petsc/petsc/-/merge_requests/4180 (Sorry for my inattention - I think I may have missed some communications in processing the flood of PETSc emails too quickly - I still plan to get some more automatic DMStag fieldsplit capabilities into main, if it's not too late). Am 20.07.2021 um 02:47 schrieb Tang, Qi >: Hi, As a part of implementing this process by ourself, we would like to eliminate boundary dofs. By reading DMStag code, we guess we can use DMStagStencilToIndexLocal MatZeroRowsLocal We note that DMStagStencilToIndexLocal is not explicitly defined in the header file. Is this function ready to use? And will we be able to eliminate the dofs using the above functions? 
Thanks, Qi On Jul 16, 2021, at 8:17 PM, Barry Smith > wrote: Zakariae, MatGetSubMatrix() was removed a long time ago, the routine is now MatCreateSubMatrix() but it does not work in way you had hoped. There is currently no mechanism to move values you put into the sub matrix back into the original larger matrix (though perhaps there should be?). Please look at MatCreateSubMatrixVirtual() and also MatCreateNest() to see if either of those approaches satisfy your needs. Please let us know if there are extensions that would be useful for you to accomplish what you need. Barry On Jul 16, 2021, at 7:45 PM, Jorti, Zakariae via petsc-users > wrote: Hello, I have a matrix A = [A00 , A01 ; A10, A11]. I extract the submatrix A11 with MatGetSubMatrix. I only know the global IS is1 and is2, so to get A11 I call: MatGetSubMatrix(A,is2,is2,MAT_INITIAL_MATRIX,&A11); I want to modify A11 and update the changes on the global matrix A but I could not find any MatRestoreSubMatrix routine. Is there something similar to VecGetSubVector and VecRestoreSubVector for matrices that uses only global indices? Many thanks. Best regards, Zakariae -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Wed Jul 21 06:06:27 2021 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 21 Jul 2021 13:06:27 +0200 Subject: [petsc-users] SLEPc: HDF5 support for SVD/EPSValuesView? In-Reply-To: References: <56981a2017944684a158b8736a6c5023@mek.dtu.dk> <1A99ADA9-DBF2-428A-BA95-999E222D531B@dsic.upv.es> <95f1ad68b5114231bd2e1239d4936a5f@mek.dtu.dk> Message-ID: <0615B8F4-D1DD-4452-8BBE-C952786993BB@dsic.upv.es> I have implemented a solution in https://gitlab.com/slepc/slepc/-/merge_requests/234 Let me know if you have suggestions. Jose > El 4 jun 2021, a las 17:38, Matthew Knepley escribi?: > > On Fri, Jun 4, 2021 at 10:59 AM Peder J?rgensgaard Olesen wrote: > Excellent question. It may well be some quirk of my own setup that somehow makes VecView run rather sluggishly, making it hardly relevant to the general case. > > Maybe I can help speed things up for you. Is this in serial or parallel? Is the vector big or small? Is it possible to send the output of -log_view? > > Thanks > > Matt > - Peder > > Fra: Matthew Knepley > Sendt: 4. juni 2021 16:48:05 > Til: Peder J?rgensgaard Olesen > Cc: Jose E. Roman; petsc-users at mcs.anl.gov > Emne: Re: [petsc-users] SLEPc: HDF5 support for SVD/EPSValuesView? > > On Fri, Jun 4, 2021 at 10:28 AM Peder J?rgensgaard Olesen wrote: > What Matt suggests could perhaps be used as a workaround, though it appears neither elegant nor in my experience efficient. I may be wrong on this. > > Why is it not efficient? > > Thanks, > > Matt > My attempted solution was to use a binary viewer instead, but I can't find a clear way to retrieve values thus stored with PETSc, as these do not lend themselves to be read using VecLoad. Again one might go about that by wrapping them inside a vector and send that to the viewer, which just brings us back around to the original problem. > > > > - Peder > > Fra: Jose E. Roman > Sendt: 4. juni 2021 16:01:04 > Til: Matthew Knepley > Cc: Peder J?rgensgaard Olesen; petsc-users at mcs.anl.gov > Emne: Re: [petsc-users] SLEPc: HDF5 support for SVD/EPSValuesView? > > The problem is that here I am writing PetscReal's not PetscScalar's, and in EPS I am writing PetscComplex even in real scalars. That is why I did not implement it as you are suggesting. 
> > Jose > > > > El 4 jun 2021, a las 15:56, Matthew Knepley escribi?: > > > > On Fri, Jun 4, 2021 at 9:31 AM Jose E. Roman wrote: > > I could try and adapt the HDF5 code in PETSc for this, but I am no HDF5 expert. Furthermore I am busy at the moment, so it is faster if you could prepare a merge request with this addition. > > > > I think the easiest thing is to wrap a Vec around the values and just VecView() it into the same HDF5 file. > > > > THanks, > > > > Matt > > > > Jose > > > > > > > El 4 jun 2021, a las 13:10, Peder J?rgensgaard Olesen via petsc-users escribi?: > > > > > > Hello > > > > > > In SLEPc one may write singular vectors to an HDF5 file using SVDVectorsView(), but a similar option doesn't seem to work for singular values using SVDValuesView(). The values are viewed correctly using other viewers, but nothing is seemingly produced when using an HDF5 viewer. Looking at the source code seems to confirm this, and suggests that there is a similar situation with EPS. > > > > > > Is this correct, and if so, might there be plans to include such functionality in future releases? > > > > > > Med venlig hilsen / Best Regards > > > > > > Peder J?rgensgaard Olesen > > > PhD Student, Turbulence Research Lab > > > Dept. of Mechanical Engineering > > > Technical University of Denmark > > > Koppels All? > > > Bygning 403, Rum 105 > > > DK-2800 Kgs. Lyngby > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From matteo.semplice at uninsubria.it Wed Jul 21 07:04:05 2021 From: matteo.semplice at uninsubria.it (Matteo Semplice) Date: Wed, 21 Jul 2021 14:04:05 +0200 Subject: [petsc-users] parallel HDF5 output of DMDA data with dof>1 In-Reply-To: <6c443aac-e9c6-0704-beb5-05afa8c38798@uninsubria.it> References: <69d928b7-09c4-cc73-6c7e-dac4ee98f84a@uninsubria.it> <6c443aac-e9c6-0704-beb5-05afa8c38798@uninsubria.it> Message-ID: <627157f2-34c4-b549-422d-33236ad72337@uninsubria.it> Hi all. I have asked Thibault (author or this report on HDF5 https://lists.mcs.anl.gov/pipermail/petsc-users/2021-July/044045.html some days before mine) to run my MWE and it does not work for him either. Further, I have tried on another machine of mine with --download-hdf5 --download-mpich and still it is not working. A detailed report follows at the end of this message. I am wondering if something is wrong/incompatible with the HDF5 version of VecView, at least when the Vec is associated with a DMDA. Of course it might just be that I didn't manage to write a correct xdmf, but I can't spot the mistake... I am of course available to run tests in order to find/fix this problem. Best ??? Matteo On 16/07/21 12:27, Matteo Semplice wrote: > > Il 15/07/21 17:44, Matteo Semplice ha scritto: >> Hi. >> >> When I write (HDF5 viewer) a vector associated to a DMDA with 1 dof, >> the output is independent of the number of cpus used. 
>> >> However, for a DMDA with dof=2, the output seems to be correct when I >> run on 1 or 2 cpus, but is scrambled when I run with 4 cpus. Judging >> from the ranges of the data, each field gets written to the correct >> part, and its the data witin the field that is scrambled. Here's my MWE: >> >> #include >> #include >> #include >> #include >> #include >> >> int main(int argc, char **argv) { >> >> ? PetscErrorCode ierr; >> ? ierr = PetscInitialize(&argc,&argv,(char*)0,help); CHKERRQ(ierr); >> ? PetscInt Nx=11; >> ? PetscInt Ny=11; >> ? PetscScalar dx = 1.0 / (Nx-1); >> ? PetscScalar dy = 1.0 / (Ny-1); >> ? DM dmda; >> ? ierr = DMDACreate2d(PETSC_COMM_WORLD, >> ????????????????????? DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, >> ????????????????????? DMDA_STENCIL_STAR, >> ????????????????????? Nx,Ny, //global dim >> ????????????????????? PETSC_DECIDE,PETSC_DECIDE, //n proc on each dim >> ????????????????????? 2,1, //dof, stencil width >> ????????????????????? NULL, NULL, //n nodes per direction on each cpu >> ????????????????????? &dmda);????? CHKERRQ(ierr); >> ? ierr = DMSetFromOptions(dmda); CHKERRQ(ierr); >> ? ierr = DMSetUp(dmda); CHKERRQ(ierr); CHKERRQ(ierr); >> ? ierr = DMDASetUniformCoordinates(dmda, 0.0, 1.0, 0.0, 1.0, 0.0, >> 1.0); CHKERRQ(ierr); >> ? ierr = DMDASetFieldName(dmda,0,"s"); CHKERRQ(ierr); >> ? ierr = DMDASetFieldName(dmda,1,"c"); CHKERRQ(ierr); >> ? DMDALocalInfo daInfo; >> ? ierr = DMDAGetLocalInfo(dmda,&daInfo); CHKERRQ(ierr); >> ? IS *is; >> ? DM *daField; >> ? ierr = DMCreateFieldDecomposition(dmda,NULL, NULL, &is, &daField); >> CHKERRQ(ierr); >> ? Vec U0; >> ? ierr = DMCreateGlobalVector(dmda,&U0); CHKERRQ(ierr); >> >> ? //Initial data >> ? typedef struct{ PetscScalar s,c;} data_type; >> ? data_type **u; >> ? ierr = DMDAVecGetArray(dmda,U0,&u); CHKERRQ(ierr); >> ? for (PetscInt j=daInfo.ys; j> ??? PetscScalar y = j*dy; >> ??? for (PetscInt i=daInfo.xs; i> ????? PetscScalar x = i*dx; >> ????? u[j][i].s = x+2.*y; >> ????? u[j][i].c = 10. + 2.*x*x+y*y; >> ??? } >> ? } >> ? ierr = DMDAVecRestoreArray(dmda,U0,&u); CHKERRQ(ierr); >> >> ? PetscViewer viewer; >> ? ierr = >> PetscViewerHDF5Open(PETSC_COMM_WORLD,"solutionSC.hdf5",FILE_MODE_WRITE,&viewer); >> CHKERRQ(ierr); >> ? Vec uField; >> ? ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr); >> ? PetscObjectSetName((PetscObject) uField, "S"); >> ? ierr = VecView(uField,viewer); CHKERRQ(ierr); >> ? ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr); >> ? ierr = VecGetSubVector(U0,is[1],&uField); CHKERRQ(ierr); >> ? PetscObjectSetName((PetscObject) uField, "C"); >> ? ierr = VecView(uField,viewer); CHKERRQ(ierr); >> ? ierr = VecRestoreSubVector(U0,is[1],&uField); CHKERRQ(ierr); >> ? ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr); >> >> ? ierr = PetscFinalize(); >> ? return ierr; >> } >> >> and my xdmf file >> >> >> > xmlns:xi="https://eur01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.w3.org%2F2001%2FXInclude&data=04%7C01%7Cmatteo.semplice%40uninsubria.it%7C7c270ed0c49c4f8d950708d948444e1c%7C9252ed8bdffc401c86ca6237da9991fa%7C0%7C0%7C637620280470927505%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=q45En4ULjQX6H%2F1ZgzUxgKmDk7Y7jK0K2IuWDHpr4HM%3D&reserved=0" >> Version="2.0"> >> ? >> ??? >> ????? >> ????? >> ??????? >> ??????? >> ????????? > Dimensions="2">0.0 0.0 >> ????????? > Dimensions="2">0.1 0.1 >> ??????? >> ??????? >> ????????? solutionSC.hdf5:/S >> ??????? >> ??????? >> ????????? solutionSC.hdf5:/C >> ??????? >> ????? >> ??? >> ? 
>> >> >> Steps to reprduce: run code and open the xdmf with paraview. If the >> code was run with 1,2 or 3 cpus, the data are correct (except the >> plane xy has become the plane yz), but with 4 cpus the data are >> scrambled. >> >> Does anyone have any insight? >> >> (I am using Petsc Release Version 3.14.2, but I can compile a newer >> one if you think it's important.) > > Hi, > > ??? I have a small update on this issue. > > First, it is still here with version 3.15.2. > > Secondly, I have run the code under valgrind and > > - for 1 or 2 processes, I get no errors > > - for 4 processes, 3 out of 4, trigger the following > > ==25921== Conditional jump or move depends on uninitialised value(s) > ==25921==??? at 0xB3D6259: ??? (in > /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so) > ==25921==??? by 0xB3D85C8: mca_fcoll_two_phase_file_write_all (in > /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so) > ==25921==??? by 0xAAEB29B: mca_common_ompio_file_write_at_all (in > /usr/lib/x86_64-linux-gnu/openmpi/lib/libmca_common_ompio.so.41.9.0) > ==25921==??? by 0xB316605: mca_io_ompio_file_write_at_all (in > /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_io_ompio.so) > ==25921==??? by 0x73C7FE7: PMPI_File_write_at_all (in > /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.10.3) > ==25921==??? by 0x69E8700: H5FD__mpio_write (H5FDmpio.c:1466) > ==25921==??? by 0x670D6EB: H5FD_write (H5FDint.c:248) > ==25921==??? by 0x66DA0D3: H5F__accum_write (H5Faccum.c:826) > ==25921==??? by 0x684F091: H5PB_write (H5PB.c:1031) > ==25921==??? by 0x66E8055: H5F_shared_block_write (H5Fio.c:205) > ==25921==??? by 0x6674538: H5D__chunk_collective_fill (H5Dchunk.c:5064) > ==25921==??? by 0x6674538: H5D__chunk_allocate (H5Dchunk.c:4736) > ==25921==??? by 0x668C839: H5D__init_storage (H5Dint.c:2473) > ==25921==? Uninitialised value was created by a heap allocation > ==25921==??? at 0x483577F: malloc (vg_replace_malloc.c:299) > ==25921==??? by 0xB3D6155: ??? (in > /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so) > ==25921==??? by 0xB3D85C8: mca_fcoll_two_phase_file_write_all (in > /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_fcoll_two_phase.so) > ==25921==??? by 0xAAEB29B: mca_common_ompio_file_write_at_all (in > /usr/lib/x86_64-linux-gnu/openmpi/lib/libmca_common_ompio.so.41.9.0) > ==25921==??? by 0xB316605: mca_io_ompio_file_write_at_all (in > /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_io_ompio.so) > ==25921==??? by 0x73C7FE7: PMPI_File_write_at_all (in > /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.10.3) > ==25921==??? by 0x69E8700: H5FD__mpio_write (H5FDmpio.c:1466) > ==25921==??? by 0x670D6EB: H5FD_write (H5FDint.c:248) > ==25921==??? by 0x66DA0D3: H5F__accum_write (H5Faccum.c:826) > ==25921==??? by 0x684F091: H5PB_write (H5PB.c:1031) > ==25921==??? by 0x66E8055: H5F_shared_block_write (H5Fio.c:205) > ==25921==??? by 0x6674538: H5D__chunk_collective_fill (H5Dchunk.c:5064) > ==25921==??? by 0x6674538: H5D__chunk_allocate (H5Dchunk.c:4736) > > Does anyone have any hint on what might be causing this? 
> > Is this the "buggy MPI-IO" that Matt was mentioning in > https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.mcs.anl.gov%2Fpipermail%2Fpetsc-users%2F2021-July%2F044138.html&data=04%7C01%7Cmatteo.semplice%40uninsubria.it%7C7c270ed0c49c4f8d950708d948444e1c%7C9252ed8bdffc401c86ca6237da9991fa%7C0%7C0%7C637620280470927505%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=gPAbClDgJ1toQxzVVnoRCrgPBNR2tjw%2BGfdrxv%2FwVmY%3D&reserved=0? > > I am using the release branch (commit c548142fde) and I have > configured with --download-hdf5; configure finds the installed openmpi > 3.1.3 from Debian buster. The relevant lines from configure.log are > > MPI: > ? Version:? 3 > ? Mpiexec: mpiexec --oversubscribe > ? OMPI_VERSION: 3.1.3 > hdf5: > ? Version:? 1.12.0 > ? Includes: -I/home/matteo/software/petsc/opt/include > ? Library:? -Wl,-rpath,/home/matteo/software/petsc/opt/lib > -L/home/matteo/software/petsc/opt/lib -lhdf5hl_fortran -lhdf5_fortran > -lhdf5_hl -lhdf5 Update 1: on a different machine, I have compiled petsc (release branch) with --download-hdf5 and --download-mpich and I have tried 3d HDF5 output at the end of my simulation. All's fine for 1 or 2 CPUs, but the output is funny for more CPUs. The smooth solution gives rise to an output that renders like little bricks, as if the data were written doing the 3 nested loops in the wrong order. Update 2: Thibault was kind enough to compile and run my MWE on his setup and he gets a crash related to the VecView with the HDF5 viewer. Here's the report that he sent me. On 21/07/21 10:59, Thibault Bridel-Bertomeu wrote: Hi Matteo, I ran your test, and actually it does not give me garbage for a number of processes greater than 1, it straight-up crashes ... Here is the error log for 2 processes : Compiled with Petsc Development GIT revision: v3.14.4-671-g707297fd510GIT Date: 2021-02-24 22:50:05 +0000 [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [1]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [1]PETSC ERROR: likely location of problem given in stack below [1]PETSC ERROR: ---------------------Stack Frames ------------------------------------ [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [1]PETSC ERROR: INSTEAD the line number of the start of the function [1]PETSC ERROR: is given. [1]PETSC ERROR: [1] H5Dcreate2 line 716 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c [1]PETSC ERROR: [1] VecView_MPI_HDF5 line 622 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c [1]PETSC ERROR: [1] VecView_MPI line 815 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c [1]PETSC ERROR: [1] VecView line 580 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/interface/vector.c [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Signal received [1]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[1]PETSC ERROR: Petsc Development GIT revision: v3.14.4-671-g707297fd510GIT Date: 2021-02-24 22:50:05 +0000 [1]PETSC ERROR: /ccc/work/cont001/ocre/bridelbert/MWE_HDF5_Output/testHDF5 on anamed r1login by bridelbert Wed Jul 21 10:57:11 2021 [1]PETSC ERROR: Configure options --with-clean=1 --prefix=/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti --with-make-np=8 --with-windows-graphics=0 --with-debugging=1 --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 --PETSC_ARCH=INTI_UNS3D --with-fc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpifort --with-cc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc --with-cxx=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicxx --with-openmp=0 --download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p1.tar.gz --download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz --download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz --download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz --with-cmake-dir=/ccc/products/cmake-3.13.3/system/default --download-hdf5=/ccc/work/cont001/ocre/bridelbert/hdf5-1.12.0.tar.bz2 --download-zlib=/ccc/work/cont001/ocre/bridelbert/zlib-1.2.11.tar.gz [1]PETSC ERROR: #1 User provided function() line 0 inunknown file [1]PETSC ERROR: Checking the memory for corruption. -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 50176059. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: ---------------------Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] H5Dcreate2 line 716 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c [0]PETSC ERROR: [0] VecView_MPI_HDF5 line 622 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c [0]PETSC ERROR: [0] VecView_MPI line 815 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/impls/mpi/pdvec.c [0]PETSC ERROR: [0] VecView line 580 /ccc/work/cont001/ocre/bridelbert/04-PETSC/src/vec/vec/interface/vector.c [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Signal received [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.14.4-671-g707297fd510GIT Date: 2021-02-24 22:50:05 +0000 [0]PETSC ERROR: /ccc/work/cont001/ocre/bridelbert/MWE_HDF5_Output/testHDF5 on anamed r1login by bridelbert Wed Jul 21 10:57:11 2021 [0]PETSC ERROR: Configure options --with-clean=1 --prefix=/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti --with-make-np=8 --with-windows-graphics=0 --with-debugging=1 --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 --PETSC_ARCH=INTI_UNS3D --with-fc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpifort --with-cc=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc --with-cxx=/ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicxx --with-openmp=0 --download-sowing=/ccc/work/cont001/ocre/bridelbert/v1.1.26-p1.tar.gz --download-metis=/ccc/work/cont001/ocre/bridelbert/git.metis.tar.gz --download-parmetis=/ccc/work/cont001/ocre/bridelbert/git.parmetis.tar.gz --download-fblaslapack=/ccc/work/cont001/ocre/bridelbert/git.fblaslapack.tar.gz --with-cmake-dir=/ccc/products/cmake-3.13.3/system/default --download-hdf5=/ccc/work/cont001/ocre/bridelbert/hdf5-1.12.0.tar.bz2 --download-zlib=/ccc/work/cont001/ocre/bridelbert/zlib-1.2.11.tar.gz [0]PETSC ERROR: #1 User provided function() line 0 inunknown file [r1login:24498] 1 more process has sent help message help-mpi-api.txt / mpi-abort [r1login:24498] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages I am starting to wonder if the PETSc configure script installs HDF5 with MPI correctly at all ... Here is my conf : Compilers: C Compiler: /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc-fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3 Version: gcc (GCC) 8.3.0 C++ Compiler: /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicxx-Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g-fPIC Version: g++ (GCC) 8.3.0 Fortran Compiler: /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpifort-fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g Version: GNU Fortran (GCC) 8.3.0 Linkers: Shared linker: /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc-shared-fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3 Dynamic linker: /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpicc-shared-fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3 Libraries linked against: -lquadmath -lstdc++ -ldl BlasLapack: Library:-Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lflapack -lfblas uses 4 byte integers MPI: Version:3 Mpiexec: /ccc/products/openmpi-2.0.4/gcc--8.3.0/default/bin/mpiexec OMPI_VERSION: 2.0.4 fblaslapack: zlib: Version:1.2.11 Includes: -I/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/include Library:-Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lz hdf5: Version:1.12.0 Includes: -I/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/include Library:-Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 cmake: Version:3.13.3 
/ccc/products/cmake-3.13.3/system/default/bin/cmake metis: Version:5.1.0 Includes: -I/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/include Library:-Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lmetis parmetis: Version:4.0.3 Includes: -I/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/include Library:-Wl,-rpath,/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -L/ccc/work/cont001/ocre/bridelbert/04-PETSC/build_uns3D_inti/lib -lparmetis regex: sowing: Version:1.1.26 /ccc/work/cont001/ocre/bridelbert/04-PETSC/INTI_UNS3D/bin/bfort Language used to compile PETSc: C Please don't hesitate to ask if you need something else from me ! Cheers, Thibault -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Wed Jul 21 08:58:58 2021 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Wed, 21 Jul 2021 15:58:58 +0200 Subject: [petsc-users] Question about MatGetSubMatrix In-Reply-To: <8428EC8F-39C7-4C41-A82D-8FBD14F0D650@msu.edu> References: <2ce221c70e95442d8092b0bdd9140e7f@lanl.gov> <00F666F7-A854-4D5A-A0F3-3C00F4B815EF@petsc.dev> <76EF3B4F-7E35-4A9B-9BCA-264167E56538@msu.edu> <8428EC8F-39C7-4C41-A82D-8FBD14F0D650@msu.edu> Message-ID: <5AE720E1-E718-40EA-AD05-18A63FF45BCF@gmail.com> > Am 21.07.2021 um 05:30 schrieb Tang, Qi : > > Hi, > > Now I think the DMStagStencilToIndexLocal provides the local index for given (stencil) positions. How can we use that local index information to eliminate the rows? > > Is the following code possible: > > MatSetLocalToGlobalMapping(?); > If (is_boundary){ > PetscInt ix; > DMStagStencilToIndexLocal(?, &ix); > MatZeroRowsLocal(? &ix, ?); > } > > The comment of MatZeroRowsLocal said "rows - the global row indices?. But this seems inconsistent with its name, so I am confused. That was indeed a typo on the man page, fix here: https://gitlab.com/petsc/petsc/-/merge_requests/4183 > > Thanks, > Qi > > >> On Jul 20, 2021, at 2:18 AM, Patrick Sanan > wrote: >> >> Hi Qi - >> >> I just opened a PR to make DMStagStencilToIndexLocal() public >> https://gitlab.com/petsc/petsc/-/merge_requests/4180 >> >> (Sorry for my inattention - I think I may have missed some communications in processing the flood of PETSc emails too quickly - I still plan to get some more automatic DMStag fieldsplit capabilities into main, if it's not too late). >> >>> Am 20.07.2021 um 02:47 schrieb Tang, Qi >: >>> >>> Hi, >>> As a part of implementing this process by ourself, we would like to eliminate boundary dofs. By reading DMStag code, we guess we can use >>> DMStagStencilToIndexLocal >>> MatZeroRowsLocal >>> >>> We note that DMStagStencilToIndexLocal is not explicitly defined in the header file. Is this function ready to use? And will we be able to eliminate the dofs using the above functions? >>> >>> Thanks, >>> Qi >>> >>> >>> >>>> On Jul 16, 2021, at 8:17 PM, Barry Smith > wrote: >>>> >>>> >>>> Zakariae, >>>> >>>> MatGetSubMatrix() was removed a long time ago, the routine is now MatCreateSubMatrix() but it does not work in way you had hoped. There is currently no mechanism to move values you put into the sub matrix back into the original larger matrix (though perhaps there should be?). >>>> >>>> Please look at MatCreateSubMatrixVirtual() and also MatCreateNest() to see if either of those approaches satisfy your needs. 
>>>> >>>> Please let us know if there are extensions that would be useful for you to accomplish what you need. >>>> >>>> Barry >>>> >>>> >>>>> On Jul 16, 2021, at 7:45 PM, Jorti, Zakariae via petsc-users > wrote: >>>>> >>>>> Hello, >>>>> >>>>> I have a matrix A = [A00 , A01 ; A10, A11]. >>>>> I extract the submatrix A11 with MatGetSubMatrix. >>>>> I only know the global IS is1 and is2, so to get A11 I call: >>>>> MatGetSubMatrix(A,is2,is2,MAT_INITIAL_MATRIX,&A11); >>>>> I want to modify A11 and update the changes on the global matrix A but I could not find any MatRestoreSubMatrix routine. >>>>> Is there something similar to VecGetSubVector and VecRestoreSubVector for matrices that uses only global indices? >>>>> Many thanks. >>>>> Best regards, >>>>> >>>>> Zakariae >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Wed Jul 21 13:54:47 2021 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Wed, 21 Jul 2021 14:54:47 -0400 Subject: [petsc-users] Is it possible to keep track of original elements # after a call to DMPlexDistribute ? In-Reply-To: <7b11445b-4d50-20d4-7c25-6cb2eab043b6@giref.ulaval.ca> References: <7236c736-6066-1ba3-55b1-60782d8e754f@giref.ulaval.ca> <7b11445b-4d50-20d4-7c25-6cb2eab043b6@giref.ulaval.ca> Message-ID: Hi Matthew, we did it with PetscSFCreateInverseSF ! It is working well without overlap, so we can go forward with this and compute the overlap afterward with DMPlexDistributeOverlap. Thanks, Eric On 2021-07-20 10:39 p.m., Eric Chamberland wrote: > > > On 2021-07-14 6:42 p.m., Matthew Knepley wrote: >> >> Ah, there was a confusion of intent. GlobalToNatural() is for people >> that want data transformed back into the original >> order. I thought that was what you wanted. If you just want mesh >> points in the original order, we give you the >> transformation as part of the output of DMPlexDistribute(). The >> migrationSF that is output maps the original point to >> the distributed point. You run it backwards to get the original ordering. >> >> ? Thanks, >> >> ? ? ?Matt > > Hi, > > that seems to work better!? However, if I understand well the > migrationSF is giving information on the originating process where the > elements have been migrated from. > > Is there a PETSc way to either: > > 1) send back the information to the originating process (somewhat > "inverting" the migrationSF) ?? So I can retrieve the "partitioning > array"? (just like the "part" parameter in ParMETIS_V3_PartMeshKway) > on the sender process. > > or > > 2) Have the pre-migrationSF: I mean I would like to extract the "where > are the elements going to be sent?" (again like "part" parameter) > > If not, I can always build the communication myself... > > Thanks, > > Eric > > > > -- Eric Chamberland, ing., M. Ing Professionnel de recherche GIREF/Universit? Laval (418) 656-2131 poste 41 22 42 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 21 14:07:51 2021 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 21 Jul 2021 15:07:51 -0400 Subject: [petsc-users] Is it possible to keep track of original elements # after a call to DMPlexDistribute ? 
In-Reply-To: References: <7236c736-6066-1ba3-55b1-60782d8e754f@giref.ulaval.ca> <7b11445b-4d50-20d4-7c25-6cb2eab043b6@giref.ulaval.ca> Message-ID: On Wed, Jul 21, 2021 at 2:54 PM Eric Chamberland < Eric.Chamberland at giref.ulaval.ca> wrote: > Hi Matthew, > > we did it with PetscSFCreateInverseSF ! > > It is working well without overlap, so we can go forward with this and > compute the overlap afterward with DMPlexDistributeOverlap. > > That works. I think you can also do what you want with https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PetscSF/PetscSFComputeDegreeBegin.html This is how I usually make the 2-sided information, unless I really need the inverse SF. Thanks, Matt > Thanks, > > Eric > > > On 2021-07-20 10:39 p.m., Eric Chamberland wrote: > > > On 2021-07-14 6:42 p.m., Matthew Knepley wrote: > > > Ah, there was a confusion of intent. GlobalToNatural() is for people that > want data transformed back into the original > order. I thought that was what you wanted. If you just want mesh points in > the original order, we give you the > transformation as part of the output of DMPlexDistribute(). The > migrationSF that is output maps the original point to > the distributed point. You run it backwards to get the original ordering. > > Thanks, > > Matt > > Hi, > > that seems to work better! However, if I understand well the migrationSF > is giving information on the originating process where the elements have > been migrated from. > > Is there a PETSc way to either: > > 1) send back the information to the originating process (somewhat > "inverting" the migrationSF) ? So I can retrieve the "partitioning array" > (just like the "part" parameter in ParMETIS_V3_PartMeshKway) on the sender > process. > > or > > 2) Have the pre-migrationSF: I mean I would like to extract the "where are > the elements going to be sent?" (again like "part" parameter) > > If not, I can always build the communication myself... > > Thanks, > > Eric > > > > > -- > Eric Chamberland, ing., M. Ing > Professionnel de recherche > GIREF/Universit? Laval > (418) 656-2131 poste 41 22 42 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tmunson at mcs.anl.gov Wed Jul 21 17:41:31 2021 From: tmunson at mcs.anl.gov (Munson, Todd) Date: Wed, 21 Jul 2021 22:41:31 +0000 Subject: [petsc-users] DOE Small Business Programs Topics for 2022 Message-ID: <94610F4F-B1BF-49DD-B864-FDF45806A58F@anl.gov> Hi all, For those interested, the DOE released their topics for the Phase I SBIR and STTR programs for 2022. The information can be found at: https://science.osti.gov/-/media/sbir/pdf/TechnicalTopics/FY22-Phase-I-Release-1-TopicsV307152021.pdf?la=en&hash=AED45487511C817965F63687237B0385A88BFF2C All the best, Todd. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jul 22 18:25:34 2021 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 22 Jul 2021 18:25:34 -0500 Subject: [petsc-users] parallel HDF5 output of DMDA data with dof>1 In-Reply-To: <69d928b7-09c4-cc73-6c7e-dac4ee98f84a@uninsubria.it> References: <69d928b7-09c4-cc73-6c7e-dac4ee98f84a@uninsubria.it> Message-ID: I have run your code and looked at the output in Matlab and then looked at your code. 
It seems you expect the output to be independent of the number of MPI ranks? It will not be because you have written > ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr); > PetscObjectSetName((PetscObject) uField, "S"); > ierr = VecView(uField,viewer); CHKERRQ(ierr); > ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr); The subvector you create loses the DMDA context (information that the vector is associated with a 2d array sliced up among the MPI ranks) since you just taking a part of the vector out via VecGetSubVector() which only sees an IS so has no way to connect the new subvector to a DMDA of dimension 2 (with a single field) and automatically do the reordering to natural ordering when the vector is directly associated with a DMDA and then viewed to a file. In order to get the behavior you hope for, you need to have your subvector be associated with the DMDA that comes out as the final argument to DMCreateFieldDecomposition(). There are multiple ways you can do this, none of them are particularly "clear", because it appears no one has bothered to include in the DM API a clear way to transfer vectors and parts of vectors between DMs and sub-DMs (that is a DM with the same topology as the original DM but fewer or more "fields"). I would suggest using DMCreateGlobalVector(daField[0], &uField); VecStrideGather(U0,0,uField,INSERT_VALUES); PetscObjectSetName((PetscObject) uField, "S"); ierr = VecView(uField,viewer); CHKERRQ(ierr); DMRestoreGlobalVector(daField[0], &uField); For the second field you would use U0,1 instead of U0,0. This will be fine for DMDA but I cannot say if it is appropriate for all types of DMs in all circumstances. Barry > On Jul 15, 2021, at 10:44 AM, Matteo Semplice wrote: > > Hi. > > When I write (HDF5 viewer) a vector associated to a DMDA with 1 dof, the output is independent of the number of cpus used. > > However, for a DMDA with dof=2, the output seems to be correct when I run on 1 or 2 cpus, but is scrambled when I run with 4 cpus. Judging from the ranges of the data, each field gets written to the correct part, and its the data witin the field that is scrambled. 
Here's my MWE: > > #include > #include > #include > #include > #include > > int main(int argc, char **argv) { > > PetscErrorCode ierr; > ierr = PetscInitialize(&argc,&argv,(char*)0,help); CHKERRQ(ierr); > PetscInt Nx=11; > PetscInt Ny=11; > PetscScalar dx = 1.0 / (Nx-1); > PetscScalar dy = 1.0 / (Ny-1); > DM dmda; > ierr = DMDACreate2d(PETSC_COMM_WORLD, > DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, > DMDA_STENCIL_STAR, > Nx,Ny, //global dim > PETSC_DECIDE,PETSC_DECIDE, //n proc on each dim > 2,1, //dof, stencil width > NULL, NULL, //n nodes per direction on each cpu > &dmda); CHKERRQ(ierr); > ierr = DMSetFromOptions(dmda); CHKERRQ(ierr); > ierr = DMSetUp(dmda); CHKERRQ(ierr); CHKERRQ(ierr); > ierr = DMDASetUniformCoordinates(dmda, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0); CHKERRQ(ierr); > ierr = DMDASetFieldName(dmda,0,"s"); CHKERRQ(ierr); > ierr = DMDASetFieldName(dmda,1,"c"); CHKERRQ(ierr); > DMDALocalInfo daInfo; > ierr = DMDAGetLocalInfo(dmda,&daInfo); CHKERRQ(ierr); > IS *is; > DM *daField; > ierr = DMCreateFieldDecomposition(dmda,NULL, NULL, &is, &daField); CHKERRQ(ierr); > Vec U0; > ierr = DMCreateGlobalVector(dmda,&U0); CHKERRQ(ierr); > > //Initial data > typedef struct{ PetscScalar s,c;} data_type; > data_type **u; > ierr = DMDAVecGetArray(dmda,U0,&u); CHKERRQ(ierr); > for (PetscInt j=daInfo.ys; j PetscScalar y = j*dy; > for (PetscInt i=daInfo.xs; i PetscScalar x = i*dx; > u[j][i].s = x+2.*y; > u[j][i].c = 10. + 2.*x*x+y*y; > } > } > ierr = DMDAVecRestoreArray(dmda,U0,&u); CHKERRQ(ierr); > > PetscViewer viewer; > ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"solutionSC.hdf5",FILE_MODE_WRITE,&viewer); CHKERRQ(ierr); > Vec uField; > ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr); > PetscObjectSetName((PetscObject) uField, "S"); > ierr = VecView(uField,viewer); CHKERRQ(ierr); > ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr); > ierr = VecGetSubVector(U0,is[1],&uField); CHKERRQ(ierr); > PetscObjectSetName((PetscObject) uField, "C"); > ierr = VecView(uField,viewer); CHKERRQ(ierr); > ierr = VecRestoreSubVector(U0,is[1],&uField); CHKERRQ(ierr); > ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr); > > ierr = PetscFinalize(); > return ierr; > } > > and my xdmf file > > > > > > > > > > 0.0 0.0 > 0.1 0.1 > > > solutionSC.hdf5:/S > > > solutionSC.hdf5:/C > > > > > > > Steps to reprduce: run code and open the xdmf with paraview. If the code was run with 1,2 or 3 cpus, the data are correct (except the plane xy has become the plane yz), but with 4 cpus the data are scrambled. > > Does anyone have any insight? > > (I am using Petsc Release Version 3.14.2, but I can compile a newer one if you think it's important.) > > Best > > Matteo > From matteo.semplice at uninsubria.it Fri Jul 23 03:50:11 2021 From: matteo.semplice at uninsubria.it (Matteo Semplice) Date: Fri, 23 Jul 2021 10:50:11 +0200 Subject: [petsc-users] parallel HDF5 output of DMDA data with dof>1 In-Reply-To: References: <69d928b7-09c4-cc73-6c7e-dac4ee98f84a@uninsubria.it> Message-ID: <66d9fb72-6c15-3e8c-5c90-16e7ac453b83@uninsubria.it> Dear Barry, ??? this fixes both my example and my main code. I am only puzzled by the pairing of DMCreateGlobalVector with DMRestoreGlobalVector. Anyway, it works both like you suggested and with DMGetGlobalVector...DMRestoreGlobalVector without leaving garbage around. Thank you very much for your time and for the tip! On a side note, was there a better way to handle this output case? If one wrote the full vector I guess one would obtain a Nx*Ny*Nz*Ndof data set... 
Would it possible to then write a xdmf file to make paraview see each?dof as a separate Nx*Ny*Nz data set? ??? Matteo Il 23/07/21 01:25, Barry Smith ha scritto: > I have run your code and looked at the output in Matlab and then looked at your code. > > It seems you expect the output to be independent of the number of MPI ranks? It will not be because you have written > >> ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr); >> PetscObjectSetName((PetscObject) uField, "S"); >> ierr = VecView(uField,viewer); CHKERRQ(ierr); >> ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr); > The subvector you create loses the DMDA context (information that the vector is associated with a 2d array sliced up among the MPI ranks) since you just taking a part of the vector out via VecGetSubVector() which only sees an IS so has no way to connect the new subvector to a DMDA of dimension 2 (with a single field) and automatically do the reordering to natural ordering when the vector is directly associated with a DMDA and then viewed to a file. > > In order to get the behavior you hope for, you need to have your subvector be associated with the DMDA that comes out as the final argument to DMCreateFieldDecomposition(). There are multiple ways you can do this, none of them are particularly "clear", because it appears no one has bothered to include in the DM API a clear way to transfer vectors and parts of vectors between DMs and sub-DMs (that is a DM with the same topology as the original DM but fewer or more "fields"). > > I would suggest using > > DMCreateGlobalVector(daField[0], &uField); > VecStrideGather(U0,0,uField,INSERT_VALUES); > PetscObjectSetName((PetscObject) uField, "S"); > ierr = VecView(uField,viewer); CHKERRQ(ierr); > DMRestoreGlobalVector(daField[0], &uField); > > For the second field you would use U0,1 instead of U0,0. > > This will be fine for DMDA but I cannot say if it is appropriate for all types of DMs in all circumstances. > > Barry > > > > > > >> On Jul 15, 2021, at 10:44 AM, Matteo Semplice wrote: >> >> Hi. >> >> When I write (HDF5 viewer) a vector associated to a DMDA with 1 dof, the output is independent of the number of cpus used. >> >> However, for a DMDA with dof=2, the output seems to be correct when I run on 1 or 2 cpus, but is scrambled when I run with 4 cpus. Judging from the ranges of the data, each field gets written to the correct part, and its the data witin the field that is scrambled. 
Here's my MWE: >> >> #include >> #include >> #include >> #include >> #include >> >> int main(int argc, char **argv) { >> >> PetscErrorCode ierr; >> ierr = PetscInitialize(&argc,&argv,(char*)0,help); CHKERRQ(ierr); >> PetscInt Nx=11; >> PetscInt Ny=11; >> PetscScalar dx = 1.0 / (Nx-1); >> PetscScalar dy = 1.0 / (Ny-1); >> DM dmda; >> ierr = DMDACreate2d(PETSC_COMM_WORLD, >> DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, >> DMDA_STENCIL_STAR, >> Nx,Ny, //global dim >> PETSC_DECIDE,PETSC_DECIDE, //n proc on each dim >> 2,1, //dof, stencil width >> NULL, NULL, //n nodes per direction on each cpu >> &dmda); CHKERRQ(ierr); >> ierr = DMSetFromOptions(dmda); CHKERRQ(ierr); >> ierr = DMSetUp(dmda); CHKERRQ(ierr); CHKERRQ(ierr); >> ierr = DMDASetUniformCoordinates(dmda, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0); CHKERRQ(ierr); >> ierr = DMDASetFieldName(dmda,0,"s"); CHKERRQ(ierr); >> ierr = DMDASetFieldName(dmda,1,"c"); CHKERRQ(ierr); >> DMDALocalInfo daInfo; >> ierr = DMDAGetLocalInfo(dmda,&daInfo); CHKERRQ(ierr); >> IS *is; >> DM *daField; >> ierr = DMCreateFieldDecomposition(dmda,NULL, NULL, &is, &daField); CHKERRQ(ierr); >> Vec U0; >> ierr = DMCreateGlobalVector(dmda,&U0); CHKERRQ(ierr); >> >> //Initial data >> typedef struct{ PetscScalar s,c;} data_type; >> data_type **u; >> ierr = DMDAVecGetArray(dmda,U0,&u); CHKERRQ(ierr); >> for (PetscInt j=daInfo.ys; j> PetscScalar y = j*dy; >> for (PetscInt i=daInfo.xs; i> PetscScalar x = i*dx; >> u[j][i].s = x+2.*y; >> u[j][i].c = 10. + 2.*x*x+y*y; >> } >> } >> ierr = DMDAVecRestoreArray(dmda,U0,&u); CHKERRQ(ierr); >> >> PetscViewer viewer; >> ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"solutionSC.hdf5",FILE_MODE_WRITE,&viewer); CHKERRQ(ierr); >> Vec uField; >> ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr); >> PetscObjectSetName((PetscObject) uField, "S"); >> ierr = VecView(uField,viewer); CHKERRQ(ierr); >> ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr); >> ierr = VecGetSubVector(U0,is[1],&uField); CHKERRQ(ierr); >> PetscObjectSetName((PetscObject) uField, "C"); >> ierr = VecView(uField,viewer); CHKERRQ(ierr); >> ierr = VecRestoreSubVector(U0,is[1],&uField); CHKERRQ(ierr); >> ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr); >> >> ierr = PetscFinalize(); >> return ierr; >> } >> >> and my xdmf file >> >> >> >> >> >> >> >> >> >> 0.0 0.0 >> 0.1 0.1 >> >> >> solutionSC.hdf5:/S >> >> >> solutionSC.hdf5:/C >> >> >> >> >> >> >> Steps to reprduce: run code and open the xdmf with paraview. If the code was run with 1,2 or 3 cpus, the data are correct (except the plane xy has become the plane yz), but with 4 cpus the data are scrambled. >> >> Does anyone have any insight? >> >> (I am using Petsc Release Version 3.14.2, but I can compile a newer one if you think it's important.) >> >> Best >> >> Matteo From tangqi at msu.edu Fri Jul 23 10:50:35 2021 From: tangqi at msu.edu (Tang, Qi) Date: Fri, 23 Jul 2021 15:50:35 +0000 Subject: [petsc-users] Question about MatGetSubMatrix In-Reply-To: <8428EC8F-39C7-4C41-A82D-8FBD14F0D650@msu.edu> References: <2ce221c70e95442d8092b0bdd9140e7f@lanl.gov> <00F666F7-A854-4D5A-A0F3-3C00F4B815EF@petsc.dev> <76EF3B4F-7E35-4A9B-9BCA-264167E56538@msu.edu> <8428EC8F-39C7-4C41-A82D-8FBD14F0D650@msu.edu> Message-ID: How can we use MatZeroRowsLocal? Is there any doc for the local index vs global index for a matrix? I am asking because we are not sure about how to prepare a proper local index. Does the index include the ghost point region or not? 
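For concreteness, the kind of loop we have in mind is sketched below. This is untested; dmstag and A are placeholders, A is assumed to come from DMCreateMatrix() on the same DMStag so that it already carries a local-to-global mapping, and we are reading the DMStagStencilToIndexLocal arguments off the source, so please correct anything that is wrong:

  /* Sketch only: zero the rows attached to vertex dof 0 on the physical
     boundary of a 2D DMStag. The indices returned by DMStagStencilToIndexLocal
     are into the local (ghosted) ordering; the mapping attached to A is what
     translates them to global rows inside MatZeroRowsLocal. */
  PetscErrorCode ierr;
  PetscInt       startx,starty,nx,ny,nExtrax,nExtray,Mg,Ng,nRows=0,*rows;
  DMStagStencil  pos;

  ierr = DMStagGetCorners(dmstag,&startx,&starty,NULL,&nx,&ny,NULL,&nExtrax,&nExtray,NULL);CHKERRQ(ierr);
  ierr = DMStagGetGlobalSizes(dmstag,&Mg,&Ng,NULL);CHKERRQ(ierr);
  ierr = PetscMalloc1((nx+nExtrax)*(ny+nExtray),&rows);CHKERRQ(ierr);
  for (PetscInt j=starty; j<starty+ny+nExtray; ++j) {
    for (PetscInt i=startx; i<startx+nx+nExtrax; ++i) {
      if (i==0 || j==0 || i==Mg || j==Ng) {  /* vertex on the physical boundary */
        pos.i = i; pos.j = j; pos.c = 0; pos.loc = DMSTAG_DOWN_LEFT;
        ierr = DMStagStencilToIndexLocal(dmstag,2,1,&pos,&rows[nRows++]);CHKERRQ(ierr);
      }
    }
  }
  ierr = MatZeroRowsLocal(A,nRows,rows,1.0,NULL,NULL);CHKERRQ(ierr);
  ierr = PetscFree(rows);CHKERRQ(ierr);

Our understanding is that these local indices do include the ghost region, and that MatZeroRowsLocal simply maps them to global rows through the matrix's local-to-global mapping before zeroing, but we would appreciate confirmation.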
Qi On Jul 20, 2021, at 9:30 PM, Tang, Qi > wrote: Hi, Now I think the DMStagStencilToIndexLocal provides the local index for given (stencil) positions. How can we use that local index information to eliminate the rows? Is the following code possible: MatSetLocalToGlobalMapping(?); If (is_boundary){ PetscInt ix; DMStagStencilToIndexLocal(?, &ix); MatZeroRowsLocal(? &ix, ?); } The comment of MatZeroRowsLocal said "rows - the global row indices?. But this seems inconsistent with its name, so I am confused. Thanks, Qi On Jul 20, 2021, at 2:18 AM, Patrick Sanan > wrote: Hi Qi - I just opened a PR to make DMStagStencilToIndexLocal() public https://gitlab.com/petsc/petsc/-/merge_requests/4180 (Sorry for my inattention - I think I may have missed some communications in processing the flood of PETSc emails too quickly - I still plan to get some more automatic DMStag fieldsplit capabilities into main, if it's not too late). Am 20.07.2021 um 02:47 schrieb Tang, Qi >: Hi, As a part of implementing this process by ourself, we would like to eliminate boundary dofs. By reading DMStag code, we guess we can use DMStagStencilToIndexLocal MatZeroRowsLocal We note that DMStagStencilToIndexLocal is not explicitly defined in the header file. Is this function ready to use? And will we be able to eliminate the dofs using the above functions? Thanks, Qi On Jul 16, 2021, at 8:17 PM, Barry Smith > wrote: Zakariae, MatGetSubMatrix() was removed a long time ago, the routine is now MatCreateSubMatrix() but it does not work in way you had hoped. There is currently no mechanism to move values you put into the sub matrix back into the original larger matrix (though perhaps there should be?). Please look at MatCreateSubMatrixVirtual() and also MatCreateNest() to see if either of those approaches satisfy your needs. Please let us know if there are extensions that would be useful for you to accomplish what you need. Barry On Jul 16, 2021, at 7:45 PM, Jorti, Zakariae via petsc-users > wrote: Hello, I have a matrix A = [A00 , A01 ; A10, A11]. I extract the submatrix A11 with MatGetSubMatrix. I only know the global IS is1 and is2, so to get A11 I call: MatGetSubMatrix(A,is2,is2,MAT_INITIAL_MATRIX,&A11); I want to modify A11 and update the changes on the global matrix A but I could not find any MatRestoreSubMatrix routine. Is there something similar to VecGetSubVector and VecRestoreSubVector for matrices that uses only global indices? Many thanks. Best regards, Zakariae -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Sat Jul 24 00:18:42 2021 From: bsmith at petsc.dev (Barry Smith) Date: Sat, 24 Jul 2021 00:18:42 -0500 Subject: [petsc-users] parallel HDF5 output of DMDA data with dof>1 In-Reply-To: <66d9fb72-6c15-3e8c-5c90-16e7ac453b83@uninsubria.it> References: <69d928b7-09c4-cc73-6c7e-dac4ee98f84a@uninsubria.it> <66d9fb72-6c15-3e8c-5c90-16e7ac453b83@uninsubria.it> Message-ID: <9FFA9C02-6F43-45F2-A2D8-C800B5535EA0@petsc.dev> > On Jul 23, 2021, at 3:50 AM, Matteo Semplice wrote: > > Dear Barry, > > this fixes both my example and my main code. > > I am only puzzled by the pairing of DMCreateGlobalVector with DMRestoreGlobalVector. Anyway, it works both like you suggested and with DMGetGlobalVector...DMRestoreGlobalVector without leaving garbage around. > > Thank you very much for your time and for the tip! > > On a side note, was there a better way to handle this output case? If one wrote the full vector I guess one would obtain a Nx*Ny*Nz*Ndof data set... 
Would it possible to then write a xdmf file to make paraview see each dof as a separate Nx*Ny*Nz data set? Sorry, I don't know HDF5, XDMF, and Paraview. From my perspective I agree with you, yes, conceptually you should be able to just PetscView the entire Nx*Ny*Nz*Ndof data and in the visualization tool indicate the dof you wish to visualize. Barry > > Matteo > > Il 23/07/21 01:25, Barry Smith ha scritto: >> I have run your code and looked at the output in Matlab and then looked at your code. >> >> It seems you expect the output to be independent of the number of MPI ranks? It will not be because you have written >> >>> ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr); >>> PetscObjectSetName((PetscObject) uField, "S"); >>> ierr = VecView(uField,viewer); CHKERRQ(ierr); >>> ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr); >> The subvector you create loses the DMDA context (information that the vector is associated with a 2d array sliced up among the MPI ranks) since you just taking a part of the vector out via VecGetSubVector() which only sees an IS so has no way to connect the new subvector to a DMDA of dimension 2 (with a single field) and automatically do the reordering to natural ordering when the vector is directly associated with a DMDA and then viewed to a file. >> >> In order to get the behavior you hope for, you need to have your subvector be associated with the DMDA that comes out as the final argument to DMCreateFieldDecomposition(). There are multiple ways you can do this, none of them are particularly "clear", because it appears no one has bothered to include in the DM API a clear way to transfer vectors and parts of vectors between DMs and sub-DMs (that is a DM with the same topology as the original DM but fewer or more "fields"). >> >> I would suggest using >> >> DMCreateGlobalVector(daField[0], &uField); >> VecStrideGather(U0,0,uField,INSERT_VALUES); >> PetscObjectSetName((PetscObject) uField, "S"); >> ierr = VecView(uField,viewer); CHKERRQ(ierr); >> DMRestoreGlobalVector(daField[0], &uField); >> >> For the second field you would use U0,1 instead of U0,0. >> >> This will be fine for DMDA but I cannot say if it is appropriate for all types of DMs in all circumstances. >> >> Barry >> >> >> >> >>> On Jul 15, 2021, at 10:44 AM, Matteo Semplice wrote: >>> >>> Hi. >>> >>> When I write (HDF5 viewer) a vector associated to a DMDA with 1 dof, the output is independent of the number of cpus used. >>> >>> However, for a DMDA with dof=2, the output seems to be correct when I run on 1 or 2 cpus, but is scrambled when I run with 4 cpus. Judging from the ranges of the data, each field gets written to the correct part, and its the data witin the field that is scrambled. 
Here's my MWE: >>> >>> #include >>> #include >>> #include >>> #include >>> #include >>> >>> int main(int argc, char **argv) { >>> >>> PetscErrorCode ierr; >>> ierr = PetscInitialize(&argc,&argv,(char*)0,help); CHKERRQ(ierr); >>> PetscInt Nx=11; >>> PetscInt Ny=11; >>> PetscScalar dx = 1.0 / (Nx-1); >>> PetscScalar dy = 1.0 / (Ny-1); >>> DM dmda; >>> ierr = DMDACreate2d(PETSC_COMM_WORLD, >>> DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, >>> DMDA_STENCIL_STAR, >>> Nx,Ny, //global dim >>> PETSC_DECIDE,PETSC_DECIDE, //n proc on each dim >>> 2,1, //dof, stencil width >>> NULL, NULL, //n nodes per direction on each cpu >>> &dmda); CHKERRQ(ierr); >>> ierr = DMSetFromOptions(dmda); CHKERRQ(ierr); >>> ierr = DMSetUp(dmda); CHKERRQ(ierr); CHKERRQ(ierr); >>> ierr = DMDASetUniformCoordinates(dmda, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0); CHKERRQ(ierr); >>> ierr = DMDASetFieldName(dmda,0,"s"); CHKERRQ(ierr); >>> ierr = DMDASetFieldName(dmda,1,"c"); CHKERRQ(ierr); >>> DMDALocalInfo daInfo; >>> ierr = DMDAGetLocalInfo(dmda,&daInfo); CHKERRQ(ierr); >>> IS *is; >>> DM *daField; >>> ierr = DMCreateFieldDecomposition(dmda,NULL, NULL, &is, &daField); CHKERRQ(ierr); >>> Vec U0; >>> ierr = DMCreateGlobalVector(dmda,&U0); CHKERRQ(ierr); >>> >>> //Initial data >>> typedef struct{ PetscScalar s,c;} data_type; >>> data_type **u; >>> ierr = DMDAVecGetArray(dmda,U0,&u); CHKERRQ(ierr); >>> for (PetscInt j=daInfo.ys; j>> PetscScalar y = j*dy; >>> for (PetscInt i=daInfo.xs; i>> PetscScalar x = i*dx; >>> u[j][i].s = x+2.*y; >>> u[j][i].c = 10. + 2.*x*x+y*y; >>> } >>> } >>> ierr = DMDAVecRestoreArray(dmda,U0,&u); CHKERRQ(ierr); >>> >>> PetscViewer viewer; >>> ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"solutionSC.hdf5",FILE_MODE_WRITE,&viewer); CHKERRQ(ierr); >>> Vec uField; >>> ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr); >>> PetscObjectSetName((PetscObject) uField, "S"); >>> ierr = VecView(uField,viewer); CHKERRQ(ierr); >>> ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr); >>> ierr = VecGetSubVector(U0,is[1],&uField); CHKERRQ(ierr); >>> PetscObjectSetName((PetscObject) uField, "C"); >>> ierr = VecView(uField,viewer); CHKERRQ(ierr); >>> ierr = VecRestoreSubVector(U0,is[1],&uField); CHKERRQ(ierr); >>> ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr); >>> >>> ierr = PetscFinalize(); >>> return ierr; >>> } >>> >>> and my xdmf file >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> 0.0 0.0 >>> 0.1 0.1 >>> >>> >>> solutionSC.hdf5:/S >>> >>> >>> solutionSC.hdf5:/C >>> >>> >>> >>> >>> >>> >>> Steps to reprduce: run code and open the xdmf with paraview. If the code was run with 1,2 or 3 cpus, the data are correct (except the plane xy has become the plane yz), but with 4 cpus the data are scrambled. >>> >>> Does anyone have any insight? >>> >>> (I am using Petsc Release Version 3.14.2, but I can compile a newer one if you think it's important.) 
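A compact sketch of the per-field output pattern suggested in this reply, reusing U0, daField[] and the HDF5 viewer from the example above; the field names are those of the MWE. It pairs DMCreateGlobalVector() with VecDestroy() rather than DMRestoreGlobalVector(), which is the matching cleanup for a vector created (not gotten) from a DM.

const char *names[2] = {"S", "C"};
for (PetscInt f = 0; f < 2; ++f) {
  Vec uField;
  ierr = DMCreateGlobalVector(daField[f], &uField);CHKERRQ(ierr);
  ierr = VecStrideGather(U0, f, uField, INSERT_VALUES);CHKERRQ(ierr);   /* pull out component f */
  ierr = PetscObjectSetName((PetscObject) uField, names[f]);CHKERRQ(ierr);
  ierr = VecView(uField, viewer);CHKERRQ(ierr);
  ierr = VecDestroy(&uField);CHKERRQ(ierr);
}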
>>> >>> Best >>> >>> Matteo From bsmith at petsc.dev Sat Jul 24 00:32:48 2021 From: bsmith at petsc.dev (Barry Smith) Date: Sat, 24 Jul 2021 00:32:48 -0500 Subject: [petsc-users] Question about MatGetSubMatrix In-Reply-To: References: <2ce221c70e95442d8092b0bdd9140e7f@lanl.gov> <00F666F7-A854-4D5A-A0F3-3C00F4B815EF@petsc.dev> <76EF3B4F-7E35-4A9B-9BCA-264167E56538@msu.edu> <8428EC8F-39C7-4C41-A82D-8FBD14F0D650@msu.edu> Message-ID: <5A5A2F75-B242-4D34-A127-E303A34398B5@petsc.dev> Qi, MatSetLocalGlobalToGlobalMapping() is the routine that provides the information to the matrix so that one can use a "local" ordering and it will get automatically translated to the parallel PETSc ordering on the parallel matrix. Generically yes it includes the ghost point region on each MPI rank. Some DM's provide this information automatically; for DMDA it is pretty simple, for DMStag a bit more complicated. You are correct, the previous manual page for MatZeroRowsLocal() incorrectly states "global" rows from 16+ years ago, likely a cut and paste error. It looks like it has been fixed in the main PETSc repository, Barry > On Jul 23, 2021, at 10:50 AM, Tang, Qi wrote: > > How can we use MatZeroRowsLocal? Is there any doc for the local index vs global index for a matrix? > > I am asking because we are not sure about how to prepare a proper local index. Does the index include the ghost point region or not? > > Qi > > > > >> On Jul 20, 2021, at 9:30 PM, Tang, Qi > wrote: >> >> Hi, >> >> Now I think the DMStagStencilToIndexLocal provides the local index for given (stencil) positions. How can we use that local index information to eliminate the rows? >> >> Is the following code possible: >> >> MatSetLocalToGlobalMapping(?); >> If (is_boundary){ >> PetscInt ix; >> DMStagStencilToIndexLocal(?, &ix); >> MatZeroRowsLocal(? &ix, ?); >> } >> >> The comment of MatZeroRowsLocal said "rows - the global row indices?. But this seems inconsistent with its name, so I am confused. >> >> Thanks, >> Qi >> >> >>> On Jul 20, 2021, at 2:18 AM, Patrick Sanan > wrote: >>> >>> Hi Qi - >>> >>> I just opened a PR to make DMStagStencilToIndexLocal() public >>> https://gitlab.com/petsc/petsc/-/merge_requests/4180 >>> >>> (Sorry for my inattention - I think I may have missed some communications in processing the flood of PETSc emails too quickly - I still plan to get some more automatic DMStag fieldsplit capabilities into main, if it's not too late). >>> >>>> Am 20.07.2021 um 02:47 schrieb Tang, Qi >: >>>> >>>> Hi, >>>> As a part of implementing this process by ourself, we would like to eliminate boundary dofs. By reading DMStag code, we guess we can use >>>> DMStagStencilToIndexLocal >>>> MatZeroRowsLocal >>>> >>>> We note that DMStagStencilToIndexLocal is not explicitly defined in the header file. Is this function ready to use? And will we be able to eliminate the dofs using the above functions? >>>> >>>> Thanks, >>>> Qi >>>> >>>> >>>> >>>>> On Jul 16, 2021, at 8:17 PM, Barry Smith > wrote: >>>>> >>>>> >>>>> Zakariae, >>>>> >>>>> MatGetSubMatrix() was removed a long time ago, the routine is now MatCreateSubMatrix() but it does not work in way you had hoped. There is currently no mechanism to move values you put into the sub matrix back into the original larger matrix (though perhaps there should be?). >>>>> >>>>> Please look at MatCreateSubMatrixVirtual() and also MatCreateNest() to see if either of those approaches satisfy your needs. 
>>>>> >>>>> Please let us know if there are extensions that would be useful for you to accomplish what you need. >>>>> >>>>> Barry >>>>> >>>>> >>>>>> On Jul 16, 2021, at 7:45 PM, Jorti, Zakariae via petsc-users > wrote: >>>>>> >>>>>> Hello, >>>>>> >>>>>> I have a matrix A = [A00 , A01 ; A10, A11]. >>>>>> I extract the submatrix A11 with MatGetSubMatrix. >>>>>> I only know the global IS is1 and is2, so to get A11 I call: >>>>>> MatGetSubMatrix(A,is2,is2,MAT_INITIAL_MATRIX,&A11); >>>>>> I want to modify A11 and update the changes on the global matrix A but I could not find any MatRestoreSubMatrix routine. >>>>>> Is there something similar to VecGetSubVector and VecRestoreSubVector for matrices that uses only global indices? >>>>>> Many thanks. >>>>>> Best regards, >>>>>> >>>>>> Zakariae >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel.td19 at outlook.com Sat Jul 24 02:52:12 2021 From: miguel.td19 at outlook.com (Miguel Angel Tapia) Date: Sat, 24 Jul 2021 07:52:12 +0000 Subject: [petsc-users] DMPlex doubt Message-ID: Hello. I am a master's student in Mexico. I am currently working on a project in which we are implementing DMPlex in a code for electromagnetic modeling. Right now I am working on understanding the tool in C. But I'm stuck on something and that's why my next doubt: I am trying to get the coordinates of the nodes of a mesh in DMPlex. I already understood how the DAG is structured, how to obtain the nodes that make up some point. But the ordering of the nodes changes in DMPlex. So I need to know the coordinates of each node to compare them with my initial mesh and confirm that the same nodes form the same point in the software I am using as well as in the DMPlex DAG. It would be great if you could guide me a bit on how to do this or indicate which DMPlex examples would be good to review or which examples solve something similar to my situation. Thank you in advance. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bldenton at buffalo.edu Sat Jul 24 20:31:20 2021 From: bldenton at buffalo.edu (Brandon Denton) Date: Sat, 24 Jul 2021 21:31:20 -0400 Subject: [petsc-users] DMPlex doubt In-Reply-To: References: Message-ID: Good Evening Miguel, I've successfully used the following to get the coordinates of all nodes/vertices from a DMPlex. The steps I use are as follow ----- CODE ------ Vec coordinates; // Define a Petsc vector to store the coordinates. PetscInt cdm; // Define a PetscInt to store coordinate dimension of DMPlex PetscInt *vSize; // PetscInt to store number of elements in vector (used below) PetscErrorCode ierr; ierr = DMGetCoordinateDM(dm, &cdm); CHKERRQ(ierr); // Get coordinate dimension of dm ierr = DMGetCoordinatesLocal(dm, &coordinates);CHKERRQ(ierr); // Populate Vector with (x, y, z) coordinates of all nodes/vectors in DM ierr = VecGetSize(coordinates, &vSize); // Get Number of elements in coordinates vector PetscScalar coords[vSize]; // Define array where the (x, y, z) values will be stored const PetscInt ix[vSize]; // Define array to store indices of coordinates you'd like to get values for // Load ix[] with indices of Vector coordinate[] you want values for. In your case, you would like all of them for (int ii = 0; ii < vSize; ++ii){ ix[ii] = ii; // Note: Petsc uses 0-based indexing } ierr = VecGetValues(coordinates, vSize, ix, coords); // Get (x, y, z) values from Vector coordinates and store in coords[] array. 
// All (x, y, z) coordinates for all nodes/vertices should now be in the coords[] array. // They are stored interlaced. i.e. (x_0, y_0, z_0, x_1, y_1, z_1, .... x_n-1, y_n-1, z_n-1) // Print out each nodes (x, y, z) coordinates to screen. I assuming 3 dimensions for (int ii = 0; ii < vSize; ii+3){ ierr = PetscPrintf(PETSC_COMM_SELF, " Node %d :: (x,y,z) = (%lf, %lf, %lf) \n", coords[ii], coords[ii+1],coords[ii+2]); } ---- END OF CODE ---- Please forgive any coding errors/typos that may be above but the technique should work. I don't claim that this is the most elegant solution. Good Luck -Brandon On Sat, Jul 24, 2021 at 9:58 AM Miguel Angel Tapia wrote: > Hello. I am a master's student in Mexico. I am currently working on a > project in which we are implementing DMPlex in a code for electromagnetic > modeling. Right now I am working on understanding the tool in C. But I'm > stuck on something and that's why my next doubt: > > I am trying to get the coordinates of the nodes of a mesh in DMPlex. I > already understood how the DAG is structured, how to obtain the nodes that > make up some point. But the ordering of the nodes changes in DMPlex. So I > need to know the coordinates of each node to compare them with my initial > mesh and confirm that the same nodes form the same point in the software I > am using as well as in the DMPlex DAG. > > It would be great if you could guide me a bit on how to do this or > indicate which DMPlex examples would be good to review or which examples > solve something similar to my situation. > > Thank you in advance. Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sun Jul 25 06:43:50 2021 From: mfadams at lbl.gov (Mark Adams) Date: Sun, 25 Jul 2021 07:43:50 -0400 Subject: [petsc-users] DMPlex doubt In-Reply-To: References: Message-ID: This is a Matt question and I recall a question on getting the original node ordering recently, but I am not finding it. How is your Plex created? If you give it a mesh and don't distribute it the node ordering does not change. In that case you can use a simpler version Brandon's code: ierr = DMGetCoordinatesLocal(dm, &coordinates);CHKERRQ(ierr); Then use VecView. However, I don't understand what you are trying to do exactly. Are you just verifying that PLex has the correct coordinates? Mark On Sat, Jul 24, 2021 at 9:58 AM Miguel Angel Tapia wrote: > Hello. I am a master's student in Mexico. I am currently working on a > project in which we are implementing DMPlex in a code for electromagnetic > modeling. Right now I am working on understanding the tool in C. But I'm > stuck on something and that's why my next doubt: > > I am trying to get the coordinates of the nodes of a mesh in DMPlex. I > already understood how the DAG is structured, how to obtain the nodes that > make up some point. But the ordering of the nodes changes in DMPlex. So I > need to know the coordinates of each node to compare them with my initial > mesh and confirm that the same nodes form the same point in the software I > am using as well as in the DMPlex DAG. > > It would be great if you could guide me a bit on how to do this or > indicate which DMPlex examples would be good to review or which examples > solve something similar to my situation. > > Thank you in advance. Regards. > -------------- next part -------------- An HTML attachment was scrubbed... 
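A minimal sketch of that simpler route, assuming dm is the (3-D) DMPlex in scope: grab the local coordinate vector and either VecView it or walk the interlaced (x, y, z) triples. On a distributed mesh each rank only sees its own local points.

PetscErrorCode     ierr;
Vec                coords;
const PetscScalar *a;
PetscInt           n;

ierr = DMGetCoordinatesLocal(dm, &coords);CHKERRQ(ierr);
/* quick dump: VecView(coords, PETSC_VIEWER_STDOUT_SELF); or walk the triples explicitly: */
ierr = VecGetLocalSize(coords, &n);CHKERRQ(ierr);
ierr = VecGetArrayRead(coords, &a);CHKERRQ(ierr);
for (PetscInt i = 0; i < n; i += 3) {   /* coordinates are stored interlaced, 3 per vertex here */
  ierr = PetscPrintf(PETSC_COMM_SELF, "coord %D: (%g, %g, %g)\n", i/3,
                     (double)PetscRealPart(a[i]), (double)PetscRealPart(a[i+1]),
                     (double)PetscRealPart(a[i+2]));CHKERRQ(ierr);
}
ierr = VecRestoreArrayRead(coords, &a);CHKERRQ(ierr);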
URL: From miguel.td19 at outlook.com Sun Jul 25 20:43:32 2021 From: miguel.td19 at outlook.com (Miguel Angel Tapia) Date: Mon, 26 Jul 2021 01:43:32 +0000 Subject: [petsc-users] DMPlex doubt In-Reply-To: References: , Message-ID: I'm really sorry, I should have explained it more specifically. This is the situation. I am using a very simple mesh to learn and test with DMPlex. My mesh has these nodes: 1 0 0 0 2 1 0 0 3 1 1 0 4 0 1 0 5 0 0 1 6 1 0 1 7 1 1 1 8 0 1 1 9 0.5 0.5 1 10 0.5 0 0.5 11 1 0.5 0.5 12 0.5 1 0.5 13 0 0.5 0.5 14 0.5 0.5 0 Executing in a single process these 14 nodes take values ??from 24 to 37. I would like to know how to obtain the coordinates through DMPlex. I know that using DMGetCoordinatesLocal I get the vector of all coordinates. But I would like to be more specific. For example, I choose point 2 of the DAG of the mesh that I am using, this point is formed by nodes 32, 33, 34 and 36 according to the order of the DAG. How can I know the coordinates of those specific nodes? And if I know those coordinates, I can see which points are from the original mesh and verify that they are the same in the software to which I want to implement DMPlex. Maybe this is very simple, but I am just learning the use of the tool and right now my job is to know if I can obtain the same results with DMPlex as the code I want to modify. ________________________________ De: Mark Adams Enviado: domingo, 25 de julio de 2021 06:43 a. m. Para: Miguel Angel Tapia CC: petsc-users at mcs.anl.gov Asunto: Re: [petsc-users] DMPlex doubt This is a Matt question and I recall a question on getting the original node ordering recently, but I am not finding it. How is your Plex created? If you give it a mesh and don't distribute it the node ordering does not change. In that case you can use a simpler version Brandon's code: ierr = DMGetCoordinatesLocal(dm, &coordinates);CHKERRQ(ierr); Then use VecView. However, I don't understand what you are trying to do exactly. Are you just verifying that PLex has the correct coordinates? Mark On Sat, Jul 24, 2021 at 9:58 AM Miguel Angel Tapia > wrote: Hello. I am a master's student in Mexico. I am currently working on a project in which we are implementing DMPlex in a code for electromagnetic modeling. Right now I am working on understanding the tool in C. But I'm stuck on something and that's why my next doubt: I am trying to get the coordinates of the nodes of a mesh in DMPlex. I already understood how the DAG is structured, how to obtain the nodes that make up some point. But the ordering of the nodes changes in DMPlex. So I need to know the coordinates of each node to compare them with my initial mesh and confirm that the same nodes form the same point in the software I am using as well as in the DMPlex DAG. It would be great if you could guide me a bit on how to do this or indicate which DMPlex examples would be good to review or which examples solve something similar to my situation. Thank you in advance. Regards. -------------- next part -------------- An HTML attachment was scrubbed... 
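One way to do exactly that, sketched for an interpolated DMPlex on a 3-D mesh: take the transitive closure of the chosen DAG point, keep the entries that fall in the vertex stratum, and read each vertex's coordinates at its offset in the coordinate section. The point number used below (p = 2) is just the example point mentioned above; treat the whole thing as a sketch rather than tested code.

PetscErrorCode     ierr;
DM                 cdm;
Vec                coords;
PetscSection       cs;
const PetscScalar *a;
PetscInt          *closure = NULL;
PetscInt           nclos, vStart, vEnd, p = 2;   /* p: DAG point of interest (example) */

ierr = DMGetCoordinateDM(dm, &cdm);CHKERRQ(ierr);
ierr = DMGetLocalSection(cdm, &cs);CHKERRQ(ierr);
ierr = DMGetCoordinatesLocal(dm, &coords);CHKERRQ(ierr);
ierr = DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);CHKERRQ(ierr);   /* vertex point range */
ierr = VecGetArrayRead(coords, &a);CHKERRQ(ierr);
ierr = DMPlexGetTransitiveClosure(dm, p, PETSC_TRUE, &nclos, &closure);CHKERRQ(ierr);
for (PetscInt i = 0; i < nclos; ++i) {
  const PetscInt q = closure[2*i];   /* closure stores (point, orientation) pairs */
  if (q >= vStart && q < vEnd) {
    PetscInt off;
    ierr = PetscSectionGetOffset(cs, q, &off);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_SELF, "vertex %D: (%g, %g, %g)\n", q,
                       (double)PetscRealPart(a[off]), (double)PetscRealPart(a[off+1]),
                       (double)PetscRealPart(a[off+2]));CHKERRQ(ierr);
  }
}
ierr = DMPlexRestoreTransitiveClosure(dm, p, PETSC_TRUE, &nclos, &closure);CHKERRQ(ierr);
ierr = VecRestoreArrayRead(coords, &a);CHKERRQ(ierr);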
URL: From patrick.sanan at gmail.com Mon Jul 26 05:57:37 2021 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Mon, 26 Jul 2021 12:57:37 +0200 Subject: [petsc-users] Question about MatGetSubMatrix In-Reply-To: <5A5A2F75-B242-4D34-A127-E303A34398B5@petsc.dev> References: <2ce221c70e95442d8092b0bdd9140e7f@lanl.gov> <00F666F7-A854-4D5A-A0F3-3C00F4B815EF@petsc.dev> <76EF3B4F-7E35-4A9B-9BCA-264167E56538@msu.edu> <8428EC8F-39C7-4C41-A82D-8FBD14F0D650@msu.edu> <5A5A2F75-B242-4D34-A127-E303A34398B5@petsc.dev> Message-ID: <433E093C-25CF-4575-8CF5-F49D79E7C1CD@gmail.com> > Am 24.07.2021 um 07:32 schrieb Barry Smith : > > > Qi, > > MatSetLocalGlobalToGlobalMapping() is the routine that provides the information to the matrix so that one can use a "local" ordering and it will get automatically translated to the parallel PETSc ordering on the parallel matrix. Generically yes it includes the ghost point region on each MPI rank. Some DM's provide this information automatically; for DMDA it is pretty simple, for DMStag a bit more complicated. > The local-to-global mapping should get automatically attached to the Mat if you use a DM to create an operator/Mat, with DMCreateMatrix(). Thus, this is the recommended way to proceed! https://petsc.org/release/docs/manualpages/DM/DMCreateMatrix.html > You are correct, the previous manual page for MatZeroRowsLocal() incorrectly states "global" rows from 16+ years ago, likely a cut and paste error. It looks like it has been fixed in the main PETSc repository, > > Barry > > >> On Jul 23, 2021, at 10:50 AM, Tang, Qi > wrote: >> >> How can we use MatZeroRowsLocal? Is there any doc for the local index vs global index for a matrix? >> >> I am asking because we are not sure about how to prepare a proper local index. Does the index include the ghost point region or not? >> >> Qi >> >> >> >> >>> On Jul 20, 2021, at 9:30 PM, Tang, Qi > wrote: >>> >>> Hi, >>> >>> Now I think the DMStagStencilToIndexLocal provides the local index for given (stencil) positions. How can we use that local index information to eliminate the rows? >>> >>> Is the following code possible: >>> >>> MatSetLocalToGlobalMapping(?); >>> If (is_boundary){ >>> PetscInt ix; >>> DMStagStencilToIndexLocal(?, &ix); >>> MatZeroRowsLocal(? &ix, ?); >>> } >>> >>> The comment of MatZeroRowsLocal said "rows - the global row indices?. But this seems inconsistent with its name, so I am confused. >>> >>> Thanks, >>> Qi >>> >>> >>>> On Jul 20, 2021, at 2:18 AM, Patrick Sanan > wrote: >>>> >>>> Hi Qi - >>>> >>>> I just opened a PR to make DMStagStencilToIndexLocal() public >>>> https://gitlab.com/petsc/petsc/-/merge_requests/4180 >>>> >>>> (Sorry for my inattention - I think I may have missed some communications in processing the flood of PETSc emails too quickly - I still plan to get some more automatic DMStag fieldsplit capabilities into main, if it's not too late). >>>> >>>>> Am 20.07.2021 um 02:47 schrieb Tang, Qi >: >>>>> >>>>> Hi, >>>>> As a part of implementing this process by ourself, we would like to eliminate boundary dofs. By reading DMStag code, we guess we can use >>>>> DMStagStencilToIndexLocal >>>>> MatZeroRowsLocal >>>>> >>>>> We note that DMStagStencilToIndexLocal is not explicitly defined in the header file. Is this function ready to use? And will we be able to eliminate the dofs using the above functions? 
>>>>> >>>>> Thanks, >>>>> Qi >>>>> >>>>> >>>>> >>>>>> On Jul 16, 2021, at 8:17 PM, Barry Smith > wrote: >>>>>> >>>>>> >>>>>> Zakariae, >>>>>> >>>>>> MatGetSubMatrix() was removed a long time ago, the routine is now MatCreateSubMatrix() but it does not work in way you had hoped. There is currently no mechanism to move values you put into the sub matrix back into the original larger matrix (though perhaps there should be?). >>>>>> >>>>>> Please look at MatCreateSubMatrixVirtual() and also MatCreateNest() to see if either of those approaches satisfy your needs. >>>>>> >>>>>> Please let us know if there are extensions that would be useful for you to accomplish what you need. >>>>>> >>>>>> Barry >>>>>> >>>>>> >>>>>>> On Jul 16, 2021, at 7:45 PM, Jorti, Zakariae via petsc-users > wrote: >>>>>>> >>>>>>> Hello, >>>>>>> >>>>>>> I have a matrix A = [A00 , A01 ; A10, A11]. >>>>>>> I extract the submatrix A11 with MatGetSubMatrix. >>>>>>> I only know the global IS is1 and is2, so to get A11 I call: >>>>>>> MatGetSubMatrix(A,is2,is2,MAT_INITIAL_MATRIX,&A11); >>>>>>> I want to modify A11 and update the changes on the global matrix A but I could not find any MatRestoreSubMatrix routine. >>>>>>> Is there something similar to VecGetSubVector and VecRestoreSubVector for matrices that uses only global indices? >>>>>>> Many thanks. >>>>>>> Best regards, >>>>>>> >>>>>>> Zakariae >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Mon Jul 26 07:59:08 2021 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 26 Jul 2021 08:59:08 -0400 Subject: [petsc-users] DMPlex doubt In-Reply-To: References: Message-ID: By 'point 2' if you mean the nodes that touch a cells (c==2 below) you can use something like: PetscScalar *coef = NULL; Vec coords; PetscInt csize,Nv,d,nz; DM cdm; PetscSection cs; ierr = DMPlexGetHeightStratum(plex,0,&cStart,&cEnd);CHKERRQ(ierr); ierr = DMGetCoordinatesLocal(dm, &coords);CHKERRQ(ierr); ierr = DMGetCoordinateDM(dm, &cdm);CHKERRQ(ierr); ierr = DMGetLocalSection(cdm, &cs);CHKERRQ(ierr); for (c = cStart; c < cEnd; c++) { ierr = DMPlexVecGetClosure(cdm, cs, coords, c, &csize, &coef);CHKERRQ(ierr); If you want indices and offsets use DMPlexGetClosureIndices ( https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexGetClosureIndices.html) instead. I think you want the offsets. Mark On Sun, Jul 25, 2021 at 9:43 PM Miguel Angel Tapia wrote: > I'm really sorry, I should have explained it more specifically. > > This is the situation. I am using a very simple mesh to learn and test > with DMPlex. My mesh has these nodes: > 1 0 0 0 > 2 1 0 0 > 3 1 1 0 > 4 0 1 0 > 5 0 0 1 > 6 1 0 1 > 7 1 1 1 > 8 0 1 1 > 9 0.5 0.5 1 > 10 0.5 0 0.5 > 11 1 0.5 0.5 > 12 0.5 1 0.5 > 13 0 0.5 0.5 > 14 0.5 0.5 0 > > Executing in a single process these 14 nodes take values ??from 24 to 37. > I would like to know how to obtain the coordinates through DMPlex. I know > that using DMGetCoordinatesLocal I get the vector of all coordinates. But I > would like to be more specific. For example, I choose point 2 of the DAG of > the mesh that I am using, this point is formed by nodes 32, 33, 34 and 36 > according to the order of the DAG. How can I know the coordinates of those > specific nodes? > > And if I know those coordinates, I can see which points are from the > original mesh and verify that they are the same in the software to which I > want to implement DMPlex. 
Maybe this is very simple, but I am just learning > the use of the tool and right now my job is to know if I can obtain the > same results with DMPlex as the code I want to modify. > ------------------------------ > *De:* Mark Adams > *Enviado:* domingo, 25 de julio de 2021 06:43 a. m. > *Para:* Miguel Angel Tapia > *CC:* petsc-users at mcs.anl.gov > *Asunto:* Re: [petsc-users] DMPlex doubt > > This is a Matt question and I recall a question on getting the original > node ordering recently, but I am not finding it. > > How is your Plex created? If you give it a mesh and don't distribute it > the node ordering does not change. > In that case you can use a simpler version Brandon's code: > ierr = DMGetCoordinatesLocal(dm, &coordinates);CHKERRQ(ierr); > Then use VecView. > > However, I don't understand what you are trying to do exactly. Are you > just verifying that PLex has the correct coordinates? > > Mark > > On Sat, Jul 24, 2021 at 9:58 AM Miguel Angel Tapia < > miguel.td19 at outlook.com> wrote: > > Hello. I am a master's student in Mexico. I am currently working on a > project in which we are implementing DMPlex in a code for electromagnetic > modeling. Right now I am working on understanding the tool in C. But I'm > stuck on something and that's why my next doubt: > > I am trying to get the coordinates of the nodes of a mesh in DMPlex. I > already understood how the DAG is structured, how to obtain the nodes that > make up some point. But the ordering of the nodes changes in DMPlex. So I > need to know the coordinates of each node to compare them with my initial > mesh and confirm that the same nodes form the same point in the software I > am using as well as in the DMPlex DAG. > > It would be great if you could guide me a bit on how to do this or > indicate which DMPlex examples would be good to review or which examples > solve something similar to my situation. > > Thank you in advance. Regards. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Jul 28 20:31:06 2021 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 28 Jul 2021 21:31:06 -0400 Subject: [petsc-users] is PETSc's random deterministic? Message-ID: Also, when I google function the Argonne web pages are not found (MIT seems to have mirrored this and that works). Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangc20 at rpi.edu Wed Jul 28 20:58:16 2021 From: zhangc20 at rpi.edu (Zhang, Chonglin) Date: Thu, 29 Jul 2021 01:58:16 +0000 Subject: [petsc-users] is PETSc's random deterministic? In-Reply-To: References: Message-ID: I was having the same website not found problem the other day. I remember email by Satish saying PETSc has a new website. It seems now that all the manual pages are hosted there: https://petsc.org/release/documentation/manualpages/; https://petsc.org/release/docs/manualpages/singleindex.html. Thanks! Chonglin On Jul 28, 2021, at 9:31 PM, Mark Adams > wrote: Also, when I google function the Argonne web pages are not found (MIT seems to have mirrored this and that works). Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Wed Jul 28 22:17:41 2021 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 28 Jul 2021 22:17:41 -0500 Subject: [petsc-users] is PETSc's random deterministic? 
In-Reply-To: References: Message-ID: The default PETSc random number generator (on CPUs) PETSCRANDER48 is deterministic and should return the same random numbers independent of the underlying hardware and software. The manual page for PETSCRANDER48 indicates this. As we are moving the PETSc docs to petsc.org there may be a bit of time before google again automatically finds the most appropriate page. At the moment google and duckduckgo seem terribly confused. > On Jul 28, 2021, at 8:31 PM, Mark Adams wrote: > > Also, when I google function the Argonne web pages are not found (MIT seems to have mirrored this and that works). > Thanks, > Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Jul 28 22:49:30 2021 From: jed at jedbrown.org (Jed Brown) Date: Wed, 28 Jul 2021 21:49:30 -0600 Subject: [petsc-users] is PETSc's random deterministic? In-Reply-To: References: Message-ID: <87wnp9wzhx.fsf@jedbrown.org> Barry Smith writes: > As we are moving the PETSc docs to petsc.org there may be a bit of time before google again automatically finds the most appropriate page. At the moment google and duckduckgo seem terribly confused. The 301 Redirects are evidently still missing. https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/CHKERRQ.html On March 20, 2007, MCS began revamping its web presence. [...] As I understand the crawling/ranking algorithms, sitting around with these broken links for too long can cost us reputation so that we might not even be the top result once the 301 redirects are inserted. I understand we need an MCS sysadmin to add the 301 redirect rules. From mfadams at lbl.gov Thu Jul 29 08:10:55 2021 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 29 Jul 2021 09:10:55 -0400 Subject: [petsc-users] Is PetscSFBcast deterministic (just checking, I assume it is) Message-ID: GAMG is not deterministic and I'm trying to figure out why. Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Jul 29 11:45:10 2021 From: jed at jedbrown.org (Jed Brown) Date: Thu, 29 Jul 2021 10:45:10 -0600 Subject: [petsc-users] Is PetscSFBcast deterministic (just checking, I assume it is) In-Reply-To: References: Message-ID: <878s1pvzl5.fsf@jedbrown.org> Provided it's the same SF, yes. (We use MPI_Waitall instead of MPI_Waitsome or this, though we may be able to shave some time using MPI_Waitsome.) Matrix assembly from the stash is a common place; try -matstash_reproduce. Mark Adams writes: > GAMG is not deterministic and I'm trying to figure out why. > Thanks, > Mark From bsmith at petsc.dev Fri Jul 30 10:45:12 2021 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 30 Jul 2021 10:45:12 -0500 Subject: [petsc-users] Is PetscSFBcast deterministic (just checking, I assume it is) In-Reply-To: <878s1pvzl5.fsf@jedbrown.org> References: <878s1pvzl5.fsf@jedbrown.org> Message-ID: We might consider having a universal reproduce flag that fixes everything to be reproducible (for debugging purposes) instead of operation specific ones like -matstash_reproduce. > On Jul 29, 2021, at 11:45 AM, Jed Brown wrote: > > Provided it's the same SF, yes. (We use MPI_Waitall instead of MPI_Waitsome or this, though we may be able to shave some time using MPI_Waitsome.) > > Matrix assembly from the stash is a common place; try -matstash_reproduce. > > Mark Adams writes: > >> GAMG is not deterministic and I'm trying to figure out why. 
>> Thanks, >> Mark From jed at jedbrown.org Fri Jul 30 23:39:00 2021 From: jed at jedbrown.org (Jed Brown) Date: Fri, 30 Jul 2021 22:39:00 -0600 Subject: [petsc-users] Is PetscSFBcast deterministic (just checking, I assume it is) In-Reply-To: References: <878s1pvzl5.fsf@jedbrown.org> Message-ID: <87pmuzt7vf.fsf@jedbrown.org> Track it here. https://gitlab.com/petsc/petsc/-/issues/970 Barry Smith writes: > We might consider having a universal reproduce flag that fixes everything to be reproducible (for debugging purposes) instead of operation specific ones like -matstash_reproduce. > > > >> On Jul 29, 2021, at 11:45 AM, Jed Brown wrote: >> >> Provided it's the same SF, yes. (We use MPI_Waitall instead of MPI_Waitsome or this, though we may be able to shave some time using MPI_Waitsome.) >> >> Matrix assembly from the stash is a common place; try -matstash_reproduce. >> >> Mark Adams writes: >> >>> GAMG is not deterministic and I'm trying to figure out why. >>> Thanks, >>> Mark From thibault.bridelbertomeu at gmail.com Sat Jul 31 05:00:33 2021 From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu) Date: Sat, 31 Jul 2021 12:00:33 +0200 Subject: [petsc-users] DMPlex box mesh periodicity bug (?) Message-ID: Dear all, I have noticed what I think is a bug with a 3D DMPlex box mesh with periodic boundaries. When I project a function onto it, it behaves as if the last row of cells in X and in Y direction do not have the right coordinates. I attach to this email a minimal example that reproduces the bug (files mwe_periodic_3d.F90, wrapper_petsc.c, wrapper_petsc.h90, makefile), as well as the output of this code (initmesh.vtu for an output of the DM, solution.vtu for an output of the data projected onto the mesh). There is also a screenshot of what's going on. If one considers the function I project onto the mesh, what should normally happen is that there is a "hole" is the density field around the x=0, y=0 region, the rest being equal to one. I hope it is just a mishandling from my end !! Thank you in advance for your help, Cheers, Thibault -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: wrapper_petsc.c Type: application/octet-stream Size: 2474 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mwe_periodic_3d.F90 Type: application/octet-stream Size: 3790 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: makefile Type: application/octet-stream Size: 403 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: wrapper_petsc.h90 Type: application/octet-stream Size: 288 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: initmesh.vtu Type: application/octet-stream Size: 955335 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: solution.vtu Type: application/octet-stream Size: 988234 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Capture d?e?cran 2021-07-31 a? 11.53.06.png Type: image/png Size: 1082614 bytes Desc: not available URL:
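For reference, a minimal sketch (in C, not the attached Fortran MWE) of the kind of setup described: a fully periodic 3-D box DMPlex with localized coordinates. It assumes the PETSc 3.14/3.15-era DMPlexCreateBoxMesh() signature, hexahedral cells and an 8x8x8 box; DMLocalizeCoordinates() may already be called inside the periodic box constructor, in which case the explicit call below is redundant.

#include <petscdmplex.h>

int main(int argc, char **argv)
{
  DM             dm;
  PetscInt       faces[3]       = {8, 8, 8};        /* assumed cell counts */
  PetscReal      lower[3]       = {0., 0., 0.};
  PetscReal      upper[3]       = {1., 1., 1.};
  DMBoundaryType periodicity[3] = {DM_BOUNDARY_PERIODIC, DM_BOUNDARY_PERIODIC, DM_BOUNDARY_PERIODIC};
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = DMPlexCreateBoxMesh(PETSC_COMM_WORLD, 3, PETSC_FALSE /* hexes */, faces,
                             lower, upper, periodicity, PETSC_TRUE /* interpolate */, &dm);CHKERRQ(ierr);
  ierr = DMSetFromOptions(dm);CHKERRQ(ierr);
  ierr = DMLocalizeCoordinates(dm);CHKERRQ(ierr);   /* cell-local coordinates for periodic geometry */
  ierr = DMViewFromOptions(dm, NULL, "-dm_view");CHKERRQ(ierr);   /* e.g. -dm_view vtk:initmesh.vtu */
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}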