From knepley at gmail.com Sat Mar 1 16:03:36 2025 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 1 Mar 2025 17:03:36 -0500 Subject: [petsc-users] Inquiry about resetting a petscsection for a dmplex In-Reply-To: References: Message-ID: On Fri, Feb 28, 2025 at 10:56 PM neil liu wrote: > Thanks a lot, Matt! It works well. > > I have another question regarding future p-adaptivity. Will the section > support defining different DOFs for each face and edge? Maybe I should try > this. > > Yes, you can have different dofs for different faces and edges. Thanks, Matt > Thanks, > > Xiaodong > > > On Thu, Feb 27, 2025 at 9:16 PM Matthew Knepley wrote: > >> On Thu, Feb 27, 2025 at 6:12 PM neil liu wrote: >> >>> Dear PETSc community, >>> >>> I am currently working on a 3D adaptive vector FEM solver. In my case, I >>> need to solve two systems: one for the primal equation using a low-order >>> discretization and another for the adjoint equation using a high-order >>> discretization. >>> >>> Afterward, I need to reset the section associated with the DMPlex. >>> Whichever is set first (20 DOFs, second order, or 6 DOFs, first order), the >>> final mapping always follows that of the first-defined configuration. >>> >>> Did I miss something? >>> >>> When solving two systems like this on the same mesh, I recommend using >> DMClone(). What this does is create a new >> DM with the same backend topology (Plex), but a different function space >> (Section). This is how I do everything internally in Plex. Does that make sense? 
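A sketch of the DMClone() pattern described above (an untested fragment, not a complete program; SetupLowOrderSection and SetupHighOrderSection are hypothetical stand-ins for the poster's own section-building code):

```c
/* One DM per discretization: same Plex topology, independent Sections. */
DM           dmAdjoint;
PetscSection sPrimal, sAdjoint;

PetscCall(DMClone(dm, &dmAdjoint));                     /* shares the mesh topology      */
PetscCall(SetupLowOrderSection(dm, &sPrimal));          /* e.g. 6 dofs, first order      */
PetscCall(SetupHighOrderSection(dmAdjoint, &sAdjoint)); /* e.g. 20 dofs, second order    */
PetscCall(DMSetLocalSection(dm, sPrimal));
PetscCall(DMSetLocalSection(dmAdjoint, sAdjoint));
PetscCall(PetscSectionDestroy(&sPrimal));
PetscCall(PetscSectionDestroy(&sAdjoint));
/* ... solve the primal system on dm and the adjoint system on dmAdjoint ... */
PetscCall(DMDestroy(&dmAdjoint));
```

Each DM then carries its own local-to-global mapping, so neither section "overwrites" the other.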
>> >> Thanks, >> >> Matt >> >>> Thanks, >>> >>> >>> Xiaodong >>> >>> PetscErrorCode DMManage::SetupSection(CaseInfo &objCaseInfo){ >>> PetscSection s; >>> PetscInt edgeStart, edgeEnd, pStart, pEnd; >>> PetscInt cellStart, cellEnd; >>> PetscInt faceStart, faceEnd; >>> >>> PetscFunctionBeginUser; >>> DMPlexGetChart(dm, &pStart, &pEnd); >>> DMPlexGetHeightStratum(dm, 0, &cellStart, &cellEnd); >>> DMPlexGetHeightStratum(dm, 1, &faceStart, &faceEnd); >>> DMPlexGetHeightStratum(dm, 2, &edgeStart, &edgeEnd); /* edges */ >>> PetscSectionCreate(PetscObjectComm((PetscObject)dm), &s); >>> PetscSectionSetNumFields(s, 1); >>> PetscSectionSetFieldComponents(s, 0, 1); >>> if (objCaseInfo.getnumberDof_local() == 6){ >>> PetscSectionSetChart(s, edgeStart, edgeEnd); >>> for (PetscInt edgeIndex = edgeStart; edgeIndex < edgeEnd; ++edgeIndex) { >>> PetscSectionSetDof(s, edgeIndex, objCaseInfo.numdofPerEdge); >>> PetscSectionSetFieldDof(s, edgeIndex, 0, 1); >>> } >>> } >>> else if(objCaseInfo.getnumberDof_local() == 20){ >>> PetscSectionSetChart(s, faceStart, edgeEnd); >>> for (PetscInt faceIndex = faceStart; faceIndex < faceEnd; ++faceIndex) { >>> PetscSectionSetDof(s, faceIndex, objCaseInfo.numdofPerFace); >>> PetscSectionSetFieldDof(s, faceIndex, 0, 1); >>> } >>> //Test >>> for (PetscInt edgeIndex = edgeStart; edgeIndex < edgeEnd; ++edgeIndex) { >>> PetscSectionSetDof(s, edgeIndex, objCaseInfo.numdofPerEdge); >>> PetscSectionSetFieldDof(s, edgeIndex, 0, 1); >>> } >>> } >>> // >>> PetscSectionSetUp(s); >>> DMSetLocalSection(dm, s); >>> PetscSectionDestroy(&s); >>> >>> //Output map for check >>> ISLocalToGlobalMapping ltogm; >>> const PetscInt *g_idx; >>> DMGetLocalToGlobalMapping(dm, &ltogm); >>> ISLocalToGlobalMappingView(ltogm, PETSC_VIEWER_STDOUT_WORLD); >>> ISLocalToGlobalMappingGetIndices(ltogm, &g_idx); >>> >>> PetscFunctionReturn(PETSC_SUCCESS); >>> } >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting 
than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ From juaneah at gmail.com Tue Mar 4 13:02:12 2025 From: juaneah at gmail.com (Emmanuel Ayala) Date: Tue, 4 Mar 2025 13:02:12 -0600 Subject: [petsc-users] Doubts on direct solver with mumps Message-ID: Hello everyone. I'm trying to solve a linear system (which comes from 3D FEM with structured DM mesh) with a direct solver. I configured the petsc installation with mumps (--download-mumps --download-scalapack --download-parmetis --download-metis, --download-hwloc, without ptscotch) and I have the following functions: // K is the stiffness matrix, assembled correctly // U is the solution vector // RHS is the right hand side of the linear equation Mat Kfactor; ierr = MatGetFactor(K,MATSOLVERMUMPS, MAT_FACTOR_CHOLESKY, &Kfactor); CHKERRQ(ierr); ierr = MatCholeskyFactorSymbolic(Kfactor,K,0,0); CHKERRQ(ierr); ierr = MatCholeskyFactorNumeric(Kfactor,K,0); CHKERRQ(ierr); ierr = MatSolve(Kfactor,RHS,U); and run with options: -pc_type cholesky -pc_factor_mat_solver_type mumps -mat_mumps_icntl_1 1 -mat_mumps_icntl_13 0 -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2 PROBLEM: I got the correct solution, but the function MatCholeskyFactorNumeric() takes too much time to complete. MatCholeskyFactorSymbolic() and MatSolve() are very fast. 
The test uses a square K matrix of 700k dofs, and the MatCholeskyFactorNumeric() takes around 14 minutes, while an iterative solver (KSPCG/PCJACOBI) takes 5 seconds to get the solution. Any suggestions? Thanks in advance. From jed at jedbrown.org Tue Mar 4 13:27:54 2025 From: jed at jedbrown.org (Jed Brown) Date: Tue, 04 Mar 2025 12:27:54 -0700 Subject: [petsc-users] Doubts on direct solver with mumps In-Reply-To: References: Message-ID: <87h648k345.fsf@jedbrown.org> Yup, this is expected for direct solvers: numeric factorization for a 3D problem scales as O(n^2) while solving with the factors is only O(n^{4/3}). So you expect it to be slow to factor and fast to solve. Iterative methods with good preconditioners can be O(n), and tend to be the choice for large problems any time they can be sufficiently robust (which is both a science and an art, but is available for many problem classes). Emmanuel Ayala writes: > Hello everyone. > > I'm trying to solve a linear system (which comes from 3D FEM with > structured DM mesh) with a direct solver. 
> [quoted message trimmed] From juaneah at gmail.com Tue Mar 4 13:55:07 2025 From: juaneah at gmail.com (Emmanuel Ayala) Date: Tue, 4 Mar 2025 13:55:07 -0600 Subject: [petsc-users] Doubts on direct solver with mumps In-Reply-To: <87h648k345.fsf@jedbrown.org> References: <87h648k345.fsf@jedbrown.org> Message-ID: OK, I see. Thank you very much for the quick response and explanation. Regards. On Tue, Mar 4, 2025 at 1:27 PM, Jed Brown (jed at jedbrown.org) wrote: > Yup, this is expected for direct solvers: numeric factorization for a 3D > problem scales as O(n^2) while solving with the factors is only O(n^{4/3}). > So you expect it to be slow to factor and fast to solve. 
> > [remainder of quoted reply trimmed] 
From bsmith at petsc.dev Tue Mar 4 14:22:36 2025 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 4 Mar 2025 15:22:36 -0500 Subject: [petsc-users] Doubts on direct solver with mumps In-Reply-To: References: Message-ID: <4E489DC9-2166-4FCE-89C2-17BC9C2B3740@petsc.dev> > On Mar 4, 2025, at 2:02 PM, Emmanuel Ayala wrote: > > [introduction trimmed] > > Mat Kfactor; > > ierr = MatGetFactor(K,MATSOLVERMUMPS, MAT_FACTOR_CHOLESKY, &Kfactor); CHKERRQ(ierr); > ierr = MatCholeskyFactorSymbolic(Kfactor,K,0,0); CHKERRQ(ierr); > ierr = MatCholeskyFactorNumeric(Kfactor,K,0); CHKERRQ(ierr); > ierr = MatSolve(Kfactor,RHS,U); Note 1) these four lines of code above are not needed if you are using KSPSolve to solve the system. The options below will trigger this. > > and run with options: > -pc_type cholesky -pc_factor_mat_solver_type mumps -mat_mumps_icntl_1 1 -mat_mumps_icntl_13 0 -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2 Note 2) it is imperative you use a good BLAS library and optimization when using mumps. Do not use --download-blaslapack (or friends) and use --with-debugging=0 in the configure options. Note 3) If you are running sequentially (no MPI) then also ensure you have --with-openmp in your configure options and set an appropriate value for the number of OpenMP threads when you run your program. > > PROBLEM: > I got the correct solution, but the function MatCholeskyFactorNumeric() takes too much time to complete. MatCholeskyFactorSymbolic() and MatSolve() are very fast. 
The test uses a square K matrix of 700k dofs, and the MatCholeskyFactorNumeric() takes around 14 minutes, while an iterative solver (KSPCG/PCJACOBI) takes 5 seconds to get the solution. Any suggestions? > > Thanks in advance. From juaneah at gmail.com Tue Mar 4 15:21:13 2025 From: juaneah at gmail.com (Emmanuel Ayala) Date: Tue, 4 Mar 2025 15:21:13 -0600 Subject: [petsc-users] Doubts on direct solver with mumps In-Reply-To: <4E489DC9-2166-4FCE-89C2-17BC9C2B3740@petsc.dev> References: <4E489DC9-2166-4FCE-89C2-17BC9C2B3740@petsc.dev> Message-ID: Thanks for the notes. On Tue, Mar 4, 2025 at 2:22 PM, Barry Smith (bsmith at petsc.dev) wrote: > [quoted original message trimmed] > Note 1) these four lines of code above are not needed if you are using KSPSolve to solve the system. The options below will trigger this. > > and run with options: > -pc_type cholesky -pc_factor_mat_solver_type mumps -mat_mumps_icntl_1 1 -mat_mumps_icntl_13 0 -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2 OK. I'm solving separately with the direct solver and the iterative one. > Note 2) it is imperative you use a good BLAS library and optimization when > using mumps. 
Do not use --download-blaslapack (or friends) and use > --with-debugging=0 in the configure options. > Right. I configured it with --with-debugging=0 and --download-fblaslapack. *So, which is a good BLAS library?* From petsc: ...One can use --download-f2cblaslapack --download-blis... > > Note 3) If you are running sequentially (no MPI) then also ensure you have > --with-openmp in your configure options and set an appropriate value for > the number of OpenMP threads when you run your program. > I'm running with MPI. Regards. > [quoted message trimmed] From juaneah at gmail.com Tue Mar 4 16:17:43 2025 From: juaneah at gmail.com (Emmanuel Ayala) Date: Tue, 4 Mar 2025 16:17:43 -0600 Subject: [petsc-users] Doubts on direct solver with mumps In-Reply-To: References: <4E489DC9-2166-4FCE-89C2-17BC9C2B3740@petsc.dev> Message-ID: Well, I just tested the configuration with --download-f2cblaslapack --download-blis and the direct Cholesky solver with mumps becomes 10x faster :D Thanks! On Tue, Mar 4, 2025 at 3:21 PM, Emmanuel Ayala (juaneah at gmail.com) wrote: > [quoted message trimmed] 
>> [quoted message trimmed] 
>> [quoted message trimmed] From pierre at joliv.et Tue Mar 4 16:25:19 2025 From: pierre at joliv.et (Pierre Jolivet) Date: Tue, 4 Mar 2025 23:25:19 +0100 Subject: [petsc-users] Doubts on direct solver with mumps In-Reply-To: References: <4E489DC9-2166-4FCE-89C2-17BC9C2B3740@petsc.dev> Message-ID: <0074DB0A-4116-4487-975E-D4F771D625FA@joliv.et> > On 4 Mar 2025, at 11:17 PM, Emmanuel Ayala wrote: > > Well, I just tested the configuration with --download-f2cblaslapack --download-blis and the direct Cholesky solver with mumps becomes 10x faster :D If your problem is big enough (and 700k unknowns should be by far big enough), you should turn on BLR (block low-rank), which will give both a performance boost and memory savings. But a good BLAS is mandatory indeed. Thanks, Pierre > Thanks! > On Tue, Mar 4, 2025 at 3:21 PM, Emmanuel Ayala (juaneah at gmail.com) wrote: >> [quoted message trimmed] 
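PETSc exposes the MUMPS controls as -mat_mumps_icntl_* / -mat_mumps_cntl_* options, so turning on the BLR feature Pierre mentions might look like the sketch below (hedged: in recent MUMPS versions ICNTL(35) activates BLR and CNTL(7) sets the compression/dropping tolerance, but the exact values and semantics should be checked against the MUMPS users' guide for your version; 1e-8 is purely illustrative):

```
-pc_type cholesky -pc_factor_mat_solver_type mumps
-mat_mumps_icntl_35 1
-mat_mumps_cntl_7 1e-8
```

Larger CNTL(7) values compress more aggressively (faster, less accurate); running with -ksp_view or ICNTL(1)-controlled diagnostics shows whether BLR was actually engaged.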
>>> [quoted message trimmed] 
>>> [quoted message trimmed] From bsmith at petsc.dev Tue Mar 4 17:04:34 2025 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 4 Mar 2025 18:04:34 -0500 Subject: [petsc-users] Doubts on direct solver with mumps In-Reply-To: References: <4E489DC9-2166-4FCE-89C2-17BC9C2B3740@petsc.dev> Message-ID: <67BECF9D-1015-442D-BC2E-13C4BCA7FDED@petsc.dev> :-) > On Mar 4, 2025, at 5:17 PM, Emmanuel Ayala wrote: > > Well, I just tested the configuration with --download-f2cblaslapack --download-blis and the direct Cholesky solver with mumps becomes 10x faster :D > > Thanks! > [quoted message trimmed] 
>>> [quoted message trimmed] 
>>> [quoted message trimmed] From juaneah at gmail.com Tue Mar 4 22:58:58 2025 From: juaneah at gmail.com (Emmanuel Ayala) Date: Tue, 4 Mar 2025 22:58:58 -0600 Subject: [petsc-users] Doubts on direct solver with mumps In-Reply-To: <0074DB0A-4116-4487-975E-D4F771D625FA@joliv.et> References: <4E489DC9-2166-4FCE-89C2-17BC9C2B3740@petsc.dev> <0074DB0A-4116-4487-975E-D4F771D625FA@joliv.et> Message-ID: On Tue, Mar 4, 2025 at 4:25 PM, Pierre Jolivet (pierre at joliv.et) wrote: > If your problem is big enough (and 700k unknowns should be by far big > enough), you should turn on BLR (block low-rank) which will give both > performance boost and memory savings. > But a good BLAS is mandatory indeed. Thanks for the advice, I will try it. Regards. > Thanks, > Pierre > [quoted message trimmed] 
>>> [quoted message trimmed] 
>>> [quoted message trimmed] From onur71 at protonmail.ch Sun Mar 9 18:49:52 2025 From: onur71 at protonmail.ch (Onur) Date: Sun, 09 Mar 2025 23:49:52 +0000 Subject: [petsc-users] Face coloring in DMPlex Message-ID: <5d_fbdEi4_y4HwUBUunMS5UxmomgS6zY5EQpbihjCQ54cFWK8clfIuzY1bfx_hkalhlSp0I2dz1AjuKJWcwHdyoXXXgjQ_WzQmW3gDhE7-M=@protonmail.ch> Hi, I am building a solver, and for mesh handling I use DMPlex. In my 3D mesh, I need to color faces. The adjacency information appears correct based on my checks (but I tried setting the adjacency and creating new sections too): 
for (PetscInt f = fStart; f < fEnd; ++f) { 
  PetscInt adjSize = PETSC_DETERMINE; 
  PetscInt *adj = NULL; 
  PetscCallVoid(DMPlexGetAdjacency(dm_, f, &adjSize, &adj)); 
  PetscCallVoid(PetscPrintf(PETSC_COMM_WORLD, "[%4d]", f)); 
  PetscInt count = 0; 
  for (int i = 0; i < adjSize; ++i) { 
    if (adj[i] >= fStart && adj[i] < fEnd) { 
      count++; 
      PetscCallVoid(PetscPrintf(PETSC_COMM_WORLD, " %4d", adj[i])); 
    } 
  } 
  PetscCallVoid(PetscPrintf(PETSC_COMM_WORLD, " | %d\n", count)); 
  PetscCallVoid(PetscFree(adj)); 
} 
I am testing this on a mesh consisting of quadrilateral elements. This code correctly outputs 7 adjacent faces for interior faces and 4 for boundary faces (including the face itself). However, when I call DMCreateColoring, I get the following error: [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: No method getcoloring for DM of type plex What is the way to perform face coloring using DMPlex? Thank you! Onur 
From knepley at gmail.com Sun Mar 9 21:06:52 2025 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 9 Mar 2025 22:06:52 -0400 Subject: [petsc-users] Face coloring in DMPlex In-Reply-To: <5d_fbdEi4_y4HwUBUunMS5UxmomgS6zY5EQpbihjCQ54cFWK8clfIuzY1bfx_hkalhlSp0I2dz1AjuKJWcwHdyoXXXgjQ_WzQmW3gDhE7-M=@protonmail.ch> References: <5d_fbdEi4_y4HwUBUunMS5UxmomgS6zY5EQpbihjCQ54cFWK8clfIuzY1bfx_hkalhlSp0I2dz1AjuKJWcwHdyoXXXgjQ_WzQmW3gDhE7-M=@protonmail.ch> Message-ID: On Sun, Mar 9, 2025 at 8:04 PM Onur via petsc-users wrote: > [quoted message trimmed] > What is the way to perform face coloring using DMPlex? 
> > You can see at the bottom of this page ( https://petsc.org/main/manualpages/DM/DMCreateColoring/ ) that there are no Plex-specific implementations of DMCreateColoring. This is because I do not know of any algorithms for unstructured meshes that work better than coloring the nonzero structure. Thus I have always used the greedy coloring on the matrix. Could you use that? Thanks, Matt > Thank you! > > Onur -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ From Eric.Chamberland at giref.ulaval.ca Wed Mar 12 15:34:32 2025 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Wed, 12 Mar 2025 16:34:32 -0400 Subject: [petsc-users] PetscPythonInitialize, KSPPYTHON, PCPYTHON, etc Message-ID: <2ade136e-d6de-438b-ac0d-2e5e16df9a0c@giref.ulaval.ca> Hi, just a naive question: looking at KSPPYTHON and PCPYTHON, we saw that there is only 1 example available. We are asking ourselves: is it still supported, and can we start developing our PCs and KSPs on top of it? Or is there a "new" replacement for these? Thanks, Eric -- Eric Chamberland, ing., M. Ing Professionnel de recherche GIREF/Université 
Laval
(418) 656-2131 poste 41 22 42

From knepley at gmail.com Wed Mar 12 15:48:04 2025
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 12 Mar 2025 16:48:04 -0400
Subject: [petsc-users] PetscPythonInitialize, KSPPYTHON, PCPYTHON, etc
In-Reply-To: <2ade136e-d6de-438b-ac0d-2e5e16df9a0c@giref.ulaval.ca>
References: <2ade136e-d6de-438b-ac0d-2e5e16df9a0c@giref.ulaval.ca>
Message-ID:

On Wed, Mar 12, 2025 at 4:34 PM Eric Chamberland via petsc-users < petsc-users at mcs.anl.gov> wrote:

> Hi,
>
> just a naive question: looking at KSPPYTHON and PCPYTHON, we saw that
> there is only one example available.
>
> We are asking ourselves: is it still supported, and can we start
> developing our PCs and KSPs on top of it?
>
> Or is there a "new" replacement for these?
>

I think the reason that there are so few examples is that many examples exist in other packages, such as Firedrake, and they are the main consumers. KSPPYTHON is a way to write KSPSHELL using Python rather than C, and we mostly write C.

I will say that recently we fixed everything so that PETSc errors and Python exceptions are passed correctly up the stack, and debugging these things should be easy. I have been debugging the PyVista visualization, and I can change the Python in one window and run in the other. It is easy.

  Thanks,

     Matt

> Thanks,
>
> Eric
>
> --
> Eric Chamberland, ing., M. Ing
> Professionnel de recherche
> GIREF/Université Laval
> (418) 656-2131 poste 41 22 42

-- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bga88b0gb6cn6ZpZ9yaQxOngfDa9uXuUaWs5sX_wq6Qa259hB-AVBEUw3b1DUvrlGybsQzMitII4Bxx8iF9g$
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From martin.diehl at kuleuven.be Fri Mar 14 09:41:34 2025 From: martin.diehl at kuleuven.be (Martin Diehl) Date: Fri, 14 Mar 2025 14:41:34 +0000 Subject: [petsc-users] Fortran interfaces: Google Summer of Code 2025? In-Reply-To: <857D8A26-115E-4AB8-91A8-2F8FC71C17D1@petsc.dev> References: <51e996a3b06a4f3f7146fed18b928c3f86762b77.camel@kuleuven.be> <857D8A26-115E-4AB8-91A8-2F8FC71C17D1@petsc.dev> Message-ID: <9858e051220e1a9722a21ee7bf82c88a7aa16add.camel@kuleuven.be> Barry, for other languages, we can't rely on fortran-lang.org but NumFOCUS seems to be an option. After trying out the branch for 7517, I have some ideas for further Fortran work as outlined in the attached document (same for md and pdf). My suggestion would be to apply via fortran-lang.org with Tapashree (in CC). If that results in better Python code, it should also be useful for other languages. Martin On Thu, 2025-01-30 at 09:54 -0500, Barry Smith wrote: > > ?? Martin, > > ?? I have restarted in the last week on 7517 and plan for it to be in > the March release. > > ?? As part of the work I have developed new Pythoncode? that scraps > the code for signatures for all the functions, enums, objects etc and > from this constructs the Fortran binding. The same scraping could be > used for other languages so I am hoping automatic bindings can be > done for other languages, for example Rust, even Python. So perhaps > we should consider a summer of code project for other such languages? > > ?? Barry > > > > On Jan 30, 2025, at 6:13?AM, Martin Diehl > > wrote: > > > > Dear PETSc team, dear Barry, > > > > applications for the Google Summer of Code will start again and I > > was > > wondering if help for the re-factoring of the Fortran interfaces is > > still needed. Whether this makes sense depends on the progress of > > https://gitlab.com/petsc/petsc/-/merge_requests/7517 > > > > In contrast to the failed attempt last year, I have a student > > interested in working on this topic. 
> > > > Martin > > -- > > KU Leuven > > Department of Computer Science > > Department of Materials Engineering > > Celestijnenlaan 200a > > 3001 Leuven, Belgium > > > -- KU Leuven Department of Computer Science Department of Materials Engineering Celestijnenlaan 200a 3001 Leuven, Belgium -------------- next part -------------- A non-text attachment was scrubbed... Name: ideas.md Type: text/markdown Size: 2636 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ideas.pdf Type: application/pdf Size: 141113 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 659 bytes Desc: This is a digitally signed message part URL: From silvia.preda at uninsubria.it Fri Mar 14 11:15:09 2025 From: silvia.preda at uninsubria.it (Preda Silvia) Date: Fri, 14 Mar 2025 16:15:09 +0000 Subject: [petsc-users] DOF projection after dmforest adaption Message-ID: Hi, We are having a hard time understanding how the degrees of freedom are projected after a dmforest adaption. Having used before the P4EST library directly, we recall that there, a index mapping from quadrants present in the grid before and after adaption (1 to 1, 1 to many, many to 1, for unaltered, refined, coarsened quadrants, respectively) was available. Would it be possible to access the same information for a dmforest? Thank you for all the suggestion! Silvia -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Mar 14 11:33:03 2025 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 14 Mar 2025 12:33:03 -0400 Subject: [petsc-users] Fortran interfaces: Google Summer of Code 2025? 
In-Reply-To: <9858e051220e1a9722a21ee7bf82c88a7aa16add.camel@kuleuven.be> References: <51e996a3b06a4f3f7146fed18b928c3f86762b77.camel@kuleuven.be> <857D8A26-115E-4AB8-91A8-2F8FC71C17D1@petsc.dev> <9858e051220e1a9722a21ee7bf82c88a7aa16add.camel@kuleuven.be> Message-ID: <37525E85-93EC-4D46-B266-E26A8A6EC206@petsc.dev> Martin, Thanks for the email, additional improvements are definitely possible and desirable. To me the most powerful would be the automatic generation of Fortran stubs for suboutines that return arrays. There are two "styles" of these in the PETSc C API, * in the first case the length of the array is passed as the argument before the pointer to the array PETSC_EXTERN void dmplexgettransitiveclosure_(DM *dm, PetscInt *p, PetscBool *useCone, PetscInt *N, F90Array1d *ptr, int *ierr PETSC_F90_2PTR_PROTO(ptrd)) { PetscInt *v = NULL; PetscInt n; CHKFORTRANNULL(N); *ierr = DMPlexGetTransitiveClosure(*dm, *p, *useCone, &n, &v); if (*ierr) return; *ierr = F90Array1dCreate((void *)v, MPIU_INT, 1, n * 2, ptr PETSC_F90_2PTR_PARAM(ptrd)); if (N) *N = n; } If we "mark" the source code with, for example, PetscErrorCode DMPlexGetTransitiveClosure(DM dm, PetscInt p, PetscBool useCone, PeAL PetscInt *numPoints, PetscInt *points[]) the getAPI() code will know the length of the array is the argument numPoints and thus all the information for generating the Fortran stub is available and it can be generated automatically. * in the second case the length of the array is not directly available PETSC_EXTERN void vecgetarray_(Vec *x, F90Array1d *ptr, int *ierr PETSC_F90_2PTR_PROTO(ptrd)) { PetscScalar *fa; PetscInt len; *ierr = VecGetArray(*x, &fa); if (*ierr) return; *ierr = VecGetLocalSize(*x, &len); if (*ierr) return; *ierr = F90Array1dCreate(fa, MPIU_SCALAR, 1, len, ptr PETSC_F90_2PTR_PARAM(ptrd)); } The Fortran stub cannot be generated automatically without some hint. Is there some way we can mark in the PETSc source a hint that indicates how to access the needed length. 
In this case by calling VecGetLocalSize(*x,&len); for example use PetscErrorCode VecGetArray(Vec x, PeALE(n,VecGetLocalSize(x,n)) PetscScalar *a[]) >From this the getAPI() parser will have all the information needed to generate the Fortran stub. I haven't picked a syntax for providing this information; above is just a suggestion that is trivial to parse but probably also has downsides. So I concur with your suggestion to submit to fortran-lang.org Barry > On Mar 14, 2025, at 10:41?AM, Martin Diehl wrote: > > Barry, > > for other languages, we can't rely on fortran-lang.org but NumFOCUS > seems to be an option. > After trying out the branch for 7517, I have some ideas for further > Fortran work as outlined in the attached document (same for md and > pdf). My suggestion would be to apply via fortran-lang.org with > Tapashree (in CC). If that results in better Python code, it should > also be useful for other languages. > > Martin > > > On Thu, 2025-01-30 at 09:54 -0500, Barry Smith wrote: >> >> Martin, >> >> I have restarted in the last week on 7517 and plan for it to be in >> the March release. >> >> As part of the work I have developed new Pythoncode that scraps >> the code for signatures for all the functions, enums, objects etc and >> from this constructs the Fortran binding. The same scraping could be >> used for other languages so I am hoping automatic bindings can be >> done for other languages, for example Rust, even Python. So perhaps >> we should consider a summer of code project for other such languages? >> >> Barry >> >> >>> On Jan 30, 2025, at 6:13?AM, Martin Diehl >>> wrote: >>> >>> Dear PETSc team, dear Barry, >>> >>> applications for the Google Summer of Code will start again and I >>> was >>> wondering if help for the re-factoring of the Fortran interfaces is >>> still needed. 
Whether this makes sense depends on the progress of >>> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/7517__;!!G_uCfscf7eWS!ewziLrHj9PbFBj6hTuPTF5QkS9JsrYqgm7k5F1usVchfc8wMFAEvHL8IZhxCe5Z_6L--4Z-3WxqPewkYzjhgLq4$ >>> >>> In contrast to the failed attempt last year, I have a student >>> interested in working on this topic. >>> >>> Martin >>> -- >>> KU Leuven >>> Department of Computer Science >>> Department of Materials Engineering >>> Celestijnenlaan 200a >>> 3001 Leuven, Belgium >>> >> > > > -- > KU Leuven > Department of Computer Science > Department of Materials Engineering > Celestijnenlaan 200a > 3001 Leuven, Belgium > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Fri Mar 14 11:53:11 2025 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Fri, 14 Mar 2025 12:53:11 -0400 Subject: [petsc-users] PetscPythonInitialize, KSPPYTHON, PCPYTHON, etc In-Reply-To: References: <2ade136e-d6de-438b-ac0d-2e5e16df9a0c@giref.ulaval.ca> Message-ID: <2fe077b8-09a5-4133-869c-3cbb236d1b57@giref.ulaval.ca> Thanks Matthew! I hope you're doing well. I have a question regarding a change I did on our code: I modified the code to call |PetscPythonInitialize(PETSC_NULLPTR, PETSC_NULLPTR)| after |SlepcInitialize|. Our CI ran successfully across 20 different environments (including Valgrind), but we encountered a segmentation fault on Ubuntu 24.04 when calling |PetscPythonInitialize| within our Python module (built using Pybind11 bindings). On Ubuntu, calling |PetscPythonInitialize| from within our Python 3 module results in a segfault with the following backtrace: Thread 1 "python3" received signal SIGSEGV, Segmentation fault. 
0x00001555339b372a in PyList_New () from /lib/x86_64-linux-gnu/libpython3.12.so.1.0
(gdb) bt
#0  0x00001555339b372a in PyList_New () from /lib/x86_64-linux-gnu/libpython3.12.so.1.0
#1  0x0000155533ad89d7 in PyImport_Import () from /lib/x86_64-linux-gnu/libpython3.12.so.1.0
#2  0x0000155533ad8c80 in PyImport_ImportModule () from /lib/x86_64-linux-gnu/libpython3.12.so.1.0
#3  0x000015553f437c0c in PetscPythonInitialize (pyexe=0x0, pylib=0x0) at /tmp/build_openmpi-4.1.6-opt/petsc-3.21.6-debug/src/sys/python/pythonsys.c:242

I am puzzled by this segfault...

Some relevant details:

grep -r PETSC_PYTHON_EXE /opt/petsc-3.21.6_debug_openmpi-4.1.6/include/
/opt/petsc-3.21.6_debug_openmpi-4.1.6/include/petscconf.h:#define PETSC_PYTHON_EXE "/usr/bin/python3"

ls -la /lib/x86_64-linux-gnu/libpython3.12*
lrwxrwxrwx 1 root root       58 Feb  4 09:48 /lib/x86_64-linux-gnu/libpython3.12.a -> ../python3.12/config-3.12-x86_64-linux-gnu/libpython3.12.a
lrwxrwxrwx 1 root root       60 Feb  4 09:48 /lib/x86_64-linux-gnu/libpython3.12d.a -> ../python3.12/config-3.12d-x86_64-linux-gnu/libpython3.12d.a
lrwxrwxrwx 1 root root       19 Feb  4 09:48 /lib/x86_64-linux-gnu/libpython3.12d.so -> libpython3.12d.so.1
lrwxrwxrwx 1 root root       21 Feb  4 09:48 /lib/x86_64-linux-gnu/libpython3.12d.so.1 -> libpython3.12d.so.1.0
-rw-r--r-- 1 root root 34018416 Feb  4 09:48 /lib/x86_64-linux-gnu/libpython3.12d.so.1.0
lrwxrwxrwx 1 root root       18 Feb  4 09:48 /lib/x86_64-linux-gnu/libpython3.12.so -> libpython3.12.so.1
lrwxrwxrwx 1 root root       20 Feb  4 09:48 /lib/x86_64-linux-gnu/libpython3.12.so.1 -> libpython3.12.so.1.0
-rw-r--r-- 1 root root  9055112 Feb  4 09:48 /lib/x86_64-linux-gnu/libpython3.12.so.1.0

(I installed the debug ("d") versions to get debug symbols, but the segfault occurs even without them.)

Interestingly, on the same Ubuntu setup, calling |PetscPythonInitialize| within our /pure C++/ code works fine, and the |"pc_type python"| example runs successfully.
This issue only occurs when calling |PetscPythonInitialize| from our /Python module/. Any ideas? Could this be something that has already been fixed in PETSc 3.22.x? Thanks, Eric On 2025-03-12 16:48, Matthew Knepley wrote: > On Wed, Mar 12, 2025 at 4:34?PM Eric Chamberland via petsc-users > wrote: > > Hi, > > just a naive question: looking at KSPPYTHON and PCPYTHON, we saw that > there is only 1 example available. > > We are asking ourself: is it still supported and can we start > developping ou PCs and KSPs on top of it? > > Or is there a "new" replacement for these? > > > I think the reason that there are so few examples is that many > examples exist in other packages, such as Firedrake, and they are the > main consumers. KSPPYTHON is a way to write KSPSHELL?using Python > rather than C, and we mostly write C. > > I will say that recently we fixed everything so that PETSc errors and > Python exceptions are passed correctly up the stack, and debugging > these things should be easy. I have been debugging the > PyVista?visualization, and I can change the Python in one window and > run in the other. It is easy. > > ? Thanks, > > ? ? ?Matt > > Thanks, > > Eric > > -- > Eric Chamberland, ing., M. Ing > Professionnel de recherche > GIREF/Universit? Laval > (418) 656-2131 poste 41 22 42 > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eq26UN2CarzUz7T9WGteIHDXqp5w7VVjasgK6Vd1QKhEZ2DgW9eB1ibuO_ba7ZyquQVtToW81IARavsFIj_RRgdYXowj80Dz3tMpJzHH$ > -- Eric Chamberland, ing., M. Ing Professionnel de recherche GIREF/Universit? Laval (418) 656-2131 poste 41 22 42 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Fri Mar 14 12:04:18 2025 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 14 Mar 2025 13:04:18 -0400 Subject: [petsc-users] DOF projection after dmforest adaption In-Reply-To: References: Message-ID: On Fri, Mar 14, 2025 at 12:15?PM Preda Silvia via petsc-users < petsc-users at mcs.anl.gov> wrote: > Hi, > > > > We are having a hard time understanding how the degrees of freedom are > projected after a dmforest adaption. Having used before the P4EST library > directly, we recall that there, a index mapping from quadrants present in > the grid before and after adaption (1 to 1, 1 to many, many to 1, for > unaltered, refined, coarsened quadrants, respectively) was available. Would > it be possible to access the same information for a dmforest? > > > > Thank you for all the suggestion! > I do not understand exactly what you want yet. A projection of the dofs would necessarily depend on the function space you are using to represent the field. So you might instead be asking, can I get the refinement pattern like the parent of a given cell. You can get this https://urldefense.us/v3/__https://petsc.org/main/manualpages/DMPlex/DMPlexGetTreeParent/__;!!G_uCfscf7eWS!aAk0LSbDAPHyCjCzQ3Fa3FjQxqV-5Cx8PkJvYW2BWAfdWeBJcXe9U9MriWyHncDo0mnS_NKfIJJCq7vFlz-x$ There are few users (except us), so we would be happy to listen to interface suggestions. I will also note that Toby is the expert, and I am an amateur. Thanks, Matt > Silvia > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!aAk0LSbDAPHyCjCzQ3Fa3FjQxqV-5Cx8PkJvYW2BWAfdWeBJcXe9U9MriWyHncDo0mnS_NKfIJJCq_i0Ymri$ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Fri Mar 14 12:09:30 2025 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 14 Mar 2025 13:09:30 -0400 Subject: [petsc-users] PetscPythonInitialize, KSPPYTHON, PCPYTHON, etc In-Reply-To: <2fe077b8-09a5-4133-869c-3cbb236d1b57@giref.ulaval.ca> References: <2ade136e-d6de-438b-ac0d-2e5e16df9a0c@giref.ulaval.ca> <2fe077b8-09a5-4133-869c-3cbb236d1b57@giref.ulaval.ca> Message-ID: On Fri, Mar 14, 2025 at 12:53?PM Eric Chamberland < Eric.Chamberland at giref.ulaval.ca> wrote: > Thanks Matthew! > > I hope you're doing well. > > Hi! I know there is still a Plex bug submitted, but this is the busiest time of the year. I will fix it soon. > I have a question regarding a change I did on our code: I modified the > code to call PetscPythonInitialize(PETSC_NULLPTR, PETSC_NULLPTR) after > SlepcInitialize. > > Our CI ran successfully across 20 different environments (including > Valgrind), but we encountered a segmentation fault on Ubuntu 24.04 when > calling PetscPythonInitialize within our Python module (built using > Pybind11 bindings). > > On Ubuntu, calling PetscPythonInitialize from within our Python 3 module > results in a segfault with the following backtrace: > > This is weird. When I have seen things like this before, it is because someone is not dynamically linking properly, so that static variables are duplicated, and they get out of sync. This is why I always use -python to have PETSc do that initialization automatically, and it guaranteed to be linked to the same libpython that the SNESPYTHON is using. Thanks, Matt > Thread 1 "python3" received signal SIGSEGV, Segmentation fault. 
> 0x00001555339b372a in PyList_New () from > /lib/x86_64-linux-gnu/libpython3.12.so.1.0 > (gdb) bt > #0 0x00001555339b372a in PyList_New () from > /lib/x86_64-linux-gnu/libpython3.12.so.1.0 > #1 0x0000155533ad89d7 in PyImport_Import () from > /lib/x86_64-linux-gnu/libpython3.12.so.1.0 > #2 0x0000155533ad8c80 in PyImport_ImportModule () from > /lib/x86_64-linux-gnu/libpython3.12.so.1.0 > #3 0x000015553f437c0c in PetscPythonInitialize (pyexe=0x0, pylib=0x0) at > /tmp/build_openmpi-4.1.6-opt/petsc-3.21.6-debug/src/sys/python/pythonsys.c > :242 > > I am puzzled by this segfault... > > Some relevant details: > > grep -r PETSC_PYTHON_EXE /opt/petsc-3.21.6_debug_openmpi-4.1.6/include/ > /opt/petsc-3.21.6_debug_openmpi-4.1.6/include/petscconf.h:#define > PETSC_PYTHON_EXE "/usr/bin/python3" > > ls -la /lib/x86_64-linux-gnu/libpython3.12* > lrwxrwxrwx 1 root root 58 Feb 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12.a -> > ../python3.12/config-3.12-x86_64-linux-gnu/libpython3.12.a > lrwxrwxrwx 1 root root 60 Feb 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12d.a -> > ../python3.12/config-3.12d-x86_64-linux-gnu/libpython3.12d.a > lrwxrwxrwx 1 root root 19 Feb 4 09:48 /lib/x86_64-linux-gnu/ > libpython3.12d.so -> libpython3.12d.so.1 > lrwxrwxrwx 1 root root 21 Feb 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12d.so.1 -> libpython3.12d.so.1.0 > -rw-r--r-- 1 root root 34018416 Feb 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12d.so.1.0 > lrwxrwxrwx 1 root root 18 Feb 4 09:48 /lib/x86_64-linux-gnu/ > libpython3.12.so -> libpython3.12.so.1 > lrwxrwxrwx 1 root root 20 Feb 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12.so.1 -> libpython3.12.so.1.0 > -rw-r--r-- 1 root root 9055112 Feb 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12.so.1.0 > > (I installed the debug ("d") versions to get debug symbols, but the > segfault occurs even without them.) 
> > Interestingly, on the same Ubuntu setup, calling PetscPythonInitialize > within our *pure C++* code works fine, and the "pc_type python" example > runs successfully. > > This issue only occurs when calling PetscPythonInitialize from our *Python > module*. > > Any ideas? > > Could this be something that has already been fixed in PETSc 3.22.x? > > Thanks, > > Eric > > > On 2025-03-12 16:48, Matthew Knepley wrote: > > On Wed, Mar 12, 2025 at 4:34?PM Eric Chamberland via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> Hi, >> >> just a naive question: looking at KSPPYTHON and PCPYTHON, we saw that >> there is only 1 example available. >> >> We are asking ourself: is it still supported and can we start >> developping ou PCs and KSPs on top of it? >> >> Or is there a "new" replacement for these? >> > > I think the reason that there are so few examples is that many examples > exist in other packages, such as Firedrake, and they are the main > consumers. KSPPYTHON is a way to write KSPSHELL using Python rather than C, > and we mostly write C. > > I will say that recently we fixed everything so that PETSc errors and > Python exceptions are passed correctly up the stack, and debugging these > things should be easy. I have been debugging the PyVista visualization, and > I can change the Python in one window and run in the other. It is easy. > > Thanks, > > Matt > > >> Thanks, >> >> Eric >> >> -- >> Eric Chamberland, ing., M. Ing >> Professionnel de recherche >> GIREF/Universit? Laval >> (418) 656-2131 poste 41 22 42 >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dXr7aqf6xruDXqpMNsJ0FcuY0Tm7N9gm65siheq7oGM02T9B8cFX_qKELOCxmaaUJHR3lkefANYmVinLW3QQ$ > > > -- > Eric Chamberland, ing., M. 
Ing > Professionnel de recherche > GIREF/Universit? Laval > (418) 656-2131 poste 41 22 42 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dXr7aqf6xruDXqpMNsJ0FcuY0Tm7N9gm65siheq7oGM02T9B8cFX_qKELOCxmaaUJHR3lkefANYmVinLW3QQ$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvia.preda at uninsubria.it Fri Mar 14 13:02:38 2025 From: silvia.preda at uninsubria.it (Preda Silvia) Date: Fri, 14 Mar 2025 18:02:38 +0000 Subject: [petsc-users] DOF projection after dmforest adaption In-Reply-To: References: Message-ID: On Fri, Mar 14, 2025 at 12:15?PM Preda Silvia via petsc-users > wrote: Hi, We are having a hard time understanding how the degrees of freedom are projected after a dmforest adaption. Having used before the P4EST library directly, we recall that there, a index mapping from quadrants present in the grid before and after adaption (1 to 1, 1 to many, many to 1, for unaltered, refined, coarsened quadrants, respectively) was available. Would it be possible to access the same information for a dmforest? Thank you for all the suggestion! I do not understand exactly what you want yet. Let?s make an explicit example. I have an old mesh and a new adapted one. Between the two meshes, the following correspondences hold: * The quadrant indexed as 1 in the old mesh has been refined and has originated the quadrants indexed as 3, 4, 5, 6 in the new mesh * The quadrants 10, 11, 12, 13 (old indexing) have been coarsened and correspond to the quadrant 7 in the new mesh indexing * The quadrant 15 (old indexing) has been left unchanged, but in the new mesh is indexed as 17. We would like to have access to this mapping between the sets of old and new indices. 
A projection of the dofs would necessarily depend on the function space you are using to represent the field. So you might instead be asking, can I get the refinement pattern like the parent of a given cell. You can get this https://urldefense.us/v3/__https://petsc.org/main/manualpages/DMPlex/DMPlexGetTreeParent/__;!!G_uCfscf7eWS!eEQKDGx261iREJkDLwioeVGv-9jkXQ_Vgn15eYSfh30mRtmoRCucEcrPvKIaxQr6T7hZqGq6Ifpj6ZNrOkOmjsFquTJsi3eFPg$ It is not clear to me if the function you suggested is apt to our aim. Thanks, Silvia There are few users (except us), so we would be happy to listen to interface suggestions. I will also note that Toby is the expert, and I am an amateur. Thanks, Matt Silvia -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eEQKDGx261iREJkDLwioeVGv-9jkXQ_Vgn15eYSfh30mRtmoRCucEcrPvKIaxQr6T7hZqGq6Ifpj6ZNrOkOmjsFquTJetwsp_A$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Sun Mar 16 09:31:53 2025 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Sun, 16 Mar 2025 10:31:53 -0400 Subject: [petsc-users] PetscPythonInitialize, KSPPYTHON, PCPYTHON, etc In-Reply-To: References: <2ade136e-d6de-438b-ac0d-2e5e16df9a0c@giref.ulaval.ca> <2fe077b8-09a5-4133-869c-3cbb236d1b57@giref.ulaval.ca> Message-ID: <231c7d99-f8aa-4567-aec2-99ae16d3266d@giref.ulaval.ca> Hi Matthew, On 2025-03-14 13:09, Matthew Knepley wrote: > On Fri, Mar 14, 2025 at 12:53?PM Eric Chamberland > wrote: > > Thanks Matthew! > > I hope you're doing well. > > > Hi! I know there is still a Plex bug submitted, but this is the > busiest time of the year. I will fix it soon. No rush, we are working on 2D examples for now, take your time! 
> > I have a question regarding a change I did on our code: I modified > the code to call |PetscPythonInitialize(PETSC_NULLPTR, > PETSC_NULLPTR)| after |SlepcInitialize|. > > Our CI ran successfully across 20 different environments > (including Valgrind), but we encountered a segmentation fault on > Ubuntu 24.04 when calling |PetscPythonInitialize| within our > Python module (built using Pybind11 bindings). > > On Ubuntu, calling |PetscPythonInitialize| from within our Python > 3 module results in a segfault with the following backtrace: > > This is weird. When I have seen things like this before, it is because > someone is not dynamically linking properly, so that > static variables are duplicated, and they get out of sync. > > This is why?I always use? -python to have PETSc do that initialization > automatically, and it guaranteed to be linked to the same > libpython that the SNESPYTHON?is using. Ok, my colleague Ren? and I have built a reproducer, see https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/issues/1717__;!!G_uCfscf7eWS!Y9u6eJnvOw3IsoL2l2NE4TRAuerg63WS_QF4KnevrJSvCcetF4dyqWweHkPeGbZJ27D_p1cH1cvdr5RhjeMhZ82J5axUqPoNQMXT6pFI$ . Thanks, Eric > > ? Thanks, > > ? ? ?Matt > > Thread 1 "python3" received signal SIGSEGV, Segmentation fault. > 0x00001555339b372ain PyList_New() from > /lib/x86_64-linux-gnu/libpython3.12.so.1.0 > (gdb) bt > #0 0x00001555339b372ain PyList_New() from > /lib/x86_64-linux-gnu/libpython3.12.so.1.0 > #1 0x0000155533ad89d7in PyImport_Import() from > /lib/x86_64-linux-gnu/libpython3.12.so.1.0 > #2 0x0000155533ad8c80in PyImport_ImportModule() from > /lib/x86_64-linux-gnu/libpython3.12.so.1.0 > #3 0x000015553f437c0cin PetscPythonInitialize(pyexe=0x0, > pylib=0x0) at > /tmp/build_openmpi-4.1.6-opt/petsc-3.21.6-debug/src/sys/python/pythonsys.c:242 > > I am puzzled by this segfault... 
> > Some relevant details: > > grep -r PETSC_PYTHON_EXE > /opt/petsc-3.21.6_debug_openmpi-4.1.6/include/ > /opt/petsc-3.21.6_debug_openmpi-4.1.6/include/petscconf.h:#define > PETSC_PYTHON_EXE"/usr/bin/python3" > > ls -la /lib/x86_64-linux-gnu/libpython3.12* > lrwxrwxrwx 1 root root?????? 58 Feb? 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12.a -> > ../python3.12/config-3.12-x86_64-linux-gnu/libpython3.12.a > lrwxrwxrwx 1 root root?????? 60 Feb? 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12d.a -> > ../python3.12/config-3.12d-x86_64-linux-gnu/libpython3.12d.a > lrwxrwxrwx 1 root root?????? 19 Feb? 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12d.so > -> libpython3.12d.so.1 > lrwxrwxrwx 1 root root?????? 21 Feb? 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12d.so.1 -> libpython3.12d.so.1.0 > -rw-r--r-- 1 root root 34018416 Feb? 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12d.so.1.0 > lrwxrwxrwx 1 root root?????? 18 Feb? 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12.so > -> libpython3.12.so.1 > lrwxrwxrwx 1 root root?????? 20 Feb? 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12.so.1 -> libpython3.12.so.1.0 > -rw-r--r-- 1 root root? 9055112 Feb? 4 09:48 > /lib/x86_64-linux-gnu/libpython3.12.so.1.0 > > (I installed the debug ("d") versions to get debug symbols, but > the segfault occurs even without them.) > > Interestingly, on the same Ubuntu setup, calling > |PetscPythonInitialize| within our /pure C++/ code works fine, and > the |"pc_type python"| example runs successfully. > > This issue only occurs when calling |PetscPythonInitialize| from > our /Python module/. > > Any ideas? > > Could this be something that has already been fixed in PETSc 3.22.x? > > Thanks, > > Eric > > > On 2025-03-12 16:48, Matthew Knepley wrote: >> On Wed, Mar 12, 2025 at 4:34?PM Eric Chamberland via petsc-users >> wrote: >> >> Hi, >> >> just a naive question: looking at KSPPYTHON and PCPYTHON, we >> saw that >> there is only 1 example available. 
>> >> We are asking ourself: is it still supported and can we start >> developping ou PCs and KSPs on top of it? >> >> Or is there a "new" replacement for these? >> >> >> I think the reason that there are so few examples is that many >> examples exist in other packages, such as Firedrake, and they are >> the main consumers. KSPPYTHON is a way to write KSPSHELL?using >> Python rather than C, and we mostly write C. >> >> I will say that recently we fixed everything so that PETSc errors >> and Python exceptions are passed correctly up the stack, and >> debugging these things should be easy. I have been debugging the >> PyVista?visualization, and I can change the Python in one window >> and run in the other. It is easy. >> >> ? Thanks, >> >> ? ? ?Matt >> >> Thanks, >> >> Eric >> >> -- >> Eric Chamberland, ing., M. Ing >> Professionnel de recherche >> GIREF/Universit? Laval >> (418) 656-2131 poste 41 22 42 >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener >> >> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Y9u6eJnvOw3IsoL2l2NE4TRAuerg63WS_QF4KnevrJSvCcetF4dyqWweHkPeGbZJ27D_p1cH1cvdr5RhjeMhZ82J5axUqPoNQF9A_dJh$ >> > > -- > Eric Chamberland, ing., M. Ing > Professionnel de recherche > GIREF/Universit? Laval > (418) 656-2131 poste 41 22 42 > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Y9u6eJnvOw3IsoL2l2NE4TRAuerg63WS_QF4KnevrJSvCcetF4dyqWweHkPeGbZJ27D_p1cH1cvdr5RhjeMhZ82J5axUqPoNQF9A_dJh$ > -- Eric Chamberland, ing., M. Ing Professionnel de recherche GIREF/Universit? 
Laval (418) 656-2131 poste 41 22 42 -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.diehl at kuleuven.be Tue Mar 18 08:16:25 2025 From: martin.diehl at kuleuven.be (Martin Diehl) Date: Tue, 18 Mar 2025 13:16:25 +0000 Subject: [petsc-users] Fortran interfaces: Google Summer of Code 2025? In-Reply-To: <37525E85-93EC-4D46-B266-E26A8A6EC206@petsc.dev> References: <51e996a3b06a4f3f7146fed18b928c3f86762b77.camel@kuleuven.be> <857D8A26-115E-4AB8-91A8-2F8FC71C17D1@petsc.dev> <9858e051220e1a9722a21ee7bf82c88a7aa16add.camel@kuleuven.be> <37525E85-93EC-4D46-B266-E26A8A6EC206@petsc.dev> Message-ID: <9e6e412a366579af5e7a32c69998e673ff24ec7c.camel@kuleuven.be> Hello Barry, nice. I will work with Tapashree on the proposal and include these aspects if space permits. We will also announce it on the Fortran Discourse. I recently also connected to Ivan Pribec who gave the following talk ad FOSDEM: https://fosdem.org/2025/schedule/event/fosdem-2025-6509-easier-api-interoperability-writing-a-bindings-generator-to-c-c-with-coccinelle/ best regards, Martin On Fri, 2025-03-14 at 12:33 -0400, Barry Smith wrote: > ? ? > Martin, > > ? ? Thanks for the email, additional improvements are definitely > possible and desirable. > > ? ? To me the most powerful would be the automatic generation of > Fortran stubs for suboutines that return arrays. There are? > two "styles" of these in the PETSc C API,? > > * ?in the first case the length of the array is passed as the > argument before the pointer to the array > > PETSC_EXTERN void dmplexgettransitiveclosure_(DM *dm, PetscInt *p, > PetscBool *useCone, PetscInt *N, F90Array1d *ptr, int *ierr > PETSC_F90_2PTR_PROTO(ptrd)) > { > ? PetscInt *v = NULL; > ? PetscInt ?n; > > ? CHKFORTRANNULL(N); > ? *ierr = DMPlexGetTransitiveClosure(*dm, *p, *useCone, &n, &v); > ? if (*ierr) return; > ? *ierr = F90Array1dCreate((void *)v, MPIU_INT, 1, n * 2, ptr > PETSC_F90_2PTR_PARAM(ptrd)); > ? 
if (N) *N = n; > } > > If we "mark" the source code with, for example, > > PetscErrorCode DMPlexGetTransitiveClosure(DM dm, PetscInt p, > PetscBool useCone, PeAL PetscInt *numPoints, PetscInt *points[]) > > the getAPI() code will know the length of the array is the argument > numPoints and thus all the information for generating the Fortran > stub is available and it can be generated automatically. > > * in the second case the length of the array is not directly > available > > PETSC_EXTERN void vecgetarray_(Vec *x, F90Array1d *ptr, int *ierr > PETSC_F90_2PTR_PROTO(ptrd)) > { > ? PetscScalar *fa; > ? PetscInt ? ? len; > ? *ierr = VecGetArray(*x, &fa); > ? if (*ierr) return; > ? *ierr = VecGetLocalSize(*x, &len); > ? if (*ierr) return; > ? *ierr = F90Array1dCreate(fa, MPIU_SCALAR, 1, len, ptr > PETSC_F90_2PTR_PARAM(ptrd)); > } > > The Fortran stub cannot be generated automatically without some hint. > Is there some way we can mark in the PETSc source a > hint that indicates how to access the needed length. In this case by > calling VecGetLocalSize(*x,&len); for example use > > PetscErrorCode VecGetArray(Vec x, PeALE(n,VecGetLocalSize(x,n)) > PetscScalar *a[])? > > From this the getAPI() parser will have all the information needed to > generate the Fortran stub. I haven't picked a syntax for providing > this information; above is just a suggestion that is trivial to parse > but probably also has downsides. > > So I concur with your suggestion to submit to fortran-lang.org > > Barry > > > > > > > On Mar 14, 2025, at 10:41?AM, Martin Diehl > > wrote: > > > > Barry, > > > > for other languages, we can't rely on fortran-lang.org but NumFOCUS > > seems to be an option. > > After trying out the branch for 7517, I have some ideas for further > > Fortran work as outlined in the attached document (same for md and > > pdf). My suggestion would be to apply via fortran-lang.org with > > Tapashree (in CC). 
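Barry's `PeAL`/`PeALE` markers above are only a proposed syntax, but they carry everything a binding generator needs: the argument holding the array length and, when the length is not an argument, the call that produces it. A minimal sketch of how a scraper might pull that information out of a prototype (pure illustration; `PeALE` and this grammar are assumptions from the thread, not existing PETSc tooling):

```python
import re

# Naive grammar for the hypothetical PeALE(lenvar, lencall) marker:
# it expects exactly one function call as the second argument, with the
# annotated array declaration following immediately after the marker.
PEALE = re.compile(
    r"PeALE\((?P<var>\w+)\s*,\s*(?P<call>[^)]*\([^)]*\))\)\s*(?P<decl>[^,)]+)"
)

def find_array_length_hints(prototype):
    """Return (length_variable, length_call, array_declaration) for every
    PeALE-annotated argument found in a C prototype string."""
    return [
        (m.group("var"), m.group("call"), m.group("decl").strip())
        for m in PEALE.finditer(prototype)
    ]

# Barry's VecGetArray example from the message above.
proto = ("PetscErrorCode VecGetArray(Vec x, "
         "PeALE(n,VecGetLocalSize(x,n)) PetscScalar *a[])")
hints = find_array_length_hints(proto)
```

With those three pieces in hand, a generator could emit the `VecGetLocalSize(*x, &len)` call and the `F90Array1dCreate(...)` wrapper shown above mechanically.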
If that results in better Python code, it should > > also be useful for other languages. > > > > Martin > > > > > > On Thu, 2025-01-30 at 09:54 -0500, Barry Smith wrote: > > > > > > Martin, > > > > > > I have restarted in the last week on 7517 and plan for it to > > > be in > > > the March release. > > > > > > As part of the work I have developed new Python code that > > > scrapes > > > the code for signatures for all the functions, enums, objects etc > > > and > > > from this constructs the Fortran binding. The same scraping could > > > be > > > used for other languages so I am hoping automatic bindings can be > > > done for other languages, for example Rust, even Python. So > > > perhaps > > > we should consider a summer of code project for other such > > > languages? > > > > > > Barry > > > > > > > > > > On Jan 30, 2025, at 6:13 AM, Martin Diehl > > > > wrote: > > > > > > > > Dear PETSc team, dear Barry, > > > > > > > > applications for the Google Summer of Code will start again and > > > > I > > > > was > > > > wondering if help for the re-factoring of the Fortran > > > > interfaces is > > > > still needed. Whether this makes sense depends on the progress > > > > of > > > > https://gitlab.com/petsc/petsc/-/merge_requests/7517 > > > > > > > > In contrast to the failed attempt last year, I have a student > > > > interested in working on this topic. > > > > > > > > Martin > > > > -- > > > > KU Leuven > > > > Department of Computer Science > > > > Department of Materials Engineering > > > > Celestijnenlaan 200a > > > > 3001 Leuven, Belgium > > > > > > > > > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 659 bytes Desc: This is a digitally signed message part URL: From joal at sdu.dk Thu Mar 20 04:56:40 2025 From: joal at sdu.dk (Joe Alexandersen) Date: Thu, 20 Mar 2025 09:56:40 +0000 Subject: [petsc-users] Semi-coarsening for GMG using DMDA?
Message-ID: Dear PETSc developers, We are working with a code that uses regular meshes (DMDA) and geometric multigrid. We would like to go from uniform coarsening/refinement to semi-coarsening/refinement, due to anisotropy in our underlying equations. We have tried to figure out if we can do this using built-in functions of PETSc, but it is unclear to us whether we can get it done relatively easily. It seems that we can go from the coarsest grid and refine differently in each direction using DMDASetRefinementFactor and then use DMRefine to define the finer levels. However, from the doc page for DMCreateInterpolation, it states that it only works for "uniform refinement" which to me seems to indicate it will not work with different refinement in each direction. But on the other hand, it states that it should work if using DMRefine, which I assume used the information from DMDASetRefinementFactor upon creation? So our questions are: is there a feasible and relatively simple way to do semi-coarsening/refinement of DMDAs for geometric multigrid hierarchies? Would the above work? Thanks in advance! Sincerely, Joe Alexandersen University of Southern Denmark -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Mar 20 10:34:17 2025 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 20 Mar 2025 11:34:17 -0400 Subject: [petsc-users] Semi-coarsening for GMG using DMDA? In-Reply-To: References: Message-ID: <3376A9C0-8E4A-4DE0-A510-EF1645F22A26@petsc.dev> In theory you can do as you propose. In the context below "uniform refinement" only means that the coordinates of the DMDA are ignored in each refinement. The interpolation is fine with different refinements in the different coordinate directions. Barry > On Mar 20, 2025, at 5:56 AM, Joe Alexandersen via petsc-users wrote: > > Dear PETSc developers, > > We are working with a code that uses regular meshes (DMDA) and geometric multigrid.
We would like to go from uniform coarsening/refinement to semi-coarsening/refinement, due to anisotropy in our underlying equations. We have tried to figure out if we can do this using built-in functions of PETSc, but it is unclear to us whether we can get it done relatively easily. > > It seems that we can go from the coarsest grid and refine differently in each direction using DMDASetRefinementFactor and then use DMRefine to define the finer levels. However, from the doc page for DMCreateInterpolation, it states that it only works for "uniform refinement" which to me seems to indicate it will not work with different refinement in each direction. But on the other hand, it states that it should work if using DMRefine, which I assume used the information from DMDASetRefinementFactor upon creation? > > So our questions are: is there are feasible and relatively simple way to do semi-coarsening/refinement of DMDAs for geometric multigrid hierarchies? Would the above work? > > Thanks in advance! > > Sincerely, > Joe Alexandersen > University of Southern Denmark -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Thu Mar 20 11:00:13 2025 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 20 Mar 2025 12:00:13 -0400 Subject: [petsc-users] Semi-coarsening for GMG using DMDA? In-Reply-To: <3376A9C0-8E4A-4DE0-A510-EF1645F22A26@petsc.dev> References: <3376A9C0-8E4A-4DE0-A510-EF1645F22A26@petsc.dev> Message-ID: We have worked on semi coarsening in DMPlex, but it is not finished and we are not working on it now. I'm not sure about how easy it would be in DMDA, but Barry is suggesting that it is doable. We need to wait for Matt and he is on travel so his response may be delayed. Mark On Thu, Mar 20, 2025 at 11:34?AM Barry Smith wrote: > > In theory you can do as you propose. In the context below uniform > refinement" only means that the coordinates of the DMDA are ignored so each > refinement. 
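To make the per-direction refinement concrete, here is a small sketch of the grid sizes such a hierarchy would produce. It assumes the usual non-periodic DMDA convention that refinement by a factor r maps M points to r*(M-1)+1; verify this against the DMDASetRefinementFactor/DMRefine man pages before relying on it.

```python
def refine_sizes(coarse, factors, nlevels):
    """Per-level grid sizes for a hierarchy built by repeated DMRefine
    with fixed per-direction refinement factors, assuming the
    non-periodic convention M_fine = r * (M_coarse - 1) + 1."""
    levels = [tuple(coarse)]
    for _ in range(nlevels - 1):
        levels.append(tuple(r * (m - 1) + 1
                            for m, r in zip(levels[-1], factors)))
    return levels

# Semi-coarsening: refine x and y, leave z alone (factor 1).
hierarchy = refine_sizes(coarse=(5, 5, 9), factors=(2, 2, 1), nlevels=3)
```

The z extent stays at 9 points on every level while x and y are refined, which is exactly the semi-coarsened hierarchy the question describes.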
The interpolation is fine woth different refinements in the > different coordinate directions. > > Barry > > > On Mar 20, 2025, at 5:56?AM, Joe Alexandersen via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Dear PETSc developers, > > We are working with a code that uses regular meshes (DMDA) and geometric > multigrid. We would like to go from uniform coarsening/refinement to > semi-coarsening/refinement, due to anisotropy in our underlying equations. > We have tried to figure out if we can do this using built-in functions of > PETSc, but it is unclear to us whether we can get it done relatively easily. > > It seems that we can go from the coarsest grid and refine differently in > each direction using DMDASetRefinementFactor and then use DMRefine to > define the finer levels. However, from the doc page for > DMCreateInterpolation, it states that it only works for "uniform > refinement" which to me seems to indicate it will not work with different > refinement in each direction. But on the other hand, it states that it > should work if using DMRefine, which I assume used the information from > DMDASetRefinementFactor upon creation? > > So our questions are: is there are feasible and relatively simple way to > do semi-coarsening/refinement of DMDAs for geometric multigrid hierarchies? > Would the above work? > > Thanks in advance! > > Sincerely, > Joe Alexandersen > University of Southern Denmark > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joal at sdu.dk Thu Mar 20 11:09:31 2025 From: joal at sdu.dk (Joe Alexandersen) Date: Thu, 20 Mar 2025 16:09:31 +0000 Subject: [petsc-users] Semi-coarsening for GMG using DMDA? In-Reply-To: References: <3376A9C0-8E4A-4DE0-A510-EF1645F22A26@petsc.dev> Message-ID: Great, thanks for the input so far. We will wait for Matt's response soonish. 
Sincerely, Joe Alexandersen Associate Professor DFF Sapere Aude Research Leader The Faculty of Engineering Institute of Mechanical and Electrical Engineering SDU Mechanical Engineering T +45 65 50 74 65 M +45 93 50 72 44 joal at sdu.dk https://urldefense.us/v3/__http://www.sdu.dk/ansat/joal__;!!G_uCfscf7eWS!aMq_eBTTAvLdy-GtpWdAVq8zqDaObi_eL0EKi7yFbh6VUSQCx-pDonMSd9Rejkpr3DBMyE15NaaOKv14zQ$ University of Southern Denmark Campusvej 55 DK-5230 Odense M https://urldefense.us/v3/__http://www.sdu.dk__;!!G_uCfscf7eWS!aMq_eBTTAvLdy-GtpWdAVq8zqDaObi_eL0EKi7yFbh6VUSQCx-pDonMSd9Rejkpr3DBMyE15NaYjcKBeNw$ Sent from Outlook for Android ________________________________ From: Mark Adams Sent: Thursday, March 20, 2025 5:00:31 pm To: Barry Smith Cc: Joe Alexandersen ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Semi-coarsening for GMG using DMDA? You don't often get email from mfadams at lbl.gov. Learn why this is important We have worked on semi coarsening in DMPlex, but it is not finished and we are not working on it now. I'm not sure about how easy it would be in DMDA, but Barry is suggesting that it is doable. We need to wait for Matt and he is on travel so his response may be delayed. Mark On Thu, Mar 20, 2025 at 11:34?AM Barry Smith > wrote: In theory you can do as you propose. In the context below uniform refinement" only means that the coordinates of the DMDA are ignored so each refinement. The interpolation is fine woth different refinements in the different coordinate directions. Barry On Mar 20, 2025, at 5:56?AM, Joe Alexandersen via petsc-users > wrote: Dear PETSc developers, We are working with a code that uses regular meshes (DMDA) and geometric multigrid. We would like to go from uniform coarsening/refinement to semi-coarsening/refinement, due to anisotropy in our underlying equations. We have tried to figure out if we can do this using built-in functions of PETSc, but it is unclear to us whether we can get it done relatively easily. 
It seems that we can go from the coarsest grid and refine differently in each direction using DMDASetRefinementFactor and then use DMRefine to define the finer levels. However, from the doc page for DMCreateInterpolation, it states that it only works for "uniform refinement" which to me seems to indicate it will not work with different refinement in each direction. But on the other hand, it states that it should work if using DMRefine, which I assume used the information from DMDASetRefinementFactor upon creation? So our questions are: is there are feasible and relatively simple way to do semi-coarsening/refinement of DMDAs for geometric multigrid hierarchies? Would the above work? Thanks in advance! Sincerely, Joe Alexandersen University of Southern Denmark -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Thu Mar 20 23:37:34 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Thu, 20 Mar 2025 21:37:34 -0700 Subject: [petsc-users] upgrading to 3.22.4 Message-ID: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> I am trying to upgrade my code to PETSc 3.22.4 (the code was last updated to 3.19.4 or perhaps 3.18.1, I've lost track). I've been using this code with PETSc for over 20 years. To get my code to compile and link during this update, I only need to make two changes; one was to use PetscViewerPushFormat instead of PetscViewerSetFormat and the other was to use PETSC_NULL_INTEGER_ARRAY in a spot or two. When I run the code however, I am getting an error very early on during a call to MatCreate near the beginning of the code.? 
The screen output says: [3]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object [0]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object [1]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object [2]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object I have a 4 processor run going. I am running with -on_error_attach_debugger but the debugger is giving me cryptic (at least to me) output (the same for all 4 processes modulo the PID). Stack traces seem to be unavailable :( lldb -p 71963 (lldb) process attach --pid 71963 Process 71963 stopped * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10 libsystem_kernel.dylib`__semwait_signal: -> 0x7fff69d92746 <+10>: jae 0x7fff69d92750 ; <+20> 0x7fff69d92748 <+12>: movq %rax, %rdi 0x7fff69d9274b <+15>: jmp 0x7fff69d9121d ; cerror 0x7fff69d92750 <+20>: retq Target 0: (feap) stopped. Executable module set to "/Users/sg/Feap/ver87/parfeap/feap". Architecture set to: x86_64h-apple-macosx-. Does anyone have any hints as to what may be going on? Note the program starts normally and I can do stuff with the interactive interface for the code -- even plotting the mesh etc. so I believe the input data has been read in correctly. The crash only occurs when I initiate the formation of the matrix. I am attaching the /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c file in case that offers some insight. Note, I have been -sanjay -- -------------- next part -------------- An HTML attachment was scrubbed...
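One plausible reading of the "Cannot create PETSC_NULL_XXX object" message: the guard at the top of the generated `matcreate_()` stub (see the attached gcreatef.c) refuses to create into a Fortran handle whose current value is one of the `PETSC_NULL_XXX` constants. A loose Python emulation of that guard, just to illustrate the failure mode (the sentinel values and names below are invented for illustration, not PETSc's real internals):

```python
# Illustrative sentinels only; PETSc's actual encodings differ.
PETSC_NULL_SENTINEL = -1   # stands in for a PETSC_NULL_XXX constant
REAL_ADDRESS = 0xDEADBEEF  # stands in for a valid C object address

class PetscError(Exception):
    pass

def matcreate(handle):
    """handle is a 1-element list modelling the Fortran Mat variable,
    which stores an address (or a sentinel), not a typed C pointer."""
    # Emulates the stub's create-guard: creating into a handle that was
    # set to a PETSC_NULL_XXX constant is an error.
    if handle[0] == PETSC_NULL_SENTINEL:
        raise PetscError("Cannot create PETSC_NULL_XXX object")
    handle[0] = REAL_ADDRESS  # pretend MatCreate() stored an address

mat = [0]                     # an ordinary zeroed handle is accepted
matcreate(mat)

bad = [PETSC_NULL_SENTINEL]   # e.g. a Mat handle set to PETSC_NULL_MAT
try:
    matcreate(bad)
    raised = False
except PetscError:
    raised = True
```

If this reading is right, the failing code is passing MatCreate a Mat variable whose bits the new interface interprets as a `PETSC_NULL_XXX` constant.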
URL: -------------- next part -------------- #include "petscsys.h" #include "petscfix.h" #include "petsc/private/fortranimpl.h" /* gcreate.c */ /* Fortran interface file */ /* * This file was generated automatically by bfort from the C source * file. */ #ifdef PETSC_USE_POINTER_CONVERSION #if defined(__cplusplus) extern "C" { #endif extern void *PetscToPointer(void*); extern int PetscFromPointer(void *); extern void PetscRmPointer(void*); #if defined(__cplusplus) } #endif #else #define PetscToPointer(a) (a ? *(PetscFortranAddr *)(a) : 0) #define PetscFromPointer(a) (PetscFortranAddr)(a) #define PetscRmPointer(a) #endif #include "petscmat.h" #ifdef PETSC_HAVE_FORTRAN_CAPS #define matcreate_ MATCREATE #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE) #define matcreate_ matcreate #endif #ifdef PETSC_HAVE_FORTRAN_CAPS #define matcreatefromoptions_ MATCREATEFROMOPTIONS #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE) #define matcreatefromoptions_ matcreatefromoptions #endif #ifdef PETSC_HAVE_FORTRAN_CAPS #define matseterroriffailure_ MATSETERRORIFFAILURE #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE) #define matseterroriffailure_ matseterroriffailure #endif #ifdef PETSC_HAVE_FORTRAN_CAPS #define matsetsizes_ MATSETSIZES #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE) #define matsetsizes_ matsetsizes #endif #ifdef PETSC_HAVE_FORTRAN_CAPS #define matsetfromoptions_ MATSETFROMOPTIONS #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE) #define matsetfromoptions_ matsetfromoptions #endif #ifdef PETSC_HAVE_FORTRAN_CAPS #define matxaijsetpreallocation_ MATXAIJSETPREALLOCATION #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE) #define matxaijsetpreallocation_ matxaijsetpreallocation #endif #ifdef PETSC_HAVE_FORTRAN_CAPS #define matheaderreplace_ MATHEADERREPLACE #elif 
!defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE) #define matheaderreplace_ matheaderreplace #endif #ifdef PETSC_HAVE_FORTRAN_CAPS #define matbindtocpu_ MATBINDTOCPU #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE) #define matbindtocpu_ matbindtocpu #endif #ifdef PETSC_HAVE_FORTRAN_CAPS #define matboundtocpu_ MATBOUNDTOCPU #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE) #define matboundtocpu_ matboundtocpu #endif #ifdef PETSC_HAVE_FORTRAN_CAPS #define matsetvaluescoo_ MATSETVALUESCOO #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE) #define matsetvaluescoo_ matsetvaluescoo #endif #ifdef PETSC_HAVE_FORTRAN_CAPS #define matsetbindingpropagates_ MATSETBINDINGPROPAGATES #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE) #define matsetbindingpropagates_ matsetbindingpropagates #endif #ifdef PETSC_HAVE_FORTRAN_CAPS #define matgetbindingpropagates_ MATGETBINDINGPROPAGATES #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE) #define matgetbindingpropagates_ matgetbindingpropagates #endif /* Provide declarations for malloc/free if needed for strings */ #include /* Definitions of Fortran Wrapper routines */ #if defined(__cplusplus) extern "C" { #endif PETSC_EXTERN void matcreate_(MPI_Fint * comm,Mat *A, int *ierr) { PETSC_FORTRAN_OBJECT_CREATE(A); PetscBool A_null = !*(void**) A ? PETSC_TRUE : PETSC_FALSE; CHKFORTRANNULLOBJECT(A); *ierr = MatCreate( MPI_Comm_f2c(*(comm)),A); // if C routine nullifed the object, we must set to to -2 to indicate null set in Fortran if (! A_null && !*(void**) A) * (void **) A = (void *)-2; } PETSC_EXTERN void matcreatefromoptions_(MPI_Fint * comm, char *prefix,PetscInt *bs,PetscInt *m,PetscInt *n,PetscInt *M,PetscInt *N,Mat *A, int *ierr, PETSC_FORTRAN_CHARLEN_T cl0) { char *_cltmp0 = PETSC_NULLPTR; PetscBool A_null = !*(void**) A ? 
PETSC_TRUE : PETSC_FALSE; CHKFORTRANNULLOBJECT(A); /* insert Fortran-to-C conversion for prefix */ FIXCHAR(prefix,cl0,_cltmp0); *ierr = MatCreateFromOptions( MPI_Comm_f2c(*(comm)),_cltmp0,*bs,*m,*n,*M,*N,A); FREECHAR(prefix,_cltmp0); // if C routine nullifed the object, we must set to to -2 to indicate null set in Fortran if (! A_null && !*(void**) A) * (void **) A = (void *)-2; } PETSC_EXTERN void matseterroriffailure_(Mat mat,PetscBool *flg, int *ierr) { CHKFORTRANNULLOBJECT(mat); *ierr = MatSetErrorIfFailure( (Mat)PetscToPointer((mat) ),*flg); } PETSC_EXTERN void matsetsizes_(Mat A,PetscInt *m,PetscInt *n,PetscInt *M,PetscInt *N, int *ierr) { CHKFORTRANNULLOBJECT(A); *ierr = MatSetSizes( (Mat)PetscToPointer((A) ),*m,*n,*M,*N); } PETSC_EXTERN void matsetfromoptions_(Mat B, int *ierr) { CHKFORTRANNULLOBJECT(B); *ierr = MatSetFromOptions( (Mat)PetscToPointer((B) )); } PETSC_EXTERN void matxaijsetpreallocation_(Mat A,PetscInt *bs, PetscInt dnnz[], PetscInt onnz[], PetscInt dnnzu[], PetscInt onnzu[], int *ierr) { CHKFORTRANNULLOBJECT(A); CHKFORTRANNULLINTEGER(dnnz); CHKFORTRANNULLINTEGER(onnz); CHKFORTRANNULLINTEGER(dnnzu); CHKFORTRANNULLINTEGER(onnzu); *ierr = MatXAIJSetPreallocation( (Mat)PetscToPointer((A) ),*bs,dnnz,onnz,dnnzu,onnzu); } PETSC_EXTERN void matheaderreplace_(Mat A,Mat *C, int *ierr) { CHKFORTRANNULLOBJECT(A); PetscBool C_null = !*(void**) C ? PETSC_TRUE : PETSC_FALSE; CHKFORTRANNULLOBJECT(C); *ierr = MatHeaderReplace( (Mat)PetscToPointer((A) ),C); // if C routine nullifed the object, we must set to to -2 to indicate null set in Fortran if (! 
C_null && !*(void**) C) * (void **) C = (void *)-2; } PETSC_EXTERN void matbindtocpu_(Mat A,PetscBool *flg, int *ierr) { CHKFORTRANNULLOBJECT(A); *ierr = MatBindToCPU( (Mat)PetscToPointer((A) ),*flg); } PETSC_EXTERN void matboundtocpu_(Mat A,PetscBool *flg, int *ierr) { CHKFORTRANNULLOBJECT(A); *ierr = MatBoundToCPU( (Mat)PetscToPointer((A) ),flg); } PETSC_EXTERN void matsetvaluescoo_(Mat A, PetscScalar coo_v[],InsertMode *imode, int *ierr) { CHKFORTRANNULLOBJECT(A); CHKFORTRANNULLSCALAR(coo_v); *ierr = MatSetValuesCOO( (Mat)PetscToPointer((A) ),coo_v,*imode); } PETSC_EXTERN void matsetbindingpropagates_(Mat A,PetscBool *flg, int *ierr) { CHKFORTRANNULLOBJECT(A); *ierr = MatSetBindingPropagates( (Mat)PetscToPointer((A) ),*flg); } PETSC_EXTERN void matgetbindingpropagates_(Mat A,PetscBool *flg, int *ierr) { CHKFORTRANNULLOBJECT(A); *ierr = MatGetBindingPropagates( (Mat)PetscToPointer((A) ),flg); } #if defined(__cplusplus) } #endif From s_g at berkeley.edu Thu Mar 20 23:42:41 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Thu, 20 Mar 2025 21:42:41 -0700 Subject: [petsc-users] upgrading to 3.22.4 In-Reply-To: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> Message-ID: <6c53b4cc-be6f-4650-a5a6-f1f376a1ffd0@berkeley.edu> I take one item back... I was a failure at using the debugger. Here is the backtrace. MatCreate seems to have valid data :/ * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP * frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10 frame #1: 0x00007fff69d15eea libsystem_c.dylib`nanosleep + 196 frame #2: 0x00007fff69d15d52 libsystem_c.dylib`sleep + 41 frame #3: 0x0000000108d47a6c libpetsc.3.22.dylib`PetscSleep(s=10) at psleep.c:48:5 frame #4: 0x00000001089946a1 libpetsc.3.22.dylib`PetscAttachDebugger at adebug.c:458:5
frame #5: 0x000000010d607508 libpetsc.3.22.dylib`PetscAttachDebuggerErrorHandler(comm=0x000000010f7ffe48, line=101, fun="matcreate_", file="/Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c", num=62, p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object", ctx=0x0000000000000000) at adebug.c:522:9 ??? frame #6: 0x000000010d607af0 libpetsc.3.22.dylib`PetscError(comm=0x000000010f7ffe48, line=101, func="matcreate_", file="/Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c", n=62, p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object") at err.c:406:15 ??? frame #7: 0x00000001095f8e9a libpetsc.3.22.dylib`matcreate_(comm=0x000000010d8e6174, A=0x0000000107ea08c8, ierr=0x00007ffee829a348) at gcreatef.c:101:1 ??? frame #8: 0x0000000107995d04 feap`usolve_ at usolve.F:134:72 ??? frame #9: 0x0000000107b08b12 feap`presol_ at presol.f:181:72 ??? frame #10: 0x0000000107a91d18 feap`pmacr1_ at pmacr1.f:554:72 ??? frame #11: 0x0000000107a8c4ed feap`pmacr_ at pmacr.f:614:72 ??? frame #12: 0x0000000107a30eaf feap`pcontr_ at pcontr.f:1375:72 ??? frame #13: 0x0000000107dd4b3e feap`main at feap87.f:173:72 ??? frame #14: 0x00007fff69c4ecc9 libdyld.dylib`start + 1 On 3/20/25 9:37 PM, Sanjay Govindjee wrote: > I am trying to upgrade my code to PETSc 3.22.4 (the code was last > updated to 3.19.4 or perhaps 3.18.1, I've lost track). I've been using > this code with PETSc for over 20 years. > > To get my code to compile and link during this update, I only need to > make two changes; one was to use PetscViewerPushFormat instead of > PetscViewerSetFormat and the other was to use PETSC_NULL_INTEGER_ARRAY > in a spot or two. > > When I run the code however, I am getting an error very early on > during a call to MatCreate near the beginning of the code.? 
The screen > output says: > > [3]PETSC ERROR: matcreate_() at > /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 > Cannot create PETSC_NULL_XXX object > [0]PETSC ERROR: matcreate_() at > /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 > Cannot create PETSC_NULL_XXX object > [1]PETSC ERROR: matcreate_() at > /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 > Cannot create PETSC_NULL_XXX object > [2]PETSC ERROR: matcreate_() at > /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 > Cannot create PETSC_NULL_XXX object > > I have a 4 processor run going.? I am running with > -on_error_attach_debugger but the debugger is giving me cryptic (at > least to me) output (the same for all 4 processes modulo the PID).? > Stack traces seem to be unavailable :( > > lldb? -p 71963 > (lldb) process attach --pid 71963 > Process 71963 stopped > * thread #1, queue = 'com.apple.main-thread', stop reason = signal > SIGSTOP > ??? frame #0: 0x00007fff69d92746 > libsystem_kernel.dylib`__semwait_signal + 10 > libsystem_kernel.dylib`__semwait_signal: > ->? 0x7fff69d92746 <+10>: jae 0x7fff69d92750??????????? ; <+20> > ??? 0x7fff69d92748 <+12>: movq?? %rax, %rdi > ??? 0x7fff69d9274b <+15>: jmp??? 0x7fff69d9121d ; cerror > ??? 0x7fff69d92750 <+20>: retq > Target 0: (feap) stopped. > > Executable module set to "/Users/sg/Feap/ver87/parfeap/feap". > Architecture set to: x86_64h-apple-macosx-. > > Does anyone have any hints as to what may be going on?? Note the > program starts normally and i can do stuff with the interactive > interface for the code -- even plotting the mesh etc. so I believe the > input data has been read in correctly.? The crash only occurs when I > initiate the formation of the matrix. > > I am attaching the > /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c file in > case that offers some insight. 
> > Note, I have been > -sanjay > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Mar 21 09:17:22 2025 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 21 Mar 2025 10:17:22 -0400 Subject: [petsc-users] upgrading to 3.22.4 In-Reply-To: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> Message-ID: <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> I have just pushed a major update to the Fortran interface to the main PETSc git branch. Could you please try to work with main (to become release in a couple of weeks) with your Fortran code as we debug the problem? This will save you a lot of work and hopefully make the debugging more straightforward. You can send the same output with the debugger if it crashes in the main branch and I can try to track down what is going wrong. Barry > On Mar 21, 2025, at 12:37?AM, Sanjay Govindjee via petsc-users wrote: > > I am trying to upgrade my code to PETSc 3.22.4 (the code was last updated to 3.19.4 or perhaps 3.18.1, I've lost track). I've been using this code with PETSc for over 20 years. > > To get my code to compile and link during this update, I only need to make two changes; one was to use PetscViewerPushFormat instead of PetscViewerSetFormat and the other was to use PETSC_NULL_INTEGER_ARRAY in a spot or two. > > When I run the code however, I am getting an error very early on during a call to MatCreate near the beginning of the code. 
The screen output says: > [3]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object > [0]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object > [1]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object > [2]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object > I have a 4 processor run going. I am running with -on_error_attach_debugger but the debugger is giving me cryptic (at least to me) output (the same for all 4 processes modulo the PID). Stack traces seem to be unavailable :( > lldb -p 71963 > (lldb) process attach --pid 71963 > Process 71963 stopped > * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP > frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10 > libsystem_kernel.dylib`__semwait_signal: > -> 0x7fff69d92746 <+10>: jae 0x7fff69d92750 ; <+20> > 0x7fff69d92748 <+12>: movq %rax, %rdi > 0x7fff69d9274b <+15>: jmp 0x7fff69d9121d ; cerror > 0x7fff69d92750 <+20>: retq > Target 0: (feap) stopped. > > Executable module set to "/Users/sg/Feap/ver87/parfeap/feap". > Architecture set to: x86_64h-apple-macosx-. > Does anyone have any hints as to what may be going on? Note the program starts normally and i can do stuff with the interactive interface for the code -- even plotting the mesh etc. so I believe the input data has been read in correctly. The crash only occurs when I initiate the formation of the matrix. > > I am attaching the /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c file in case that offers some insight. > > Note, I have been > -sanjay > -- > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay.anl at fastmail.org Fri Mar 21 09:52:20 2025 From: balay.anl at fastmail.org (Satish Balay) Date: Fri, 21 Mar 2025 09:52:20 -0500 (CDT) Subject: [petsc-users] upgrading to 3.22.4 In-Reply-To: <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> Message-ID: Some notes on this change are at https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/7517__;!!G_uCfscf7eWS!aQQ-TMsusc4QHRyMCbC8OwFN-znAEEo80ngP1kAlhvQJ_3zegcumVUvSqcBuE6lTbRZ0LWeAKXEXHEBXPmWRk-6NNBs$ Satish On Fri, 21 Mar 2025, Barry Smith wrote: > > I have just pushed a major update to the Fortran interface to the main PETSc git branch. Could you please try to work with main (to become release in a couple of weeks) with your Fortran code as we debug the problem? This will save you a lot of work and hopefully make the debugging more straightforward. > > You can send the same output with the debugger if it crashes in the main branch and I can try to track down what is going wrong. > > Barry > > > > > > On Mar 21, 2025, at 12:37?AM, Sanjay Govindjee via petsc-users wrote: > > > > I am trying to upgrade my code to PETSc 3.22.4 (the code was last updated to 3.19.4 or perhaps 3.18.1, I've lost track). I've been using this code with PETSc for over 20 years. > > > > To get my code to compile and link during this update, I only need to make two changes; one was to use PetscViewerPushFormat instead of PetscViewerSetFormat and the other was to use PETSC_NULL_INTEGER_ARRAY in a spot or two. > > > > When I run the code however, I am getting an error very early on during a call to MatCreate near the beginning of the code. 
The screen output says: > > [3]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object > > [0]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object > > [1]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object > > [2]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object > > I have a 4 processor run going. I am running with -on_error_attach_debugger but the debugger is giving me cryptic (at least to me) output (the same for all 4 processes modulo the PID). Stack traces seem to be unavailable :( > > lldb -p 71963 > > (lldb) process attach --pid 71963 > > Process 71963 stopped > > * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP > > frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10 > > libsystem_kernel.dylib`__semwait_signal: > > -> 0x7fff69d92746 <+10>: jae 0x7fff69d92750 ; <+20> > > 0x7fff69d92748 <+12>: movq %rax, %rdi > > 0x7fff69d9274b <+15>: jmp 0x7fff69d9121d ; cerror > > 0x7fff69d92750 <+20>: retq > > Target 0: (feap) stopped. > > > > Executable module set to "/Users/sg/Feap/ver87/parfeap/feap". > > Architecture set to: x86_64h-apple-macosx-. > > Does anyone have any hints as to what may be going on? Note the program starts normally and i can do stuff with the interactive interface for the code -- even plotting the mesh etc. so I believe the input data has been read in correctly. The crash only occurs when I initiate the formation of the matrix. > > > > I am attaching the /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c file in case that offers some insight. 
> > > > Note, I have been > > -sanjay > > --

From s_g at berkeley.edu Fri Mar 21 14:16:03 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Fri, 21 Mar 2025 12:16:03 -0700 Subject: [petsc-users] upgrading to 3.22.4 In-Reply-To: <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> Message-ID: <7cc9bfd8-8b1c-40f8-9824-282527f35240@berkeley.edu>

Thanks. I will give that a try. To be clear, I should do
 > git checkout origin/main
 > make all
 > make check
Or do I also need to run my (long) ./configure before the make all?

-sanjay

On 3/21/25 7:17 AM, Barry Smith wrote:
> [...]

From balay.anl at fastmail.org Fri Mar 21 14:39:38 2025 From: balay.anl at fastmail.org (Satish Balay) Date: Fri, 21 Mar 2025 14:39:38 -0500 (CDT) Subject: [petsc-users] upgrading to 3.22.4 In-Reply-To: <7cc9bfd8-8b1c-40f8-9824-282527f35240@berkeley.edu> References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> <7cc9bfd8-8b1c-40f8-9824-282527f35240@berkeley.edu> Message-ID:

On Fri, 21 Mar 2025, Sanjay Govindjee via petsc-users wrote:
> Thanks. I will give that a try. To be clear, I should do
> > git checkout origin/main
> > make all
> > make check
> Or do I also need to run my (long) ./configure before the make all?

I would start a clean build from a clean repo.

Satish

> -sanjay
>
> On 3/21/25 7:17 AM, Barry Smith wrote:
> > [...]

From s_g at berkeley.edu Fri Mar 21 14:43:05 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Fri, 21 Mar 2025 12:43:05 -0700 Subject: [petsc-users] upgrading to 3.22.4 In-Reply-To: References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> <7cc9bfd8-8b1c-40f8-9824-282527f35240@berkeley.edu> Message-ID:

Ok. Thanks for the recommendation.

------------------------------------------------------------------- Sanjay Govindjee, PhD, PE Horace, Dorothy, and Katherine Johnson Professor in Engineering Distinguished Professor of Civil and Environmental Engineering 779 Davis Hall University of California Berkeley, CA 94720-1710 Voice: +1 510 642 6060 FAX: +1 510 643 5264 s_g at berkeley.edu https://urldefense.us/v3/__http://faculty.ce.berkeley.edu/sanjay__;!!G_uCfscf7eWS!ct_FDSjTgbv5A4gaXMu_9l5RlvbTb3RjdZb2hG_OAjUmHLOWHQLRgouXdedWwdktCXxk5kb-O6apdPOI7VvqHw$ ------------------------------------------------------------------- Books: Continuum Mechanics of Solids https://urldefense.us/v3/__https://global.oup.com/academic/product/continuum-mechanics-of-solids-9780198864721__;!!G_uCfscf7eWS!ct_FDSjTgbv5A4gaXMu_9l5RlvbTb3RjdZb2hG_OAjUmHLOWHQLRgouXdedWwdktCXxk5kb-O6apdPOzghiWbQ$ Example Problems for Continuum Mechanics of Solids https://urldefense.us/v3/__https://www.amazon.com/dp/1083047361/__;!!G_uCfscf7eWS!ct_FDSjTgbv5A4gaXMu_9l5RlvbTb3RjdZb2hG_OAjUmHLOWHQLRgouXdedWwdktCXxk5kb-O6apdPOSFqFuvA$ Engineering Mechanics of Deformable Solids https://urldefense.us/v3/__https://www.amazon.com/dp/0199651647__;!!G_uCfscf7eWS!ct_FDSjTgbv5A4gaXMu_9l5RlvbTb3RjdZb2hG_OAjUmHLOWHQLRgouXdedWwdktCXxk5kb-O6apdPPTM57GyA$ Engineering Mechanics 3 (Dynamics) 2nd Edition
https://urldefense.us/v3/__http://www.amazon.com/dp/3642537111__;!!G_uCfscf7eWS!ct_FDSjTgbv5A4gaXMu_9l5RlvbTb3RjdZb2hG_OAjUmHLOWHQLRgouXdedWwdktCXxk5kb-O6apdPPLiq9kJg$ Engineering Mechanics 3, Supplementary Problems: Dynamics https://urldefense.us/v3/__http://www.amzn.com/B00SOXN8JU__;!!G_uCfscf7eWS!ct_FDSjTgbv5A4gaXMu_9l5RlvbTb3RjdZb2hG_OAjUmHLOWHQLRgouXdedWwdktCXxk5kb-O6apdPN9mztOew$ ------------------------------------------------------------------- NSF NHERI SimCenter https://urldefense.us/v3/__https://simcenter.designsafe-ci.org/__;!!G_uCfscf7eWS!ct_FDSjTgbv5A4gaXMu_9l5RlvbTb3RjdZb2hG_OAjUmHLOWHQLRgouXdedWwdktCXxk5kb-O6apdPNFCf49Yw$ -------------------------------------------------------------------

On Fri, Mar 21, 2025 at 12:39 PM Satish Balay wrote:
> [...]

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From knepley at gmail.com Sat Mar 22 09:31:12 2025 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 22 Mar 2025 10:31:12 -0400 Subject: [petsc-users] Semi-coarsening for GMG using DMDA? In-Reply-To: References: <3376A9C0-8E4A-4DE0-A510-EF1645F22A26@petsc.dev> Message-ID:

On Thu, Mar 20, 2025 at 12:09 PM Joe Alexandersen via petsc-users < petsc-users at mcs.anl.gov> wrote:

> Great, thanks for the input so far. We will wait for Matt's response soonish.

Looking at the code, as Barry says, it should work. Please let us know if it does not. You can also do this with Plex, as Mark says. The drawback here is that I do not have code to determine that this kind of refinement is nested. Therefore it will fall back to the slow code for constructing arbitrary interpolators. If you really wanted this, we could improve the interpolator code to be fast for this kind of nesting.
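The per-direction refinement factors under discussion can be sketched numerically. Below is a minimal plain-Python illustration (not PETSc code); it assumes the usual rule that a non-periodic DMDA direction with M points refines to r*(M-1)+1 points, which should be checked against the DMDASetRefinementFactor/DMRefine documentation for other boundary types:

```python
def refine(sizes, factors):
    # Refine each direction by its own factor, assuming the non-periodic
    # DMDA rule M_fine = r * (M_coarse - 1) + 1; a factor of 1 leaves that
    # direction untouched (semi-refinement).
    return tuple(r * (m - 1) + 1 for m, r in zip(sizes, factors))

# A coarse 3D grid refined twice with factors (2, 2, 1): x and y are
# refined while z stays fixed, i.e. the hierarchy is semi-coarsened in z.
levels = [(5, 5, 9)]
for _ in range(2):
    levels.append(refine(levels[-1], (2, 2, 1)))
print(levels)  # [(5, 5, 9), (9, 9, 9), (17, 17, 9)]
```

With factors (2, 2, 1) the z-direction size is unchanged on every level, which is the semi-coarsened hierarchy described in the question below.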
Thanks, Matt

> Sincerely,
>
> Joe Alexandersen
> Associate Professor
> DFF Sapere Aude Research Leader
>
> The Faculty of Engineering
> Institute of Mechanical and Electrical Engineering
> SDU Mechanical Engineering
>
> T +45 65 50 74 65
> M +45 93 50 72 44
>
> joal at sdu.dk
> https://urldefense.us/v3/__http://www.sdu.dk/ansat/joal__;!!G_uCfscf7eWS!cacecPogIX0oMUXWnjWaHZ7dJFfObkYjqmStlNjP3kCbpcRSU1p7oCURf7NF9Zx80rIwseka_1xx3YSVnMqM$
>
> University of Southern Denmark
> Campusvej 55
> DK-5230 Odense M
> https://urldefense.us/v3/__http://www.sdu.dk__;!!G_uCfscf7eWS!cacecPogIX0oMUXWnjWaHZ7dJFfObkYjqmStlNjP3kCbpcRSU1p7oCURf7NF9Zx80rIwseka_1xx3QPJTHXE$
>
> Sent from Outlook for Android
>
> ------------------------------
> *From:* Mark Adams
> *Sent:* Thursday, March 20, 2025 5:00:31 pm
> *To:* Barry Smith
> *Cc:* Joe Alexandersen ; petsc-users at mcs.anl.gov < petsc-users at mcs.anl.gov>
> *Subject:* Re: [petsc-users] Semi-coarsening for GMG using DMDA?
>
> We have worked on semi coarsening in DMPlex, but it is not finished and we are not working on it now.
>
> I'm not sure about how easy it would be in DMDA, but Barry is suggesting that it is doable.
>
> We need to wait for Matt and he is on travel so his response may be delayed.
>
> Mark
>
> On Thu, Mar 20, 2025 at 11:34 AM Barry Smith wrote:
>
>> In theory you can do as you propose. In the context below "uniform refinement" only means that the coordinates of the DMDA are ignored for each refinement. The interpolation is fine with different refinements in the different coordinate directions.
>>
>> Barry
>>
>> On Mar 20, 2025, at 5:56 AM, Joe Alexandersen via petsc-users < petsc-users at mcs.anl.gov> wrote:
>>
>> Dear PETSc developers,
>>
>> We are working with a code that uses regular meshes (DMDA) and geometric multigrid.
We would like to go from uniform coarsening/refinement to >> semi-coarsening/refinement, due to anisotropy in our underlying equations. >> We have tried to figure out if we can do this using built-in functions of >> PETSc, but it is unclear to us whether we can get it done relatively easily. >> >> It seems that we can go from the coarsest grid and refine differently in >> each direction using DMDASetRefinementFactor and then use DMRefine to >> define the finer levels. However, from the doc page for >> DMCreateInterpolation, it states that it only works for "uniform >> refinement" which to me seems to indicate it will not work with different >> refinement in each direction. But on the other hand, it states that it >> should work if using DMRefine, which I assume used the information from >> DMDASetRefinementFactor upon creation? >> >> So our questions are: is there are feasible and relatively simple way to >> do semi-coarsening/refinement of DMDAs for geometric multigrid hierarchies? >> Would the above work? >> >> Thanks in advance! >> >> Sincerely, >> Joe Alexandersen >> University of Southern Denmark >> >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cacecPogIX0oMUXWnjWaHZ7dJFfObkYjqmStlNjP3kCbpcRSU1p7oCURf7NF9Zx80rIwseka_1xx3Rrvue7x$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Sun Mar 23 02:19:53 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Sun, 23 Mar 2025 00:19:53 -0700 Subject: [petsc-users] upgrading to 3.22.4 In-Reply-To: <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> Message-ID: <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu> Hi Barry, ? 
I have moved to main and rebuilt the PETSc libraries etc. Right now I am having trouble just getting my source code to compile. Plenty of subroutines with PETSc calls compile but a few are throwing errors and killing my compile. I suspect there will be more but if I can figure these out hopefully I can debug the ones that will follow.

-sanjay

Error: There is no specific subroutine for the generic 'matmpibaijsetpreallocation' at (1)
upremas.F:68:72:

   68 |      &                               ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'matseqbaijsetpreallocation' at (1)
upremas.F:74:72:

   74 |      &                             ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'matmpiaijsetpreallocation' at (1)
upremas.F:77:72:

   77 |      &                             ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'matseqaijsetpreallocation' at (1)

parkv.F:58:25:

   58 |       PetscViewer    Y_view
      |                         1
Error: Type name 'tpetscviewer' at (1) is ambiguous
parkv.F:69:9:

   69 |       endif
      |         1
Error: Expecting END SUBROUTINE statement at (1)
parkv.F:72:9:

   72 |       endif
      |         1
Error: Expecting END SUBROUTINE statement at (1)
parkv.F:91:66:

   91 |         call PetscViewerASCIIOpen(PETSC_COMM_WORLD,"yvec.m",Y_view,
      |                                                                  1
Error: Symbol 'y_view' at (1) has no IMPLICIT type; did you mean 'yvec'?
parkv.F:65:72:

   65 |         call VecCreate        (PETSC_COMM_WORLD, xvec, ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'veccreate' at (1)
parkv.F:67:72:

   67 |         call VecSetFromOptions(xvec, ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'vecsetfromoptions' at (1)
parkv.F:68:72:

   68 |         call VecDuplicate     (xvec, yvec, ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'vecduplicate' at (1)
parkv.F:71:72:

   71 |         call VecDuplicate     (xvec, yvec, ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'vecduplicate' at (1)
parkv.F:85:72:

   85 |       call VecAssemblyBegin(xvec, ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'vecassemblybegin' at (1)
parkv.F:86:72:

   86 |       call VecAssemblyEnd  (xvec, ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'vecassemblyend' at (1)
parkv.F:88:72:

   88 |       call MatMult           (Kmat, xvec, yvec, ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'matmult' at (1)
parkv.F:101:72:

  101 |       call VecGetOwnershipRange(yvec, starti, endi, ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'vecgetownershiprange' at (1)

On 3/21/25 7:17 AM, Barry Smith wrote:
> [...]

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From s_g at berkeley.edu Sun Mar 23 02:25:16 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Sun, 23 Mar 2025 00:25:16 -0700 Subject: [petsc-users] upgrading to 3.22.4 In-Reply-To: <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu> References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu> Message-ID:

Small update. I managed to eliminate all the errors associated with PetscViewer and below (it had to do with the fact that I had not yet built a module that was needed). The errors related to the preallocation routines still persist.

-sanjay

On 3/23/25 12:19 AM, Sanjay Govindjee wrote:
> [...]

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jroman at dsic.upv.es Sun Mar 23 02:42:05 2025 From: jroman at dsic.upv.es (Jose E.
Roman)
Date: Sun, 23 Mar 2025 07:42:05 +0000
Subject: [petsc-users] upgrading to 3.22.4
In-Reply-To: 
References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu>
 <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev>
 <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu>
Message-ID: <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es>

The Fortran interfaces for those functions are generated correctly; see
$PETSC_ARCH/ftn/mat/petscmat.h90

For instance:

      interface MatMPIBAIJSetPreallocation
      subroutine MatMPIBAIJSetPreallocation(a,b,c,d,e,f, z)
       import tMat
       Mat :: a
       PetscInt :: b
       PetscInt :: c
       PetscInt :: d(*)
       PetscInt :: e
       PetscInt :: f(*)
       PetscErrorCode z
      end subroutine
      end interface

The compiler message is probably due to the type of an argument not
matching the expected one. In particular, if you are passing NULL in one
of the array arguments, you should use PETSC_NULL_INTEGER_ARRAY and not
PETSC_NULL_INTEGER.

Jose

> On 23 Mar 2025, at 8:25, Sanjay Govindjee via petsc-users wrote:
>
> Small update. I managed to eliminate all the errors associated with
> PetscViewer and below (it had to do with the fact that I had not yet
> built a module that was needed). The errors related to the
> preallocation routines still persist.
> -sanjay
>
> On 3/23/25 12:19 AM, Sanjay Govindjee wrote:
>> Hi Barry,
>> I have moved to main and rebuilt the PETSc libraries etc. Right now I
>> am having trouble just getting my source code to compile. Plenty of
>> subroutines with PETSc calls compile, but a few are throwing errors
>> and killing my compile. I suspect there will be more, but if I can
>> figure these out, hopefully I can debug the ones that will follow.
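[Jose's scalar-vs-array distinction can be sketched in isolation. A
hedged example, not the poster's code: Amat, ierr, and the petsc/main
Fortran modules are assumed to exist in the caller, and the value of nz
is illustrative.]

      ! Sketch only. nz is a genuine scalar PetscInt; the per-row
      ! array argument is omitted by passing PETSC_NULL_INTEGER_ARRAY
      ! (an array null), not PETSC_NULL_INTEGER (a scalar null).
      PetscInt       nz
      PetscErrorCode ierr
      nz = 5
      call MatSeqAIJSetPreallocation(Amat, nz,
     &     PETSC_NULL_INTEGER_ARRAY, ierr)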
>> -sanjay >> Error: There is no specific subroutine for the generic 'matmpibaijsetpreallocation' at (1) >> upremas.F:68:72: >> >> 68 | & ierr) >> | 1 >> Error: There is no specific subroutine for the generic 'matseqbaijsetpreallocation' at (1) >> upremas.F:74:72: >> >> 74 | & ierr) >> | 1 >> Error: There is no specific subroutine for the generic 'matmpiaijsetpreallocation' at (1) >> upremas.F:77:72: >> >> 77 | & ierr) >> | 1 >> Error: There is no specific subroutine for the generic 'matseqaijsetpreallocation' at (1) >> >> parkv.F:58:25: >> >> 58 | PetscViewer Y_view >> | 1 >> Error: Type name 'tpetscviewer' at (1) is ambiguous >> parkv.F:69:9: >> >> 69 | endif >> | 1 >> Error: Expecting END SUBROUTINE statement at (1) >> parkv.F:72:9: >> >> 72 | endif >> | 1 >> Error: Expecting END SUBROUTINE statement at (1) >> parkv.F:91:66: >> >> 91 | call PetscViewerASCIIOpen(PETSC_COMM_WORLD,"yvec.m",Y_view, >> | 1 >> Error: Symbol 'y_view' at (1) has no IMPLICIT type; did you mean 'yvec'? >> parkv.F:65:72: >> >> 65 | call VecCreate (PETSC_COMM_WORLD, xvec, ierr) >> | 1 >> Error: There is no specific subroutine for the generic 'veccreate' at (1) >> parkv.F:67:72: >> >> 67 | call VecSetFromOptions(xvec, ierr) >> | 1 >> Error: There is no specific subroutine for the generic 'vecsetfromoptions' at (1) >> parkv.F:68:72: >> >> 68 | call VecDuplicate (xvec, yvec, ierr) >> | 1 >> Error: There is no specific subroutine for the generic 'vecduplicate' at (1) >> parkv.F:71:72: >> >> 71 | call VecDuplicate (xvec, yvec, ierr) >> | 1 >> Error: There is no specific subroutine for the generic 'vecduplicate' at (1) >> parkv.F:85:72: >> >> 85 | call VecAssemblyBegin(xvec, ierr) >> | 1 >> Error: There is no specific subroutine for the generic 'vecassemblybegin' at (1) >> parkv.F:86:72: >> >> 86 | call VecAssemblyEnd (xvec, ierr) >> | 1 >> Error: There is no specific subroutine for the generic 'vecassemblyend' at (1) >> parkv.F:88:72: >> >> 88 | call MatMult (Kmat, xvec, yvec, ierr) >> | 1 >> 
Error: There is no specific subroutine for the generic 'matmult' at (1) >> parkv.F:101:72: >> >> 101 | call VecGetOwnershipRange(yvec, starti, endi, ierr) >> | 1 >> Error: There is no specific subroutine for the generic 'vecgetownershiprange' at (1) >> >> >> - >> >> >> On 3/21/25 7:17 AM, Barry Smith wrote: >>> >>> I have just pushed a major update to the Fortran interface to the main PETSc git branch. Could you please try to work with main (to become release in a couple of weeks) with your Fortran code as we debug the problem? This will save you a lot of work and hopefully make the debugging more straightforward. >>> >>> You can send the same output with the debugger if it crashes in the main branch and I can try to track down what is going wrong. >>> >>> Barry >>> >>> >>> >>> >>>> On Mar 21, 2025, at 12:37?AM, Sanjay Govindjee via petsc-users wrote: >>>> >>>> I am trying to upgrade my code to PETSc 3.22.4 (the code was last updated to 3.19.4 or perhaps 3.18.1, I've lost track). I've been using this code with PETSc for over 20 years. >>>> >>>> To get my code to compile and link during this update, I only need to make two changes; one was to use PetscViewerPushFormat instead of PetscViewerSetFormat and the other was to use PETSC_NULL_INTEGER_ARRAY in a spot or two. >>>> >>>> When I run the code however, I am getting an error very early on during a call to MatCreate near the beginning of the code. 
The screen output says: >>>> [3]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object >>>> [0]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object >>>> [1]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object >>>> [2]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object >>>> I have a 4 processor run going. I am running with -on_error_attach_debugger but the debugger is giving me cryptic (at least to me) output (the same for all 4 processes modulo the PID). Stack traces seem to be unavailable :( >>>> lldb -p 71963 >>>> (lldb) process attach --pid 71963 >>>> Process 71963 stopped >>>> * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP >>>> frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10 >>>> libsystem_kernel.dylib`__semwait_signal: >>>> -> 0x7fff69d92746 <+10>: jae 0x7fff69d92750 ; <+20> >>>> 0x7fff69d92748 <+12>: movq %rax, %rdi >>>> 0x7fff69d9274b <+15>: jmp 0x7fff69d9121d ; cerror >>>> 0x7fff69d92750 <+20>: retq >>>> Target 0: (feap) stopped. >>>> >>>> Executable module set to "/Users/sg/Feap/ver87/parfeap/feap". >>>> Architecture set to: x86_64h-apple-macosx-. >>>> Does anyone have any hints as to what may be going on? Note the program starts normally and i can do stuff with the interactive interface for the code -- even plotting the mesh etc. so I believe the input data has been read in correctly. The crash only occurs when I initiate the formation of the matrix. >>>> >>>> I am attaching the /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c file in case that offers some insight. 
>>>>
>>>> Note, I have been
>>>> -sanjay
>>>> --
>>>>
>>>
>>
>

From s_g at berkeley.edu Sun Mar 23 02:46:17 2025
From: s_g at berkeley.edu (Sanjay Govindjee)
Date: Sun, 23 Mar 2025 00:46:17 -0700
Subject: [petsc-users] upgrading to 3.22.4
In-Reply-To: <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu>
References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu>
 <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev>
 <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu>
Message-ID: 

For what it's worth, I have encountered a few more compile errors (and,
if I exclude the failing-to-compile routines, some link errors).

The link errors are:

Undefined symbols for architecture x86_64:
  "_vecgetarrayreadf90_"
  "_vecrestorearrayreadf90_"

The other compile errors are in a routine that starts with

#     include
      use          petscksp
      implicit     none

The compile errors are as follows (it seems that some of the
declarations have moved around or have changed).

usolve.F:86:39:

   86 |       real*8         info(MAT_INFO_SIZE), nza,nzr,mal
      |                                       1
Error: Symbol 'mat_info_size' at (1) has no IMPLICIT type; did you mean 'mpi_win_size'?
usolve.F:186:39:

  186 |             mal = info(MAT_INFO_MALLOCS)
      |                                       1
Error: Symbol 'mat_info_mallocs' at (1) has no IMPLICIT type; did you mean 'mpi_info_null'?
usolve.F:184:44:

  184 |             nza = info(MAT_INFO_NZ_ALLOCATED)
      |                                            1
Error: Symbol 'mat_info_nz_allocated' at (1) has no IMPLICIT type; did you mean 'mpi_win_flavor_allocate'?
usolve.F:185:39:

  185 |             nzr = info(MAT_INFO_NZ_USED)
      |                                       1
Error: Symbol 'mat_info_nz_used' at (1) has no IMPLICIT type; did you mean 'mpi_info_null'?
usolve.F:144:72:

  183 |             call MatGetInfo(Kmat, MAT_LOCAL, info, ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'matgetinfo' at (1)
usolve.F:306:72:

  306 |                 call PCSetCoordinates(pc,ndm,numpn,hr(np(43)),ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'pcsetcoordinates' at (1)
usolve.F:339:15:

  339 |             if(reason.gt.0 .and. echo) then
      |               1
Error: Operands of comparison operator '.gt.' at (1) are TYPE(ekspconvergedreason)/INTEGER(4)
usolve.F:341:52:

  341 |               write(  *,*) ' CONVERGENCE: ',creason(reason),
      |                                                    1
Error: Array index at (1) must be of INTEGER type, found DERIVED
usolve.F:343:52:

  343 |               write(iow,*) ' CONVERGENCE: ',creason(reason),
      |                                                    1
Error: Array index at (1) must be of INTEGER type, found DERIVED
usolve.F:345:19:

  345 |             elseif(reason.lt.0) then
      |                   1
Error: Operands of comparison operator '.lt.' at (1) are TYPE(ekspconvergedreason)/INTEGER(4)
usolve.F:346:62:

  346 |               write(  *,*) ' NO CONVERGENCE REASON: ',nreason(-reason)
      |                                                              1
Error: Operand of unary numeric operator '-' at (1) is UNKNOWN
usolve.F:347:62:

  347 |               write(iow,*) ' NO CONVERGENCE REASON: ',nreason(-reason)
      |                                                              1
Error: Operand of unary numeric operator '-' at (1) is UNKNOWN

-

On 3/23/25 12:19 AM, Sanjay Govindjee wrote:
> Hi Barry,
> I have moved to main and rebuilt the PETSc libraries etc. Right now
> I am having trouble just getting my source code to compile. Plenty of
> subroutines with PETSc calls compile, but a few are throwing errors
> and killing my compile. I suspect there will be more, but if I can
> figure these out, hopefully I can debug the ones that will follow.
> -sanjay > > Error: There is no specific subroutine for the generic > 'matmpibaijsetpreallocation' at (1) > upremas.F:68:72: > > ?? 68 |????? &?????????????????????????????? ierr) > | 1 > Error: There is no specific subroutine for the generic > 'matseqbaijsetpreallocation' at (1) > upremas.F:74:72: > > ?? 74 |????? &???????????????????????????? ierr) > | 1 > Error: There is no specific subroutine for the generic > 'matmpiaijsetpreallocation' at (1) > upremas.F:77:72: > > ?? 77 |????? &???????????????????????????? ierr) > | 1 > Error: There is no specific subroutine for the generic > 'matseqaijsetpreallocation' at (1) > > parkv.F:58:25: > > ?? 58 |?????? PetscViewer??? Y_view > ????? |???????????????????????? 1 > Error: Type name 'tpetscviewer' at (1) is ambiguous > parkv.F:69:9: > > ?? 69 |?????? endif > ????? |???????? 1 > Error: Expecting END SUBROUTINE statement at (1) > parkv.F:72:9: > > ?? 72 |?????? endif > ????? |???????? 1 > Error: Expecting END SUBROUTINE statement at (1) > parkv.F:91:66: > > ?? 91 |???????? call > PetscViewerASCIIOpen(PETSC_COMM_WORLD,"yvec.m",Y_view, > | 1 > Error: Symbol 'y_view' at (1) has no IMPLICIT type; did you mean > 'yvec'? > parkv.F:65:72: > > ?? 65 |???????? call VecCreate??????? (PETSC_COMM_WORLD, xvec, ierr) > | 1 > Error: There is no specific subroutine for the generic 'veccreate' > at (1) > parkv.F:67:72: > > ?? 67 |???????? call VecSetFromOptions(xvec, ierr) > | 1 > Error: There is no specific subroutine for the generic > 'vecsetfromoptions' at (1) > parkv.F:68:72: > > ?? 68 |???????? call VecDuplicate???? (xvec, yvec, ierr) > | 1 > Error: There is no specific subroutine for the generic > 'vecduplicate' at (1) > parkv.F:71:72: > > ?? 71 |???????? call VecDuplicate???? (xvec, yvec, ierr) > | 1 > Error: There is no specific subroutine for the generic > 'vecduplicate' at (1) > parkv.F:85:72: > > ?? 85 |?????? 
call VecAssemblyBegin(xvec, ierr) > | 1 > Error: There is no specific subroutine for the generic > 'vecassemblybegin' at (1) > parkv.F:86:72: > > ?? 86 |?????? call VecAssemblyEnd? (xvec, ierr) > | 1 > Error: There is no specific subroutine for the generic > 'vecassemblyend' at (1) > parkv.F:88:72: > > ?? 88 |?????? call MatMult?????????? (Kmat, xvec, yvec, ierr) > | 1 > Error: There is no specific subroutine for the generic 'matmult' > at (1) > parkv.F:101:72: > > ? 101 |?????? call VecGetOwnershipRange(yvec, starti, endi, ierr) > | 1 > Error: There is no specific subroutine for the generic > 'vecgetownershiprange' at (1) > > > - > > On 3/21/25 7:17 AM, Barry Smith wrote: >> >> ? ? I have just pushed a major update to the Fortran interface to the >> main PETSc git branch. Could you please try to work with main (to >> become release in a couple of weeks) with your Fortran code as we >> debug the problem? This will save you a lot of work and hopefully >> make the debugging more straightforward. >> >> ? ? You can send the same output with the debugger if it crashes in >> the main branch and I can try to track down what is going wrong. >> >> ? Barry >> >> >> >> >>> On Mar 21, 2025, at 12:37?AM, Sanjay Govindjee via petsc-users >>> wrote: >>> >>> I am trying to upgrade my code to PETSc 3.22.4 (the code was last >>> updated to 3.19.4 or perhaps 3.18.1, I've lost track). I've been >>> using this code with PETSc for over 20 years. >>> >>> To get my code to compile and link during this update, I only need >>> to make two changes; one was to use PetscViewerPushFormat instead of >>> PetscViewerSetFormat and the other was to use >>> PETSC_NULL_INTEGER_ARRAY in a spot or two. >>> >>> When I run the code however, I am getting an error very early on >>> during a call to MatCreate near the beginning of the code.? 
The >>> screen output says: >>> >>> [3]PETSC ERROR: matcreate_() at >>> /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 >>> Cannot create PETSC_NULL_XXX object >>> [0]PETSC ERROR: matcreate_() at >>> /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 >>> Cannot create PETSC_NULL_XXX object >>> [1]PETSC ERROR: matcreate_() at >>> /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 >>> Cannot create PETSC_NULL_XXX object >>> [2]PETSC ERROR: matcreate_() at >>> /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 >>> Cannot create PETSC_NULL_XXX object >>> >>> I have a 4 processor run going.? I am running with >>> -on_error_attach_debugger but the debugger is giving me cryptic (at >>> least to me) output (the same for all 4 processes modulo the PID).? >>> Stack traces seem to be unavailable :( >>> >>> lldb? -p 71963 >>> (lldb) process attach --pid 71963 >>> Process 71963 stopped >>> * thread #1, queue = 'com.apple.main-thread', stop reason = >>> signal SIGSTOP >>> ??? frame #0: 0x00007fff69d92746 >>> libsystem_kernel.dylib`__semwait_signal + 10 >>> libsystem_kernel.dylib`__semwait_signal: >>> ->? 0x7fff69d92746 <+10>: jae 0x7fff69d92750??????????? ; <+20> >>> ??? 0x7fff69d92748 <+12>: movq?? %rax, %rdi >>> ??? 0x7fff69d9274b <+15>: jmp 0x7fff69d9121d??????????? ; cerror >>> ??? 0x7fff69d92750 <+20>: retq >>> Target 0: (feap) stopped. >>> >>> Executable module set to "/Users/sg/Feap/ver87/parfeap/feap". >>> Architecture set to: x86_64h-apple-macosx-. >>> >>> Does anyone have any hints as to what may be going on?? Note the >>> program starts normally and i can do stuff with the interactive >>> interface for the code -- even plotting the mesh etc. so I believe >>> the input data has been read in correctly.? The crash only occurs >>> when I initiate the formation of the matrix. >>> >>> I am attaching the >>> /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c file >>> in case that offers some insight. 
>>>
>>> Note, I have been
>>> -sanjay
>>> --
>>>
>>
>

From s_g at berkeley.edu Sun Mar 23 02:52:46 2025
From: s_g at berkeley.edu (Sanjay Govindjee)
Date: Sun, 23 Mar 2025 00:52:46 -0700
Subject: [petsc-users] upgrading to 3.22.4
In-Reply-To: <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es>
References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu>
 <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev>
 <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu>
 <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es>
Message-ID: 

Here is what I have right now:

            call MatMPIBAIJSetPreallocation(Mmat,nsbk,
     &        PETSC_NULL_INTEGER,mr(np(246)),
     &        PETSC_NULL_INTEGER,mr(np(247)),
     &        ierr)
            call MatSeqBAIJSetPreallocation(Mmat,nsbk,
     &        PETSC_DEFAULT_INTEGER,mr(np(246)),
     &        ierr)
          else
            call MatSetType(Mmat,MATAIJ,ierr)
            call MatMPIAIJSetPreallocation(Mmat,
     &        PETSC_NULL_INTEGER_ARRAY,mr(np(246)),
     &        PETSC_NULL_INTEGER_ARRAY,mr(np(247)),
     &        ierr)
            call MatSeqAIJSetPreallocation(Mmat,
     &        PETSC_NULL_INTEGER_ARRAY,mr(np(246)),
     &        ierr)

Before, with prior versions of PETSc, I had:

            call MatMPIBAIJSetPreallocation(Mmat,nsbk,
     &        PETSC_NULL_INTEGER,mr(np(246)),
     &        PETSC_NULL_INTEGER,mr(np(247)),
     &        ierr)
            call MatSeqBAIJSetPreallocation(Mmat,nsbk,
     &        PETSC_DEFAULT_INTEGER,mr(np(246)),
     &        ierr)
          else
            call MatSetType(Mmat,MATAIJ,ierr)
            call MatMPIAIJSetPreallocation(Mmat,
     &        PETSC_NULL_INTEGER(1),mr(np(246)),
     &        PETSC_NULL_INTEGER(1),mr(np(247)),
     &        ierr)
            call MatSeqAIJSetPreallocation(Mmat,
     &        PETSC_NULL_INTEGER(1),mr(np(246)),
     &        ierr)

N.B. I wrote this decades ago...

-------------------------------------------------------------------
Sanjay Govindjee, PhD, PE
Horace, Dorothy, and Katherine Johnson Professor in Engineering
Distinguished Professor of Civil and Environmental Engineering
779 Davis Hall
University of California
Berkeley, CA 94720-1710

Voice: +1 510 642 6060
FAX:   +1 510 643 5264
s_g at berkeley.edu
http://faculty.ce.berkeley.edu/sanjay
-------------------------------------------------------------------
Books:
Introduction to Mechanics of Solid Materials
https://global.oup.com/academic/product/introduction-to-mechanics-of-solid-materials-9780192866080
Continuum Mechanics of Solids
https://global.oup.com/academic/product/continuum-mechanics-of-solids-9780198864721
Example Problems for Continuum Mechanics of Solids
https://www.amazon.com/dp/1083047361/
Engineering Mechanics of Deformable Solids
https://www.amazon.com/dp/0199651647
Engineering Mechanics 3 (Dynamics) 2nd Edition
http://www.amazon.com/dp/3642537111
Engineering Mechanics 3, Supplementary Problems: Dynamics
http://www.amzn.com/B00SOXN8JU
-------------------------------------------------------------------
NSF NHERI SimCenter
https://simcenter.designsafe-ci.org/
-------------------------------------------------------------------

On 3/23/25 12:42 AM, Jose E. Roman wrote:
> The Fortran interfaces for those functions are generated correctly, see $PETSC_ARCH/ftn/mat/petscmat.h90
>
> For instance:
>
> interface MatMPIBAIJSetPreallocation
> subroutine MatMPIBAIJSetPreallocation(a,b,c,d,e,f, z)
> import tMat
> Mat :: a
> PetscInt :: b
> PetscInt :: c
> PetscInt :: d(*)
> PetscInt :: e
> PetscInt :: f(*)
> PetscErrorCode z
> end subroutine
> end interface
>
> The compiler message is probably due to the type of an argument not matching the expected one. In particular, if you are passing NULL in one of the array arguments, you should use PETSC_NULL_INTEGER_ARRAY and not PETSC_NULL_INTEGER.
>
> Jose
>
>> On 23 Mar 2025, at 8:25, Sanjay Govindjee via petsc-users wrote:
>>
>> Small update. I managed to eliminate all the errors associated with PetscViewer and below (it had to do with the fact that I had not yet built a module that was needed). The errors related to the preallocation routines still persist.
>> -sanjay
>>
>> On 3/23/25 12:19 AM, Sanjay Govindjee wrote:
>>> Hi Barry,
>>> I have moved to main and rebuilt the PETSc libraries etc. Right now I am having trouble just getting my source code to compile. Plenty of subroutines with PETSc calls compile but a few are throwing errors and killing my compile. I suspect there will be more but if I can figure these out, hopefully I can debug the ones that will follow.
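[One hedged reading of Jose's advice as applied to the calls shown
above: when the per-row counts are supplied in mr(np(246))/mr(np(247)),
the scalar nz slots can take PETSC_DEFAULT_INTEGER, as the
MatSeqBAIJSetPreallocation call in the thread already does, so that
PETSC_NULL_INTEGER never needs to appear at all. A sketch, not a
verified fix:]

            call MatMPIBAIJSetPreallocation(Mmat,nsbk,
     &        PETSC_DEFAULT_INTEGER,mr(np(246)),
     &        PETSC_DEFAULT_INTEGER,mr(np(247)),
     &        ierr)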
>>> -sanjay >>> Error: There is no specific subroutine for the generic 'matmpibaijsetpreallocation' at (1) >>> upremas.F:68:72: >>> >>> 68 | & ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'matseqbaijsetpreallocation' at (1) >>> upremas.F:74:72: >>> >>> 74 | & ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'matmpiaijsetpreallocation' at (1) >>> upremas.F:77:72: >>> >>> 77 | & ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'matseqaijsetpreallocation' at (1) >>> >>> parkv.F:58:25: >>> >>> 58 | PetscViewer Y_view >>> | 1 >>> Error: Type name 'tpetscviewer' at (1) is ambiguous >>> parkv.F:69:9: >>> >>> 69 | endif >>> | 1 >>> Error: Expecting END SUBROUTINE statement at (1) >>> parkv.F:72:9: >>> >>> 72 | endif >>> | 1 >>> Error: Expecting END SUBROUTINE statement at (1) >>> parkv.F:91:66: >>> >>> 91 | call PetscViewerASCIIOpen(PETSC_COMM_WORLD,"yvec.m",Y_view, >>> | 1 >>> Error: Symbol 'y_view' at (1) has no IMPLICIT type; did you mean 'yvec'? 
>>> parkv.F:65:72: >>> >>> 65 | call VecCreate (PETSC_COMM_WORLD, xvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'veccreate' at (1) >>> parkv.F:67:72: >>> >>> 67 | call VecSetFromOptions(xvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'vecsetfromoptions' at (1) >>> parkv.F:68:72: >>> >>> 68 | call VecDuplicate (xvec, yvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'vecduplicate' at (1) >>> parkv.F:71:72: >>> >>> 71 | call VecDuplicate (xvec, yvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'vecduplicate' at (1) >>> parkv.F:85:72: >>> >>> 85 | call VecAssemblyBegin(xvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'vecassemblybegin' at (1) >>> parkv.F:86:72: >>> >>> 86 | call VecAssemblyEnd (xvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'vecassemblyend' at (1) >>> parkv.F:88:72: >>> >>> 88 | call MatMult (Kmat, xvec, yvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'matmult' at (1) >>> parkv.F:101:72: >>> >>> 101 | call VecGetOwnershipRange(yvec, starti, endi, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'vecgetownershiprange' at (1) >>> >>> >>> - >>> >>> >>> On 3/21/25 7:17 AM, Barry Smith wrote: >>>> I have just pushed a major update to the Fortran interface to the main PETSc git branch. Could you please try to work with main (to become release in a couple of weeks) with your Fortran code as we debug the problem? This will save you a lot of work and hopefully make the debugging more straightforward. >>>> >>>> You can send the same output with the debugger if it crashes in the main branch and I can try to track down what is going wrong. 
>>>> >>>> Barry >>>> >>>> >>>> >>>> >>>>> On Mar 21, 2025, at 12:37?AM, Sanjay Govindjee via petsc-users wrote: >>>>> >>>>> I am trying to upgrade my code to PETSc 3.22.4 (the code was last updated to 3.19.4 or perhaps 3.18.1, I've lost track). I've been using this code with PETSc for over 20 years. >>>>> >>>>> To get my code to compile and link during this update, I only need to make two changes; one was to use PetscViewerPushFormat instead of PetscViewerSetFormat and the other was to use PETSC_NULL_INTEGER_ARRAY in a spot or two. >>>>> >>>>> When I run the code however, I am getting an error very early on during a call to MatCreate near the beginning of the code. The screen output says: >>>>> [3]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object >>>>> [0]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object >>>>> [1]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object >>>>> [2]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object >>>>> I have a 4 processor run going. I am running with -on_error_attach_debugger but the debugger is giving me cryptic (at least to me) output (the same for all 4 processes modulo the PID). 
Stack traces seem to be unavailable :( >>>>> lldb -p 71963 >>>>> (lldb) process attach --pid 71963 >>>>> Process 71963 stopped >>>>> * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP >>>>> frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10 >>>>> libsystem_kernel.dylib`__semwait_signal: >>>>> -> 0x7fff69d92746 <+10>: jae 0x7fff69d92750 ; <+20> >>>>> 0x7fff69d92748 <+12>: movq %rax, %rdi >>>>> 0x7fff69d9274b <+15>: jmp 0x7fff69d9121d ; cerror >>>>> 0x7fff69d92750 <+20>: retq >>>>> Target 0: (feap) stopped. >>>>> >>>>> Executable module set to "/Users/sg/Feap/ver87/parfeap/feap". >>>>> Architecture set to: x86_64h-apple-macosx-. >>>>> Does anyone have any hints as to what may be going on? Note the program starts normally and i can do stuff with the interactive interface for the code -- even plotting the mesh etc. so I believe the input data has been read in correctly. The crash only occurs when I initiate the formation of the matrix. >>>>> >>>>> I am attaching the /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c file in case that offers some insight. >>>>> >>>>> Note, I have been >>>>> -sanjay >>>>> -- >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Sun Mar 23 02:54:47 2025 From: jroman at dsic.upv.es (Jose E. 
Roman)
Date: Sun, 23 Mar 2025 07:54:47 +0000
Subject: [petsc-users] upgrading to 3.22.4
In-Reply-To: <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es>
References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu>
 <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev>
 <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu>
 <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es>
Message-ID: 

Have a look at the list of changes - it is currently at
https://petsc.org/main/changes/dev/ until the new version is released.
See the last section, "Fortran". The functions ending in "F90" have
been renamed; just remove the "F90" suffix.

Regarding the info-related errors, a workaround is to append %v, for
instance

    mal = info(MAT_INFO_MALLOCS%v)

But Barry may want to provide a better fix for this.

Jose

> On 23 Mar 2025, at 8:42, Jose E. Roman via petsc-users wrote:
>
> The Fortran interfaces for those functions are generated correctly, see $PETSC_ARCH/ftn/mat/petscmat.h90
>
> For instance:
>
> interface MatMPIBAIJSetPreallocation
> subroutine MatMPIBAIJSetPreallocation(a,b,c,d,e,f, z)
> import tMat
> Mat :: a
> PetscInt :: b
> PetscInt :: c
> PetscInt :: d(*)
> PetscInt :: e
> PetscInt :: f(*)
> PetscErrorCode z
> end subroutine
> end interface
>
> The compiler message is probably due to the type of an argument not matching the expected one. In particular, if you are passing NULL in one of the array arguments, you should use PETSC_NULL_INTEGER_ARRAY and not PETSC_NULL_INTEGER.
>
> Jose
>
>> On 23 Mar 2025, at 8:25, Sanjay Govindjee via petsc-users wrote:
>>
>> Small update. I managed to eliminate all the errors associated with PetscViewer and below (it had to do with the fact that I had not yet built a module that was needed). The errors related to the preallocation routines still persist.
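[Combining Jose's two points above - the dropped "F90" suffix and the
%v workaround - a hedged sketch of what the updated code might look
like under petsc/main; x, xx, info, and mal are illustrative names, not
taken from the thread:]

!     Sketch only; assumes the petsc/main Fortran modules and an
!     existing Vec x and MatInfo array info in the caller.
      PetscScalar, pointer :: xx(:)
!     F90-suffixed array accessors lost the suffix:
      call VecGetArrayRead(x, xx, ierr)     ! was VecGetArrayReadF90
      call VecRestoreArrayRead(x, xx, ierr) ! was VecRestoreArrayReadF90
!     MAT_INFO_* indices are now derived types; index info() via %v:
      mal = info(MAT_INFO_MALLOCS%v)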
>> -sanjay >> >> On 3/23/25 12:19 AM, Sanjay Govindjee wrote: >>> Hi Barry, >>> I have moved to main and rebuilt the PETSc libraries etc. Right now I am having trouble just getting my source code to compile. Plenty of subroutines with PETSc calls compile but a few are throwing errors and killing my compile. I suspect there will be more but if I can figure these hopefully I can debug the ones that will follow. >>> -sanjay >>> Error: There is no specific subroutine for the generic 'matmpibaijsetpreallocation' at (1) >>> upremas.F:68:72: >>> >>> 68 | & ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'matseqbaijsetpreallocation' at (1) >>> upremas.F:74:72: >>> >>> 74 | & ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'matmpiaijsetpreallocation' at (1) >>> upremas.F:77:72: >>> >>> 77 | & ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'matseqaijsetpreallocation' at (1) >>> >>> parkv.F:58:25: >>> >>> 58 | PetscViewer Y_view >>> | 1 >>> Error: Type name 'tpetscviewer' at (1) is ambiguous >>> parkv.F:69:9: >>> >>> 69 | endif >>> | 1 >>> Error: Expecting END SUBROUTINE statement at (1) >>> parkv.F:72:9: >>> >>> 72 | endif >>> | 1 >>> Error: Expecting END SUBROUTINE statement at (1) >>> parkv.F:91:66: >>> >>> 91 | call PetscViewerASCIIOpen(PETSC_COMM_WORLD,"yvec.m",Y_view, >>> | 1 >>> Error: Symbol 'y_view' at (1) has no IMPLICIT type; did you mean 'yvec'? 
>>> parkv.F:65:72: >>> >>> 65 | call VecCreate (PETSC_COMM_WORLD, xvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'veccreate' at (1) >>> parkv.F:67:72: >>> >>> 67 | call VecSetFromOptions(xvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'vecsetfromoptions' at (1) >>> parkv.F:68:72: >>> >>> 68 | call VecDuplicate (xvec, yvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'vecduplicate' at (1) >>> parkv.F:71:72: >>> >>> 71 | call VecDuplicate (xvec, yvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'vecduplicate' at (1) >>> parkv.F:85:72: >>> >>> 85 | call VecAssemblyBegin(xvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'vecassemblybegin' at (1) >>> parkv.F:86:72: >>> >>> 86 | call VecAssemblyEnd (xvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'vecassemblyend' at (1) >>> parkv.F:88:72: >>> >>> 88 | call MatMult (Kmat, xvec, yvec, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'matmult' at (1) >>> parkv.F:101:72: >>> >>> 101 | call VecGetOwnershipRange(yvec, starti, endi, ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'vecgetownershiprange' at (1) >>> >>> >>> - >>> >>> >>> On 3/21/25 7:17 AM, Barry Smith wrote: >>>> >>>> I have just pushed a major update to the Fortran interface to the main PETSc git branch. Could you please try to work with main (to become release in a couple of weeks) with your Fortran code as we debug the problem? This will save you a lot of work and hopefully make the debugging more straightforward. >>>> >>>> You can send the same output with the debugger if it crashes in the main branch and I can try to track down what is going wrong. 
>>>>
>>>> Barry
>>>>
>>>>> On Mar 21, 2025, at 12:37 AM, Sanjay Govindjee via petsc-users wrote:
>>>>>
>>>>> I am trying to upgrade my code to PETSc 3.22.4 (the code was last updated to 3.19.4 or perhaps 3.18.1, I've lost track). I've been using this code with PETSc for over 20 years.
>>>>>
>>>>> To get my code to compile and link during this update, I only need to make two changes; one was to use PetscViewerPushFormat instead of PetscViewerSetFormat and the other was to use PETSC_NULL_INTEGER_ARRAY in a spot or two.
>>>>>
>>>>> When I run the code however, I am getting an error very early on during a call to MatCreate near the beginning of the code. The screen output says:
>>>>> [3]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object
>>>>> [0]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object
>>>>> [1]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object
>>>>> [2]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object
>>>>> I have a 4 processor run going. I am running with -on_error_attach_debugger but the debugger is giving me cryptic (at least to me) output (the same for all 4 processes modulo the PID).
Stack traces seem to be unavailable :(
>>>>> lldb -p 71963
>>>>> (lldb) process attach --pid 71963
>>>>> Process 71963 stopped
>>>>> * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
>>>>>     frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10
>>>>> libsystem_kernel.dylib`__semwait_signal:
>>>>> ->  0x7fff69d92746 <+10>: jae    0x7fff69d92750            ; <+20>
>>>>>     0x7fff69d92748 <+12>: movq   %rax, %rdi
>>>>>     0x7fff69d9274b <+15>: jmp    0x7fff69d9121d            ; cerror
>>>>>     0x7fff69d92750 <+20>: retq
>>>>> Target 0: (feap) stopped.
>>>>>
>>>>> Executable module set to "/Users/sg/Feap/ver87/parfeap/feap".
>>>>> Architecture set to: x86_64h-apple-macosx-.
>>>>> Does anyone have any hints as to what may be going on? Note the program starts normally and I can do stuff with the interactive interface for the code -- even plotting the mesh etc., so I believe the input data has been read in correctly. The crash only occurs when I initiate the formation of the matrix.
>>>>>
>>>>> I am attaching the /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c file in case that offers some insight.
>>>>>
>>>>> Note, I have been
>>>>> -sanjay
>>>>> --

From s_g at berkeley.edu  Sun Mar 23 02:56:42 2025
From: s_g at berkeley.edu (Sanjay Govindjee)
Date: Sun, 23 Mar 2025 00:56:42 -0700
Subject: [petsc-users] upgrading to 3.22.4
In-Reply-To: <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es>
References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu>
 <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev>
 <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu>
 <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es>
Message-ID: 

Follow-up. I notice in the documentation that there is a comment about setting files to be *.F90. Right now my files are *.F when they contain PETSc functions and data. I can also confirm that I have $PETSC_ARCH/ftn/mat/petscmat.h90 and it has the interface as you note.

------

On 3/23/25 12:42 AM, Jose E.
Roman wrote:
> The Fortran interfaces for those functions are generated correctly, see $PETSC_ARCH/ftn/mat/petscmat.h90
>
> For instance:
>
>       interface MatMPIBAIJSetPreallocation
>       subroutine MatMPIBAIJSetPreallocation(a,b,c,d,e,f, z)
>        import tMat
>        Mat :: a
>        PetscInt :: b
>        PetscInt :: c
>        PetscInt :: d(*)
>        PetscInt :: e
>        PetscInt :: f(*)
>        PetscErrorCode z
>       end subroutine
>       end interface
>
> The compiler message is probably due to the type of an argument not matching the expected one. In particular, if you are passing NULL in one of the array arguments, you should use PETSC_NULL_INTEGER_ARRAY and not PETSC_NULL_INTEGER.
>
> Jose
>
>
>> On 23 Mar 2025, at 8:25, Sanjay Govindjee via petsc-users wrote:
>>
>> Small update. I managed to eliminate all the errors associated with PetscViewer and below (it had to do with the fact that I had not yet built a module that was needed). The errors related to the preallocation routines still persist.
>> -sanjay
From s_g at berkeley.edu  Sun Mar 23 03:02:10 2025
From: s_g at berkeley.edu (Sanjay Govindjee)
Date: Sun, 23 Mar 2025 01:02:10 -0700
Subject: [petsc-users] upgrading to 3.22.4
In-Reply-To: 
References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu>
 <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev>
 <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu>
 <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es>
Message-ID: 

Jose,
  What module should I be using to load petscmat.h90?
-sanjay

--

On 3/23/25 12:54 AM, Jose E. Roman wrote:
> Have a look at the list of changes - it is currently here https://petsc.org/main/changes/dev/ until the new version is released.
> See the last section "Fortran".
>
> The functions ending in "F90" have been renamed, just remove the "F90" suffix.
>
> Regarding the info-related errors, a workaround is to append %v, for instance
>     mal = info(MAT_INFO_MALLOCS%v)
> But Barry may want to provide a better fix for this.
>
> Jose
From jroman at dsic.upv.es  Sun Mar 23 03:09:08 2025
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Sun, 23 Mar 2025 08:09:08 +0000
Subject: [petsc-users] upgrading to 3.22.4
In-Reply-To: 
References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu>
 <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev>
 <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu>
 <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es>
Message-ID: <54B2D303-689B-4BD1-8AF7-D4714EA7BD94@dsic.upv.es>

"use petscmat" will use those definitions.

As I said, you probably have mismatching arguments. For instance

      call MatSeqAIJSetPreallocation(Mmat,
     &            PETSC_NULL_INTEGER_ARRAY,mr(np(246)),
     &            ierr)

The second argument is a PetscInt, so PETSC_NULL_INTEGER_ARRAY is wrong; it should be PETSC_NULL_INTEGER.
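Spelled out, the distinction looks something like the sketch below (not compiled here; Mmat and mr(np(246)) are the names from the code under discussion, while dnz and onz are hypothetical scalar per-row estimates added only for illustration):

```fortran
!     Corrected sequential call: the second argument (nz) is a scalar
!     PetscInt, so an omitted value there is PETSC_NULL_INTEGER; the
!     per-row count array is passed as before.
      call MatSeqAIJSetPreallocation(Mmat,
     &            PETSC_NULL_INTEGER, mr(np(246)),
     &            ierr)

!     In the MPI variant, the d_nnz/o_nnz slots are PetscInt arrays,
!     so an omitted array there is PETSC_NULL_INTEGER_ARRAY instead.
      call MatMPIAIJSetPreallocation(Mmat,
     &            dnz, PETSC_NULL_INTEGER_ARRAY,
     &            onz, PETSC_NULL_INTEGER_ARRAY,
     &            ierr)
```

The rule of thumb: match the PETSC_NULL_* constant to the rank of the dummy argument in the generated interface, scalar constants for scalar dummies and *_ARRAY constants for `(*)` dummies.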
Now the compiler will help you fix this kind of error, which would go unnoticed before.

Jose
From s_g at berkeley.edu  Sun Mar 23 03:24:25 2025
From: s_g at berkeley.edu (Sanjay Govindjee)
Date: Sun, 23 Mar 2025 01:24:25 -0700
Subject: [petsc-users] upgrading to 3.22.4
In-Reply-To: <54B2D303-689B-4BD1-8AF7-D4714EA7BD94@dsic.upv.es>
References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu>
 <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev>
 <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu>
 <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es>
 <54B2D303-689B-4BD1-8AF7-D4714EA7BD94@dsic.upv.es>
Message-ID: 

Jose,
   I've tried lots of combinations but I still get the same error. I think the signatures are all correct. I've attached the routine in case you see something obvious.
If not I will try to make a standalone program that generates the same compile error.

upremas.F:66:72:

   66 |      &                              ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'matmpibaijsetpreallocation' at (1)
upremas.F:69:72:

   69 |      &                              ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'matseqbaijsetpreallocation' at (1)
upremas.F:75:72:

   75 |      &                            ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'matmpiaijsetpreallocation' at (1)
upremas.F:78:72:

   78 |      &                            ierr)
      |                                                                        1
Error: There is no specific subroutine for the generic 'matseqaijsetpreallocation' at (1)

--
Roman via petsc-users escribi?: >>>> >>>> The Fortran interfaces for those functions are generated correctly, see $PETSC_ARCH/ftn/mat/petscmat.h90 >>>> >>>> For instance: >>>> >>>> interface MatMPIBAIJSetPreallocation >>>> subroutine MatMPIBAIJSetPreallocation(a,b,c,d,e,f, z) >>>> import tMat >>>> Mat :: a >>>> PetscInt :: b >>>> PetscInt :: c >>>> PetscInt :: d(*) >>>> PetscInt :: e >>>> PetscInt :: f(*) >>>> PetscErrorCode z >>>> end subroutine >>>> end interface >>>> >>>> The compiler message is probably due to the type of an argument not matching the expected one. In particular, if you are passing NULL in one of the array arguments, you should use PETSC_NULL_INTEGER_ARRAY and not PETSC_NULL_INTEGER. >>>> >>>> Jose >>>> >>>> >>>>> El 23 mar 2025, a las 8:25, Sanjay Govindjee via petsc-users escribi?: >>>>> >>>>> Small update. I managed to eliminate all the errors associated with PetscViewer and below (it had to do with the fact that I had not yet built a module that was needed). The errors related to the preallocation routines still persists. >>>>> -sanjay >>>>> >>>>> On 3/23/25 12:19 AM, Sanjay Govindjee wrote: >>>>>> Hi Barry, >>>>>> I have moved to main and rebuilt the PETSc libraries etc. Right now I am having trouble just getting my source code to compile. Plenty of subroutines with PETSc calls compile but a few are throwing errors and killing my compile. I suspect there will be more but if I can figure these hopefully I can debug the ones that will follow. 
>>>>>> -sanjay
>>>>>> Error: There is no specific subroutine for the generic 'matmpibaijsetpreallocation' at (1)
>>>>>> upremas.F:68:72:
>>>>>>
>>>>>>    68 | & ierr)
>>>>>>       | 1
>>>>>> Error: There is no specific subroutine for the generic 'matseqbaijsetpreallocation' at (1)
>>>>>> upremas.F:74:72:
>>>>>>
>>>>>>    74 | & ierr)
>>>>>>       | 1
>>>>>> Error: There is no specific subroutine for the generic 'matmpiaijsetpreallocation' at (1)
>>>>>> upremas.F:77:72:
>>>>>>
>>>>>>    77 | & ierr)
>>>>>>       | 1
>>>>>> Error: There is no specific subroutine for the generic 'matseqaijsetpreallocation' at (1)
>>>>>>
>>>>>> parkv.F:58:25:
>>>>>>
>>>>>>    58 | PetscViewer Y_view
>>>>>>       | 1
>>>>>> Error: Type name 'tpetscviewer' at (1) is ambiguous
>>>>>> parkv.F:69:9:
>>>>>>
>>>>>>    69 | endif
>>>>>>       | 1
>>>>>> Error: Expecting END SUBROUTINE statement at (1)
>>>>>> parkv.F:72:9:
>>>>>>
>>>>>>    72 | endif
>>>>>>       | 1
>>>>>> Error: Expecting END SUBROUTINE statement at (1)
>>>>>> parkv.F:91:66:
>>>>>>
>>>>>>    91 | call PetscViewerASCIIOpen(PETSC_COMM_WORLD,"yvec.m",Y_view,
>>>>>>       | 1
>>>>>> Error: Symbol 'y_view' at (1) has no IMPLICIT type; did you mean 'yvec'?
>>>>>> parkv.F:65:72:
>>>>>>
>>>>>>    65 | call VecCreate (PETSC_COMM_WORLD, xvec, ierr)
>>>>>>       | 1
>>>>>> Error: There is no specific subroutine for the generic 'veccreate' at (1)
>>>>>> parkv.F:67:72:
>>>>>>
>>>>>>    67 | call VecSetFromOptions(xvec, ierr)
>>>>>>       | 1
>>>>>> Error: There is no specific subroutine for the generic 'vecsetfromoptions' at (1)
>>>>>> parkv.F:68:72:
>>>>>>
>>>>>>    68 | call VecDuplicate (xvec, yvec, ierr)
>>>>>>       | 1
>>>>>> Error: There is no specific subroutine for the generic 'vecduplicate' at (1)
>>>>>> parkv.F:71:72:
>>>>>>
>>>>>>    71 | call VecDuplicate (xvec, yvec, ierr)
>>>>>>       | 1
>>>>>> Error: There is no specific subroutine for the generic 'vecduplicate' at (1)
>>>>>> parkv.F:85:72:
>>>>>>
>>>>>>    85 | call VecAssemblyBegin(xvec, ierr)
>>>>>>       | 1
>>>>>> Error: There is no specific subroutine for the generic 'vecassemblybegin' at (1)
>>>>>> parkv.F:86:72:
>>>>>>
>>>>>>    86 | call VecAssemblyEnd (xvec, ierr)
>>>>>>       | 1
>>>>>> Error: There is no specific subroutine for the generic 'vecassemblyend' at (1)
>>>>>> parkv.F:88:72:
>>>>>>
>>>>>>    88 | call MatMult (Kmat, xvec, yvec, ierr)
>>>>>>       | 1
>>>>>> Error: There is no specific subroutine for the generic 'matmult' at (1)
>>>>>> parkv.F:101:72:
>>>>>>
>>>>>>   101 | call VecGetOwnershipRange(yvec, starti, endi, ierr)
>>>>>>       | 1
>>>>>> Error: There is no specific subroutine for the generic 'vecgetownershiprange' at (1)
>>>>>>
>>>>>> -
>>>>>>
>>>>>> On 3/21/25 7:17 AM, Barry Smith wrote:
>>>>>>> I have just pushed a major update to the Fortran interface to the main PETSc git branch. Could you please try to work with main (to become the release in a couple of weeks) with your Fortran code as we debug the problem? This will save you a lot of work and hopefully make the debugging more straightforward.
>>>>>>>
>>>>>>> You can send the same output with the debugger if it crashes in the main branch and I can try to track down what is going wrong.
>>>>>>>
>>>>>>> Barry
>>>>>>>
>>>>>>>> On Mar 21, 2025, at 12:37 AM, Sanjay Govindjee via petsc-users wrote:
>>>>>>>>
>>>>>>>> I am trying to upgrade my code to PETSc 3.22.4 (the code was last updated to 3.19.4, or perhaps 3.18.1; I've lost track). I've been using this code with PETSc for over 20 years.
>>>>>>>>
>>>>>>>> To get my code to compile and link during this update, I only needed to make two changes: one was to use PetscViewerPushFormat instead of PetscViewerSetFormat, and the other was to use PETSC_NULL_INTEGER_ARRAY in a spot or two.
>>>>>>>>
>>>>>>>> When I run the code, however, I get an error very early on during a call to MatCreate near the beginning of the code. The screen output says:
>>>>>>>> [3]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object
>>>>>>>> [0]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object
>>>>>>>> [1]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object
>>>>>>>> [2]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object
>>>>>>>> I have a 4-processor run going. I am running with -on_error_attach_debugger, but the debugger is giving me cryptic (at least to me) output (the same for all 4 processes modulo the PID).
Stack traces seem to be unavailable :(
>>>>>>>> lldb -p 71963
>>>>>>>> (lldb) process attach --pid 71963
>>>>>>>> Process 71963 stopped
>>>>>>>> * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
>>>>>>>>     frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10
>>>>>>>> libsystem_kernel.dylib`__semwait_signal:
>>>>>>>> ->  0x7fff69d92746 <+10>: jae  0x7fff69d92750 ; <+20>
>>>>>>>>     0x7fff69d92748 <+12>: movq %rax, %rdi
>>>>>>>>     0x7fff69d9274b <+15>: jmp  0x7fff69d9121d ; cerror
>>>>>>>>     0x7fff69d92750 <+20>: retq
>>>>>>>> Target 0: (feap) stopped.
>>>>>>>>
>>>>>>>> Executable module set to "/Users/sg/Feap/ver87/parfeap/feap".
>>>>>>>> Architecture set to: x86_64h-apple-macosx-.
>>>>>>>> Does anyone have any hints as to what may be going on? Note the program starts normally, and I can do stuff with the interactive interface for the code -- even plotting the mesh etc., so I believe the input data has been read in correctly. The crash only occurs when I initiate the formation of the matrix.
>>>>>>>>
>>>>>>>> I am attaching the /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c file in case that offers some insight.
>>>>>>>>
>>>>>>>> Note, I have been
>>>>>>>> -sanjay
>>>>>>>> --
-------------- next part --------------
!$Id:$
      subroutine upremas(fl)

!     * * F E A P * *  A Finite Element Analysis Program

!.... Copyright (c) 1984-2025: Regents of the University of California
!                              All rights reserved

!-----[--.----+----.----+----.-----------------------------------------]
!     Modification log                              Date (dd/mm/year)
!       Original version                                  01/11/2006
!       1. Added petscksp.h                               01/05/2007
!       2. Change 'include/finclude' to 'finclude'        23/01/2009
!       3. Remove common 'pfeapa' (values in 'setups')    05/02/2009
!       4. Update petsc includes to v3.1                  20/07/2010
!       5. Change 'ndf' to 'nbsk' to match usolve.F       05/01/2013
!       6. Update for loss of VecValid, MatValid          05/01/2013
!       7. Update matrix creation calls                   06/01/2013
!       8. finclude -> petsc/finclude                     12/05/2016
!       9. Update for PETSc 3.8.3                         28/02/2018
!      10. Update pfeapc to a module                      28/03/2018
!      11. Remove unused 'chk'                            23/05/2019
!      12. Fixed NULL_INT to DEFAULT_INT for preallocation 04/06/2020
!      13. Update for PETSc 3.22.4                        20/03/2025
!-----[--.----+----.----+----.-----------------------------------------]
!     Purpose: Mass interface for PETSc

!     Inputs:
!        fl(1) - Form Consistent mass if true
!        fl(2) - Form Lumped mass if true

!     Outputs:
!-----[--.----+----.----+----.-----------------------------------------]
#     include <petsc/finclude/petscksp.h>
      use petscksp
      use petscmat ! Newly added to try to fix compile errors
      use pfeapc

      implicit none

#     include "cdata.h"
#     include "comblk.h"
#     include "endata.h"
#     include "iofile.h"
#     include "pointer.h"
#     include "sdata.h"
#     include "setups.h"
#     include "pfeapb.h"

      PetscErrorCode :: ierr
      logical        :: fl(2)

!     Perform setup

      if(fl(1)) then            ! Consistent mass allocate
        if(Mmat.eq.PETSC_NULL_MAT) then
          call MatCreate(PETSC_COMM_WORLD,Mmat,ierr)
          call MatSetSizes(Mmat,numpeq,numpeq,PETSC_DETERMINE,
     &                     PETSC_DETERMINE,ierr)
          if(pfeap_bcin) call MatSetBlockSize(Mmat,nsbk,ierr)
          call MatSetFromOptions(Mmat, ierr)
          if(pfeap_blk) then
            call MatSetType(Mmat,MATBAIJ,ierr)
            call MatMPIBAIJSetPreallocation(Mmat,nsbk,
     &           PETSC_NULL_INTEGER,mr(np(246)),
     &           PETSC_NULL_INTEGER,mr(np(247)),
     &           ierr)
            call MatSeqBAIJSetPreallocation(Mmat,nsbk,
     &           PETSC_DEFAULT_INTEGER,mr(np(246)),
     &           ierr)
          else
            call MatSetType(Mmat,MATAIJ,ierr)
            call MatMPIAIJSetPreallocation(Mmat,
     &           PETSC_NULL_INTEGER,mr(np(246)),
     &           PETSC_NULL_INTEGER,mr(np(247)),
     &           ierr)
            call MatSeqAIJSetPreallocation(Mmat,
     &           PETSC_NULL_INTEGER,mr(np(246)),
     &           ierr)
          endif
        endif
      elseif(fl(2)) then        ! Lumped mass allocate
        if(Mdiag.eq.PETSC_NULL_VEC) then
          call VecCreate        (PETSC_COMM_WORLD, Mdiag, ierr)
          call VecSetSizes      (Mdiag, numpeq, PETSC_DECIDE, ierr)
          call VecSetFromOptions(Mdiag, ierr)
        endif
      elseif(.not.fl(1) .and. .not.fl(2)) then
        write(iow,*) ' ERROR DID NOT ALLOCATE MASS MATRIX'
        write(ilg,*) ' ERROR DID NOT ALLOCATE MASS MATRIX'
        if(rank.eq.0) then
          write(*,*) ' ERROR DID NOT ALLOCATE MASS MATRIX'
        endif
        call plstop(.true.)
      endif

      if(ierr .ne. 0) then
        write(iow,*) 'Error on MatCreate'
        write(ilg,*) 'Error on MatCreate'
        if(rank.eq.0) then
          write( *,*) 'Error on MatCreate'
        endif
        call plstop(.true.)
      endif

      if(fl(1)) then
        call MatZeroEntries (Mmat,ierr)
      else
        call VecZeroEntries (Mdiag,ierr)
      endif

!     Vector for matrix multiply

      if(xvec.eq.PETSC_NULL_VEC) then
        call VecCreate        (PETSC_COMM_WORLD, xvec, ierr)
        call VecSetSizes      (xvec, numpeq, PETSC_DECIDE, ierr)
        call VecSetFromOptions(xvec, ierr)
      endif

      end subroutine upremas

From jroman at dsic.upv.es  Sun Mar 23 03:36:00 2025
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Sun, 23 Mar 2025 08:36:00 +0000
Subject: [petsc-users] upgrading to 3.22.4
In-Reply-To:
References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu> <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es> <54B2D303-689B-4BD1-8AF7-D4714EA7BD94@dsic.upv.es>
Message-ID: <4F8FAC6B-34CC-4FB6-9D68-70B370D5B80C@dsic.upv.es>

What is mr(np(246))? It should be an array, not a single entry of an array. As indicated in the list of changes, in some cases you can use the syntax [z] instead of z to represent an array with the single value z.

> On 23 Mar 2025, at 9:24, Sanjay Govindjee wrote:
>
> Jose,
> I've tried lots of combinations, but I still get the same error. I think the signatures are all correct. I've attached the routine in case you see something obvious. If not, I will try to make a standalone program that generates the same compile error.
> upremas.F:66:72:
>
>    66 | & ierr)
>       | 1
> Error: There is no specific subroutine for the generic 'matmpibaijsetpreallocation' at (1)
> upremas.F:69:72:
>
>    69 | & ierr)
>       | 1
> Error: There is no specific subroutine for the generic 'matseqbaijsetpreallocation' at (1)
> upremas.F:75:72:
>
>    75 | & ierr)
>       | 1
> Error: There is no specific subroutine for the generic 'matmpiaijsetpreallocation' at (1)
> upremas.F:78:72:
>
>    78 | & ierr)
>       | 1
> Error: There is no specific subroutine for the generic 'matseqaijsetpreallocation' at (1)
>
> --
>
> On 3/23/25 1:09 AM, Jose E. Roman wrote:
>> "use petscmat" will use those definitions.
>>
>> As I said, you probably have mismatching arguments. For instance
>>   call MatSeqAIJSetPreallocation(Mmat,
>>  &     PETSC_NULL_INTEGER_ARRAY,mr(np(246)),
>>  &     ierr)
>> The second argument is a PetscInt, so PETSC_NULL_INTEGER_ARRAY is wrong; it should be PETSC_NULL_INTEGER.
>>
>> Now the compiler will help you fix this kind of error, which would have gone unnoticed before.
>>
>> Jose
>>
>>> On 23 Mar 2025, at 9:02, Sanjay Govindjee wrote:
>>>
>>> Jose,
>>> What module should I be using to load petscmat.h90?
>>> -sanjay
>>>
>>> On 3/23/25 12:54 AM, Jose E. Roman wrote:
>>>> Have a look at the list of changes - it is currently at https://petsc.org/main/changes/dev/ until the new version is released. See the last section, "Fortran".
>>>>
>>>> The functions ending in "F90" have been renamed; just remove the "F90" suffix.
>>>>
>>>> Regarding the info-related errors, a workaround is to append %v, for instance
>>>>   mal = info(MAT_INFO_MALLOCS%v)
>>>> But Barry may want to provide a better fix for this.
>>>>
>>>> Jose

From s_g at berkeley.edu  Sun Mar 23 03:43:31 2025
From: s_g at berkeley.edu (Sanjay Govindjee)
Date: Sun, 23 Mar 2025 01:43:31 -0700
Subject: [petsc-users] upgrading to 3.22.4
In-Reply-To: <4F8FAC6B-34CC-4FB6-9D68-70B370D5B80C@dsic.upv.es>
References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu> <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es> <54B2D303-689B-4BD1-8AF7-D4714EA7BD94@dsic.upv.es> <4F8FAC6B-34CC-4FB6-9D68-70B370D5B80C@dsic.upv.es>
Message-ID: <9d1f4c9e-a977-41c8-a149-72f9e1229cbf@berkeley.edu>

Complicated history of a 40+ year old code.
But in short, mr( np(246) ) is the first location of an allocated block of memory.

If need be, I can explicitly set up a pointer to the array, something like

  integer, pointer :: arname( : )
  arname(1:arrlen) => mr(np(246):np(246)+arrlen-1)

I'd rather not do this if I do not need to, but it is also not a big deal if that would take care of the problem.

-------------------------------------------------------------------
Sanjay Govindjee, PhD, PE
Horace, Dorothy, and Katherine Johnson Professor in Engineering
Distinguished Professor of Civil and Environmental Engineering

779 Davis Hall
University of California
Berkeley, CA 94720-1710

Voice: +1 510 642 6060
FAX:   +1 510 643 5264
s_g at berkeley.edu
http://faculty.ce.berkeley.edu/sanjay
-------------------------------------------------------------------

Books:

Introduction to Mechanics of Solid Materials
https://global.oup.com/academic/product/introduction-to-mechanics-of-solid-materials-9780192866080

Continuum Mechanics of Solids
https://global.oup.com/academic/product/continuum-mechanics-of-solids-9780198864721

Example Problems for Continuum Mechanics of Solids
https://www.amazon.com/dp/1083047361/

Engineering Mechanics of Deformable Solids
https://www.amazon.com/dp/0199651647

Engineering Mechanics 3 (Dynamics) 2nd Edition
http://www.amazon.com/dp/3642537111

Engineering Mechanics 3, Supplementary Problems: Dynamics
http://www.amzn.com/B00SOXN8JU
-------------------------------------------------------------------
NSF NHERI SimCenter
https://simcenter.designsafe-ci.org/
-------------------------------------------------------------------

On 3/23/25 1:36 AM, Jose E. Roman wrote:
> What is mr(np(246))? It should be an array, not a single entry of an array. As indicated in the list of changes, in some cases you can use the syntax [z] instead of z to represent an array with the single value z.
URL: From s_g at berkeley.edu Sun Mar 23 04:13:13 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Sun, 23 Mar 2025 02:13:13 -0700 Subject: [petsc-users] upgrading to 3.22.4 In-Reply-To: <9d1f4c9e-a977-41c8-a149-72f9e1229cbf@berkeley.edu> References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu> <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es> <54B2D303-689B-4BD1-8AF7-D4714EA7BD94@dsic.upv.es> <4F8FAC6B-34CC-4FB6-9D68-70B370D5B80C@dsic.upv.es> <9d1f4c9e-a977-41c8-a149-72f9e1229cbf@berkeley.edu> Message-ID: Jose, Using the pointer construction to set up the arrays has resolved the issue with finding the preallocation interfaces. I'll look at the other issues tomorrow. Thanks for the help. -sanjay - On 3/23/25 1:43 AM, Sanjay Govindjee wrote: > Complicated history of a 40+ year old code. But in short > mr( np(246) ) is the first location of an allocated block of memory. > > If need be I can explicitly set up a pointer to the array, something like > > integer, pointer :: arname( : ) > arname(1:arrlen) => mr(np(246):np(246)+arrlen-1) > > I'd rather not do this if I do not need to, but it is also not a big > deal if that would > take care of the problem.
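[Editor's note] The pointer construction that resolved the preallocation-interface errors can be sketched as follows. This is a minimal sketch, not the actual FEAP code: mr, np(246), and arrlen are the names used in this thread, while Mmat and ierr are assumed to be declared elsewhere, and mr is assumed to have the TARGET (or POINTER) attribute, which Fortran requires for bounds-remapping association.

```fortran
! Expose a slice of the legacy workspace array mr as a rank-1 array
! so the new strongly typed Fortran interface sees a real array argument.
      PetscInt, pointer :: nnz(:)
      nnz(1:arrlen) => mr(np(246):np(246)+arrlen-1)
! Per the discussion below, the unused scalar slot takes
! PETSC_NULL_INTEGER while nnz supplies the per-row counts.
      call MatSeqAIJSetPreallocation(Mmat, PETSC_NULL_INTEGER, nnz, ierr)
```

The design point is that the generated interfaces now declare the array arguments as assumed-size arrays, so passing a single element such as mr(np(246)) no longer matches any specific subroutine; the pointer remap provides an actual array without copying the data.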
> ------------------------------------------------------------------- > Sanjay Govindjee, PhD, PE > Horace, Dorothy, and Katherine Johnson Professor in Engineering > Distinguished Professor of Civil and Environmental Engineering > > 779 Davis Hall > University of California > Berkeley, CA 94720-1710 > > Voice: +1 510 642 6060 > FAX: +1 510 643 5264 > s_g at berkeley.edu > https://urldefense.us/v3/__http://faculty.ce.berkeley.edu/sanjay__;!!G_uCfscf7eWS!auRr3LF8xRM8wbr2Uf-r-Iq36Vn_s7l9gpW6BJXP1rEDh7kY37gkOt0XJcsnnJ1HcoYZWhtZ4sgdEX5NiE1aSw$ > ------------------------------------------------------------------- > > Books: > > Introduction to Mechanics of Solid Materials > https://urldefense.us/v3/__https://global.oup.com/academic/product/introduction-to-mechanics-of-solid-materials-9780192866080__;!!G_uCfscf7eWS!auRr3LF8xRM8wbr2Uf-r-Iq36Vn_s7l9gpW6BJXP1rEDh7kY37gkOt0XJcsnnJ1HcoYZWhtZ4sgdEX4sw4h0ig$ > > Continuum Mechanics of Solids > https://urldefense.us/v3/__https://global.oup.com/academic/product/continuum-mechanics-of-solids-9780198864721__;!!G_uCfscf7eWS!auRr3LF8xRM8wbr2Uf-r-Iq36Vn_s7l9gpW6BJXP1rEDh7kY37gkOt0XJcsnnJ1HcoYZWhtZ4sgdEX6w3IQM7Q$ > > Example Problems for Continuum Mechanics of Solids > https://urldefense.us/v3/__https://www.amazon.com/dp/1083047361/__;!!G_uCfscf7eWS!auRr3LF8xRM8wbr2Uf-r-Iq36Vn_s7l9gpW6BJXP1rEDh7kY37gkOt0XJcsnnJ1HcoYZWhtZ4sgdEX6ztVQnmQ$ > > Engineering Mechanics of Deformable Solids > https://urldefense.us/v3/__https://www.amazon.com/dp/0199651647__;!!G_uCfscf7eWS!auRr3LF8xRM8wbr2Uf-r-Iq36Vn_s7l9gpW6BJXP1rEDh7kY37gkOt0XJcsnnJ1HcoYZWhtZ4sgdEX5ACxGINg$ > > Engineering Mechanics 3 (Dynamics) 2nd Edition > https://urldefense.us/v3/__http://www.amazon.com/dp/3642537111__;!!G_uCfscf7eWS!auRr3LF8xRM8wbr2Uf-r-Iq36Vn_s7l9gpW6BJXP1rEDh7kY37gkOt0XJcsnnJ1HcoYZWhtZ4sgdEX6M8_LghQ$ > > Engineering Mechanics 3, Supplementary Problems: Dynamics > 
https://urldefense.us/v3/__http://www.amzn.com/B00SOXN8JU__;!!G_uCfscf7eWS!auRr3LF8xRM8wbr2Uf-r-Iq36Vn_s7l9gpW6BJXP1rEDh7kY37gkOt0XJcsnnJ1HcoYZWhtZ4sgdEX7KMbs2nw$ > > ------------------------------------------------------------------- > NSF NHERI SimCenter > https://urldefense.us/v3/__https://simcenter.designsafe-ci.org/__;!!G_uCfscf7eWS!auRr3LF8xRM8wbr2Uf-r-Iq36Vn_s7l9gpW6BJXP1rEDh7kY37gkOt0XJcsnnJ1HcoYZWhtZ4sgdEX4kwuOQ3Q$ > ------------------------------------------------------------------- > > On 3/23/25 1:36 AM, Jose E. Roman wrote: >> What is mr(np(246))? It should be an array, not a single entry of an array. As indicated in the list of changes, in some cases you can use the syntax [z] instead of z to represent an array of single value z. >> >> >>> El 23 mar 2025, a las 9:24, Sanjay Govindjee escribi?: >>> >>> Jose, >>> I've tried lots of combinations but I still get the same error. I think the signatures are all correct. I've attached the routine in case you see something obvious. If not I will try to make a standalone program that generates the same compile error. >>> upremas.F:66:72: >>> >>> 66 | & ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'matmpibaijsetpreallocation' at (1) >>> upremas.F:69:72: >>> >>> 69 | & ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'matseqbaijsetpreallocation' at (1) >>> upremas.F:75:72: >>> >>> 75 | & ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'matmpiaijsetpreallocation' at (1) >>> upremas.F:78:72: >>> >>> 78 | & ierr) >>> | 1 >>> Error: There is no specific subroutine for the generic 'matseqaijsetpreallocation' at (1) >>> >>> -- >>> >>> >>> On 3/23/25 1:09 AM, Jose E. Roman wrote: >>>> "use petscmat" will use those definitions. >>>> >>>> As I said, you probably have mismatching arguments. 
For instance >>>> call MatSeqAIJSetPreallocation(Mmat, >>>> & PETSC_NULL_INTEGER_ARRAY,mr(np(246)), >>>> & ierr) >>>> The second argument is a PetscInt so PETSC_NULL_INTEGER_ARRAY is wrong, it should be PETSC_NULL_INTEGER. >>>> >>>> Now the compiler will help you fix this kind of errors, which would go unnoticed before. >>>> >>>> Jose >>>> >>>> >>>> >>>> >>>>> El 23 mar 2025, a las 9:02, Sanjay Govindjee escribi?: >>>>> >>>>> Jose, >>>>> What module should I be using to load petscmat.h90? >>>>> -sanjay >>>>> >>>>> -- >>>>> >>>>> On 3/23/25 12:54 AM, Jose E. Roman wrote: >>>>> >>>>>> Have a look at the list of changes - it is currently herehttps://petsc.org/main/changes/dev/ until the new version is released. See the last section "Fortran". >>>>>> >>>>>> The functions ending in "F90" have been renamed, just remove the "F90" suffix. >>>>>> >>>>>> Regarding the info-related errors, a workaround is to append %v, for instance >>>>>> mal = info(MAT_INFO_MALLOCS%v) >>>>>> But Barry may want to provide a better fix for this. >>>>>> >>>>>> Jose >>>>>> >>>>>> >>>>>> >>>>>>> El 23 mar 2025, a las 8:42, Jose E. Roman via petsc-users escribi?: >>>>>>> >>>>>>> The Fortran interfaces for those functions are generated correctly, see $PETSC_ARCH/ftn/mat/petscmat.h90 >>>>>>> >>>>>>> For instance: >>>>>>> >>>>>>> interface MatMPIBAIJSetPreallocation >>>>>>> subroutine MatMPIBAIJSetPreallocation(a,b,c,d,e,f, z) >>>>>>> import tMat >>>>>>> Mat :: a >>>>>>> PetscInt :: b >>>>>>> PetscInt :: c >>>>>>> PetscInt :: d(*) >>>>>>> PetscInt :: e >>>>>>> PetscInt :: f(*) >>>>>>> PetscErrorCode z >>>>>>> end subroutine >>>>>>> end interface >>>>>>> >>>>>>> The compiler message is probably due to the type of an argument not matching the expected one. In particular, if you are passing NULL in one of the array arguments, you should use PETSC_NULL_INTEGER_ARRAY and not PETSC_NULL_INTEGER. 
>>>>>>> >>>>>>> Jose >>>>>>> >>>>>>> >>>>>>> >>>>>>>> On 23 Mar 2025, at 8:25, Sanjay Govindjee via petsc-users wrote: >>>>>>>> >>>>>>>> Small update. I managed to eliminate all the errors associated with PetscViewer and below (it had to do with the fact that I had not yet built a module that was needed). The errors related to the preallocation routines still persist. >>>>>>>> -sanjay >>>>>>>> >>>>>>>> On 3/23/25 12:19 AM, Sanjay Govindjee wrote: >>>>>>>> >>>>>>>>> Hi Barry, >>>>>>>>> I have moved to main and rebuilt the PETSc libraries etc. Right now I am having trouble just getting my source code to compile. Plenty of subroutines with PETSc calls compile, but a few are throwing errors and killing my compile. I suspect there will be more, but if I can figure these out, hopefully I can debug the ones that will follow. >>>>>>>>> -sanjay >>>>>>>>> Error: There is no specific subroutine for the generic 'matmpibaijsetpreallocation' at (1) >>>>>>>>> upremas.F:68:72: >>>>>>>>> >>>>>>>>> 68 | & ierr) >>>>>>>>> | 1 >>>>>>>>> Error: There is no specific subroutine for the generic 'matseqbaijsetpreallocation' at (1) >>>>>>>>> upremas.F:74:72: >>>>>>>>> >>>>>>>>> 74 | & ierr) >>>>>>>>> | 1 >>>>>>>>> Error: There is no specific subroutine for the generic 'matmpiaijsetpreallocation' at (1) >>>>>>>>> upremas.F:77:72: >>>>>>>>> >>>>>>>>> 77 | & ierr) >>>>>>>>> | 1 >>>>>>>>> Error: There is no specific subroutine for the generic 'matseqaijsetpreallocation' at (1) >>>>>>>>> >>>>>>>>> parkv.F:58:25: >>>>>>>>> >>>>>>>>> 58 | PetscViewer Y_view >>>>>>>>> | 1 >>>>>>>>> Error: Type name 'tpetscviewer' at (1) is ambiguous >>>>>>>>> parkv.F:69:9: >>>>>>>>> >>>>>>>>> 69 | endif >>>>>>>>> | 1 >>>>>>>>> Error: Expecting END SUBROUTINE statement at (1) >>>>>>>>> parkv.F:72:9: >>>>>>>>> >>>>>>>>> 72 | endif >>>>>>>>> | 1 >>>>>>>>> Error: Expecting END SUBROUTINE statement at (1) >>>>>>>>> parkv.F:91:66: >>>>>>>>> >>>>>>>>> 91 | call PetscViewerASCIIOpen(PETSC_COMM_WORLD,"yvec.m",Y_view,
>>>>>>>>> | 1 >>>>>>>>> Error: Symbol 'y_view' at (1) has no IMPLICIT type; did you mean 'yvec'? >>>>>>>>> parkv.F:65:72: >>>>>>>>> >>>>>>>>> 65 | call VecCreate (PETSC_COMM_WORLD, xvec, ierr) >>>>>>>>> | 1 >>>>>>>>> Error: There is no specific subroutine for the generic 'veccreate' at (1) >>>>>>>>> parkv.F:67:72: >>>>>>>>> >>>>>>>>> 67 | call VecSetFromOptions(xvec, ierr) >>>>>>>>> | 1 >>>>>>>>> Error: There is no specific subroutine for the generic 'vecsetfromoptions' at (1) >>>>>>>>> parkv.F:68:72: >>>>>>>>> >>>>>>>>> 68 | call VecDuplicate (xvec, yvec, ierr) >>>>>>>>> | 1 >>>>>>>>> Error: There is no specific subroutine for the generic 'vecduplicate' at (1) >>>>>>>>> parkv.F:71:72: >>>>>>>>> >>>>>>>>> 71 | call VecDuplicate (xvec, yvec, ierr) >>>>>>>>> | 1 >>>>>>>>> Error: There is no specific subroutine for the generic 'vecduplicate' at (1) >>>>>>>>> parkv.F:85:72: >>>>>>>>> >>>>>>>>> 85 | call VecAssemblyBegin(xvec, ierr) >>>>>>>>> | 1 >>>>>>>>> Error: There is no specific subroutine for the generic 'vecassemblybegin' at (1) >>>>>>>>> parkv.F:86:72: >>>>>>>>> >>>>>>>>> 86 | call VecAssemblyEnd (xvec, ierr) >>>>>>>>> | 1 >>>>>>>>> Error: There is no specific subroutine for the generic 'vecassemblyend' at (1) >>>>>>>>> parkv.F:88:72: >>>>>>>>> >>>>>>>>> 88 | call MatMult (Kmat, xvec, yvec, ierr) >>>>>>>>> | 1 >>>>>>>>> Error: There is no specific subroutine for the generic 'matmult' at (1) >>>>>>>>> parkv.F:101:72: >>>>>>>>> >>>>>>>>> 101 | call VecGetOwnershipRange(yvec, starti, endi, ierr) >>>>>>>>> | 1 >>>>>>>>> Error: There is no specific subroutine for the generic 'vecgetownershiprange' at (1) >>>>>>>>> >>>>>>>>> >>>>>>>>> - >>>>>>>>> >>>>>>>>> >>>>>>>>> On 3/21/25 7:17 AM, Barry Smith wrote: >>>>>>>>> >>>>>>>>>> I have just pushed a major update to the Fortran interface to the main PETSc git branch. Could you please try to work with main (to become release in a couple of weeks) with your Fortran code as we debug the problem? 
This will save you a lot of work and hopefully make the debugging more straightforward. >>>>>>>>>> >>>>>>>>>> You can send the same output with the debugger if it crashes in the main branch and I can try to track down what is going wrong. >>>>>>>>>> >>>>>>>>>> Barry >>> > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From s_g at berkeley.edu Sun Mar 23 17:22:51 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Sun, 23 Mar 2025 15:22:51 -0700 Subject: [petsc-users] upgrading to 3.22.4 In-Reply-To: References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu> <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es> <54B2D303-689B-4BD1-8AF7-D4714EA7BD94@dsic.upv.es> <4F8FAC6B-34CC-4FB6-9D68-70B370D5B80C@dsic.upv.es> <9d1f4c9e-a977-41c8-a149-72f9e1229cbf@berkeley.edu> Message-ID: <514265b7-1882-4c54-bec6-ebe8068ab90a@berkeley.edu> Barry and Jose, I have gotten my compile issues down to the following:

usolve.F:84:46:

   84 |       MatInfo        info(MAT_INFO_SIZE)
      |                                              1
Error: Symbol 'mat_info_size' at (1) has no IMPLICIT type; did you mean 'mpi_win_size'?
usolve.F:191:39:

  191 |             mal = info(MAT_INFO_MALLOCS)
      |                                       1
Error: Symbol 'mat_info_mallocs' at (1) has no IMPLICIT type; did you mean 'mpi_info_null'?
usolve.F:189:44:

  189 |             nza = info(MAT_INFO_NZ_ALLOCATED)
      |                                            1
Error: Symbol 'mat_info_nz_allocated' at (1) has no IMPLICIT type; did you mean 'mpi_win_flavor_allocate'?
usolve.F:190:39:

  190 |             nzr = info(MAT_INFO_NZ_USED)
      |                                       1
Error: Symbol 'mat_info_nz_used' at (1) has no IMPLICIT type; did you mean 'mpi_info_null'?
usolve.F:188:72:

  188 |             call MatGetInfo(Kmat, MAT_LOCAL, info, ierr)
      | 1
Error: There is no specific subroutine for the generic 'matgetinfo' at (1)
usolve.F:346:15:

It seems that my issues revolve around MatInfo, but I cannot see what I am doing wrong. In terms of variable declarations, here is what I have

      MatInfo        info(MAT_INFO_SIZE)

One follow-up on a fix I made to remove some compile errors; I looked into petscksp.hf90 and found that the return value from KSPGetConvergedReason(kspsol,reason,ierr) can be accessed as reason%v. Is there a recommended way to get the value of the reason? My use of reason%v seems rather unclean. The use case is I want to know if the reason value is .gt.0 or .lt.0 and then given that I want to print a custom string to stdout and to a file unit.

All hints appreciated,
-sanjay
On 3/23/25 2:13 AM, Sanjay Govindjee wrote: > Jose, > Using the pointer construction to set up the arrays has resolved the > issue with finding the preallocation interfaces. > I'll look at the other issues tomorrow. Thanks for the help. > -sanjay > - > > On 3/23/25 1:43 AM, Sanjay Govindjee wrote: >> Complicated history of a 40+ year old code. But in short >> mr( np(246) ) is the first location of an allocated block of memory. >> >> If need be I can explicitly set up a pointer to the array, something like >> >> integer, pointer :: arname( : ) >> arname(1:arrlen) => mr(np(246):np(246)+arrlen-1) >> >> I'd rather not do this if I do not need to, but it is also not a big >> deal if that would >> take care of the problem.
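[Editor's note] The reason%v access asked about above can be used directly for the greater-than/less-than test described in the message. This is a sketch of that use case, not FEAP code: kspsol is assumed to be a valid KSP object, and the %v component accessor is the one reported from petscksp.hf90 in this thread.

```fortran
! Test the converged reason via the %v component of the derived type.
      KSPConvergedReason reason
      PetscErrorCode     ierr
      call KSPGetConvergedReason(kspsol, reason, ierr)
      if (reason%v .gt. 0) then
        write(*,*) 'KSP converged, reason = ', reason%v
      else if (reason%v .lt. 0) then
        write(*,*) 'KSP diverged, reason = ', reason%v
      endif
```

Whether a cleaner accessor than %v is intended for user code is the open question posed here; the sketch simply mirrors what the thread found to compile.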
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From s_g at berkeley.edu Sun Mar 23 19:09:44 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Sun, 23 Mar 2025 17:09:44 -0700 Subject: [petsc-users] upgrading to 3.22.4 In-Reply-To: <514265b7-1882-4c54-bec6-ebe8068ab90a@berkeley.edu> References: <469fd2e4-480f-4f92-b50f-ce50b4d66375@berkeley.edu> <25C4C2FE-0915-4331-A0E2-540FC513BD6C@petsc.dev> <6425951b-2114-45ee-bab3-8989376fa465@berkeley.edu> <77CABA06-B5D5-4950-8871-2BAAA8DE5ACA@dsic.upv.es> <54B2D303-689B-4BD1-8AF7-D4714EA7BD94@dsic.upv.es> <4F8FAC6B-34CC-4FB6-9D68-70B370D5B80C@dsic.upv.es> <9d1f4c9e-a977-41c8-a149-72f9e1229cbf@berkeley.edu> <514265b7-1882-4c54-bec6-ebe8068ab90a@berkeley.edu> Message-ID: Problem resolved.? MatInfo is now a derived type with % access to the components MatInfo info info%nz_allocated??? ! defined in ftn/mat/petscmat.h etc. -sanjay - On 3/23/25 3:22 PM, Sanjay Govindjee wrote: > Barry and Jose, > I have gotten my compile issues down to the following: > > usolve.F:84:46: > > ?? 84 |?????? MatInfo??????? info(MAT_INFO_SIZE) > ????? |????????????????????????????????????????????? 1 > Error: Symbol 'mat_info_size' at (1) has no IMPLICIT type; did you > mean 'mpi_win_size'? > usolve.F:191:39: > > ? 191 |???????????? mal = info(MAT_INFO_MALLOCS) > ????? |?????????????????????????????????????? 1 > Error: Symbol 'mat_info_mallocs' at (1) has no IMPLICIT type; did > you mean 'mpi_info_null'? > usolve.F:189:44: > > ? 189 |???????????? nza = info(MAT_INFO_NZ_ALLOCATED) > ????? |??????????????????????????????????????????? 1 > Error: Symbol 'mat_info_nz_allocated' at (1) has no IMPLICIT type; > did you mean 'mpi_win_flavor_allocate'? > usolve.F:190:39: > > ? 190 |???????????? nzr = info(MAT_INFO_NZ_USED) > ????? |?????????????????????????????????????? 1 > Error: Symbol 'mat_info_nz_used' at (1) has no IMPLICIT type; did > you mean 'mpi_info_null'? > usolve.F:188:72: > > ? 188 |???????????? 
> call MatGetInfo(Kmat, MAT_LOCAL, info, ierr)
>      |                                       1
> Error: There is no specific subroutine for the generic
> 'matgetinfo' at (1)
> usolve.F:346:15:
>
> It seems that my issues revolve around MatInfo, but I cannot see
> what I am doing wrong.
> In terms of variable declarations, here is what I have:
>
>       MatInfo        info(MAT_INFO_SIZE)
>
> One follow-up on a fix I made to remove some compile errors; I looked
> into petscksp.hf90 and found that the return value from
> KSPGetConvergedReason(kspsol,reason,ierr) can be accessed as reason%v.
> Is there a recommended way to get the value of the reason? My use of
> reason%v seems rather unclean. The use case is that I want to know whether the
> reason value is .gt. 0 or .lt. 0, and based on that print a
> custom string to stdout and to a file unit.
>
> All hints appreciated,
> -sanjay
> On 3/23/25 2:13 AM, Sanjay Govindjee wrote:
>> Jose,
>> Using the pointer construction to set up the arrays has resolved the
>> issue with finding the preallocation interfaces.
>> I'll look at the other issues tomorrow. Thanks for the help.
>> -sanjay
>>
>> On 3/23/25 1:43 AM, Sanjay Govindjee wrote:
>>> Complicated history of a 40+ year old code. But in short,
>>> mr( np(246) ) is the first location of an allocated block of memory.
>>>
>>> If need be I can explicitly set up a pointer to the array, something
>>> like
>>>
>>> integer, pointer :: arname( : )
>>> arname(1:arrlen) => mr(np(246):np(246)+arrlen-1)
>>>
>>> I'd rather not do this if I do not need to, but it is also not a big
>>> deal if that would
>>> take care of the problem. 
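[Editorial note: to make the MatInfo resolution from the message above concrete, here is a minimal Fortran sketch of the post-rename access pattern discussed in this thread. Only nz_allocated is confirmed above as a component name; the other components (nz_used, mallocs) are assumptions extrapolated from the C MatInfo struct, so check ftn/mat/petscmat.h in your own build.]

```fortran
! Hedged sketch (assumes PETSc main branch, where MatInfo is a
! derived type rather than an array indexed by MAT_INFO_* constants).
      subroutine report_nz(Kmat, ierr)
#include <petsc/finclude/petscmat.h>
      use petscmat
      implicit none
      Mat            Kmat
      PetscErrorCode ierr
      MatInfo        info          ! no longer info(MAT_INFO_SIZE)
      PetscReal      nza, nzu, mal ! exact kind may differ per build

      call MatGetInfo(Kmat, MAT_LOCAL, info, ierr)
      nza = info%nz_allocated      ! was info(MAT_INFO_NZ_ALLOCATED)
      nzu = info%nz_used           ! assumed; was info(MAT_INFO_NZ_USED)
      mal = info%mallocs           ! assumed; was info(MAT_INFO_MALLOCS)
      print *, 'allocated, used, mallocs:', nza, nzu, mal
      end subroutine
```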
>>> ------------------------------------------------------------------- >>> Sanjay Govindjee, PhD, PE >>> Horace, Dorothy, and Katherine Johnson Professor in Engineering >>> Distinguished Professor of Civil and Environmental Engineering >>> >>> 779 Davis Hall >>> University of California >>> Berkeley, CA 94720-1710 >>> >>> Voice: +1 510 642 6060 >>> FAX: +1 510 643 5264 >>> s_g at berkeley.edu >>> https://urldefense.us/v3/__http://faculty.ce.berkeley.edu/sanjay__;!!G_uCfscf7eWS!aTymgz4gI-DBHEuvwePioD663AOso2SjvR2B8P01fgzMgbRLTd3shO7c_ExVPfE3gKnOc2vFK_uRIUcax9JvLQ$ >>> ------------------------------------------------------------------- >>> >>> Books: >>> >>> Introduction to Mechanics of Solid Materials >>> https://urldefense.us/v3/__https://global.oup.com/academic/product/introduction-to-mechanics-of-solid-materials-9780192866080__;!!G_uCfscf7eWS!aTymgz4gI-DBHEuvwePioD663AOso2SjvR2B8P01fgzMgbRLTd3shO7c_ExVPfE3gKnOc2vFK_uRIUcKonjoQg$ >>> >>> Continuum Mechanics of Solids >>> https://urldefense.us/v3/__https://global.oup.com/academic/product/continuum-mechanics-of-solids-9780198864721__;!!G_uCfscf7eWS!aTymgz4gI-DBHEuvwePioD663AOso2SjvR2B8P01fgzMgbRLTd3shO7c_ExVPfE3gKnOc2vFK_uRIUct5UoOkg$ >>> >>> Example Problems for Continuum Mechanics of Solids >>> https://urldefense.us/v3/__https://www.amazon.com/dp/1083047361/__;!!G_uCfscf7eWS!aTymgz4gI-DBHEuvwePioD663AOso2SjvR2B8P01fgzMgbRLTd3shO7c_ExVPfE3gKnOc2vFK_uRIUe3xl3pBg$ >>> >>> Engineering Mechanics of Deformable Solids >>> https://urldefense.us/v3/__https://www.amazon.com/dp/0199651647__;!!G_uCfscf7eWS!aTymgz4gI-DBHEuvwePioD663AOso2SjvR2B8P01fgzMgbRLTd3shO7c_ExVPfE3gKnOc2vFK_uRIUf2-4nr4A$ >>> >>> Engineering Mechanics 3 (Dynamics) 2nd Edition >>> https://urldefense.us/v3/__http://www.amazon.com/dp/3642537111__;!!G_uCfscf7eWS!aTymgz4gI-DBHEuvwePioD663AOso2SjvR2B8P01fgzMgbRLTd3shO7c_ExVPfE3gKnOc2vFK_uRIUcxGGqdAQ$ >>> >>> Engineering Mechanics 3, Supplementary Problems: Dynamics >>> 
https://urldefense.us/v3/__http://www.amzn.com/B00SOXN8JU__;!!G_uCfscf7eWS!aTymgz4gI-DBHEuvwePioD663AOso2SjvR2B8P01fgzMgbRLTd3shO7c_ExVPfE3gKnOc2vFK_uRIUcL36Wezg$ >>> >>> ------------------------------------------------------------------- >>> NSF NHERI SimCenter >>> https://urldefense.us/v3/__https://simcenter.designsafe-ci.org/__;!!G_uCfscf7eWS!aTymgz4gI-DBHEuvwePioD663AOso2SjvR2B8P01fgzMgbRLTd3shO7c_ExVPfE3gKnOc2vFK_uRIUfyIYFsBg$ >>> ------------------------------------------------------------------- >>> >>> On 3/23/25 1:36 AM, Jose E. Roman wrote: >>>> What is mr(np(246))? It should be an array, not a single entry of an array. As indicated in the list of changes, in some cases you can use the syntax [z] instead of z to represent an array of single value z. >>>> >>>> >>>>> El 23 mar 2025, a las 9:24, Sanjay Govindjee escribi?: >>>>> >>>>> Jose, >>>>> I've tried lots of combinations but I still get the same error. I think the signatures are all correct. I've attached the routine in case you see something obvious. If not I will try to make a standalone program that generates the same compile error. >>>>> upremas.F:66:72: >>>>> >>>>> 66 | & ierr) >>>>> | 1 >>>>> Error: There is no specific subroutine for the generic 'matmpibaijsetpreallocation' at (1) >>>>> upremas.F:69:72: >>>>> >>>>> 69 | & ierr) >>>>> | 1 >>>>> Error: There is no specific subroutine for the generic 'matseqbaijsetpreallocation' at (1) >>>>> upremas.F:75:72: >>>>> >>>>> 75 | & ierr) >>>>> | 1 >>>>> Error: There is no specific subroutine for the generic 'matmpiaijsetpreallocation' at (1) >>>>> upremas.F:78:72: >>>>> >>>>> 78 | & ierr) >>>>> | 1 >>>>> Error: There is no specific subroutine for the generic 'matseqaijsetpreallocation' at (1) >>>>> >>>>> -- >>>>> >>>>> >>>>> On 3/23/25 1:09 AM, Jose E. Roman wrote: >>>>>> "use petscmat" will use those definitions. >>>>>> >>>>>> As I said, you probably have mismatching arguments. 
For instance
>>>>>> call MatSeqAIJSetPreallocation(Mmat,
>>>>>> & PETSC_NULL_INTEGER_ARRAY,mr(np(246)),
>>>>>> & ierr)
>>>>>> The second argument is a PetscInt, so PETSC_NULL_INTEGER_ARRAY is wrong; it should be PETSC_NULL_INTEGER.
>>>>>>
>>>>>> Now the compiler will help you fix this kind of error, which would have gone unnoticed before.
>>>>>>
>>>>>> Jose
>>>>>>
>>>>>>> El 23 mar 2025, a las 9:02, Sanjay Govindjee escribió:
>>>>>>>
>>>>>>> Jose,
>>>>>>> What module should I be using to load petscmat.h90?
>>>>>>> -sanjay
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> On 3/23/25 12:54 AM, Jose E. Roman wrote:
>>>>>>>
>>>>>>>> Have a look at the list of changes - it is currently here https://petsc.org/main/changes/dev/ until the new version is released. See the last section "Fortran".
>>>>>>>>
>>>>>>>> The functions ending in "F90" have been renamed; just remove the "F90" suffix.
>>>>>>>>
>>>>>>>> Regarding the info-related errors, a workaround is to append %v, for instance
>>>>>>>> mal = info(MAT_INFO_MALLOCS%v)
>>>>>>>> But Barry may want to provide a better fix for this.
>>>>>>>>
>>>>>>>> Jose
>>>>>>>>
>>>>>>>>> El 23 mar 2025, a las 8:42, Jose E. Roman via petsc-users escribió:
>>>>>>>>>
>>>>>>>>> The Fortran interfaces for those functions are generated correctly, see $PETSC_ARCH/ftn/mat/petscmat.h90
>>>>>>>>>
>>>>>>>>> For instance:
>>>>>>>>>
>>>>>>>>> interface MatMPIBAIJSetPreallocation
>>>>>>>>> subroutine MatMPIBAIJSetPreallocation(a,b,c,d,e,f, z)
>>>>>>>>> import tMat
>>>>>>>>> Mat :: a
>>>>>>>>> PetscInt :: b
>>>>>>>>> PetscInt :: c
>>>>>>>>> PetscInt :: d(*)
>>>>>>>>> PetscInt :: e
>>>>>>>>> PetscInt :: f(*)
>>>>>>>>> PetscErrorCode z
>>>>>>>>> end subroutine
>>>>>>>>> end interface
>>>>>>>>>
>>>>>>>>> The compiler message is probably due to the type of an argument not matching the expected one. In particular, if you are passing NULL in one of the array arguments, you should use PETSC_NULL_INTEGER_ARRAY and not PETSC_NULL_INTEGER. 
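[Editorial note: a hedged sketch pulling together Jose's points above: the scalar slot takes a PetscInt (or PETSC_NULL_INTEGER), the per-row array slot takes a PetscInt array (or PETSC_NULL_INTEGER_ARRAY), and a contiguous pointer remap is one way to present a slice of a work array as a proper array argument. The names Mmat, mr, np, and numpeq follow the code quoted in this thread; treat the exact call shape as illustrative, not as the poster's final code.]

```fortran
! Sketch only. Matching the generated interface shown above,
!   MatSeqAIJSetPreallocation(A, nz, nnz(*), ierr)
! mixing up the two null constants is what produces
! "There is no specific subroutine for the generic ...".
      PetscInt, pointer :: dnnz(:)

      ! Remap a slice of the work array so an array (not the single
      ! element mr(np(246))) is passed in the nnz slot:
      dnnz(1:numpeq) => mr(np(246):np(246)+numpeq-1)

      ! Scalar slot: PETSC_NULL_INTEGER; array slot: a real array
      ! or PETSC_NULL_INTEGER_ARRAY.
      call MatSeqAIJSetPreallocation(Mmat, PETSC_NULL_INTEGER, dnnz, ierr)
```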
>>>>>>>>> >>>>>>>>> Jose >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> El 23 mar 2025, a las 8:25, Sanjay Govindjee via petsc-users escribi?: >>>>>>>>>> >>>>>>>>>> Small update. I managed to eliminate all the errors associated with PetscViewer and below (it had to do with the fact that I had not yet built a module that was needed). The errors related to the preallocation routines still persists. >>>>>>>>>> -sanjay >>>>>>>>>> >>>>>>>>>> On 3/23/25 12:19 AM, Sanjay Govindjee wrote: >>>>>>>>>> >>>>>>>>>>> Hi Barry, >>>>>>>>>>> I have moved to main and rebuilt the PETSc libraries etc. Right now I am having trouble just getting my source code to compile. Plenty of subroutines with PETSc calls compile but a few are throwing errors and killing my compile. I suspect there will be more but if I can figure these hopefully I can debug the ones that will follow. >>>>>>>>>>> -sanjay >>>>>>>>>>> Error: There is no specific subroutine for the generic 'matmpibaijsetpreallocation' at (1) >>>>>>>>>>> upremas.F:68:72: >>>>>>>>>>> >>>>>>>>>>> 68 | & ierr) >>>>>>>>>>> | 1 >>>>>>>>>>> Error: There is no specific subroutine for the generic 'matseqbaijsetpreallocation' at (1) >>>>>>>>>>> upremas.F:74:72: >>>>>>>>>>> >>>>>>>>>>> 74 | & ierr) >>>>>>>>>>> | 1 >>>>>>>>>>> Error: There is no specific subroutine for the generic 'matmpiaijsetpreallocation' at (1) >>>>>>>>>>> upremas.F:77:72: >>>>>>>>>>> >>>>>>>>>>> 77 | & ierr) >>>>>>>>>>> | 1 >>>>>>>>>>> Error: There is no specific subroutine for the generic 'matseqaijsetpreallocation' at (1) >>>>>>>>>>> >>>>>>>>>>> parkv.F:58:25: >>>>>>>>>>> >>>>>>>>>>> 58 | PetscViewer Y_view >>>>>>>>>>> | 1 >>>>>>>>>>> Error: Type name 'tpetscviewer' at (1) is ambiguous >>>>>>>>>>> parkv.F:69:9: >>>>>>>>>>> >>>>>>>>>>> 69 | endif >>>>>>>>>>> | 1 >>>>>>>>>>> Error: Expecting END SUBROUTINE statement at (1) >>>>>>>>>>> parkv.F:72:9: >>>>>>>>>>> >>>>>>>>>>> 72 | endif >>>>>>>>>>> | 1 >>>>>>>>>>> Error: Expecting END SUBROUTINE statement at (1) >>>>>>>>>>> 
parkv.F:91:66: >>>>>>>>>>> >>>>>>>>>>> 91 | call PetscViewerASCIIOpen(PETSC_COMM_WORLD,"yvec.m",Y_view, >>>>>>>>>>> | 1 >>>>>>>>>>> Error: Symbol 'y_view' at (1) has no IMPLICIT type; did you mean 'yvec'? >>>>>>>>>>> parkv.F:65:72: >>>>>>>>>>> >>>>>>>>>>> 65 | call VecCreate (PETSC_COMM_WORLD, xvec, ierr) >>>>>>>>>>> | 1 >>>>>>>>>>> Error: There is no specific subroutine for the generic 'veccreate' at (1) >>>>>>>>>>> parkv.F:67:72: >>>>>>>>>>> >>>>>>>>>>> 67 | call VecSetFromOptions(xvec, ierr) >>>>>>>>>>> | 1 >>>>>>>>>>> Error: There is no specific subroutine for the generic 'vecsetfromoptions' at (1) >>>>>>>>>>> parkv.F:68:72: >>>>>>>>>>> >>>>>>>>>>> 68 | call VecDuplicate (xvec, yvec, ierr) >>>>>>>>>>> | 1 >>>>>>>>>>> Error: There is no specific subroutine for the generic 'vecduplicate' at (1) >>>>>>>>>>> parkv.F:71:72: >>>>>>>>>>> >>>>>>>>>>> 71 | call VecDuplicate (xvec, yvec, ierr) >>>>>>>>>>> | 1 >>>>>>>>>>> Error: There is no specific subroutine for the generic 'vecduplicate' at (1) >>>>>>>>>>> parkv.F:85:72: >>>>>>>>>>> >>>>>>>>>>> 85 | call VecAssemblyBegin(xvec, ierr) >>>>>>>>>>> | 1 >>>>>>>>>>> Error: There is no specific subroutine for the generic 'vecassemblybegin' at (1) >>>>>>>>>>> parkv.F:86:72: >>>>>>>>>>> >>>>>>>>>>> 86 | call VecAssemblyEnd (xvec, ierr) >>>>>>>>>>> | 1 >>>>>>>>>>> Error: There is no specific subroutine for the generic 'vecassemblyend' at (1) >>>>>>>>>>> parkv.F:88:72: >>>>>>>>>>> >>>>>>>>>>> 88 | call MatMult (Kmat, xvec, yvec, ierr) >>>>>>>>>>> | 1 >>>>>>>>>>> Error: There is no specific subroutine for the generic 'matmult' at (1) >>>>>>>>>>> parkv.F:101:72: >>>>>>>>>>> >>>>>>>>>>> 101 | call VecGetOwnershipRange(yvec, starti, endi, ierr) >>>>>>>>>>> | 1 >>>>>>>>>>> Error: There is no specific subroutine for the generic 'vecgetownershiprange' at (1) >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> - >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On 3/21/25 7:17 AM, Barry Smith wrote: >>>>>>>>>>> >>>>>>>>>>>> I have just pushed a major update to the 
Fortran interface to the main PETSc git branch. Could you please try to work with main (to become release in a couple of weeks) with your Fortran code as we debug the problem? This will save you a lot of work and hopefully make the debugging more straightforward. >>>>>>>>>>>> >>>>>>>>>>>> You can send the same output with the debugger if it crashes in the main branch and I can try to track down what is going wrong. >>>>>>>>>>>> >>>>>>>>>>>> Barry >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> On Mar 21, 2025, at 12:37?AM, Sanjay Govindjee via petsc-users wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> I am trying to upgrade my code to PETSc 3.22.4 (the code was last updated to 3.19.4 or perhaps 3.18.1, I've lost track). I've been using this code with PETSc for over 20 years. >>>>>>>>>>>>> >>>>>>>>>>>>> To get my code to compile and link during this update, I only need to make two changes; one was to use PetscViewerPushFormat instead of PetscViewerSetFormat and the other was to use PETSC_NULL_INTEGER_ARRAY in a spot or two. >>>>>>>>>>>>> >>>>>>>>>>>>> When I run the code however, I am getting an error very early on during a call to MatCreate near the beginning of the code. The screen output says: >>>>>>>>>>>>> [3]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object >>>>>>>>>>>>> [0]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object >>>>>>>>>>>>> [1]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object >>>>>>>>>>>>> [2]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c:101 Cannot create PETSC_NULL_XXX object >>>>>>>>>>>>> I have a 4 processor run going. 
I am running with -on_error_attach_debugger but the debugger is giving me cryptic (at least to me) output (the same for all 4 processes modulo the PID). Stack traces seem to be unavailable :( >>>>>>>>>>>>> lldb -p 71963 >>>>>>>>>>>>> (lldb) process attach --pid 71963 >>>>>>>>>>>>> Process 71963 stopped >>>>>>>>>>>>> * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP >>>>>>>>>>>>> frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10 >>>>>>>>>>>>> libsystem_kernel.dylib`__semwait_signal: >>>>>>>>>>>>> -> 0x7fff69d92746 <+10>: jae 0x7fff69d92750 ; <+20> >>>>>>>>>>>>> 0x7fff69d92748 <+12>: movq %rax, %rdi >>>>>>>>>>>>> 0x7fff69d9274b <+15>: jmp 0x7fff69d9121d ; cerror >>>>>>>>>>>>> 0x7fff69d92750 <+20>: retq >>>>>>>>>>>>> Target 0: (feap) stopped. >>>>>>>>>>>>> >>>>>>>>>>>>> Executable module set to "/Users/sg/Feap/ver87/parfeap/feap". >>>>>>>>>>>>> Architecture set to: x86_64h-apple-macosx-. >>>>>>>>>>>>> Does anyone have any hints as to what may be going on? Note the program starts normally and i can do stuff with the interactive interface for the code -- even plotting the mesh etc. so I believe the input data has been read in correctly. The crash only occurs when I initiate the formation of the matrix. >>>>>>>>>>>>> >>>>>>>>>>>>> I am attaching the /Users/sg/petsc-3.22.4/gnug/src/mat/utils/ftn-auto/gcreatef.c file in case that offers some insight. >>>>>>>>>>>>> >>>>>>>>>>>>> Note, I have been >>>>>>>>>>>>> -sanjay >>>>>>>>>>>>> -- >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Sun Mar 23 19:35:13 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Sun, 23 Mar 2025 17:35:13 -0700 Subject: [petsc-users] VecGetArrayReadF90 and VecRestoreArrayReadF90 not found on link Message-ID: Working off the main branch, I am trying to compile a code that uses VecGetArrayReadF90 and VecRestoreArrayReadF90. 
The subroutines all compile, but at link I am encountering

Undefined symbols for architecture x86_64:
  "_vecgetarrayreadf90_", referenced from:
      _parbmat_ in parbmat.o
      _psprojb_ in psprojb.o
      _psubsp_ in psubsp.o
      _usolve_ in usolve.o
      _paropm_ in libparpack.a(paropm.o)
      _pminvsqr_ in libparpack.a(pminvsqr.o)
      _parkv_ in libparpack.a(parkv.o)
      ...
  "_vecrestorearrayreadf90_", referenced from:
      _parbmat_ in parbmat.o
      _psprojb_ in psprojb.o
      _psubsp_ in psubsp.o
      _usolve_ in usolve.o
      _paropm_ in libparpack.a(paropm.o)
      _pminvsqr_ in libparpack.a(pminvsqr.o)
      _parkv_ in libparpack.a(parkv.o)

Searching the repo I do not see these anywhere (the closest are DMDAVecGetArrayReadF90 and DMDAVecRestoreArrayReadF90). I do see VecGetArrayRead and VecRestoreArrayRead, which will resolve my linking issues, but I recall that these were supposed to be deprecated in favor of the ...ReadF90 versions.

-sanjay
--

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From bsmith at petsc.dev Sun Mar 23 19:59:42 2025 From: bsmith at petsc.dev (Barry Smith) Date: Sun, 23 Mar 2025 20:59:42 -0400 Subject: [petsc-users] VecGetArrayReadF90 and VecRestoreArrayReadF90 not found on link In-Reply-To: References: Message-ID: <534F2871-779E-4E8D-8FF2-C9D8650AB892@petsc.dev> 

With the update to the Fortran bindings, all the XXXF90 routines have been renamed to drop the F90 suffix.

Barry

I tried to provide deprecated versions but found it was too difficult to keep the old-name routines as well.

> On Mar 23, 2025, at 8:35 PM, Sanjay Govindjee via petsc-users wrote:
>
> Working off the main branch, I am trying to compile a code that uses VecGetArrayReadF90 and VecRestoreArrayReadF90. 
> The subroutines all compile but at link, I am encountering
> Undefined symbols for architecture x86_64:
> "_vecgetarrayreadf90_", referenced from:
> _parbmat_ in parbmat.o
> _psprojb_ in psprojb.o
> _psubsp_ in psubsp.o
> _usolve_ in usolve.o
> _paropm_ in libparpack.a(paropm.o)
> _pminvsqr_ in libparpack.a(pminvsqr.o)
> _parkv_ in libparpack.a(parkv.o)
> ...
> "_vecrestorearrayreadf90_", referenced from:
> _parbmat_ in parbmat.o
> _psprojb_ in psprojb.o
> _psubsp_ in psubsp.o
> _usolve_ in usolve.o
> _paropm_ in libparpack.a(paropm.o)
> _pminvsqr_ in libparpack.a(pminvsqr.o)
> _parkv_ in libparpack.a(parkv.o)
> Searching the repo I do not see these anywhere (the closest are DMDAVecGetArrayReadF90 and DMDAVecRestoreArrayReadF90).
> I do see VecGetArrayRead and VecRestoreArrayRead, which will resolve my linking issues, but I recall that these were supposed to be deprecated in favor of the ...ReadF90 versions.
>
> -sanjay
> --

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From balay.anl at fastmail.org Sun Mar 23 21:32:42 2025 From: balay.anl at fastmail.org (Satish Balay) Date: Sun, 23 Mar 2025 21:32:42 -0500 (CDT) Subject: [petsc-users] VecGetArrayReadF90 and VecRestoreArrayReadF90 not found on link In-Reply-To: <534F2871-779E-4E8D-8FF2-C9D8650AB892@petsc.dev> References: <534F2871-779E-4E8D-8FF2-C9D8650AB892@petsc.dev> Message-ID: <0147a36c-9cde-75fd-59f5-4dd648edad79@fastmail.org> 

Perhaps:

#define XXXF90() XXX()

Even though it won't give a deprecation message to the user, it will get the code compiling. [And these routines don't need to go away.]

Satish

On Sun, 23 Mar 2025, Barry Smith wrote:

> With the update for the Fortran binding all the XXXF90 routines have been renamed to not include the F90 suffix.
>
> Barry
>
> I tried to provide deprecated versions but found it was too difficult to keep the old name routines also. 
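[Editorial note: Satish's macro idea can also be applied on the user side while porting. Below is a hedged sketch of a small compatibility header (for preprocessed .F/.F90 sources) that maps a few of the removed F90 names onto the renamed routines. The name pairs shown follow this thread; any others should be checked against the changes list before relying on them.]

```fortran
! compat_f90.h -- illustrative shim, not part of PETSc.
! Include this before code that still uses the old names; the
! preprocessor then rewrites the calls to the renamed interfaces.
#define VecGetArrayReadF90     VecGetArrayRead
#define VecRestoreArrayReadF90 VecRestoreArrayRead
#define VecGetArrayF90         VecGetArray
#define VecRestoreArrayF90     VecRestoreArray
```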
> > > On Mar 23, 2025, at 8:35?PM, Sanjay Govindjee via petsc-users wrote: > > > > Working off the main branch, I am trying to compile a code that uses VecGetArrayReadF90 and VecRestoreArrayReadF90. > > The subroutines all compile but at link, I am encountering > > Undefined symbols for architecture x86_64: > > "_vecgetarrayreadf90_", referenced from: > > _parbmat_ in parbmat.o > > _psprojb_ in psprojb.o > > _psubsp_ in psubsp.o > > _usolve_ in usolve.o > > _paropm_ in libparpack.a(paropm.o) > > _pminvsqr_ in libparpack.a(pminvsqr.o) > > _parkv_ in libparpack.a(parkv.o) > > ... > > "_vecrestorearrayreadf90_", referenced from: > > _parbmat_ in parbmat.o > > _psprojb_ in psprojb.o > > _psubsp_ in psubsp.o > > _usolve_ in usolve.o > > _paropm_ in libparpack.a(paropm.o) > > _pminvsqr_ in libparpack.a(pminvsqr.o) > > _parkv_ in libparpack.a(parkv.o) > > Searching the repo I do not see these any place (the closest is DMDAVecGetArrayReadF90 and DMDAVecRestoreArrayReadF90). > > I do see VecGetArrayRead and VecRestoreArrayRead, which will resolve my linking issues, but I recall that these were supposed to be depricated in favor of the ...ReadF90 version. > > > > -sanjay > > -- > > > > > > From s_g at berkeley.edu Sun Mar 23 22:10:08 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Sun, 23 Mar 2025 20:10:08 -0700 Subject: [petsc-users] GCREATEF.C error Message-ID: Barry, ? I now have a compiled version of my code using the main branch. When I run however I am getting an error in matcreate_( ) when I try to solve (actually just set up the matrix).? 
The console window reports

[0]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create PETSC_NULL_XXX object
[3]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create PETSC_NULL_XXX object
[2]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create PETSC_NULL_XXX object
[1]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create PETSC_NULL_XXX object

The debugger windows all report (modulo the pid):

(lldb) process attach --pid 90952
Process 90952 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
    frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10
libsystem_kernel.dylib`__semwait_signal:
->  0x7fff69d92746 <+10>: jae    0x7fff69d92750            ; <+20>
    0x7fff69d92748 <+12>: movq   %rax, %rdi
    0x7fff69d9274b <+15>: jmp    0x7fff69d9121d            ; cerror
    0x7fff69d92750 <+20>: retq
Target 0: (feap) stopped.

The debugger reports for the stack:

(lldb) thread backtrace
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
  * frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10
    frame #1: 0x00007fff69d15eea libsystem_c.dylib`nanosleep + 196
    frame #2: 0x00007fff69d15d52 libsystem_c.dylib`sleep + 41
    frame #3: 0x0000000111acb04c libpetsc.3.022.dylib`PetscSleep(s=10) at psleep.c:48:5
    frame #4: 0x0000000111722961 libpetsc.3.022.dylib`PetscAttachDebugger at adebug.c:458:5
    frame #5: 0x00000001162ec7c8 libpetsc.3.022.dylib`PetscAttachDebuggerErrorHandler(comm=0x000000011788fde8, line=14, fun="matcreate_", file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", num=62, p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object", ctx=0x0000000000000000) at adebug.c:522:9
    frame #6: 0x00000001162ecdb0 libpetsc.3.022.dylib`PetscError(comm=0x000000011788fde8, line=14, func="matcreate_", file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", n=62, p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object") at err.c:409:15
    frame #7: 0x000000011233ecad libpetsc.3.022.dylib`matcreate_(a=0x00000001165fdc74, b=0x000000010fcde8c8, ierr=0x00007ffee0460308) at gcreatef.c:14:3
    frame #8: 0x000000010f7cf072 feap`usolve_ at usolve.F:138:72
    frame #9: 0x000000010f942de2 feap`presol_ at presol.f:181:72
    frame #10: 0x000000010f8cb8d8 feap`pmacr1_ at pmacr1.f:555:72
    frame #11: 0x000000010f8c60ad feap`pmacr_ at pmacr.f:614:72
    frame #12: 0x000000010f86ae4f feap`pcontr_ at pcontr.f:1375:72
    frame #13: 0x000000010fc1215e feap`main at feap87.f:173:72
    frame #14: 0x00007fff69c4ecc9 libdyld.dylib`start + 1
    frame #15: 0x00007fff69c4ecc9 libdyld.dylib`start + 1

Here is a peek at the frame stack:

frame #3: 0x0000000105c7104c libpetsc.3.022.dylib`PetscSleep(s=10) at psleep.c:48:5
   45
   46      #if defined(PETSC_HAVE_SLEEP)
   47        else
-> 48          sleep((int)s);
   49      #elif defined(PETSC_HAVE__SLEEP) && defined(PETSC_HAVE__SLEEP_MILISEC)
   50        else _sleep((int)(s * 1000));
   51      #elif defined(PETSC_HAVE__SLEEP)
(lldb) up
frame #4: 0x00000001058c8961 libpetsc.3.022.dylib`PetscAttachDebugger at adebug.c:458:5
   455          while (left > 0) left = PetscSleep(left) - 1;
   456        }
   457      #else
-> 458        PetscCall(PetscSleep(sleeptime));
   459      #endif
   460      }
   461    #endif
(lldb) up
frame #5: 0x000000010a4927c8 libpetsc.3.022.dylib`PetscAttachDebuggerErrorHandler(comm=0x000000010c7f1de8, line=14, fun="matcreate_", file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", num=62, p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object", ctx=0x0000000000000000) at adebug.c:522:9
   519      if (fun) (void)(*PetscErrorPrintf)("%s() at %s:%d %s\n", fun, file, line, mess);
   520      else (void)(*PetscErrorPrintf)("%s:%d %s\n", file, line, mess);
   521
-> 522      (void)PetscAttachDebugger();
   523      abort(); /* call abort because don't want to kill other MPI ranks that may successfully attach to debugger */
   524      PetscFunctionReturn(PETSC_SUCCESS);
   525    }
(lldb) up
frame #6: 0x000000010a492db0 libpetsc.3.022.dylib`PetscError(comm=0x000000010c7f1de8, line=14, func="matcreate_", file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", n=62, p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object") at err.c:409:15
   406      if (p == PETSC_ERROR_INITIAL && n != PETSC_ERR_MEMC) (void)PetscMallocValidate(__LINE__, PETSC_FUNCTION_NAME, __FILE__);
   407
   408      if (!eh) ierr = PetscTraceBackErrorHandler(comm, line, func, file, n, p, lbuf, NULL);
-> 409      else ierr = (*eh->handler)(comm, line, func, file, n, p, lbuf, eh->ctx);
   410      PetscStackClearTop;
   411
   412      /*
(lldb) up
frame #7: 0x00000001064e4cad libpetsc.3.022.dylib`matcreate_(a=0x000000010a7a3c74, b=0x0000000104e3e8c8, ierr=0x00007ffeeb300308) at gcreatef.c:14:3
   11      PETSC_EXTERN void matcreate_(MPI_Fint *a, Mat *b, PetscErrorCode *ierr)
   12      {
   13        PetscBool null_b = !*(void**) b ? PETSC_TRUE : PETSC_FALSE;
-> 14        PETSC_FORTRAN_OBJECT_CREATE(b);
   15        CHKFORTRANNULLOBJECT(b);
   16        *ierr = MatCreate(MPI_Comm_f2c(*(a)), b);
   17        if (*ierr) return;
(lldb) up
frame #8: 0x000000010492f072 feap`usolve_ at usolve.F:138:72
   135                onnz => mr(np(246):np(246)+ilist(2,246)-1)
   136                dnnz => mr(np(247):np(247)+ilist(2,247)-1)
   137
-> 138                call MatCreate(PETSC_COMM_WORLD,Kmat,ierr)
   139                call MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE,
   140           &                    PETSC_DETERMINE,ierr)
   141
               if(pfeap_bcin) call MatSetBlockSize(Kmat,nsbk,ierr)

-- 
Sanjay Govindjee, PhD, PE
University of California, Berkeley
s_g at berkeley.edu

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From Ye_Changqing at outlook.com Mon Mar 24 03:33:05 2025 From: Ye_Changqing at outlook.com (Ye Changqing) Date: Mon, 24 Mar 2025 08:33:05 +0000 Subject: [petsc-users] Floating point exception when save complex vector into a hdf5 file Message-ID: 

Dear PETSc developers,

I encountered a strange problem when I tried to save a DMDA vector into an hdf5 file; a floating point error was thrown. I can repeat the problem on the cluster. However, the same code runs fine on my local computer.

Below, the .cxx file is the minimal working example, the .txt is the runtime error obtained from SLURM, and the .py file should show the configure options that I used to build the library.

Any suggestions?

Best,
Changqing

-------------- next part -------------- A non-text attachment was scrubbed... Name: test_hdf5.cxx Type: application/octet-stream Size: 1229 bytes Desc: test_hdf5.cxx URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: error_test_hdf5_1165815.txt URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: reconfigure-linux-oneapi-complex-opt.py Type: application/octet-stream Size: 1160 bytes Desc: reconfigure-linux-oneapi-complex-opt.py URL: 

From Ye_Changqing at outlook.com Mon Mar 24 04:21:52 2025 From: Ye_Changqing at outlook.com (Ye Changqing) Date: Mon, 24 Mar 2025 09:21:52 +0000 Subject: [petsc-users] Re: Floating point exception when save complex vector into a hdf5 file In-Reply-To: References: Message-ID: 

Dear all,

I reconfigured petsc with "--with-debugging=1". It throws more messages, which I attached below.

Best,
Changqing

________________________________________
From: Ye Changqing
Sent: 24 March 2025 16:33
To: petsc-users at mcs.anl.gov
Subject: Floating point exception when save complex vector into a hdf5 file

Dear PETSc developers,

I encountered a strange problem when I tried to save a DMDA vector into an hdf5 file; a floating point error was thrown. I can repeat the problem on the cluster. However, the same code runs fine on my local computer.

Below, the .cxx file is the minimal working example, the .txt is the runtime error obtained from SLURM, and the .py file should show the configure options that I used to build the library.

Any suggestions?

Best,
Changqing

-------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: error_test_hdf5_1166068.txt URL: 

From mfadams at lbl.gov Mon Mar 24 06:14:19 2025 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 24 Mar 2025 07:14:19 -0400 Subject: [petsc-users] Re: Floating point exception when save complex vector into a hdf5 file In-Reply-To: References: Message-ID: 

Just to check: you want to delete the linux-oneapi-complex-opt directory and do a fresh build when you get errors like this, and you might send your configure log.

Mark

On Mon, Mar 24, 2025 at 5:22 AM Ye Changqing wrote:
> Dear all,
>
> I reconfigured petsc with "--with-debugging=1". 
> It throws more messages, which I attached below.
>
> Best,
> Changqing
>
> ________________________________________
> From: Ye Changqing
> Sent: 24 March 2025 16:33
> To: petsc-users at mcs.anl.gov
> Subject: Floating point exception when save complex vector into a hdf5 file
>
> Dear PETSc developers,
>
> I encountered a strange problem when I tried to save a DMDA vector into an
> hdf5 file; a floating point error was thrown. I can repeat the problem on
> the cluster. However, the same code runs fine on my local computer.
>
> Below, the .cxx file is the minimal working example, the .txt is the
> runtime error obtained from SLURM, and the .py file should show the
> configure options that I used to build the library.
>
> Any suggestions?
>
> Best,
> Changqing

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From Ye_Changqing at outlook.com Mon Mar 24 06:57:41 2025 From: Ye_Changqing at outlook.com (Ye Changqing) Date: Mon, 24 Mar 2025 11:57:41 +0000 Subject: [petsc-users] Re: Re: Floating point exception when save complex vector into a hdf5 file In-Reply-To: References: Message-ID: 

Dear Mark,

As suggested, I did a fresh configuration of PETSc. The problem is still there. The attachment is the configure.log for your reference.

Best,
Changqing

________________________________________
From: Mark Adams
Sent: 24 March 2025 19:14
To: Ye Changqing
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] Re: Floating point exception when save complex vector into a hdf5 file

Just to check: you want to delete the linux-oneapi-complex-opt directory and do a fresh build when you get errors like this, and you might send your configure log.

Mark

On Mon, Mar 24, 2025 at 5:22 AM Ye Changqing wrote:
> Dear all,
>
> I reconfigured petsc with "--with-debugging=1". It throws more messages,
> which I attached below.
>
> Best,
> Changqing

-------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 3275403 bytes Desc: configure.log URL: 

From bsmith at petsc.dev Mon Mar 24 09:15:04 2025 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 24 Mar 2025 10:15:04 -0400 Subject: [petsc-users] GCREATEF.C error In-Reply-To: References: Message-ID: 

How do you declare and initialize the matrix in usolve.F before calling MatCreate()? You should not initialize it with any value before the call.

Barry

> On Mar 23, 2025, at 11:10 PM, Sanjay Govindjee via petsc-users wrote:
>
> Barry,
>
> I now have a compiled version of my code using the main branch. When I run, however, I am getting an error in matcreate_() when I try to solve (actually just set up the matrix). 
The console window reports > [0]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create PETSC_NULL_XXX object > [3]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create PETSC_NULL_XXX object > [2]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create PETSC_NULL_XXX object > [1]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create PETSC_NULL_XXX object > > The debugger windows all report (modulo the pid): > (lldb) process attach --pid 90952 > Process 90952 stopped > * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP > frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10 > libsystem_kernel.dylib`__semwait_signal: > -> 0x7fff69d92746 <+10>: jae 0x7fff69d92750 ; <+20> > 0x7fff69d92748 <+12>: movq %rax, %rdi > 0x7fff69d9274b <+15>: jmp 0x7fff69d9121d ; cerror > 0x7fff69d92750 <+20>: retq > Target 0: (feap) stopped. 
> frame > The debugger reports for the stack: > (lldb) thread backtrace > * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP > * frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10 > frame #1: 0x00007fff69d15eea libsystem_c.dylib`nanosleep + 196 > frame #2: 0x00007fff69d15d52 libsystem_c.dylib`sleep + 41 > frame #3: 0x0000000111acb04c libpetsc.3.022.dylib`PetscSleep(s=10) at psleep.c:48:5 > frame #4: 0x0000000111722961 libpetsc.3.022.dylib`PetscAttachDebugger at adebug.c:458:5 > frame #5: 0x00000001162ec7c8 libpetsc.3.022.dylib`PetscAttachDebuggerErrorHandler(comm=0x000000011788fde8, line=14, fun="matcreate_", file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", num=62, p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object", ctx=0x0000000000000000) at adebug.c:522:9 > frame #6: 0x00000001162ecdb0 libpetsc.3.022.dylib`PetscError(comm=0x000000011788fde8, line=14, func="matcreate_", file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", n=62, p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object") at err.c:409:15 > frame #7: 0x000000011233ecad libpetsc.3.022.dylib`matcreate_(a=0x00000001165fdc74, b=0x000000010fcde8c8, ierr=0x00007ffee0460308) at gcreatef.c:14:3 > frame #8: 0x000000010f7cf072 feap`usolve_ at usolve.F:138:72 > frame #9: 0x000000010f942de2 feap`presol_ at presol.f:181:72 > frame #10: 0x000000010f8cb8d8 feap`pmacr1_ at pmacr1.f:555:72 > frame #11: 0x000000010f8c60ad feap`pmacr_ at pmacr.f:614:72 > frame #12: 0x000000010f86ae4f feap`pcontr_ at pcontr.f:1375:72 > frame #13: 0x000000010fc1215e feap`main at feap87.f:173:72 > frame #14: 0x00007fff69c4ecc9 libdyld.dylib`start + 1 > frame #15: 0x00007fff69c4ecc9 libdyld.dylib`start + 1 > > Here is a peek at the frame stack: > frame #3: 0x0000000105c7104c libpetsc.3.022.dylib`PetscSleep(s=10) at psleep.c:48:5 > 45 > 46 #if defined(PETSC_HAVE_SLEEP) > 47 else > -> 48 sleep((int)s); > 49 #elif defined(PETSC_HAVE__SLEEP) && 
defined(PETSC_HAVE__SLEEP_MILISEC) > 50 else _sleep((int)(s * 1000)); > 51 #elif defined(PETSC_HAVE__SLEEP) > (lldb) up > frame #4: 0x00000001058c8961 libpetsc.3.022.dylib`PetscAttachDebugger at adebug.c:458:5 > 455 while (left > 0) left = PetscSleep(left) - 1; > 456 } > 457 #else > -> 458 PetscCall(PetscSleep(sleeptime)); > 459 #endif > 460 } > 461 #endif > (lldb) up > frame #5: 0x000000010a4927c8 libpetsc.3.022.dylib`PetscAttachDebuggerErrorHandler(comm=0x000000010c7f1de8, line=14, fun="matcreate_", file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", num=62, p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object", ctx=0x0000000000000000) at adebug.c:522:9 > 519 if (fun) (void)(*PetscErrorPrintf)("%s() at %s:%d %s\n", fun, file, line, mess); > 520 else (void)(*PetscErrorPrintf)("%s:%d %s\n", file, line, mess); > 521 > -> 522 (void)PetscAttachDebugger(); > 523 abort(); /* call abort because don't want to kill other MPI ranks that may successfully attach to debugger */ > 524 PetscFunctionReturn(PETSC_SUCCESS); > 525 } > (lldb) up > frame #6: 0x000000010a492db0 libpetsc.3.022.dylib`PetscError(comm=0x000000010c7f1de8, line=14, func="matcreate_", file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", n=62, p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object") at err.c:409:15 > 406 if (p == PETSC_ERROR_INITIAL && n != PETSC_ERR_MEMC) (void)PetscMallocValidate(__LINE__, PETSC_FUNCTION_NAME, __FILE__); > 407 > 408 if (!eh) ierr = PetscTraceBackErrorHandler(comm, line, func, file, n, p, lbuf, NULL); > -> 409 else ierr = (*eh->handler)(comm, line, func, file, n, p, lbuf, eh->ctx); > 410 PetscStackClearTop; > 411 > 412 /* > (lldb) up > frame #7: 0x00000001064e4cad libpetsc.3.022.dylib`matcreate_(a=0x000000010a7a3c74, b=0x0000000104e3e8c8, ierr=0x00007ffeeb300308) at gcreatef.c:14:3 > 11 PETSC_EXTERN void matcreate_(MPI_Fint *a, Mat *b, PetscErrorCode *ierr) > 12 { > 13 PetscBool null_b = !*(void**) b ? 
PETSC_TRUE : PETSC_FALSE; > -> 14 PETSC_FORTRAN_OBJECT_CREATE(b); > 15 CHKFORTRANNULLOBJECT(b); > 16 *ierr = MatCreate(MPI_Comm_f2c(*(a)), b); > 17 if (*ierr) return; > (lldb) up > frame #8: 0x000000010492f072 feap`usolve_ at usolve.F:138:72 > 135 onnz => mr(np(246):np(246)+ilist(2,246)-1) > 136 dnnz => mr(np(247):np(247)+ilist(2,247)-1) > 137 > -> 138 call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) > 139 call MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE, > 140 & PETSC_DETERMINE,ierr) > 141 if(pfeap_bcin) call MatSetBlockSize(Kmat,nsbk,ierr) > > -- > ------------------------------------------------------------------- > Sanjay Govindjee, PhD, PE > Horace, Dorothy, and Katherine Johnson Professor in Engineering > Distinguished Professor of Civil and Environmental Engineering > > 779 Davis Hall > University of California > Berkeley, CA 94720-1710 > > Voice: +1 510 642 6060 > FAX: +1 510 643 5264 > s_g at berkeley.edu > https://urldefense.us/v3/__http://faculty.ce.berkeley.edu/sanjay__;!!G_uCfscf7eWS!Ygp6nC3u6DJ00d8ORlci_yzE8Q_Ziwp_uCDcNW_LtYkL4Gt_P14tnFOWCIQRzX9490KkmSlZY6qiMOwz2i2vIUk$ > ------------------------------------------------------------------- > > Books: > > Introduction to Mechanics of Solid Materials > https://urldefense.us/v3/__https://global.oup.com/academic/product/introduction-to-mechanics-of-solid-materials-9780192866080__;!!G_uCfscf7eWS!Ygp6nC3u6DJ00d8ORlci_yzE8Q_Ziwp_uCDcNW_LtYkL4Gt_P14tnFOWCIQRzX9490KkmSlZY6qiMOwzSCkh2Wo$ > > Continuum Mechanics of Solids > https://urldefense.us/v3/__https://global.oup.com/academic/product/continuum-mechanics-of-solids-9780198864721__;!!G_uCfscf7eWS!Ygp6nC3u6DJ00d8ORlci_yzE8Q_Ziwp_uCDcNW_LtYkL4Gt_P14tnFOWCIQRzX9490KkmSlZY6qiMOwzV-T26rE$ > > Example Problems for Continuum Mechanics of Solids > https://urldefense.us/v3/__https://www.amazon.com/dp/1083047361/__;!!G_uCfscf7eWS!Ygp6nC3u6DJ00d8ORlci_yzE8Q_Ziwp_uCDcNW_LtYkL4Gt_P14tnFOWCIQRzX9490KkmSlZY6qiMOwzWGTnSEs$ > > Engineering Mechanics of Deformable Solids > 
https://urldefense.us/v3/__https://www.amazon.com/dp/0199651647__;!!G_uCfscf7eWS!Ygp6nC3u6DJ00d8ORlci_yzE8Q_Ziwp_uCDcNW_LtYkL4Gt_P14tnFOWCIQRzX9490KkmSlZY6qiMOwzhX2M-aM$ > > Engineering Mechanics 3 (Dynamics) 2nd Edition > https://urldefense.us/v3/__http://www.amazon.com/dp/3642537111__;!!G_uCfscf7eWS!Ygp6nC3u6DJ00d8ORlci_yzE8Q_Ziwp_uCDcNW_LtYkL4Gt_P14tnFOWCIQRzX9490KkmSlZY6qiMOwz7SLTbJs$ > > Engineering Mechanics 3, Supplementary Problems: Dynamics > https://urldefense.us/v3/__http://www.amzn.com/B00SOXN8JU__;!!G_uCfscf7eWS!Ygp6nC3u6DJ00d8ORlci_yzE8Q_Ziwp_uCDcNW_LtYkL4Gt_P14tnFOWCIQRzX9490KkmSlZY6qiMOwzXdkwfn0$ > > ------------------------------------------------------------------- > NSF NHERI SimCenter > https://urldefense.us/v3/__https://simcenter.designsafe-ci.org/__;!!G_uCfscf7eWS!Ygp6nC3u6DJ00d8ORlci_yzE8Q_Ziwp_uCDcNW_LtYkL4Gt_P14tnFOWCIQRzX9490KkmSlZY6qiMOwzGcwzVRA$ > ------------------------------------------------------------------- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Mon Mar 24 10:14:47 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Mon, 24 Mar 2025 08:14:47 -0700 Subject: [petsc-users] GCREATEF.C error In-Reply-To: References: Message-ID: Hi Barry, The call sequence happens across several routines. I believe it is as follows: call PetscInitialize(PETSC_NULL_CHARACTER, ierr) call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierr) call MPI_Comm_size(PETSC_COMM_WORLD, ntasks, ierr) Kmat = PETSC_NULL_MAT if(Kmat.eq.PETSC_NULL_MAT) then call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) etc... Kmat itself is declared in a module module pfeapc # include use petscksp implicit none Vec :: rhs, sol, xvec Vec :: yvec, zvec Vec :: Mdiag, Msqrt Mat :: Kmat, Mmat, Pmat KSP :: kspsol end module pfeapc - On Mon, Mar 24, 2025 at 7:15?AM Barry Smith wrote: > > How do you declare and initialize the matrix in usolve.F before > calling MatCreate()? You should not initialize it with any value before the > call. 
> > Barry > > > On Mar 23, 2025, at 11:10?PM, Sanjay Govindjee via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Barry, > > I now have a compiled version of my code using the main branch. When I > run however I am getting an error in matcreate_( ) when I try to solve > (actually just set up the matrix). The console window reports > > [0]PETSC ERROR: matcreate_() at > /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create > PETSC_NULL_XXX object > [3]PETSC ERROR: matcreate_() at > /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create > PETSC_NULL_XXX object > [2]PETSC ERROR: matcreate_() at > /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create > PETSC_NULL_XXX object > [1]PETSC ERROR: matcreate_() at > /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create > PETSC_NULL_XXX object > > The debugger windows all report (modulo the pid): > > (lldb) process attach --pid 90952 > Process 90952 stopped > * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP > frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + > 10 > libsystem_kernel.dylib`__semwait_signal: > -> 0x7fff69d92746 <+10>: jae 0x7fff69d92750 ; <+20> > 0x7fff69d92748 <+12>: movq %rax, %rdi > 0x7fff69d9274b <+15>: jmp 0x7fff69d9121d ; cerror > 0x7fff69d92750 <+20>: retq > Target 0: (feap) stopped. 
> frame > > The debugger reports for the stack: > > (lldb) thread backtrace > * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP > * frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + > 10 > frame #1: 0x00007fff69d15eea libsystem_c.dylib`nanosleep + 196 > frame #2: 0x00007fff69d15d52 libsystem_c.dylib`sleep + 41 > frame #3: 0x0000000111acb04c libpetsc.3.022.dylib`PetscSleep(s=10) at > psleep.c:48:5 > frame #4: 0x0000000111722961 libpetsc.3.022.dylib`PetscAttachDebugger > at adebug.c:458:5 > frame #5: 0x00000001162ec7c8 > libpetsc.3.022.dylib`PetscAttachDebuggerErrorHandler(comm=0x000000011788fde8, > line=14, fun="matcreate_", > file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", num=62, > p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object", > ctx=0x0000000000000000) at adebug.c:522:9 > frame #6: 0x00000001162ecdb0 > libpetsc.3.022.dylib`PetscError(comm=0x000000011788fde8, line=14, > func="matcreate_", > file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", n=62, > p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object") at > err.c:409:15 > frame #7: 0x000000011233ecad > libpetsc.3.022.dylib`matcreate_(a=0x00000001165fdc74, b=0x000000010fcde8c8, > ierr=0x00007ffee0460308) at gcreatef.c:14:3 > frame #8: 0x000000010f7cf072 feap`usolve_ at usolve.F:138:72 > frame #9: 0x000000010f942de2 feap`presol_ at presol.f:181:72 > frame #10: 0x000000010f8cb8d8 feap`pmacr1_ at pmacr1.f:555:72 > frame #11: 0x000000010f8c60ad feap`pmacr_ at pmacr.f:614:72 > frame #12: 0x000000010f86ae4f feap`pcontr_ at pcontr.f:1375:72 > frame #13: 0x000000010fc1215e feap`main at feap87.f:173:72 > frame #14: 0x00007fff69c4ecc9 libdyld.dylib`start + 1 > frame #15: 0x00007fff69c4ecc9 libdyld.dylib`start + 1 > > Here is a peek at the frame stack: > > frame #3: 0x0000000105c7104c libpetsc.3.022.dylib`PetscSleep(s=10) at > psleep.c:48:5 > 45 > 46 #if defined(PETSC_HAVE_SLEEP) > 47 else > -> 48 sleep((int)s); > 49 
#elif defined(PETSC_HAVE__SLEEP) && > defined(PETSC_HAVE__SLEEP_MILISEC) > 50 else _sleep((int)(s * 1000)); > 51 #elif defined(PETSC_HAVE__SLEEP) > (lldb) up > frame #4: 0x00000001058c8961 libpetsc.3.022.dylib`PetscAttachDebugger at > adebug.c:458:5 > 455 while (left > 0) left = PetscSleep(left) - 1; > 456 } > 457 #else > -> 458 PetscCall(PetscSleep(sleeptime)); > 459 #endif > 460 } > 461 #endif > (lldb) up > frame #5: 0x000000010a4927c8 > libpetsc.3.022.dylib`PetscAttachDebuggerErrorHandler(comm=0x000000010c7f1de8, > line=14, fun="matcreate_", > file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", num=62, > p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object", > ctx=0x0000000000000000) at adebug.c:522:9 > 519 if (fun) (void)(*PetscErrorPrintf)("%s() at %s:%d %s\n", fun, > file, line, mess); > 520 else (void)(*PetscErrorPrintf)("%s:%d %s\n", file, line, > mess); > 521 > -> 522 (void)PetscAttachDebugger(); > 523 abort(); /* call abort because don't want to kill other MPI > ranks that may successfully attach to debugger */ > 524 PetscFunctionReturn(PETSC_SUCCESS); > 525 } > (lldb) up > frame #6: 0x000000010a492db0 > libpetsc.3.022.dylib`PetscError(comm=0x000000010c7f1de8, line=14, > func="matcreate_", > file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", n=62, > p=PETSC_ERROR_INITIAL, mess="Cannot create PETSC_NULL_XXX object") at > err.c:409:15 > 406 if (p == PETSC_ERROR_INITIAL && n != PETSC_ERR_MEMC) > (void)PetscMallocValidate(__LINE__, PETSC_FUNCTION_NAME, __FILE__); > 407 > 408 if (!eh) ierr = PetscTraceBackErrorHandler(comm, line, func, > file, n, p, lbuf, NULL); > -> 409 else ierr = (*eh->handler)(comm, line, func, file, n, p, > lbuf, eh->ctx); > 410 PetscStackClearTop; > 411 > 412 /* > (lldb) up > frame #7: 0x00000001064e4cad > libpetsc.3.022.dylib`matcreate_(a=0x000000010a7a3c74, b=0x0000000104e3e8c8, > ierr=0x00007ffeeb300308) at gcreatef.c:14:3 > 11 PETSC_EXTERN void matcreate_(MPI_Fint *a, Mat *b, > PetscErrorCode 
*ierr) > 12 { > 13 PetscBool null_b = !*(void**) b ? PETSC_TRUE : PETSC_FALSE; > -> 14 PETSC_FORTRAN_OBJECT_CREATE(b); > 15 CHKFORTRANNULLOBJECT(b); > 16 *ierr = MatCreate(MPI_Comm_f2c(*(a)), b); > 17 if (*ierr) return; > (lldb) up > frame #8: 0x000000010492f072 feap`usolve_ at usolve.F:138:72 > 135 onnz => mr(np(246):np(246)+ilist(2,246)-1) > 136 dnnz => mr(np(247):np(247)+ilist(2,247)-1) > 137 > -> 138 call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) > 139 call MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE, > 140 & PETSC_DETERMINE,ierr) > 141 if(pfeap_bcin) call MatSetBlockSize(Kmat,nsbk,ierr) > > > -- > ------------------------------------------------------------------- > Sanjay Govindjee, PhD, PE > Horace, Dorothy, and Katherine Johnson Professor in Engineering > Distinguished Professor of Civil and Environmental Engineering > > 779 Davis Hall > University of California > Berkeley, CA 94720-1710 > > Voice: +1 510 642 6060 > FAX: +1 510 643 5264s_g at berkeley.eduhttp://faculty.ce.berkeley.edu/sanjay > ------------------------------------------------------------------- > > Books: > > Introduction to Mechanics of Solid Materialshttps://global.oup.com/academic/product/introduction-to-mechanics-of-solid-materials-9780192866080 > > Continuum Mechanics of Solidshttps://global.oup.com/academic/product/continuum-mechanics-of-solids-9780198864721 > > Example Problems for Continuum Mechanics of Solidshttps://www.amazon.com/dp/1083047361/ > > Engineering Mechanics of Deformable Solidshttps://www.amazon.com/dp/0199651647 > > Engineering Mechanics 3 (Dynamics) 2nd Editionhttp://www.amazon.com/dp/3642537111 > > Engineering Mechanics 3, Supplementary Problems: Dynamics https://urldefense.us/v3/__http://www.amzn.com/B00SOXN8JU__;!!G_uCfscf7eWS!Ze1e1pn5iR1dyxVghSG5nRjhvjR5ODwW_5cnNeDlkxHfiBm7LC3cDe1n-nOt8RCETQzWjV0DIc1Il5Sei2SjZQ$ > > ------------------------------------------------------------------- > NSF NHERI SimCenterhttps://simcenter.designsafe-ci.org/ > 
------------------------------------------------------------------- > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Mon Mar 24 13:26:36 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Mon, 24 Mar 2025 11:26:36 -0700 Subject: [petsc-users] GCREATEF.C error In-Reply-To: References: Message-ID: <05ef1b49-fe92-4f63-a93d-9265c006970f@berkeley.edu> I checked also that this is the first time that this part of the code is entered.? Before the call to MatCreate, all processes are reporting Kmat is 0 (as is PETSC_NULL_MAT).? All have the same PETSC_COMM_WORLD of 0. - On 3/24/25 8:14 AM, Sanjay Govindjee wrote: > Hi Barry, > The call sequence happens across several routines.? I believe it is as > follows: > > ? ? ? call PetscInitialize(PETSC_NULL_CHARACTER, ? ierr) > ? ? ? call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ? ierr) > ? ? ? call MPI_Comm_size(PETSC_COMM_WORLD, ntasks, ierr) > ????? Kmat ?= PETSC_NULL_MAT > > ????? if(Kmat.eq.PETSC_NULL_MAT) then > ???????? call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) > ???????? etc... > > Kmat itself is declared in a module > > ? ? ? module pfeapc > # ? ? include ? > ? ? ? use ? ? ? ? ? ? ? ? ? ? ? petscksp > ? ? ? implicit none > > ? ? ? Vec ? ? ? ? ?:: rhs, sol, xvec > ? ? ? Vec ? ? ? ? ?:: yvec, zvec > ? ? ? Vec ? ? ? ? ?:: Mdiag, Msqrt > ? ? ? Mat ? ? ? ? ?:: Kmat, Mmat, Pmat > ? ? ? KSP ? ? ? ? ?:: kspsol > ? ? ? end module pfeapc > > - > > On Mon, Mar 24, 2025 at 7:15?AM Barry Smith wrote: > > > ? ? How do you declare and initialize the matrix in usolve.F > before calling MatCreate()? You should not initialize it with any > value before the call. > > ? ?Barry > > >> On Mar 23, 2025, at 11:10?PM, Sanjay Govindjee via petsc-users >> wrote: >> >> Barry, >> >> ? I now have a compiled version of my code using the main >> branch.? When I run however I am getting an error in matcreate_( >> ) when I try to solve (actually just set up the matrix).? 
The >> console window reports >> >> [0]PETSC ERROR: matcreate_() at >> /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 >> Cannot create PETSC_NULL_XXX object >> [3]PETSC ERROR: matcreate_() at >> /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 >> Cannot create PETSC_NULL_XXX object >> [2]PETSC ERROR: matcreate_() at >> /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 >> Cannot create PETSC_NULL_XXX object >> [1]PETSC ERROR: matcreate_() at >> /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 >> Cannot create PETSC_NULL_XXX object >> >> The debugger windows all report (modulo the pid): >> >> (lldb) process attach --pid 90952 >> Process 90952 stopped >> * thread #1, queue = 'com.apple.main-thread', stop reason = >> signal SIGSTOP >> ??? frame #0: 0x00007fff69d92746 >> libsystem_kernel.dylib`__semwait_signal + 10 >> libsystem_kernel.dylib`__semwait_signal: >> ->? 0x7fff69d92746 <+10>: jae 0x7fff69d92750??????????? ; <+20> >> ??? 0x7fff69d92748 <+12>: movq?? %rax, %rdi >> ??? 0x7fff69d9274b <+15>: jmp 0x7fff69d9121d??????????? ; cerror >> ??? 0x7fff69d92750 <+20>: retq >> Target 0: (feap) stopped. >> frame >> >> The debugger reports for the stack: >> >> (lldb) thread backtrace >> * thread #1, queue = 'com.apple.main-thread', stop reason = >> signal SIGSTOP >> ? * frame #0: 0x00007fff69d92746 >> libsystem_kernel.dylib`__semwait_signal + 10 >> ??? frame #1: 0x00007fff69d15eea libsystem_c.dylib`nanosleep >> + 196 >> ??? frame #2: 0x00007fff69d15d52 libsystem_c.dylib`sleep + 41 >> ??? frame #3: 0x0000000111acb04c >> libpetsc.3.022.dylib`PetscSleep(s=10) at psleep.c:48:5 >> ??? frame #4: 0x0000000111722961 >> libpetsc.3.022.dylib`PetscAttachDebugger at adebug.c:458:5 >> ??? 
frame #5: 0x00000001162ec7c8 >> libpetsc.3.022.dylib`PetscAttachDebuggerErrorHandler(comm=0x000000011788fde8, >> line=14, fun="matcreate_", >> file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", >> num=62, p=PETSC_ERROR_INITIAL, mess="Cannot create >> PETSC_NULL_XXX object", ctx=0x0000000000000000) at adebug.c:522:9 >> ??? frame #6: 0x00000001162ecdb0 >> libpetsc.3.022.dylib`PetscError(comm=0x000000011788fde8, >> line=14, func="matcreate_", >> file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", >> n=62, p=PETSC_ERROR_INITIAL, mess="Cannot create >> PETSC_NULL_XXX object") at err.c:409:15 >> ??? frame #7: 0x000000011233ecad >> libpetsc.3.022.dylib`matcreate_(a=0x00000001165fdc74, >> b=0x000000010fcde8c8, ierr=0x00007ffee0460308) at gcreatef.c:14:3 >> ??? frame #8: 0x000000010f7cf072 feap`usolve_ at usolve.F:138:72 >> ??? frame #9: 0x000000010f942de2 feap`presol_ at presol.f:181:72 >> ??? frame #10: 0x000000010f8cb8d8 feap`pmacr1_ at pmacr1.f:555:72 >> ??? frame #11: 0x000000010f8c60ad feap`pmacr_ at pmacr.f:614:72 >> ??? frame #12: 0x000000010f86ae4f feap`pcontr_ at >> pcontr.f:1375:72 >> ??? frame #13: 0x000000010fc1215e feap`main at feap87.f:173:72 >> ??? frame #14: 0x00007fff69c4ecc9 libdyld.dylib`start + 1 >> ??? frame #15: 0x00007fff69c4ecc9 libdyld.dylib`start + 1 >> >> Here is a peek at the frame stack: >> >> frame #3: 0x0000000105c7104c >> libpetsc.3.022.dylib`PetscSleep(s=10) at psleep.c:48:5 >> ?? 45 >> ?? 46????? #if defined(PETSC_HAVE_SLEEP) >> ?? 47??????? else >> -> 48????????? sleep((int)s); >> ?? 49????? #elif defined(PETSC_HAVE__SLEEP) && >> defined(PETSC_HAVE__SLEEP_MILISEC) >> ?? 50??????? else _sleep((int)(s * 1000)); >> ?? 51????? #elif defined(PETSC_HAVE__SLEEP) >> (lldb) up >> frame #4: 0x00000001058c8961 >> libpetsc.3.022.dylib`PetscAttachDebugger at adebug.c:458:5 >> ?? 455?????????? while (left > 0) left = PetscSleep(left) - 1; >> ?? 456???????? } >> ?? 457?????? 
#else >> -> 458 PetscCall(PetscSleep(sleeptime)); >> ?? 459?????? #endif >> ?? 460?????? } >> ?? 461???? #endif >> (lldb) up >> frame #5: 0x000000010a4927c8 >> libpetsc.3.022.dylib`PetscAttachDebuggerErrorHandler(comm=0x000000010c7f1de8, >> line=14, fun="matcreate_", >> file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", >> num=62, p=PETSC_ERROR_INITIAL, mess="Cannot create >> PETSC_NULL_XXX object", ctx=0x0000000000000000) at adebug.c:522:9 >> ?? 519?????? if (fun) (void)(*PetscErrorPrintf)("%s() at >> %s:%d %s\n", fun, file, line, mess); >> ?? 520?????? else (void)(*PetscErrorPrintf)("%s:%d %s\n", >> file, line, mess); >> ?? 521 >> -> 522?????? (void)PetscAttachDebugger(); >> ?? 523?????? abort(); /* call abort because don't want to >> kill other MPI ranks that may successfully attach to debugger */ >> ?? 524?????? PetscFunctionReturn(PETSC_SUCCESS); >> ?? 525???? } >> (lldb) up >> frame #6: 0x000000010a492db0 >> libpetsc.3.022.dylib`PetscError(comm=0x000000010c7f1de8, >> line=14, func="matcreate_", >> file="/Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c", >> n=62, p=PETSC_ERROR_INITIAL, mess="Cannot create >> PETSC_NULL_XXX object") at err.c:409:15 >> ?? 406?????? if (p == PETSC_ERROR_INITIAL && n != >> PETSC_ERR_MEMC) (void)PetscMallocValidate(__LINE__, >> PETSC_FUNCTION_NAME, __FILE__); >> ?? 407 >> ?? 408?????? if (!eh) ierr = PetscTraceBackErrorHandler(comm, >> line, func, file, n, p, lbuf, NULL); >> -> 409?????? else ierr = (*eh->handler)(comm, line, func, >> file, n, p, lbuf, eh->ctx); >> ?? 410?????? PetscStackClearTop; >> ?? 411 >> ?? 412?????? /* >> (lldb) up >> frame #7: 0x00000001064e4cad >> libpetsc.3.022.dylib`matcreate_(a=0x000000010a7a3c74, >> b=0x0000000104e3e8c8, ierr=0x00007ffeeb300308) at gcreatef.c:14:3 >> ?? 11????? PETSC_EXTERN void matcreate_(MPI_Fint *a, Mat *b, >> PetscErrorCode *ierr) >> ?? 12????? { >> ?? 13??????? PetscBool null_b = !*(void**) b ? PETSC_TRUE : >> PETSC_FALSE; >> -> 14??????? 
PETSC_FORTRAN_OBJECT_CREATE(b); >> ?? 15??????? CHKFORTRANNULLOBJECT(b); >> ?? 16??????? *ierr = MatCreate(MPI_Comm_f2c(*(a)), b); >> ?? 17??????? if (*ierr) return; >> (lldb) up >> frame #8: 0x000000010492f072 feap`usolve_ at usolve.F:138:72 >> ?? 135?????????????? onnz => mr(np(246):np(246)+ilist(2,246)-1) >> ?? 136?????????????? dnnz => mr(np(247):np(247)+ilist(2,247)-1) >> ?? 137 >> -> 138?????????????? call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) >> ?? 139?????????????? call >> MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE, >> ?? 140????????? & PETSC_DETERMINE,ierr) >> ?? 141?????????????? if(pfeap_bcin) call >> MatSetBlockSize(Kmat,nsbk,ierr) >> >> >> -- >> ------------------------------------------------------------------- >> Sanjay Govindjee, PhD, PE >> Horace, Dorothy, and Katherine Johnson Professor in Engineering >> Distinguished Professor of Civil and Environmental Engineering >> >> 779 Davis Hall >> University of California >> Berkeley, CA 94720-1710 >> >> Voice: +1 510 642 6060 >> FAX: +1 510 643 5264 >> s_g at berkeley.edu >> https://urldefense.us/v3/__http://faculty.ce.berkeley.edu/sanjay__;!!G_uCfscf7eWS!csm5TpwF50O4A3Fzpkjw2ykrUWpvpkp4Y9-ejUpCagaALLqiRXdNvemoAnv93KNM1WTv5sdI-36GWoIL-MAyXg$ >> ------------------------------------------------------------------- >> >> Books: >> >> Introduction to Mechanics of Solid Materials >> https://urldefense.us/v3/__https://global.oup.com/academic/product/introduction-to-mechanics-of-solid-materials-9780192866080__;!!G_uCfscf7eWS!csm5TpwF50O4A3Fzpkjw2ykrUWpvpkp4Y9-ejUpCagaALLqiRXdNvemoAnv93KNM1WTv5sdI-36GWoLbtM0llA$ >> >> Continuum Mechanics of Solids >> https://urldefense.us/v3/__https://global.oup.com/academic/product/continuum-mechanics-of-solids-9780198864721__;!!G_uCfscf7eWS!csm5TpwF50O4A3Fzpkjw2ykrUWpvpkp4Y9-ejUpCagaALLqiRXdNvemoAnv93KNM1WTv5sdI-36GWoKFxHALpw$ >> >> Example Problems for Continuum Mechanics of Solids >> 
https://urldefense.us/v3/__https://www.amazon.com/dp/1083047361/__;!!G_uCfscf7eWS!csm5TpwF50O4A3Fzpkjw2ykrUWpvpkp4Y9-ejUpCagaALLqiRXdNvemoAnv93KNM1WTv5sdI-36GWoI4JfkbnQ$ >> >> Engineering Mechanics of Deformable Solids >> https://urldefense.us/v3/__https://www.amazon.com/dp/0199651647__;!!G_uCfscf7eWS!csm5TpwF50O4A3Fzpkjw2ykrUWpvpkp4Y9-ejUpCagaALLqiRXdNvemoAnv93KNM1WTv5sdI-36GWoLTK5LzXQ$ >> >> Engineering Mechanics 3 (Dynamics) 2nd Edition >> https://urldefense.us/v3/__http://www.amazon.com/dp/3642537111__;!!G_uCfscf7eWS!csm5TpwF50O4A3Fzpkjw2ykrUWpvpkp4Y9-ejUpCagaALLqiRXdNvemoAnv93KNM1WTv5sdI-36GWoLqPKwRcg$ >> >> Engineering Mechanics 3, Supplementary Problems: Dynamics >> https://urldefense.us/v3/__http://www.amzn.com/B00SOXN8JU__;!!G_uCfscf7eWS!csm5TpwF50O4A3Fzpkjw2ykrUWpvpkp4Y9-ejUpCagaALLqiRXdNvemoAnv93KNM1WTv5sdI-36GWoJ-NLmpBQ$ >> >> ------------------------------------------------------------------- >> NSF NHERI SimCenter >> https://urldefense.us/v3/__https://simcenter.designsafe-ci.org/__;!!G_uCfscf7eWS!csm5TpwF50O4A3Fzpkjw2ykrUWpvpkp4Y9-ejUpCagaALLqiRXdNvemoAnv93KNM1WTv5sdI-36GWoJSbEtJGA$ >> ------------------------------------------------------------------- >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Mar 24 13:33:21 2025 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 24 Mar 2025 14:33:21 -0400 Subject: [petsc-users] GCREATEF.C error In-Reply-To: References: Message-ID: <89099373-7D27-4259-8E83-1258853EF6AA@petsc.dev> Ahh, you cannot use this type of construct if (x .eq. PETSC_NULL_XXX) then anymore. Nor can you ever assign x = PETSC_NULL_XXXX Instead use if (PetscObjectIsNull(x)) then or if (.not. PetscObjectIsNull(x)) then Also all PETSc objects are now null when they are declared so just write Mat Kmat if (PetscObjectIsNull(Kmat)) then call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) > etc... 
   Barry
https://urldefense.us/v3/__https://www.amazon.com/dp/1083047361/__;!!G_uCfscf7eWS!ZHDW8jL12Lr8PLPn7bLgyqZVqpP9Rp02LSQkIX0wF3EELz8q2gWCvh_B99J_PthgpNAwtStFMUIyYVmLTJV6Z8Y$ >>> >>> Engineering Mechanics of Deformable Solids >>> https://urldefense.us/v3/__https://www.amazon.com/dp/0199651647__;!!G_uCfscf7eWS!ZHDW8jL12Lr8PLPn7bLgyqZVqpP9Rp02LSQkIX0wF3EELz8q2gWCvh_B99J_PthgpNAwtStFMUIyYVmLNIcZt7U$ >>> >>> Engineering Mechanics 3 (Dynamics) 2nd Edition >>> https://urldefense.us/v3/__http://www.amazon.com/dp/3642537111__;!!G_uCfscf7eWS!ZHDW8jL12Lr8PLPn7bLgyqZVqpP9Rp02LSQkIX0wF3EELz8q2gWCvh_B99J_PthgpNAwtStFMUIyYVmLye1DB-o$ >>> >>> Engineering Mechanics 3, Supplementary Problems: Dynamics >>> https://urldefense.us/v3/__http://www.amzn.com/B00SOXN8JU__;!!G_uCfscf7eWS!ZHDW8jL12Lr8PLPn7bLgyqZVqpP9Rp02LSQkIX0wF3EELz8q2gWCvh_B99J_PthgpNAwtStFMUIyYVmLwzv91ys$ >>> >>> ------------------------------------------------------------------- >>> NSF NHERI SimCenter >>> https://urldefense.us/v3/__https://simcenter.designsafe-ci.org/__;!!G_uCfscf7eWS!ZHDW8jL12Lr8PLPn7bLgyqZVqpP9Rp02LSQkIX0wF3EELz8q2gWCvh_B99J_PthgpNAwtStFMUIyYVmLZs-lDdQ$ >>> ------------------------------------------------------------------- >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Mon Mar 24 13:40:47 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Mon, 24 Mar 2025 11:40:47 -0700 Subject: [petsc-users] GCREATEF.C error In-Reply-To: <89099373-7D27-4259-8E83-1258853EF6AA@petsc.dev> References: <89099373-7D27-4259-8E83-1258853EF6AA@petsc.dev> Message-ID: <885920d8-384c-46c8-b8cf-af570573ec0d@berkeley.edu> Thanks!? I look forward to seeing the Change Note :) Time now to perform some search and destroy through the code! 
-sanjay
On 3/24/25 11:33 AM, Barry Smith wrote:
>    Ahh, you cannot use this type of construct
>
>       if (x .eq. PETSC_NULL_XXX) then
>
>    anymore. Nor can you ever assign x = PETSC_NULL_XXX.
>
>    Instead use
>
>       if (PetscObjectIsNull(x)) then
>
>    or
>
>       if (.not. PetscObjectIsNull(x)) then
>
>    Also, all PETSc objects are now null when they are declared, so just write
>
>       Mat Kmat
>       if (PetscObjectIsNull(Kmat)) then
>          call MatCreate(PETSC_COMM_WORLD,Kmat,ierr)
>          etc...
>
>    Barry
>
>> On Mar 24, 2025, at 11:14 AM, Sanjay Govindjee wrote:
>>
>> Hi Barry,
>> The call sequence happens across several routines. I believe it is as follows:
>>
>>       call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
>>       call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierr)
>>       call MPI_Comm_size(PETSC_COMM_WORLD, ntasks, ierr)
>>       Kmat = PETSC_NULL_MAT
>>
>>       if(Kmat.eq.PETSC_NULL_MAT) then
>>          call MatCreate(PETSC_COMM_WORLD,Kmat,ierr)
>>          etc...
>>
>> Kmat itself is declared in a module
>>
>>       module pfeapc
>> #     include
>>       use petscksp
>>       implicit none
>>
>>       Vec          :: rhs, sol, xvec
>>       Vec          :: yvec, zvec
>>       Vec          :: Mdiag, Msqrt
>>       Mat          :: Kmat, Mmat, Pmat
>>       KSP          :: kspsol
>>       end module pfeapc
>>
>> -
>>
>> On Mon, Mar 24, 2025 at 7:15 AM Barry Smith wrote:
>>
>>    How do you declare and initialize the matrix in usolve.F before calling MatCreate()? You should not initialize it with any value before the call.
>>
>>    Barry
>>
>>> On Mar 23, 2025, at 11:10 PM, Sanjay Govindjee via petsc-users wrote:
>>>
>>> Barry,
>>>
>>> I now have a compiled version of my code using the main branch. When I run, however, I am getting an error in matcreate_() when I try to solve (actually just set up the matrix). The console window reports
>>>
>>> [0]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create PETSC_NULL_XXX object
>>> [3]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create PETSC_NULL_XXX object
>>> [2]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create PETSC_NULL_XXX object
>>> [1]PETSC ERROR: matcreate_() at /Users/sg/petsc-3.22.4main/gnu/ftn/mat/utils/gcreatef.c:14 Cannot create PETSC_NULL_XXX object
>>>
>>> [The quoted lldb session and stack trace repeat the trace shown earlier in this thread.]
>>>
>>> -- 
>>> Sanjay Govindjee
-------------- next part --------------
An HTML attachment was scrubbed...
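Barry's advice above condenses into a short before/after sketch. This is only an illustration of the idiom he describes, not code from the thread: the handle name Kmat and the surrounding statements are placeholders, and it assumes the main-branch Fortran bindings in which declared PETSc objects start out null.

```fortran
! Old pattern (pre main-branch): assigning and comparing against
! PETSC_NULL_MAT. As the error in this thread shows, calling MatCreate
! on a handle previously assigned PETSC_NULL_MAT now fails with
! "Cannot create PETSC_NULL_XXX object":
!       Kmat = PETSC_NULL_MAT
!       if (Kmat .eq. PETSC_NULL_MAT) then
!          call MatCreate(PETSC_COMM_WORLD, Kmat, ierr)
!       endif

! New pattern: never assign or compare against PETSC_NULL_XXX.
! A declared object is already null, so test it with PetscObjectIsNull():
      Mat Kmat
      if (PetscObjectIsNull(Kmat)) then
         call MatCreate(PETSC_COMM_WORLD, Kmat, ierr)
      endif
```

The same test works in the negated form, e.g. `if (.not. PetscObjectIsNull(Kmat))` to guard a MatDestroy before re-creating the matrix.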
URL:

From bsmith at petsc.dev Mon Mar 24 14:18:20 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Mon, 24 Mar 2025 15:18:20 -0400
Subject: [petsc-users] Floating point exception when save complex vector into a hdf5 file
In-Reply-To:
References:
Message-ID:

Please switch to the main git branch of PETSc and try again. Some of the code has been updated to handle the very large arrays you are working with.

   Barry

> On Mar 24, 2025, at 7:57 AM, Ye Changqing wrote:
>
> Dear Mark,
>
> As suggested, I did a fresh configuration on PETSc. The problem is still there. The attachment is the configure.log for your reference.
>
> Best,
> Changqing
>
> ________________________________________
> From: Mark Adams
> Sent: 24 March 2025 19:14
> To: Ye Changqing
> Cc: petsc-users at mcs.anl.gov
> Subject: Re: [petsc-users] Re: Floating point exception when save complex vector into a hdf5 file
>
> Just to check, you want to delete the linux-oneapi-complex-opt directory and do a fresh build when you get errors like this, and you might send your configure log.
> Mark
>
> On Mon, Mar 24, 2025 at 5:22 AM Ye Changqing wrote:
>
> Dear all,
>
> I reconfigured petsc with "--with-debugging=1". It throws more messages, which I attached below.
>
> Best,
> Changqing
>
> ________________________________________
> From: Ye Changqing
> Sent: 24 March 2025 16:33
> To: petsc-users at mcs.anl.gov
> Subject: Floating point exception when save complex vector into a hdf5 file
>
> Dear PETSc developers,
>
> I encountered a strange problem when I tried to save a DMDA vector into an hdf5 file: a floating point error was thrown. I can repeat the problem on the cluster. However, the same code runs fine on my local computer.
>
> Below, the .cxx file is the minimal working example, the .txt is the runtime error obtained from SLURM, and the .py file should tell the configure options that I used to build the library.
>
> Any suggestions?
>
> Best,
> Changqing

From s_g at berkeley.edu Mon Mar 24 17:26:41 2025
From: s_g at berkeley.edu (Sanjay Govindjee)
Date: Mon, 24 Mar 2025 15:26:41 -0700
Subject: [petsc-users] VecSetValue
Message-ID: <54241aff-3748-40d9-936d-1fc423704d8a@berkeley.edu>

My odyssey to update my code to the main branch continues... I am now encountering an error with VecSetValue:

[0]PETSC ERROR: VecSetValues() at /Users/sg/petsc-3.22.4main/src/vec/vec/interface/rvector.c:926 Null Pointer: Parameter # 4

This is on a single-processor run -- I got tired of closing all those debug windows. My call looks like

   call VecSetValue(rhs, k-1, b(j), ADD_VALUES, ierr)

and the first two data types are Vec and integer, and for the third

   real (kind=8) :: b(:) ! comes through the argument list for the subroutine

vecsetvalue_ however seems to be receiving va=0x0000000000000000. I see that formally the value is supposed to be PetscScalar; is that the issue? The thread backtrace is below.

(lldb) process attach --pid 51640
Process 51640 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
    frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10
libsystem_kernel.dylib`__semwait_signal:
->  0x7fff69d92746 <+10>: jae    0x7fff69d92750            ; <+20>
    0x7fff69d92748 <+12>: movq   %rax, %rdi
    0x7fff69d9274b <+15>: jmp    0x7fff69d9121d            ; cerror
    0x7fff69d92750 <+20>: retq
Target 0: (feap) stopped.

Executable module set to "/Users/sg/Feap/ver87/parfeap/feap".
Architecture set to: x86_64h-apple-macosx-.
(lldb) thread backtrace
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
  * frame #0: 0x00007fff69d92746 libsystem_kernel.dylib`__semwait_signal + 10
    frame #1: 0x00007fff69d15eea libsystem_c.dylib`nanosleep + 196
    frame #2: 0x00007fff69d15d52 libsystem_c.dylib`sleep + 41
    frame #3: 0x000000010a1fb04c libpetsc.3.022.dylib`PetscSleep(s=10) at psleep.c:48:5
frame #4: 0x0000000109e52961 libpetsc.3.022.dylib`PetscAttachDebugger at adebug.c:458:5 ??? frame #5: 0x000000010ea1c7c8 libpetsc.3.022.dylib`PetscAttachDebuggerErrorHandler(comm=0x0000000110e78de8, line=926, fun="VecSetValues", file="/Users/sg/petsc-3.22.4main/src/vec/vec/interface/rvector.c", num=85, p=PETSC_ERROR_INITIAL, mess="Null Pointer: Parameter # 4", ctx=0x0000000000000000) at adebug.c:522:9 ??? frame #6: 0x000000010ea1cdb0 libpetsc.3.022.dylib`PetscError(comm=0x0000000110e78de8, line=926, func="VecSetValues", file="/Users/sg/petsc-3.22.4main/src/vec/vec/interface/rvector.c", n=85, p=PETSC_ERROR_INITIAL, mess="Null Pointer: Parameter # %d") at err.c:409:15 ??? frame #7: 0x000000010a866bf2 libpetsc.3.022.dylib`VecSetValues(x=0x00007fdb53068450, ni=1, ix=0x00007ffee6d74098, y=0x0000000000000000, iora=ADD_VALUES) at rvector.c:926:3 ??? frame #8: 0x000000010a82c1dc libpetsc.3.022.dylib`vecsetvalue_(v=0x00000001093ca8f8, i=0x00007ffee6d74098, va=0x0000000000000000, mode=0x00007ffee6d74094, ierr=0x00007ffee6d74238) at zvectorf.c:20:11 ??? frame #9: 0x0000000108ebbb48 feap`usolve_ at usolve.F:256:72 ??? frame #10: 0x000000010904cd9c feap`psolve_ at psolve.f:232:72 ??? frame #11: 0x0000000108fb7edc feap`pmacr1_ at pmacr1.f:667:72 ??? frame #12: 0x0000000108fb20dd feap`pmacr_ at pmacr.f:614:72 ??? frame #13: 0x0000000108f56e7f feap`pcontr_ at pcontr.f:1375:72 ??? frame #14: 0x00000001092fe18e feap`main at feap87.f:173:72 ??? frame #15: 0x00007fff69c4ecc9 libdyld.dylib`start + 1 (lldb) up frame #1: 0x00007fff69d15eea libsystem_c.dylib`nanosleep + 196 libsystem_c.dylib`nanosleep: ->? 0x7fff69d15eea <+196>: testl? %eax, %eax ??? 0x7fff69d15eec <+198>: jns??? 0x7fff69d15eb5 ; <+143> ??? 0x7fff69d15eee <+200>: callq? 0x7fff69d1e008 ; symbol stub for: __error ??? 0x7fff69d15ef3 <+205>: cmpl?? $0x3c, (%rax) (lldb) up frame #2: 0x00007fff69d15d52 libsystem_c.dylib`sleep + 41 libsystem_c.dylib`sleep: ->? 0x7fff69d15d52 <+41>: movl?? %eax, %ecx ??? 
0x7fff69d15d54 <+43>: xorl?? %eax, %eax ??? 0x7fff69d15d56 <+45>: cmpl?? $-0x1, %ecx ??? 0x7fff69d15d59 <+48>: jne??? 0x7fff69d15d85??????????? ; <+92> (lldb) up frame #3: 0x000000010a1fb04c libpetsc.3.022.dylib`PetscSleep(s=10) at psleep.c:48:5 ?? 45 ?? 46????? #if defined(PETSC_HAVE_SLEEP) ?? 47??????? else -> 48????????? sleep((int)s); ?? 49????? #elif defined(PETSC_HAVE__SLEEP) && defined(PETSC_HAVE__SLEEP_MILISEC) ?? 50??????? else _sleep((int)(s * 1000)); ?? 51????? #elif defined(PETSC_HAVE__SLEEP) (lldb) up frame #4: 0x0000000109e52961 libpetsc.3.022.dylib`PetscAttachDebugger at adebug.c:458:5 ?? 455?????????? while (left > 0) left = PetscSleep(left) - 1; ?? 456???????? } ?? 457?????? #else -> 458???????? PetscCall(PetscSleep(sleeptime)); ?? 459?????? #endif ?? 460?????? } ?? 461???? #endif (lldb) up frame #5: 0x000000010ea1c7c8 libpetsc.3.022.dylib`PetscAttachDebuggerErrorHandler(comm=0x0000000110e78de8, line=926, fun="VecSetValues", file="/Users/sg/petsc-3.22.4main/src/vec/vec/interface/rvector.c", num=85, p=PETSC_ERROR_INITIAL, mess="Null Pointer: Parameter # 4", ctx=0x0000000000000000) at adebug.c:522:9 ?? 519?????? if (fun) (void)(*PetscErrorPrintf)("%s() at %s:%d %s\n", fun, file, line, mess); ?? 520?????? else (void)(*PetscErrorPrintf)("%s:%d %s\n", file, line, mess); ?? 521 -> 522?????? (void)PetscAttachDebugger(); ?? 523?????? abort(); /* call abort because don't want to kill other MPI ranks that may successfully attach to debugger */ ?? 524?????? PetscFunctionReturn(PETSC_SUCCESS); ?? 525???? } (lldb) up frame #6: 0x000000010ea1cdb0 libpetsc.3.022.dylib`PetscError(comm=0x0000000110e78de8, line=926, func="VecSetValues", file="/Users/sg/petsc-3.22.4main/src/vec/vec/interface/rvector.c", n=85, p=PETSC_ERROR_INITIAL, mess="Null Pointer: Parameter # %d") at err.c:409:15 ?? 406?????? if (p == PETSC_ERROR_INITIAL && n != PETSC_ERR_MEMC) (void)PetscMallocValidate(__LINE__, PETSC_FUNCTION_NAME, __FILE__); ?? 407 ?? 408?????? 
if (!eh) ierr = PetscTraceBackErrorHandler(comm, line, func, file, n, p, lbuf, NULL); -> 409?????? else ierr = (*eh->handler)(comm, line, func, file, n, p, lbuf, eh->ctx); ?? 410?????? PetscStackClearTop; ?? 411 ?? 412?????? /* (lldb) up frame #7: 0x000000010a866bf2 libpetsc.3.022.dylib`VecSetValues(x=0x00007fdb53068450, ni=1, ix=0x00007ffee6d74098, y=0x0000000000000000, iora=ADD_VALUES) at rvector.c:926:3 ?? 923?????? PetscValidHeaderSpecific(x, VEC_CLASSID, 1); ?? 924?????? if (!ni) PetscFunctionReturn(PETSC_SUCCESS); ?? 925?????? PetscAssertPointer(ix, 3); -> 926?????? PetscAssertPointer(y, 4); ?? 927?????? PetscValidType(x, 1); ?? 928 ?? 929?????? PetscCall(PetscLogEventBegin(VEC_SetValues, x, 0, 0, 0)); (lldb) up frame #8: 0x000000010a82c1dc libpetsc.3.022.dylib`vecsetvalue_(v=0x00000001093ca8f8, i=0x00007ffee6d74098, va=0x0000000000000000, mode=0x00007ffee6d74094, ierr=0x00007ffee6d74238) at zvectorf.c:20:11 ?? 17????? PETSC_EXTERN void vecsetvalue_(Vec *v, PetscInt *i, PetscScalar *va, InsertMode *mode, PetscErrorCode *ierr) ?? 18????? { ?? 19??????? /* cannot use VecSetValue() here since that uses PetscCall() which has a return in it */ -> 20??????? *ierr = VecSetValues(*v, 1, i, va, *mode); ?? 21????? } ?? 22 ?? 23????? PETSC_EXTERN void vecsetvaluelocal_(Vec *v, PetscInt *i, PetscScalar *va, InsertMode *mode, PetscErrorCode *ierr) (lldb) up frame #9: 0x0000000108ebbb48 feap`usolve_ at usolve.F:256:72 ?? 253???????????????? if( k .gt. 0 ) then ?? 254?????????????????? j = j + 1 ?? 255???? !????????????? write(*,*) 'rank: ',rank,' k,j,b(j) ',k,j,b(j) -> 256?????????????????? call VecSetValue(rhs, k-1, b(j), ADD_VALUES, ierr) ?? 257???????????????? endif ?? 258?????????????? 
               end do

-- 
Sanjay Govindjee
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From s_g at berkeley.edu Mon Mar 24 19:03:39 2025
From: s_g at berkeley.edu (Sanjay Govindjee)
Date: Mon, 24 Mar 2025 17:03:39 -0700
Subject: [petsc-users] VecSetValue
In-Reply-To: <54241aff-3748-40d9-936d-1fc423704d8a@berkeley.edu>
References: <54241aff-3748-40d9-936d-1fc423704d8a@berkeley.edu>
Message-ID: <91168bd3-54aa-457c-83d9-905880b2870c@berkeley.edu>

Problem solved. In my haste to clean up the code, I messed up the declaration of my rhs values: b(:) instead of b(*). The conversion to the main branch seems to be complete, and all my test cases now work correctly.

-

On 3/24/25 3:26 PM, Sanjay Govindjee wrote:
> My odyssey to update my code to the main branch continues... I am now
> encountering an error with VecSetValue:
>
> [0]PETSC ERROR: VecSetValues() at
> /Users/sg/petsc-3.22.4main/src/vec/vec/interface/rvector.c:926
> Null Pointer: Parameter # 4
>
> This is on a single-processor run -- I got tired of closing all those
> debug windows. My call looks like
>
>    call VecSetValue(rhs, k-1, b(j), ADD_VALUES, ierr)
>
> and the first two data types are Vec and integer, and for the third
>
>    real (kind=8) :: b(:) ! comes through the argument list for the subroutine
>
> vecsetvalue_ however seems to be receiving va=0x0000000000000000. I see
> that formally the value is supposed to be PetscScalar; is that the
> issue? The thread backtrace is below.
>
> [The quoted lldb backtrace repeats the trace shown in the previous message.]
#endif > (lldb) up > frame #5: 0x000000010ea1c7c8 > libpetsc.3.022.dylib`PetscAttachDebuggerErrorHandler(comm=0x0000000110e78de8, > line=926, fun="VecSetValues", > file="/Users/sg/petsc-3.22.4main/src/vec/vec/interface/rvector.c", > num=85, p=PETSC_ERROR_INITIAL, mess="Null Pointer: Parameter # 4", > ctx=0x0000000000000000) at adebug.c:522:9 > ?? 519?????? if (fun) (void)(*PetscErrorPrintf)("%s() at %s:%d > %s\n", fun, file, line, mess); > ?? 520?????? else (void)(*PetscErrorPrintf)("%s:%d %s\n", file, > line, mess); > ?? 521 > -> 522?????? (void)PetscAttachDebugger(); > ?? 523?????? abort(); /* call abort because don't want to kill > other MPI ranks that may successfully attach to debugger */ > ?? 524?????? PetscFunctionReturn(PETSC_SUCCESS); > ?? 525???? } > (lldb) up > frame #6: 0x000000010ea1cdb0 > libpetsc.3.022.dylib`PetscError(comm=0x0000000110e78de8, line=926, > func="VecSetValues", > file="/Users/sg/petsc-3.22.4main/src/vec/vec/interface/rvector.c", > n=85, p=PETSC_ERROR_INITIAL, mess="Null Pointer: Parameter # %d") > at err.c:409:15 > ?? 406?????? if (p == PETSC_ERROR_INITIAL && n != PETSC_ERR_MEMC) > (void)PetscMallocValidate(__LINE__, PETSC_FUNCTION_NAME, __FILE__); > ?? 407 > ?? 408?????? if (!eh) ierr = PetscTraceBackErrorHandler(comm, > line, func, file, n, p, lbuf, NULL); > -> 409?????? else ierr = (*eh->handler)(comm, line, func, file, n, > p, lbuf, eh->ctx); > ?? 410?????? PetscStackClearTop; > ?? 411 > ?? 412?????? /* > (lldb) up > frame #7: 0x000000010a866bf2 > libpetsc.3.022.dylib`VecSetValues(x=0x00007fdb53068450, ni=1, > ix=0x00007ffee6d74098, y=0x0000000000000000, iora=ADD_VALUES) at > rvector.c:926:3 > ?? 923?????? PetscValidHeaderSpecific(x, VEC_CLASSID, 1); > ?? 924?????? if (!ni) PetscFunctionReturn(PETSC_SUCCESS); > ?? 925?????? PetscAssertPointer(ix, 3); > -> 926?????? PetscAssertPointer(y, 4); > ?? 927?????? PetscValidType(x, 1); > ?? 928 > ?? 929?????? 
PetscCall(PetscLogEventBegin(VEC_SetValues, x, 0, 0, 0)); > (lldb) up > frame #8: 0x000000010a82c1dc > libpetsc.3.022.dylib`vecsetvalue_(v=0x00000001093ca8f8, > i=0x00007ffee6d74098, va=0x0000000000000000, > mode=0x00007ffee6d74094, ierr=0x00007ffee6d74238) at zvectorf.c:20:11 > ?? 17????? PETSC_EXTERN void vecsetvalue_(Vec *v, PetscInt *i, > PetscScalar *va, InsertMode *mode, PetscErrorCode *ierr) > ?? 18????? { > ?? 19??????? /* cannot use VecSetValue() here since that uses > PetscCall() which has a return in it */ > -> 20??????? *ierr = VecSetValues(*v, 1, i, va, *mode); > ?? 21????? } > ?? 22 > ?? 23????? PETSC_EXTERN void vecsetvaluelocal_(Vec *v, PetscInt > *i, PetscScalar *va, InsertMode *mode, PetscErrorCode *ierr) > (lldb) up > frame #9: 0x0000000108ebbb48 feap`usolve_ at usolve.F:256:72 > ?? 253???????????????? if( k .gt. 0 ) then > ?? 254?????????????????? j = j + 1 > ?? 255???? !????????????? write(*,*) 'rank: ',rank,' k,j,b(j) > ',k,j,b(j) > -> 256?????????????????? call VecSetValue(rhs, k-1, b(j), > ADD_VALUES, ierr) > ?? 257???????????????? endif > ?? 258?????????????? 
end do > > -- > ------------------------------------------------------------------- > Sanjay Govindjee, PhD, PE > Horace, Dorothy, and Katherine Johnson Professor in Engineering > Distinguished Professor of Civil and Environmental Engineering > > 779 Davis Hall > University of California > Berkeley, CA 94720-1710 > > Voice: +1 510 642 6060 > FAX: +1 510 643 5264 > s_g at berkeley.edu > https://urldefense.us/v3/__http://faculty.ce.berkeley.edu/sanjay__;!!G_uCfscf7eWS!dYf1SlS-8ja3_n6peHkUMIB5DdSGjm-UvoWPR-vhJMl2ouqSLJD-L1T_ejTPQkZsSeGKD6bYHN6ZfqVNP5YBrA$ > ------------------------------------------------------------------- > > Books: > > Introduction to Mechanics of Solid Materials > https://urldefense.us/v3/__https://global.oup.com/academic/product/introduction-to-mechanics-of-solid-materials-9780192866080__;!!G_uCfscf7eWS!dYf1SlS-8ja3_n6peHkUMIB5DdSGjm-UvoWPR-vhJMl2ouqSLJD-L1T_ejTPQkZsSeGKD6bYHN6ZfqUXpj3hgw$ > > Continuum Mechanics of Solids > https://urldefense.us/v3/__https://global.oup.com/academic/product/continuum-mechanics-of-solids-9780198864721__;!!G_uCfscf7eWS!dYf1SlS-8ja3_n6peHkUMIB5DdSGjm-UvoWPR-vhJMl2ouqSLJD-L1T_ejTPQkZsSeGKD6bYHN6ZfqWpCx2KBg$ > > Example Problems for Continuum Mechanics of Solids > https://urldefense.us/v3/__https://www.amazon.com/dp/1083047361/__;!!G_uCfscf7eWS!dYf1SlS-8ja3_n6peHkUMIB5DdSGjm-UvoWPR-vhJMl2ouqSLJD-L1T_ejTPQkZsSeGKD6bYHN6ZfqX_UomMAA$ > > Engineering Mechanics of Deformable Solids > https://urldefense.us/v3/__https://www.amazon.com/dp/0199651647__;!!G_uCfscf7eWS!dYf1SlS-8ja3_n6peHkUMIB5DdSGjm-UvoWPR-vhJMl2ouqSLJD-L1T_ejTPQkZsSeGKD6bYHN6ZfqVCtbC14A$ > > Engineering Mechanics 3 (Dynamics) 2nd Edition > https://urldefense.us/v3/__http://www.amazon.com/dp/3642537111__;!!G_uCfscf7eWS!dYf1SlS-8ja3_n6peHkUMIB5DdSGjm-UvoWPR-vhJMl2ouqSLJD-L1T_ejTPQkZsSeGKD6bYHN6ZfqW9ylWWbg$ > > Engineering Mechanics 3, Supplementary Problems: Dynamics > 
https://urldefense.us/v3/__http://www.amzn.com/B00SOXN8JU__;!!G_uCfscf7eWS!dYf1SlS-8ja3_n6peHkUMIB5DdSGjm-UvoWPR-vhJMl2ouqSLJD-L1T_ejTPQkZsSeGKD6bYHN6ZfqWpqNSfAg$ > > ------------------------------------------------------------------- > NSF NHERI SimCenter > https://urldefense.us/v3/__https://simcenter.designsafe-ci.org/__;!!G_uCfscf7eWS!dYf1SlS-8ja3_n6peHkUMIB5DdSGjm-UvoWPR-vhJMl2ouqSLJD-L1T_ejTPQkZsSeGKD6bYHN6ZfqVEqpkvTg$ > ------------------------------------------------------------------- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ye_Changqing at outlook.com Tue Mar 25 00:54:51 2025 From: Ye_Changqing at outlook.com (Ye Changqing) Date: Tue, 25 Mar 2025 05:54:51 +0000 Subject: [petsc-users] =?utf-8?b?5Zue5aSNOiAgRmxvYXRpbmcgcG9pbnQgZXhjZXB0?= =?utf-8?q?ion_when_save_complex_vector_into_a_hdf5_file?= In-Reply-To: References: Message-ID: Thanks, Barry. It works! Best, Changqing ________________________________________ ???: Barry Smith ????: 2025?3?25? 3:18 ???: Ye Changqing ??: Mark Adams; petsc-users at mcs.anl.gov ??: Re: [petsc-users] Floating point exception when save complex vector into a hdf5 file Please switch to the main git branch of PETSc and try again. Some of the code has been updated to handle the very large arrays you are working with. Barry > On Mar 24, 2025, at 7:57?AM, Ye Changqing wrote: > > Dear Mark, > > As suggested, I did a fresh configuration on PETSc. The problem is still there. The attachment is the configure.log for your reference. > > Best, > Changqing > > ________________________________________ > ???: Mark Adams > ????: 2025?3?24? 19:14 > ???: Ye Changqing > ??: petsc-users at mcs.anl.gov > ??: Re: [petsc-users] ??: Floating point exception when save complex vector into a hdf5 file > > Just to check, you want to delete the linux-oneapi-complex-opt directory and do a fresh build when you get errors like this and you might send your configure log. 
> Mark > > On Mon, Mar 24, 2025 at 5:22?AM Ye Changqing > wrote: > Dear all, > > I reconfigured petsc with "--with-debugging=1". It throws more messages which I attached below. > > Best, > Changqing > > ________________________________________ > ???: Ye Changqing > > ????: 2025?3?24? 16:33 > ???: petsc-users at mcs.anl.gov > ??: Floating point exception when save complex vector into a hdf5 file > > Dear PETSc developers, > > I encountered a strange problem when I tried to save a DMDA vector into an hdf5 file, a floating point error was thrown. I can repeat the problem on the cluster. However, the same codes run fine on my local computer. > > Below the .cxx file is the minimal working example, the .txt is the runtime error obtained from SLURM, and the .py file should tell the configure options that I used to build the library. > > Any suggestions? > > Best, > Changqing > > > From bsmith at petsc.dev Tue Mar 25 13:39:10 2025 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 25 Mar 2025 14:39:10 -0400 Subject: [petsc-users] Floating point exception when save complex vector into a hdf5 file In-Reply-To: References: Message-ID: Great! Thanks for letting us know, Barry > On Mar 25, 2025, at 1:54?AM, Ye Changqing wrote: > > Thanks, Barry. It works! > > Best, > Changqing > > ________________________________________ > ???: Barry Smith > ????: 2025?3?25? 3:18 > ???: Ye Changqing > ??: Mark Adams; petsc-users at mcs.anl.gov > ??: Re: [petsc-users] Floating point exception when save complex vector into a hdf5 file > > > Please switch to the main git branch of PETSc and try again. Some of the code has been updated to handle the very large arrays you are working with. > > Barry > > > >> On Mar 24, 2025, at 7:57?AM, Ye Changqing wrote: >> >> Dear Mark, >> >> As suggested, I did a fresh configuration on PETSc. The problem is still there. The attachment is the configure.log for your reference. 
>> >> Best, >> Changqing >> >> ________________________________________ >> ???: Mark Adams >> ????: 2025?3?24? 19:14 >> ???: Ye Changqing >> ??: petsc-users at mcs.anl.gov >> ??: Re: [petsc-users] ??: Floating point exception when save complex vector into a hdf5 file >> >> Just to check, you want to delete the linux-oneapi-complex-opt directory and do a fresh build when you get errors like this and you might send your configure log. >> Mark >> >> On Mon, Mar 24, 2025 at 5:22?AM Ye Changqing > wrote: >> Dear all, >> >> I reconfigured petsc with "--with-debugging=1". It throws more messages which I attached below. >> >> Best, >> Changqing >> >> ________________________________________ >> ???: Ye Changqing > >> ????: 2025?3?24? 16:33 >> ???: petsc-users at mcs.anl.gov >> ??: Floating point exception when save complex vector into a hdf5 file >> >> Dear PETSc developers, >> >> I encountered a strange problem when I tried to save a DMDA vector into an hdf5 file, a floating point error was thrown. I can repeat the problem on the cluster. However, the same codes run fine on my local computer. >> >> Below the .cxx file is the minimal working example, the .txt is the runtime error obtained from SLURM, and the .py file should tell the configure options that I used to build the library. >> >> Any suggestions? >> >> Best, >> Changqing >> >> >> > From bsmith at petsc.dev Tue Mar 25 13:39:10 2025 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 25 Mar 2025 14:39:10 -0400 Subject: [petsc-users] Floating point exception when save complex vector into a hdf5 file In-Reply-To: References: Message-ID: Great! Thanks for letting us know, Barry > On Mar 25, 2025, at 1:54?AM, Ye Changqing wrote: > > Thanks, Barry. It works! > > Best, > Changqing > > ________________________________________ > ???: Barry Smith > ????: 2025?3?25? 
3:18 > ???: Ye Changqing > ??: Mark Adams; petsc-users at mcs.anl.gov > ??: Re: [petsc-users] Floating point exception when save complex vector into a hdf5 file > > > Please switch to the main git branch of PETSc and try again. Some of the code has been updated to handle the very large arrays you are working with. > > Barry > > > >> On Mar 24, 2025, at 7:57?AM, Ye Changqing wrote: >> >> Dear Mark, >> >> As suggested, I did a fresh configuration on PETSc. The problem is still there. The attachment is the configure.log for your reference. >> >> Best, >> Changqing >> >> ________________________________________ >> ???: Mark Adams >> ????: 2025?3?24? 19:14 >> ???: Ye Changqing >> ??: petsc-users at mcs.anl.gov >> ??: Re: [petsc-users] ??: Floating point exception when save complex vector into a hdf5 file >> >> Just to check, you want to delete the linux-oneapi-complex-opt directory and do a fresh build when you get errors like this and you might send your configure log. >> Mark >> >> On Mon, Mar 24, 2025 at 5:22?AM Ye Changqing > wrote: >> Dear all, >> >> I reconfigured petsc with "--with-debugging=1". It throws more messages which I attached below. >> >> Best, >> Changqing >> >> ________________________________________ >> ???: Ye Changqing > >> ????: 2025?3?24? 16:33 >> ???: petsc-users at mcs.anl.gov >> ??: Floating point exception when save complex vector into a hdf5 file >> >> Dear PETSc developers, >> >> I encountered a strange problem when I tried to save a DMDA vector into an hdf5 file, a floating point error was thrown. I can repeat the problem on the cluster. However, the same codes run fine on my local computer. >> >> Below the .cxx file is the minimal working example, the .txt is the runtime error obtained from SLURM, and the .py file should tell the configure options that I used to build the library. >> >> Any suggestions? 
>> >> Best, >> Changqing >> >> >> > From bramkamp at nsc.liu.se Wed Mar 26 12:51:12 2025 From: bramkamp at nsc.liu.se (Frank Bramkamp) Date: Wed, 26 Mar 2025 18:51:12 +0100 Subject: [petsc-users] Fortran stubs 3.22.4 Message-ID: <75646141-FA29-4BC1-A709-BB43539C7C6A@nsc.liu.se> Dear PETSc Team, a user of our computer center reported a PETSC issue. He uses petsc version 3.22.4 with gcc compiler The problem arises with MatGetRow and MatRestoreRow. PETSC compiled properly but his Fortran application then cannot find those routines. It seems to work in version 3.22.2. For me it also worked in 3.21.1 calling those routines from fortran. It looks that the fortran stubs might have been broken for those two routines in version 3.22.4 I will also test 3.22.4 now myself to confirm it or I can probably test a fixed version as well. Greetings, Frank Bramkamp -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Mar 26 13:29:04 2025 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 26 Mar 2025 14:29:04 -0400 Subject: [petsc-users] Fortran stubs 3.22.4 In-Reply-To: <75646141-FA29-4BC1-A709-BB43539C7C6A@nsc.liu.se> References: <75646141-FA29-4BC1-A709-BB43539C7C6A@nsc.liu.se> Message-ID: On Wed, Mar 26, 2025 at 1:51?PM Frank Bramkamp wrote: > Dear PETSc Team, > > a user of our computer center reported a PETSC issue. > He uses petsc version 3.22.4 with gcc compiler > > The problem arises with MatGetRow and MatRestoreRow. > PETSC compiled properly but his Fortran application then cannot find those > routines. It seems to work in version 3.22.2. For me it also worked in > 3.21.1 > calling those routines from fortran. > > It looks that the fortran stubs might have been broken for those two > routines in version 3.22.4 > > I will also test 3.22.4 now myself to confirm it or I can probably test a > fixed version as well. > Hi Frank, Yes, we have had that report. 
Barry has now completely rewritten the Fortran bindings, replacing our old binding generator, so that every exposed function should now have a binding. This is a large change, but should make everything much more maintainable. We plan to release 3.23 in a week or two, and would advise every Fortran user to upgrade, since this will be the standard going forward. Does that timeline work for you? Thanks, Matt > Greetings, Frank Bramkamp > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YWfr-xc2NrmK8QCbpIUPpfzLuBVFIjQ6FDjpm9Iqv6mev-Y8WYiVtcGcK87_yJyOo3UR-RCvj05E0oOUsNmx$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Wed Mar 26 13:40:33 2025 From: balay.anl at fastmail.org (Satish Balay) Date: Wed, 26 Mar 2025 13:40:33 -0500 (CDT) Subject: [petsc-users] Fortran stubs 3.22.4 In-Reply-To: <75646141-FA29-4BC1-A709-BB43539C7C6A@nsc.liu.se> References: <75646141-FA29-4BC1-A709-BB43539C7C6A@nsc.liu.se> Message-ID: <5ddafcf9-e920-d752-091e-fcde0e9e93b8@fastmail.org> Hm - works in 3.22.2 but not 3.22.4? That is strange. balay at p1 /home/balay/tmp $ nm -Ao petsc-3.22.*/arch*/lib/libpetsc.so |grep matgetrow_ petsc-3.22.2/arch-linux-c-debug/lib/libpetsc.so:0000000001b44665 T matgetrow_ petsc-3.22.4/arch-linux-c-debug/lib/libpetsc.so:0000000001b46f9f T matgetrow_ balay at p1 /home/balay/tmp $ nm -Ao petsc-3.22.*/arch*/lib/libpetsc.so |grep matrestorerow_ petsc-3.22.2/arch-linux-c-debug/lib/libpetsc.so:0000000001b44a2d T matrestorerow_ petsc-3.22.4/arch-linux-c-debug/lib/libpetsc.so:0000000001b47367 T matrestorerow_ balay at p1 /home/balay/tmp I see the symbols in my builds of both the versions. 
Suggest rechecking build of 3.22.4 [or redo this build] to make sure its identical to the working 3.22.2 build Satish On Wed, 26 Mar 2025, Frank Bramkamp wrote: > Dear PETSc Team, > > > a user of our computer center reported a PETSC issue. > He uses petsc version 3.22.4 with gcc compiler > > The problem arises with MatGetRow and MatRestoreRow. > PETSC compiled properly but his Fortran application then cannot find those > routines. It seems to work in version 3.22.2. For me it also worked in 3.21.1 > calling those routines from fortran. > > It looks that the fortran stubs might have been broken for those two routines in version 3.22.4 > > I will also test 3.22.4 now myself to confirm it or I can probably test a fixed version as well. > > > Greetings, Frank Bramkamp > > > > > From bsmith at petsc.dev Wed Mar 26 15:17:33 2025 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 26 Mar 2025 16:17:33 -0400 Subject: [petsc-users] Fortran stubs 3.22.4 In-Reply-To: <5ddafcf9-e920-d752-091e-fcde0e9e93b8@fastmail.org> References: <75646141-FA29-4BC1-A709-BB43539C7C6A@nsc.liu.se> <5ddafcf9-e920-d752-091e-fcde0e9e93b8@fastmail.org> Message-ID: <2B66BD79-FBB0-4FBE-8BDC-4B893C94111E@petsc.dev> My changes do not affect the 3.22.* series at all. Your user likely used the main branch for which this problem would have occurred, but should now be fixed and will be fine in the 2.23.* series > On Mar 26, 2025, at 2:40?PM, Satish Balay wrote: > > Hm - works in 3.22.2 but not 3.22.4? That is strange. 
> > balay at p1 /home/balay/tmp > $ nm -Ao petsc-3.22.*/arch*/lib/libpetsc.so |grep matgetrow_ > petsc-3.22.2/arch-linux-c-debug/lib/libpetsc.so:0000000001b44665 T matgetrow_ > petsc-3.22.4/arch-linux-c-debug/lib/libpetsc.so:0000000001b46f9f T matgetrow_ > balay at p1 /home/balay/tmp > $ nm -Ao petsc-3.22.*/arch*/lib/libpetsc.so |grep matrestorerow_ > petsc-3.22.2/arch-linux-c-debug/lib/libpetsc.so:0000000001b44a2d T matrestorerow_ > petsc-3.22.4/arch-linux-c-debug/lib/libpetsc.so:0000000001b47367 T matrestorerow_ > balay at p1 /home/balay/tmp > > > I see the symbols in my builds of both the versions. Suggest rechecking build of 3.22.4 [or redo this build] to make sure its identical to the working 3.22.2 build > > Satish > > On Wed, 26 Mar 2025, Frank Bramkamp wrote: > >> Dear PETSc Team, >> >> >> a user of our computer center reported a PETSC issue. >> He uses petsc version 3.22.4 with gcc compiler >> >> The problem arises with MatGetRow and MatRestoreRow. >> PETSC compiled properly but his Fortran application then cannot find those >> routines. It seems to work in version 3.22.2. For me it also worked in 3.21.1 >> calling those routines from fortran. >> >> It looks that the fortran stubs might have been broken for those two routines in version 3.22.4 >> >> I will also test 3.22.4 now myself to confirm it or I can probably test a fixed version as well. >> >> >> Greetings, Frank Bramkamp >> >> >> >> >> > From ctchengben at mail.scut.edu.cn Thu Mar 27 02:56:48 2025 From: ctchengben at mail.scut.edu.cn (=?UTF-8?B?56iL5aWU?=) Date: Thu, 27 Mar 2025 15:56:48 +0800 (GMT+08:00) Subject: [petsc-users] If DMPLEX can be created from BDF mesh format? Message-ID: <2cc9518.23a48.195d69a8f29.Coremail.ctchengben@mail.scut.edu.cn> Hello, Recently I create mesh from MSC Patran and its mesh format is BDF file ( attached with a square structure with Unstructured tetrahedral mesh ). 
So i ask for the help that if DMPlexCreateFromFile can import the BDF file or I should explicitly change the BDF file to the MSH file format. Looking forward to your reply! sinserely, Cheng. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: square.bdf Type: application/octet-stream Size: 33517924 bytes Desc: not available URL: From 13971216897 at 163.com Fri Mar 28 02:03:48 2025 From: 13971216897 at 163.com (Zhao-Yi Yan) Date: Fri, 28 Mar 2025 15:03:48 +0800 (GMT+08:00) Subject: [petsc-users] Bug Report : Continue line fails when wrapped wtih PetscCall(A) Message-ID: <1353f6b8.16a98.195db906729.Coremail.13971216897@163.com> Dear Developers, I am a Fortran programmer, and find the continue line symbol "&" in fortran fails to work when I call the routine with PetscCallA, for example The code like this " PetscCallA DMDACreate3D(PETSC_COMM_WORLD,& DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,& DMDA_STENCIL_STAR,three,three,three,& PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,& one,& ! number of degrees of freedom per node one,& ! stencil width PETSC_NULL_INTEGER_ARRAY,PETSC_NULL_INTEGER_ARRAY,PETSC_NULL_INTEGER_ARRAY,& dm, & ! Output -- the resulting distributed array object ierr) " could NOT pass the compiler, which would report error that call DMDACreate3D(PETSC_COMM_WORLD, & DM_BOUNDARY_NONE,DM_BOUNDARY_NON 1 Error: Syntax error in argument list at (1) A simple work-around method is to replace PetscCallA with a regular call, which means maybe there are something wrong within PetscCallA. Related information: Petsc Release Version 3.22.4 Linux mgt1 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux Best regards, | | Zhao-Yi Yan | | 13971216897 at 163.com | -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay.anl at fastmail.org Fri Mar 28 09:55:26 2025 From: balay.anl at fastmail.org (Satish Balay) Date: Fri, 28 Mar 2025 09:55:26 -0500 (CDT) Subject: [petsc-users] Bug Report : Continue line fails when wrapped wtih PetscCall(A) In-Reply-To: <1353f6b8.16a98.195db906729.Coremail.13971216897@163.com> References: <1353f6b8.16a98.195db906729.Coremail.13971216897@163.com> Message-ID: <58100b61-9374-6812-fd09-fbb9f8914b88@fastmail.org> Yeah - likely the reason why we default to using 'gfortran -ffree-line-length-none -ffree-line-length-0' > include/petsc/finclude/petscsysbase.h:#define PetscCallA(func) call func; CHKERRA(ierr) so if you are using regular "call" with line continuation, use: call DMDACreate3D(PETSC_COMM_WORLD,& DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,& DMDA_STENCIL_STAR,three,three,three,& PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,& one,& ! number of degrees of freedom per node one,& ! stencil width PETSC_NULL_INTEGER_ARRAY,PETSC_NULL_INTEGER_ARRAY,PETSC_NULL_INTEGER_ARRAY,& dm, & ! Output -- the resulting distributed array object ierr) CHKERRA(ierr) Satish On Fri, 28 Mar 2025, Zhao-Yi Yan wrote: > Dear Developers, > > > I am a Fortran programmer, and find the continue line symbol "&" in fortran fails to work when I call the routine with PetscCallA, for example > > > The code like this > > > " PetscCallA DMDACreate3D(PETSC_COMM_WORLD,& > DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,& > DMDA_STENCIL_STAR,three,three,three,& > PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,& > one,& ! number of degrees of freedom per node > one,& ! stencil width > PETSC_NULL_INTEGER_ARRAY,PETSC_NULL_INTEGER_ARRAY,PETSC_NULL_INTEGER_ARRAY,& > dm, & ! 
Output -- the resulting distributed array object > ierr) > " > > > could NOT pass the compiler, which would report error that > > > call DMDACreate3D(PETSC_COMM_WORLD, & DM_BOUNDARY_NONE,DM_BOUNDARY_NON > 1 > Error: Syntax error in argument list at (1) > > > A simple work-around method is to replace PetscCallA with a regular call, which means maybe there are something wrong within PetscCallA. > > > > > > Related information: > Petsc Release Version 3.22.4 > Linux mgt1 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux > > > Best regards, > | | > Zhao-Yi Yan > | > | > 13971216897 at 163.com > | From mfadams at lbl.gov Fri Mar 28 17:45:44 2025 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 28 Mar 2025 18:45:44 -0400 Subject: [petsc-users] If DMPLEX can be created from BDF mesh format? In-Reply-To: <2cc9518.23a48.195d69a8f29.Coremail.ctchengben@mail.scut.edu.cn> References: <2cc9518.23a48.195d69a8f29.Coremail.ctchengben@mail.scut.edu.cn> Message-ID: Hi Cheng, Sorry for the late reply. I believe we only support reading .msh files in DMPlex. You can look at src/dm/impls/plex/tests/ex1.c as an example (testing input parameters are in a large comment at the bottom of the file). Mark On Thu, Mar 27, 2025 at 11:03?AM ?? wrote: > Hello, > Recently I create mesh from MSC Patran and its mesh format is BDF file ( > attached with a square structure with Unstructured tetrahedral mesh ). So > i ask for the help that if *DMPlexCreateFromFile > * > can import the BDF file or I should explicitly change the BDF file to the > MSH file format. > > > Looking forward to your reply! > > > sinserely, > Cheng. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Mar 29 09:09:45 2025 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 29 Mar 2025 10:09:45 -0400 Subject: [petsc-users] If DMPLEX can be created from BDF mesh format? 
In-Reply-To: References: <2cc9518.23a48.195d69a8f29.Coremail.ctchengben@mail.scut.edu.cn> Message-ID: On Fri, Mar 28, 2025 at 6:46?PM Mark Adams wrote: > Hi Cheng, > > Sorry for the late reply. I believe we only support reading .msh files in > DMPlex. > You can look at src/dm/impls/plex/tests/ex1.c as an example (testing input > parameters are in a large comment at the bottom of the file). > Mark is correct. We do not have support for BDF. It could be written (we recently added support for Fluent CASE files), but if you can make GMsh files, that is both simpler and more robust since it is a better format. Thanks, Matt > Mark > > On Thu, Mar 27, 2025 at 11:03?AM ?? wrote: > >> Hello, >> Recently I create mesh from MSC Patran and its mesh format is BDF file ( >> attached with a square structure with Unstructured tetrahedral mesh ). >> So i ask for the help that if *DMPlexCreateFromFile >> * >> can import the BDF file or I should explicitly change the BDF file to >> the MSH file format. >> >> >> Looking forward to your reply! >> >> >> sinserely, >> Cheng. >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cW2a80LlgAoyBTqDEWPooxsQ8VQsSeBM1xq-9RIiDjbpmYWFn5SrCZnJd-apEfIX7GijJFL89R7yKmUVcHGX$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Mon Mar 31 03:40:57 2025 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Mon, 31 Mar 2025 08:40:57 +0000 Subject: [petsc-users] FAQ about fast interconnect Message-ID: The FAQ about the kind of parallel computers or clusters needed to use PETSc states: "any ethernet (even 10 GigE) simply cannot provide the needed performance." Does this statement still hold now that 100 GigE is common? 
The broader question is that we are buying a new cluster to run our in-house CFD solver ReFRESCO. Typical production runs involve meshes up-to a few hundred million cells with half a dozen to a dozen equations. Most of the time is spend in KSPSolve. What kind of interconnect should we consider? Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development T +31 317 49 33 44 | C.Klaij at marin.nl | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!dt6x5cNbKBO3l1-VxdzJE9uFUPtHLCnCR8P_q3-23XmiXKMfKyoNWokTX4wRbgFq3HSDmSVi6AIdO8MFA71dUnQ$ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image916957.png Type: image/png Size: 5004 bytes Desc: image916957.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image775640.png Type: image/png Size: 487 bytes Desc: image775640.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image297572.png Type: image/png Size: 504 bytes Desc: image297572.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image268268.png Type: image/png Size: 482 bytes Desc: image268268.png URL: From jroman at dsic.upv.es Mon Mar 31 04:04:20 2025 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 31 Mar 2025 09:04:20 +0000 Subject: [petsc-users] FAQ about fast interconnect In-Reply-To: References: Message-ID: <3E01E44A-0A37-4159-AC8F-B28B1D5621AD@dsic.upv.es> If you look at the top500 list, 50% of the machines have Infiniband and 37% Gigabit Ethernet. InfiniBand has a low latency of 3?5 microseconds, which is much lower than Ethernet's 20?80 microseconds. But Ethernet can be a cost-effective solution with reasonably good performance, provided that you configure the machine appropriately to minimize the software latency - by using a RoCE driver. 
Jose

> El 31 mar 2025, a las 10:40, Klaij, Christiaan via petsc-users escribió:
>
> The FAQ about the kind of parallel computers or clusters needed
> to use PETSc states:
>
> "any ethernet (even 10 GigE) simply cannot provide the needed performance."
>
> Does this statement still hold now that 100 GigE is common?
>
> The broader question is that we are buying a new cluster to run
> our in-house CFD solver ReFRESCO. Typical production runs involve
> meshes of up to a few hundred million cells with half a dozen to a
> dozen equations. Most of the time is spent in KSPSolve. What kind
> of interconnect should we consider?
>
> Chris
> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
> T +31 317 49 33 44 | C.Klaij at marin.nl | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!aONPw6PPgHjgXIh5u-tf1Buvha38iSHeSim-KXX_1xXxgi0QiRpkdyrmx7jTObbjT2wgz4dkcLzVv2W9IOitYnmE$

From 13971216897 at 163.com Mon Mar 31 06:45:33 2025
From: 13971216897 at 163.com (Zhao-Yi Yan)
Date: Mon, 31 Mar 2025 19:45:33 +0800 (GMT+08:00)
Subject: [petsc-users] DMDAVecGetArrayF90
Message-ID: <5ddb4fc4.13595.195ec056df7.Coremail.13971216897@163.com>

Hi,

It should be noted that DMDAVecGetArrayF90 should be used in Fortran
instead of DMDAVecGetArray. The example in ex13f90.F90 has been adapted,
but the HTML documentation has not:

DMDAVecGetArray - PETSc 3.23.0 documentation
https://urldefense.us/v3/__https://petsc.org/release/manualpages/DMDA/DMDAVecGetArray/*dmdavecgetarray__;Iw!!G_uCfscf7eWS!atC_NTcSJ25s2w01o6XAq3w6JxLFsOSL5XBNJfpElQLPCKmpcbLmk_wv67xp2XSt4g_WfU9ApYG85ODUl69qoLoLboI$

Zhao-Yi Yan
13971216897 at 163.com
From pierre at joliv.et Mon Mar 31 06:52:30 2025
From: pierre at joliv.et (Pierre Jolivet)
Date: Mon, 31 Mar 2025 13:52:30 +0200
Subject: [petsc-users] DMDAVecGetArrayF90
In-Reply-To: <5ddb4fc4.13595.195ec056df7.Coremail.13971216897@163.com>
References: <5ddb4fc4.13595.195ec056df7.Coremail.13971216897@163.com>
Message-ID:

> On 31 Mar 2025, at 1:45 PM, Zhao-Yi Yan <13971216897 at 163.com> wrote:
>
> Hi,
>
> It should be noted that DMDAVecGetArrayF90 should be used in Fortran instead of DMDAVecGetArray.

Not anymore, see the bottom of https://urldefense.us/v3/__https://petsc.org/release/changes/323/__;!!G_uCfscf7eWS!YvQlSEiYwYCdAtR57Fek4ifjt--n7njeUKpTMWljcwUFbPXzI56VX5mlqNL-zU-jln00hp8c271Mv1AuzvMZmQ$

Thanks,
Pierre

> The example in ex13f90.F90 has been adapted, but the HTML documentation has not.
>
> DMDAVecGetArray - PETSc 3.23.0 documentation
>
> https://urldefense.us/v3/__https://petsc.org/release/manualpages/DMDA/DMDAVecGetArray/*dmdavecgetarray__;Iw!!G_uCfscf7eWS!YvQlSEiYwYCdAtR57Fek4ifjt--n7njeUKpTMWljcwUFbPXzI56VX5mlqNL-zU-jln00hp8c271Mv1AOM6RNBA$
>
> Zhao-Yi Yan
> 13971216897 at 163.com

From 13971216897 at 163.com Mon Mar 31 14:12:25 2025
From: 13971216897 at 163.com (Zhao-Yi Yan)
Date: Tue, 1 Apr 2025 03:12:25 +0800 (GMT+08:00)
Subject: [petsc-users] DMDAVecRestoreArrayF90
Message-ID: <1fedfb57.1f9ca.195ed9e8bd7.Coremail.13971216897@163.com>

Hi, dear developers,

I am using 3.22.4 and find that DMDAVecGetArrayF90 should be used in a
pair with DMDAVecRestoreArrayF90, and should NOT be paired with
DMDAVecRestoreArray. Otherwise, strange memory leakage can be caused.

I know the F90-style functions have been removed in the 3.23 version, so
this is simply a mail for the record; however, any comment or feedback is
appreciated.

Best regards,

Zhao-Yi Yan
From bsmith at petsc.dev Mon Mar 31 14:46:18 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Mon, 31 Mar 2025 15:46:18 -0400
Subject: [petsc-users] DMDAVecRestoreArrayF90
In-Reply-To: <1fedfb57.1f9ca.195ed9e8bd7.Coremail.13971216897@163.com>
References: <1fedfb57.1f9ca.195ed9e8bd7.Coremail.13971216897@163.com>
Message-ID:

PETSc 3.23 introduced some incompatibilities in the Fortran bindings with
previous versions of PETSc. As part of upgrading Fortran code for use with
PETSc 3.23, the suffix F90 should be removed from all PETSc Fortran calls;
for routines such as DMDAVecGetArrayF90() no other changes are needed to
the function call. The array arguments remain Fortran pointers to arrays.

Barry

> On Mar 31, 2025, at 3:12 PM, Zhao-Yi Yan <13971216897 at 163.com> wrote:
>
> Hi, dear developers,
>
> I am using 3.22.4 and find that
>
> DMDAVecGetArrayF90
>
> should be used in a pair with
>
> DMDAVecRestoreArrayF90
>
> and should NOT be paired with
>
> DMDAVecRestoreArray
>
> Otherwise, strange memory leakage can be caused.
>
> I know the F90-style functions have been removed in the 3.23 version, so this is simply a mail for the record; however, any comment or feedback is appreciated.
>
> Best regards,
>
> Zhao-Yi Yan
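Concretely, the upgrade described above is just a rename, with the Get and Restore calls kept as a matched pair. A minimal sketch (the DMDA `da`, vector `v`, and pointer rank are illustrative; the pointer rank depends on the DMDA dimension and number of degrees of freedom):

```fortran
! With PETSc 3.22.x and earlier: F90 suffix, matched Get/Restore pair
PetscScalar, pointer :: x(:,:)
PetscCall(DMDAVecGetArrayF90(da, v, x, ierr))
! ... work with x(i,j) ...
PetscCall(DMDAVecRestoreArrayF90(da, v, x, ierr))

! From PETSc 3.23 on: drop the F90 suffix; the argument stays a Fortran pointer
PetscCall(DMDAVecGetArray(da, v, x, ierr))
! ... work with x(i,j) ...
PetscCall(DMDAVecRestoreArray(da, v, x, ierr))
```

As the thread notes, mixing the two styles in one Get/Restore pair is what leads to the memory leaks reported above, so the rename should be applied to both calls at once.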