From mwhitte6 at jhu.edu Fri Jan 2 13:42:41 2026 From: mwhitte6 at jhu.edu (Michael Whitten) Date: Fri, 2 Jan 2026 19:42:41 +0000 Subject: [petsc-users] Having trouble with basic Fortran-PETSc interoperability Message-ID: Hi PETSc mailing list users, I have managed to install PETSc and run some PETSc examples and little test codes of my own in Fortran. I am now trying to make PETSc work with my existing Fortran code. I have tried to build little test examples of the functionality that I can then incorporate into my larger code base. However, I am having trouble just passing vectors back and forth between PETSc and Fortran. I have attached a minimum semi-working example that can be compiled with the standard 'Makefile.user'. It throws an error when I try to copy the PETSc vector back to a Fortran vector using VecGetValues(). I get that it can only access values of the array on the local process but how do I fix this? Is this even the right approach? In the final implementation I want to be able to assemble my matrix and vector, convert them to PETSc data structures, use PETSc to solve, and then convert the solution vector back to Fortran and return. I want to be able to do this with both the linear and nonlinear solvers. It seems like this is what PETSc is, in part, built to do. Is this a reasonable expectation to achieve? Is this a reasonable use case for PETSc? Thanks in advance for any help you can offer. best, Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test.F90 Type: text/x-fortran Size: 1075 bytes Desc: test.F90 URL: From rlmackie862 at gmail.com Fri Jan 2 15:04:37 2026 From: rlmackie862 at gmail.com (Randall Mackie) Date: Fri, 2 Jan 2026 13:04:37 -0800 Subject: [petsc-users] Having trouble with basic Fortran-PETSc interoperability In-Reply-To: References: Message-ID: <0F4CC079-FF0B-4F62-BF1C-1DF8958C6AC3@gmail.com> Hi Michael, If you are trying to get all the values on the 0th processor, you can find that here: https://urldefense.us/v3/__https://petsc.org/release/faq/*how-do-i-collect-to-a-single-processor-all-the-values-from-a-parallel-petsc-vec__;Iw!!G_uCfscf7eWS!YE1AKcuJN7bNqZse7mgrlZ2TiBVr-GkWaxPCs0X2sP7m8St0Fv3UcHfECEU62p_dEn1yhnnYvSA_xCw9OUgbw4iNOw$ Randy > On Jan 2, 2026, at 11:42?AM, Michael Whitten via petsc-users wrote: > > Hi PETSc mailing list users, > > I have managed to install PETSc and run some PETSc examples and little test codes of my own in Fortran. I am now trying to make PETSc work with my existing Fortran code. I have tried to build little test examples of the functionality that I can then incorporate into my larger code base. However, I am having trouble just passing vectors back and forth between PETSc and Fortran. > > I have attached a minimum semi-working example that can be compiled with the standard 'Makefile.user'. It throws an error when I try to copy the PETSc vector back to a Fortran vector using VecGetValues(). I get that it can only access values of the array on the local process but how do I fix this? Is this even the right approach? > > In the final implementation I want to be able to assemble my matrix and vector, convert them to PETSc data structures, use PETSc to solve, and then convert the solution vector back to Fortran and return. I want to be able to do this with both the linear and nonlinear solvers. It seems like this is what PETSc is, in part, built to do. Is this a reasonable expectation to achieve? 
Is this a reasonable use case for PETSc? > > Thanks in advance for any help you can offer. > > best, > Michael > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Jan 2 15:33:27 2026 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 2 Jan 2026 16:33:27 -0500 Subject: [petsc-users] Having trouble with basic Fortran-PETSc interoperability In-Reply-To: References: Message-ID: <3B11AFD2-F767-4D6A-9CB6-D8D153BEA27E@petsc.dev> VecGetValues() uses 0 based indexing in both Fortran and C. You don't want to use VecGetValues() and VecSetValues() usually since they result in two copies of the arrays and copying entire arrays back and forth. You can avoid copying between PETSc vectors and your arrays by using VecGetArray(), VecGetArrayWrite(), and VecGetArrayRead(). You can also use VecCreateMPIWithArray() to create a PETSc vector using your array; for example for input to the right hand side of a KSP. These arrays start their indexing with the Fortran default of 1. Barry > On Jan 2, 2026, at 2:42?PM, Michael Whitten via petsc-users wrote: > > Hi PETSc mailing list users, > > I have managed to install PETSc and run some PETSc examples and little test codes of my own in Fortran. I am now trying to make PETSc work with my existing Fortran code. I have tried to build little test examples of the functionality that I can then incorporate into my larger code base. However, I am having trouble just passing vectors back and forth between PETSc and Fortran. > > I have attached a minimum semi-working example that can be compiled with the standard 'Makefile.user'. It throws an error when I try to copy the PETSc vector back to a Fortran vector using VecGetValues(). I get that it can only access values of the array on the local process but how do I fix this? Is this even the right approach? > > In the final implementation I want to be able to assemble my matrix and vector, convert them to PETSc data structures, use PETSc to solve, and then convert the solution vector back to Fortran and return. I want to be able to do this with both the linear and nonlinear solvers. It seems like this is what PETSc is, in part, built to do. Is this a reasonable expectation to achieve? Is this a reasonable use case for PETSc? > > Thanks in advance for any help you can offer. > > best, > Michael > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jan 2 16:49:30 2026 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 2 Jan 2026 17:49:30 -0500 Subject: [petsc-users] Having trouble with basic Fortran-PETSc interoperability In-Reply-To: References: Message-ID: On Fri, Jan 2, 2026 at 2:48?PM Michael Whitten via petsc-users < petsc-users at mcs.anl.gov> wrote: > Hi PETSc mailing list users, > > I have managed to install PETSc and run some PETSc examples and little > test codes of my own in Fortran. I am now trying to make PETSc work with my > existing Fortran code. I have tried to build little test examples of the > functionality that I can then incorporate into my larger code base. > However, I am having trouble just passing vectors back and forth between > PETSc and Fortran. > > I have attached a minimum semi-working example that can be compiled with > the standard 'Makefile.user'. It throws an error when I try to copy the > PETSc vector back to a Fortran vector using VecGetValues(). I get that it > can only access values of the array on the local process but how do I fix > this? Is this even the right approach? 
> No. You should just call VecGetArray(), to get back an F90 pointer to the values. This is much more convenient. > In the final implementation I want to be able to assemble my matrix and > vector, convert them to PETSc data structures, use PETSc to solve, and then > convert the solution vector back to Fortran and return. I want to be able > to do this with both the linear and nonlinear solvers. It seems like this > is what PETSc is, in part, built to do. Is this a reasonable expectation to > achieve? Is this a reasonable use case for PETSc? > Yes, that should work getting the array directly. Thanks, Matt > Thanks in advance for any help you can offer. > > best, > Michael > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dmDhrQx1yGPd8y7YN9DUoj7jpohDJracq1zV5hiJ4GBq5ELNqsZZY7ymYloqOdThUhLu3seGNM2xrh36ooFV$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.wick.1980 at gmail.com Tue Jan 6 04:12:26 2026 From: michael.wick.1980 at gmail.com (Michael Wick) Date: Tue, 6 Jan 2026 18:12:26 +0800 Subject: [petsc-users] VecNorm() of VECNEST returns 0 while sub-vectors have nonzero norms Message-ID: Hello PETSc users, I have a VECNEST norm issue and would appreciate help. I create a nest vector: VecCreateNest(PETSC_COMM_WORLD, 2, NULL, subG, &G); But: VecNorm(G, NORM_2, &nG) returns 0 While the sub-vectors are clearly nonzero: VecNestGetSubVec(G,0,&v0); VecNorm(v0,NORM_2,&n0); // n0 > 0 VecNestGetSubVec(G,1,&v1); VecNorm(v1,NORM_2,&n1); // n1 > 0 Can someone give a hint on what might go wrong? Thanks, M -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at joliv.et Tue Jan 6 04:39:19 2026 From: pierre at joliv.et (Pierre Jolivet) Date: Tue, 6 Jan 2026 11:39:19 +0100 Subject: [petsc-users] VecNorm() of VECNEST returns 0 while sub-vectors have nonzero norms In-Reply-To: References: Message-ID: <81323D69-376C-4A50-811B-E6CC17A4C0EB@joliv.et> Could you please share a runnable piece of code? I?ve tried to generate an example myself, and I don?t get 0 for the norm of G. Thanks, Pierre > On 6 Jan 2026, at 11:12?AM, Michael Wick wrote: > > Hello PETSc users, > > I have a VECNEST norm issue and would appreciate help. > > I create a nest vector: > VecCreateNest(PETSC_COMM_WORLD, 2, NULL, subG, &G); > > But: > VecNorm(G, NORM_2, &nG) returns 0 > > While the sub-vectors are clearly nonzero: > VecNestGetSubVec(G,0,&v0); VecNorm(v0,NORM_2,&n0); // n0 > 0 > VecNestGetSubVec(G,1,&v1); VecNorm(v1,NORM_2,&n1); // n1 > 0 > > Can someone give a hint on what might go wrong? > > Thanks, > > M > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.knezevic at akselos.com Tue Jan 6 10:39:17 2026 From: david.knezevic at akselos.com (David Knezevic) Date: Tue, 6 Jan 2026 11:39:17 -0500 Subject: [petsc-users] Question regarding SNES error about locked vectors In-Reply-To: References: <855F3D06-08B9-4CD1-ABE8-3E55D4DD802E@petsc.dev> Message-ID: Hi Barry, We've tested with your branch, and I confirm that it resolves the issue. We now get DIVERGED_FUNCTION_DOMAIN as the converged reason (instead of DIVERGED_LINE_SEARCH). Thanks! 
David On Wed, Dec 24, 2025 at 11:02?PM Barry Smith wrote: > I have started a merge request to properly propagate failure reasons up > from the line search to the SNESSolve in > https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/8914__;!!G_uCfscf7eWS!cnkIkXaLhhZhd3a1A5CqB97mUISEhvouSWAqfaXtXHskJ0R8OUmhQIdglu1aMDAz4BoLqGeAVZqZZ4byEj9ft7Lr6sFK0R4$ Could you give it a > try when you get the chance? > > > On Dec 22, 2025, at 3:03?PM, David Knezevic > wrote: > > P.S. As a test I removed the "postcheck" callback, and I still get > the same behavior with the DIVERGED_LINE_SEARCH converged reason, so I > guess the "postcheck" is not related. > > > On Mon, Dec 22, 2025 at 1:58?PM David Knezevic > wrote: > >> The print out I get from -snes_view is shown below. I wonder if the issue >> is related to "using user-defined postcheck step"? >> >> >> SNES Object: 1 MPI process >> type: newtonls >> maximum iterations=5, maximum function evaluations=10000 >> tolerances: relative=0., absolute=0., solution=0. >> total number of linear solver iterations=3 >> total number of function evaluations=4 >> norm schedule ALWAYS >> SNESLineSearch Object: 1 MPI process >> type: basic >> maxstep=1.000000e+08, minlambda=1.000000e-12 >> tolerances: relative=1.000000e-08, absolute=1.000000e-15, >> lambda=1.000000e-08 >> maximum iterations=40 >> using user-defined postcheck step >> KSP Object: 1 MPI process >> type: preonly >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >> left preconditioning >> using NONE norm type for convergence test >> PC Object: 1 MPI process >> type: cholesky >> out-of-place factorization >> tolerance for zero pivot 2.22045e-14 >> matrix ordering: external >> factor fill ratio given 0., needed 0. >> Factored matrix follows: >> Mat Object: 1 MPI process >> type: mumps >> rows=1152, cols=1152 >> package used to perform factorization: mumps >> total: nonzeros=126936, allocated nonzeros=126936 >> MUMPS run parameters: >> Use -ksp_view ::ascii_info_detail to display information >> for all processes >> RINFOG(1) (global estimated flops for the elimination >> after analysis): 1.63461e+07 >> RINFOG(2) (global estimated flops for the assembly after >> factorization): 74826. 
>> RINFOG(3) (global estimated flops for the elimination >> after factorization): 1.63461e+07 >> (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): >> (0.,0.)*(2^0) >> INFOG(3) (estimated real workspace for factors on all >> processors after analysis): 150505 >> INFOG(4) (estimated integer workspace for factors on all >> processors after analysis): 6276 >> INFOG(5) (estimated maximum front size in the complete >> tree): 216 >> INFOG(6) (number of nodes in the complete tree): 24 >> INFOG(7) (ordering option effectively used after >> analysis): 2 >> INFOG(8) (structural symmetry in percent of the permuted >> matrix after analysis): 100 >> INFOG(9) (total real/complex workspace to store the >> matrix factors after factorization): 150505 >> INFOG(10) (total integer space store the matrix factors >> after factorization): 6276 >> INFOG(11) (order of largest frontal matrix after >> factorization): 216 >> INFOG(12) (number of off-diagonal pivots): 1044 >> INFOG(13) (number of delayed pivots after factorization): >> 0 >> INFOG(14) (number of memory compress after >> factorization): 0 >> INFOG(15) (number of steps of iterative refinement after >> solution): 0 >> INFOG(16) (estimated size (in MB) of all MUMPS internal >> data for factorization after analysis: value on the most memory consuming >> processor): 2 >> INFOG(17) (estimated size of all MUMPS internal data for >> factorization after analysis: sum over all processors): 2 >> INFOG(18) (size of all MUMPS internal data allocated >> during factorization: value on the most memory consuming processor): 2 >> INFOG(19) (size of all MUMPS internal data allocated >> during factorization: sum over all processors): 2 >> INFOG(20) (estimated number of entries in the factors): >> 126936 >> INFOG(21) (size in MB of memory effectively used during >> factorization - value on the most memory consuming processor): 2 >> INFOG(22) (size in MB of memory effectively used during >> factorization - sum over all processors): 2 >> INFOG(23) (after analysis: value of ICNTL(6) effectively >> used): 0 >> INFOG(24) (after analysis: value of ICNTL(12) effectively >> used): 1 >> INFOG(25) (after factorization: number of pivots modified >> by static pivoting): 0 >> INFOG(28) (after factorization: number of null pivots >> encountered): 0 >> INFOG(29) (after factorization: effective number of >> entries in the factors (sum over all processors)): 126936 >> INFOG(30, 31) (after solution: size in Mbytes of memory >> used during solution phase): 2, 2 >> INFOG(32) (after analysis: type of analysis done): 1 >> INFOG(33) (value used for ICNTL(8)): 7 >> INFOG(34) (exponent of the determinant if determinant is >> requested): 0 >> INFOG(35) (after factorization: number of entries taking >> into account BLR factor compression - sum over all processors): 126936 >> INFOG(36) (after analysis: estimated size of all MUMPS >> internal data for running BLR in-core - value on the most memory consuming >> processor): 0 >> INFOG(37) (after analysis: estimated size of all MUMPS >> internal data for running BLR in-core - sum over all processors): 0 >> INFOG(38) (after analysis: estimated size of all MUMPS >> internal data for running BLR out-of-core - value on the most memory >> consuming processor): 0 >> INFOG(39) (after analysis: estimated size of all MUMPS >> internal data for running BLR out-of-core - sum over all processors): 0 >> linear system matrix = precond matrix: >> Mat Object: 1 MPI process >> type: seqaij >> rows=1152, cols=1152 >> total: nonzeros=60480, allocated nonzeros=60480 >> total 
number of mallocs used during MatSetValues calls=0 >> using I-node routines: found 384 nodes, limit used is 5 >> >> >> >> On Mon, Dec 22, 2025 at 9:25?AM Barry Smith wrote: >> >>> David, >>> >>> This is due to a software glitch. SNES_DIVERGED_FUNCTION_DOMAIN was >>> added long after the origins of SNES and, in places, the code was never >>> fully updated to handle function domain problems. In particular, parts of >>> the line search don't handle it correctly. Can you run with -snes_view and >>> that will help us find the spot that needs to be updated. >>> >>> Barry >>> >>> >>> On Dec 21, 2025, at 5:53?PM, David Knezevic >>> wrote: >>> >>> Hi, actually, I have a follow up on this topic. >>> >>> I noticed that when I call SNESSetFunctionDomainError(), it exits the >>> solve as expected, but it leads to a converged reason >>> "DIVERGED_LINE_SEARCH" instead of "DIVERGED_FUNCTION_DOMAIN". If I also >>> set SNESSetConvergedReason(snes, SNES_DIVERGED_FUNCTION_DOMAIN) in the >>> callback, then I get the expected SNES_DIVERGED_FUNCTION_DOMAIN converged >>> reason, so that's what I'm doing now. I was surprised by this behavior, >>> though, since I expected that calling SNESSetFunctionDomainError woudld >>> lead to the DIVERGED_FUNCTION_DOMAIN converged reason, so I just wanted to >>> check on what could be causing this. >>> >>> FYI, I'm using PETSc 3.23.4 >>> >>> Thanks, >>> David >>> >>> >>> On Thu, Dec 18, 2025 at 8:10?AM David Knezevic < >>> david.knezevic at akselos.com> wrote: >>> >>>> Thank you very much for this guidance. I switched to use >>>> SNES_DIVERGED_FUNCTION_DOMAIN, and I don't get any errors now. >>>> >>>> Thanks! >>>> David >>>> >>>> >>>> On Wed, Dec 17, 2025 at 3:43?PM Barry Smith wrote: >>>> >>>>> >>>>> >>>>> On Dec 17, 2025, at 2:47?PM, David Knezevic < >>>>> david.knezevic at akselos.com> wrote: >>>>> >>>>> Stefano and Barry: Thank you, this is very helpful. >>>>> >>>>> I'll give some more info here which may help to clarify further. >>>>> Normally we do just get a negative "converged reason", as you described. >>>>> But in this specific case where I'm having issues the solve is a >>>>> numerically sensitive creep solve, which has exponential terms in the >>>>> residual and jacobian callback that can "blow up" and give NaN values. In >>>>> this case, the root cause is that we hit a NaN value during a callback, and >>>>> then we throw an exception (in libMesh C++ code) which I gather leads to >>>>> the SNES solve exiting with this error code. >>>>> >>>>> Is there a way to tell the SNES to terminate with a negative >>>>> "converged reason" because we've encountered some issue during the callback? >>>>> >>>>> >>>>> In your callback you should call SNESSetFunctionDomainError() and >>>>> make sure the function value has an infinity or NaN in it (you can call >>>>> VecFlag() for this purpose)). 
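A minimal sketch of such a residual callback (the exponential residual below is only a stand-in for the application's own assembly; the NaN/Inf check uses PetscIsInfOrNanReal() on the residual norm):

#include <petscsnes.h>

/* Residual callback for a stiff problem whose exponential terms can overflow.
   Instead of throwing an exception when a NaN/Inf appears, flag a domain error
   so SNESSolve() returns cleanly with SNES_DIVERGED_FUNCTION_DOMAIN. */
static PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
{
  const PetscScalar *xx;
  PetscScalar       *ff;
  PetscInt           i, n;
  PetscReal          fnorm;

  PetscFunctionBeginUser;
  PetscCall(VecGetLocalSize(x, &n));
  PetscCall(VecGetArrayRead(x, &xx));
  PetscCall(VecGetArray(f, &ff));
  for (i = 0; i < n; i++) ff[i] = PetscExpScalar(xx[i]) - 1.0; /* may blow up to Inf/NaN */
  PetscCall(VecRestoreArrayRead(x, &xx));
  PetscCall(VecRestoreArray(f, &ff));

  PetscCall(VecNorm(f, NORM_2, &fnorm));
  if (PetscIsInfOrNanReal(fnorm)) {
    /* Leave the NaN/Inf in f and tell SNES the point is outside the domain. */
    PetscCall(SNESSetFunctionDomainError(snes));
  }
  PetscFunctionReturn(PETSC_SUCCESS);
}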
>>>>> >>>>> Now SNESConvergedReason will be a completely >>>>> reasonable SNES_DIVERGED_FUNCTION_DOMAIN >>>>> >>>>> Barry >>>>> >>>>> If you are using an ancient version of PETSc (I hope you are using the >>>>> latest since that always has more bug fixes and features) that does not >>>>> have SNESSetFunctionDomainError then just make sure the function vector >>>>> result has an infinity or NaN in it and then SNESConvergedReason will be >>>>> SNES_DIVERGED_FNORM_NAN >>>>> >>>>> >>>>> >>>>> >>>>> Thanks, >>>>> David >>>>> >>>>> >>>>> On Wed, Dec 17, 2025 at 2:25?PM Barry Smith wrote: >>>>> >>>>>> >>>>>> >>>>>> On Dec 17, 2025, at 2:08?PM, David Knezevic via petsc-users < >>>>>> petsc-users at mcs.anl.gov> wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> I'm using PETSc via the libMesh framework, so creating a MWE is >>>>>> complicated by that, unfortunately. >>>>>> >>>>>> The situation is that I am not modifying the solution vector in a >>>>>> callback. The SNES solve has terminated, with PetscErrorCode 82, and I then >>>>>> want to update the solution vector (reset it to the "previously converged >>>>>> value") and then try to solve again with a smaller load increment. This is >>>>>> a typical "auto load stepping" strategy in FE. >>>>>> >>>>>> >>>>>> Once a PetscError is generated you CANNOT continue the PETSc >>>>>> program, it is not designed to allow this and trying to continue will lead >>>>>> to further problems. >>>>>> >>>>>> So what you need to do is prevent PETSc from getting to the point >>>>>> where an actual PetscErrorCode of 82 is generated. Normally SNESSolve() >>>>>> returns without generating an error even if the nonlinear solver failed >>>>>> (for example did not converge). One then uses SNESGetConvergedReason to >>>>>> check if it converged or not. Normally when SNESSolve() returns, regardless >>>>>> of whether the converged reason is negative or positive, there will be no >>>>>> locked vectors and one can modify the SNES object and call SNESSolve again. >>>>>> >>>>>> So my guess is that an actual PETSc error is being generated >>>>>> because SNESSetErrorIfNotConverged(snes,PETSC_TRUE) is being called by >>>>>> either your code or libMesh or the option -snes_error_if_not_converged is >>>>>> being used. In your case when you wish the code to work after a >>>>>> non-converged SNESSolve() these options should never be set instead you >>>>>> should check the result of SNESGetConvergedReason() to check if SNESSolve >>>>>> has failed. If SNESSetErrorIfNotConverged() is never being set that may >>>>>> indicate you are using an old version of PETSc or have it a bug inside >>>>>> PETSc's SNES that does not handle errors correctly and we can help fix the >>>>>> problem if you can provide a full debug output version of when the error >>>>>> occurs. >>>>>> >>>>>> Barry >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> I think the key piece of info I'd like to know is, at what point is >>>>>> the solution vector "unlocked" by the SNES object? Should it be unlocked as >>>>>> soon as the SNES solve has terminated with PetscErrorCode 82? Since it >>>>>> seems to me that it hasn't been unlocked yet (maybe just on a subset of the >>>>>> processes). Should I manually "unlock" the solution vector by >>>>>> calling VecLockWriteSet? 
>>>>>> >>>>>> Thanks, >>>>>> David >>>>>> >>>>>> >>>>>> >>>>>> On Wed, Dec 17, 2025 at 2:02?PM Stefano Zampini < >>>>>> stefano.zampini at gmail.com> wrote: >>>>>> >>>>>>> You are not allowed to call VecGetArray on the solution vector of an >>>>>>> SNES object within a user callback, nor to modify its values in any other >>>>>>> way. >>>>>>> Put in C++ lingo, the solution vector is a "const" argument >>>>>>> It would be great if you could provide an MWE to help us understand >>>>>>> your problem >>>>>>> >>>>>>> >>>>>>> Il giorno mer 17 dic 2025 alle ore 20:51 David Knezevic via >>>>>>> petsc-users ha scritto: >>>>>>> >>>>>>>> Hi all, >>>>>>>> >>>>>>>> I have a question about this error: >>>>>>>> >>>>>>>>> Vector 'Vec_0x84000005_0' (argument #2) was locked for read-only >>>>>>>>> access in unknown_function() at unknown file:0 (line numbers only accurate >>>>>>>>> to function begin) >>>>>>>> >>>>>>>> >>>>>>>> I'm encountering this error in an FE solve where there is an error >>>>>>>> encountered during the residual/jacobian assembly, and what we normally do >>>>>>>> in that situation is shrink the load step and continue, starting from the >>>>>>>> "last converged solution". However, in this case I'm running on 32 >>>>>>>> processes, and 5 of the processes report the error above about a "locked >>>>>>>> vector". >>>>>>>> >>>>>>>> We clear the SNES object (via SNESDestroy) before we reset the >>>>>>>> solution to the "last converged solution", and then we make a new SNES >>>>>>>> object subsequently. But it seems to me that somehow the solution vector is >>>>>>>> still marked as "locked" on 5 of the processes when we modify the solution >>>>>>>> vector, which leads to the error above. >>>>>>>> >>>>>>>> I was wondering if someone could advise on what the best way to >>>>>>>> handle this would be? I thought one option could be to add an MPI barrier >>>>>>>> call prior to updating the solution vector to "last converged solution", to >>>>>>>> make sure that the SNES object is destroyed on all procs (and hence the >>>>>>>> locks cleared) before editing the solution vector, but I'm unsure if that >>>>>>>> would make a difference. Any help would be most appreciated! >>>>>>>> >>>>>>>> Thanks, >>>>>>>> David >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Stefano >>>>>>> >>>>>> >>>>>> >>>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Jan 6 13:04:08 2026 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 6 Jan 2026 14:04:08 -0500 Subject: [petsc-users] Question regarding SNES error about locked vectors In-Reply-To: References: <855F3D06-08B9-4CD1-ABE8-3E55D4DD802E@petsc.dev> Message-ID: Great, thanks > On Jan 6, 2026, at 11:39?AM, David Knezevic wrote: > > Hi Barry, > > We've tested with your branch, and I confirm that it resolves the issue. We now get DIVERGED_FUNCTION_DOMAIN as the converged reason (instead of DIVERGED_LINE_SEARCH). > > Thanks! > David > > > On Wed, Dec 24, 2025 at 11:02?PM Barry Smith > wrote: >> I have started a merge request to properly propagate failure reasons up from the line search to the SNESSolve in https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/8914__;!!G_uCfscf7eWS!b6nENAaHnj7yQFBKwtNflbY6D3iEE9H3j8aRJNBjSO_JPjS43jCRoSK7y2hBPj5UsoRATQUQwyaqoEiqp_Z5h-U$ Could you give it a try when you get the chance? >> >> >>> On Dec 22, 2025, at 3:03?PM, David Knezevic > wrote: >>> >>> P.S. 
As a test I removed the "postcheck" callback, and I still get the same behavior with the DIVERGED_LINE_SEARCH converged reason, so I guess the "postcheck" is not related. >>> >>> >>> On Mon, Dec 22, 2025 at 1:58?PM David Knezevic > wrote: >>>> The print out I get from -snes_view is shown below. I wonder if the issue is related to "using user-defined postcheck step"? >>>> >>>> >>>> SNES Object: 1 MPI process >>>> type: newtonls >>>> maximum iterations=5, maximum function evaluations=10000 >>>> tolerances: relative=0., absolute=0., solution=0. >>>> total number of linear solver iterations=3 >>>> total number of function evaluations=4 >>>> norm schedule ALWAYS >>>> SNESLineSearch Object: 1 MPI process >>>> type: basic >>>> maxstep=1.000000e+08, minlambda=1.000000e-12 >>>> tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 >>>> maximum iterations=40 >>>> using user-defined postcheck step >>>> KSP Object: 1 MPI process >>>> type: preonly >>>> maximum iterations=10000, initial guess is zero >>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>>> left preconditioning >>>> using NONE norm type for convergence test >>>> PC Object: 1 MPI process >>>> type: cholesky >>>> out-of-place factorization >>>> tolerance for zero pivot 2.22045e-14 >>>> matrix ordering: external >>>> factor fill ratio given 0., needed 0. >>>> Factored matrix follows: >>>> Mat Object: 1 MPI process >>>> type: mumps >>>> rows=1152, cols=1152 >>>> package used to perform factorization: mumps >>>> total: nonzeros=126936, allocated nonzeros=126936 >>>> MUMPS run parameters: >>>> Use -ksp_view ::ascii_info_detail to display information for all processes >>>> RINFOG(1) (global estimated flops for the elimination after analysis): 1.63461e+07 >>>> RINFOG(2) (global estimated flops for the assembly after factorization): 74826. 
>>>> RINFOG(3) (global estimated flops for the elimination after factorization): 1.63461e+07 >>>> (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): (0.,0.)*(2^0) >>>> INFOG(3) (estimated real workspace for factors on all processors after analysis): 150505 >>>> INFOG(4) (estimated integer workspace for factors on all processors after analysis): 6276 >>>> INFOG(5) (estimated maximum front size in the complete tree): 216 >>>> INFOG(6) (number of nodes in the complete tree): 24 >>>> INFOG(7) (ordering option effectively used after analysis): 2 >>>> INFOG(8) (structural symmetry in percent of the permuted matrix after analysis): 100 >>>> INFOG(9) (total real/complex workspace to store the matrix factors after factorization): 150505 >>>> INFOG(10) (total integer space store the matrix factors after factorization): 6276 >>>> INFOG(11) (order of largest frontal matrix after factorization): 216 >>>> INFOG(12) (number of off-diagonal pivots): 1044 >>>> INFOG(13) (number of delayed pivots after factorization): 0 >>>> INFOG(14) (number of memory compress after factorization): 0 >>>> INFOG(15) (number of steps of iterative refinement after solution): 0 >>>> INFOG(16) (estimated size (in MB) of all MUMPS internal data for factorization after analysis: value on the most memory consuming processor): 2 >>>> INFOG(17) (estimated size of all MUMPS internal data for factorization after analysis: sum over all processors): 2 >>>> INFOG(18) (size of all MUMPS internal data allocated during factorization: value on the most memory consuming processor): 2 >>>> INFOG(19) (size of all MUMPS internal data allocated during factorization: sum over all processors): 2 >>>> INFOG(20) (estimated number of entries in the factors): 126936 >>>> INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 2 >>>> INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 2 >>>> INFOG(23) (after analysis: value of ICNTL(6) effectively used): 0 >>>> INFOG(24) (after analysis: value of ICNTL(12) effectively used): 1 >>>> INFOG(25) (after factorization: number of pivots modified by static pivoting): 0 >>>> INFOG(28) (after factorization: number of null pivots encountered): 0 >>>> INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 126936 >>>> INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 2, 2 >>>> INFOG(32) (after analysis: type of analysis done): 1 >>>> INFOG(33) (value used for ICNTL(8)): 7 >>>> INFOG(34) (exponent of the determinant if determinant is requested): 0 >>>> INFOG(35) (after factorization: number of entries taking into account BLR factor compression - sum over all processors): 126936 >>>> INFOG(36) (after analysis: estimated size of all MUMPS internal data for running BLR in-core - value on the most memory consuming processor): 0 >>>> INFOG(37) (after analysis: estimated size of all MUMPS internal data for running BLR in-core - sum over all processors): 0 >>>> INFOG(38) (after analysis: estimated size of all MUMPS internal data for running BLR out-of-core - value on the most memory consuming processor): 0 >>>> INFOG(39) (after analysis: estimated size of all MUMPS internal data for running BLR out-of-core - sum over all processors): 0 >>>> linear system matrix = precond matrix: >>>> Mat Object: 1 MPI process >>>> type: seqaij >>>> rows=1152, cols=1152 >>>> total: nonzeros=60480, allocated nonzeros=60480 >>>> total number of mallocs 
used during MatSetValues calls=0 >>>> using I-node routines: found 384 nodes, limit used is 5 >>>> >>>> >>>> >>>> On Mon, Dec 22, 2025 at 9:25?AM Barry Smith > wrote: >>>>> David, >>>>> >>>>> This is due to a software glitch. SNES_DIVERGED_FUNCTION_DOMAIN was added long after the origins of SNES and, in places, the code was never fully updated to handle function domain problems. In particular, parts of the line search don't handle it correctly. Can you run with -snes_view and that will help us find the spot that needs to be updated. >>>>> >>>>> Barry >>>>> >>>>> >>>>>> On Dec 21, 2025, at 5:53?PM, David Knezevic > wrote: >>>>>> >>>>>> Hi, actually, I have a follow up on this topic. >>>>>> >>>>>> I noticed that when I call SNESSetFunctionDomainError(), it exits the solve as expected, but it leads to a converged reason "DIVERGED_LINE_SEARCH" instead of "DIVERGED_FUNCTION_DOMAIN". If I also set SNESSetConvergedReason(snes, SNES_DIVERGED_FUNCTION_DOMAIN) in the callback, then I get the expected SNES_DIVERGED_FUNCTION_DOMAIN converged reason, so that's what I'm doing now. I was surprised by this behavior, though, since I expected that calling SNESSetFunctionDomainError woudld lead to the DIVERGED_FUNCTION_DOMAIN converged reason, so I just wanted to check on what could be causing this. >>>>>> >>>>>> FYI, I'm using PETSc 3.23.4 >>>>>> >>>>>> Thanks, >>>>>> David >>>>>> >>>>>> >>>>>> On Thu, Dec 18, 2025 at 8:10?AM David Knezevic > wrote: >>>>>>> Thank you very much for this guidance. I switched to use SNES_DIVERGED_FUNCTION_DOMAIN, and I don't get any errors now. >>>>>>> >>>>>>> Thanks! >>>>>>> David >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 17, 2025 at 3:43?PM Barry Smith > wrote: >>>>>>>> >>>>>>>> >>>>>>>>> On Dec 17, 2025, at 2:47?PM, David Knezevic > wrote: >>>>>>>>> >>>>>>>>> Stefano and Barry: Thank you, this is very helpful. >>>>>>>>> >>>>>>>>> I'll give some more info here which may help to clarify further. Normally we do just get a negative "converged reason", as you described. But in this specific case where I'm having issues the solve is a numerically sensitive creep solve, which has exponential terms in the residual and jacobian callback that can "blow up" and give NaN values. In this case, the root cause is that we hit a NaN value during a callback, and then we throw an exception (in libMesh C++ code) which I gather leads to the SNES solve exiting with this error code. >>>>>>>>> >>>>>>>>> Is there a way to tell the SNES to terminate with a negative "converged reason" because we've encountered some issue during the callback? >>>>>>>> >>>>>>>> In your callback you should call SNESSetFunctionDomainError() and make sure the function value has an infinity or NaN in it (you can call VecFlag() for this purpose)). 
>>>>>>>> >>>>>>>> Now SNESConvergedReason will be a completely reasonable SNES_DIVERGED_FUNCTION_DOMAIN >>>>>>>> >>>>>>>> Barry >>>>>>>> >>>>>>>> If you are using an ancient version of PETSc (I hope you are using the latest since that always has more bug fixes and features) that does not have SNESSetFunctionDomainError then just make sure the function vector result has an infinity or NaN in it and then SNESConvergedReason will be SNES_DIVERGED_FNORM_NAN >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> David >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 17, 2025 at 2:25?PM Barry Smith > wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> On Dec 17, 2025, at 2:08?PM, David Knezevic via petsc-users > wrote: >>>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> I'm using PETSc via the libMesh framework, so creating a MWE is complicated by that, unfortunately. >>>>>>>>>>> >>>>>>>>>>> The situation is that I am not modifying the solution vector in a callback. The SNES solve has terminated, with PetscErrorCode 82, and I then want to update the solution vector (reset it to the "previously converged value") and then try to solve again with a smaller load increment. This is a typical "auto load stepping" strategy in FE. >>>>>>>>>> >>>>>>>>>> Once a PetscError is generated you CANNOT continue the PETSc program, it is not designed to allow this and trying to continue will lead to further problems. >>>>>>>>>> >>>>>>>>>> So what you need to do is prevent PETSc from getting to the point where an actual PetscErrorCode of 82 is generated. Normally SNESSolve() returns without generating an error even if the nonlinear solver failed (for example did not converge). One then uses SNESGetConvergedReason to check if it converged or not. Normally when SNESSolve() returns, regardless of whether the converged reason is negative or positive, there will be no locked vectors and one can modify the SNES object and call SNESSolve again. >>>>>>>>>> >>>>>>>>>> So my guess is that an actual PETSc error is being generated because SNESSetErrorIfNotConverged(snes,PETSC_TRUE) is being called by either your code or libMesh or the option -snes_error_if_not_converged is being used. In your case when you wish the code to work after a non-converged SNESSolve() these options should never be set instead you should check the result of SNESGetConvergedReason() to check if SNESSolve has failed. If SNESSetErrorIfNotConverged() is never being set that may indicate you are using an old version of PETSc or have it a bug inside PETSc's SNES that does not handle errors correctly and we can help fix the problem if you can provide a full debug output version of when the error occurs. >>>>>>>>>> >>>>>>>>>> Barry >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I think the key piece of info I'd like to know is, at what point is the solution vector "unlocked" by the SNES object? Should it be unlocked as soon as the SNES solve has terminated with PetscErrorCode 82? Since it seems to me that it hasn't been unlocked yet (maybe just on a subset of the processes). Should I manually "unlock" the solution vector by calling VecLockWriteSet? >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> David >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 17, 2025 at 2:02?PM Stefano Zampini > wrote: >>>>>>>>>>>> You are not allowed to call VecGetArray on the solution vector of an SNES object within a user callback, nor to modify its values in any other way. 
>>>>>>>>>>>> Put in C++ lingo, the solution vector is a "const" argument >>>>>>>>>>>> It would be great if you could provide an MWE to help us understand your problem >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Il giorno mer 17 dic 2025 alle ore 20:51 David Knezevic via petsc-users > ha scritto: >>>>>>>>>>>>> Hi all, >>>>>>>>>>>>> >>>>>>>>>>>>> I have a question about this error: >>>>>>>>>>>>>> Vector 'Vec_0x84000005_0' (argument #2) was locked for read-only access in unknown_function() at unknown file:0 (line numbers only accurate to function begin) >>>>>>>>>>>>> >>>>>>>>>>>>> I'm encountering this error in an FE solve where there is an error encountered during the residual/jacobian assembly, and what we normally do in that situation is shrink the load step and continue, starting from the "last converged solution". However, in this case I'm running on 32 processes, and 5 of the processes report the error above about a "locked vector". >>>>>>>>>>>>> >>>>>>>>>>>>> We clear the SNES object (via SNESDestroy) before we reset the solution to the "last converged solution", and then we make a new SNES object subsequently. But it seems to me that somehow the solution vector is still marked as "locked" on 5 of the processes when we modify the solution vector, which leads to the error above. >>>>>>>>>>>>> >>>>>>>>>>>>> I was wondering if someone could advise on what the best way to handle this would be? I thought one option could be to add an MPI barrier call prior to updating the solution vector to "last converged solution", to make sure that the SNES object is destroyed on all procs (and hence the locks cleared) before editing the solution vector, but I'm unsure if that would make a difference. Any help would be most appreciated! >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> David >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> Stefano >>>>>>>>>> >>>>>>>> >>>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From logic_imagination at 163.com Thu Jan 8 06:29:14 2026 From: logic_imagination at 163.com (=?GBK?B?s8LF4Mi6?=) Date: Thu, 8 Jan 2026 20:29:14 +0800 (CST) Subject: [petsc-users] Some questions about modifying the linear system in SNES Message-ID: <12c7e067.9b71.19b9d951f80.Coremail.logic_imagination@163.com> $\quad$ Hello, I use `SNESComputeJacobianDefault` to build a preconditioning matrix for JFNK and use `MatGetRowSumAbs` to obtain the vector for scaling. When I modify the preconditioning matrix in formJacobian[`SNESSetJacobian`] and modify rhs and the solution variables in preSolve[`KSPSetPreSolve`] and postSolve[`KSPSetPostSolve`] as scaling, I encounter the following questions. 1. If I need to scale the solution variables, do I need to call `SNESGetSolutionUpdate` to scale the increment vector instead of the solution vector in `postSolve(KSP /*ksp*/, Vec rhs, Vec x, void * ctx)`? 2. Whether this can achieve the same scaling effect as `-pc_jacobi_type rowl1`, so that the built-in preconditioner scheme of petsc can be applied on the basis of the above modified linear system? Or whether this will affect the residual used to construct the matrix through the finite difference and then lead to the wrong scaling effect? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mfadams at lbl.gov Thu Jan 8 13:00:52 2026 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 8 Jan 2026 14:00:52 -0500 Subject: [petsc-users] Some questions about modifying the linear system in SNES In-Reply-To: <12c7e067.9b71.19b9d951f80.Coremail.logic_imagination@163.com> References: <12c7e067.9b71.19b9d951f80.Coremail.logic_imagination@163.com> Message-ID: You don't want to scale the matrix manually. -pc_jacobi_type rowl1 do what you want. And Jacobi is a more common choice. Mark On Thu, Jan 8, 2026 at 1:02?PM ??? wrote: > $\quad$ Hello, I use `SNESComputeJacobianDefault` to build a > preconditioning matrix for JFNK and use `MatGetRowSumAbs` to obtain the > vector for scaling. When I modify the preconditioning matrix in formJacobian > [`SNESSetJacobian`] and modify rhs and the solution variables in preSolve[ > `KSPSetPreSolve`] and postSolve[`KSPSetPostSolve`] as scaling, I > encounter the following questions. > 1. If I need to scale the solution variables, do I need to call ` > SNESGetSolutionUpdate` to scale the increment vector instead of the > solution vector in `postSolve(KSP /*ksp*/, Vec rhs, Vec x, void * ctx)`? > > 2. Whether this can achieve the same scaling effect as `-pc_jacobi_type > rowl1`, so that the built-in preconditioner scheme of petsc can be > applied on the basis of the above modified linear system? Or whether this > will affect the residual used to construct the matrix through the finite > difference and then lead to the wrong scaling effect? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From logic_imagination at 163.com Thu Jan 8 19:57:01 2026 From: logic_imagination at 163.com (=?UTF-8?B?6ZmI5Z+5576kIA==?=) Date: Fri, 9 Jan 2026 09:57:01 +0800 (CST) Subject: [petsc-users] Some questions about modifying the linear system in SNES In-Reply-To: References: <12c7e067.9b71.19b9d951f80.Coremail.logic_imagination@163.com> Message-ID: <727d0149.1ba5.19ba078a9df.Coremail.logic_imagination@163.com> $\quad$ Thanks. What I want is to use `rowl1` as scaling and then apply other preconditioner, but when I use `-pc_type composite -pc_composite_type multiplicative -pc_composite_pcs jacobi,...`, I can't set `-pc_jacobi_type rowl1`. Can `-sub_pc_type` or `PCCOMPOSITE` achieve the above requirements? $\quad$ I couldn't find the relevant command line. so I manually scale the linear system in SNES. But it seems that when the preconditioning matrix is constructed by the finite difference method based on the residual, the use of a row scaling similar to the left preconditioner will cause the residual to change and lead to repeated scaling. Is that so? $\quad$ Then how does the left preconditioner in petsc do not affect the residuals used to construct the preconditioning matrix? At 2026-01-09 03:00:52, "Mark Adams" wrote: You don't want to scale the matrix manually. -pc_jacobi_type rowl1 do what you want. And Jacobi is a more common choice. Mark On Thu, Jan 8, 2026 at 1:02?PM ??? wrote: $\quad$ Hello, I use `SNESComputeJacobianDefault` to build a preconditioning matrix for JFNK and use `MatGetRowSumAbs` to obtain the vector for scaling. When I modify the preconditioning matrix in formJacobian[`SNESSetJacobian`] and modify rhs and the solution variables in preSolve[`KSPSetPreSolve`] and postSolve[`KSPSetPostSolve`] as scaling, I encounter the following questions. 1. 
If I need to scale the solution variables, do I need to call `SNESGetSolutionUpdate` to scale the increment vector instead of the solution vector in `postSolve(KSP /*ksp*/, Vec rhs, Vec x, void * ctx)`? 2. Whether this can achieve the same scaling effect as `-pc_jacobi_type rowl1`, so that the built-in preconditioner scheme of petsc can be applied on the basis of the above modified linear system? Or whether this will affect the residual used to construct the matrix through the finite difference and then lead to the wrong scaling effect? -------------- next part -------------- An HTML attachment was scrubbed... URL: From logic_imagination at 163.com Thu Jan 8 20:46:12 2026 From: logic_imagination at 163.com (=?UTF-8?B?6ZmI5Z+5576kIA==?=) Date: Fri, 9 Jan 2026 10:46:12 +0800 (CST) Subject: [petsc-users] Some questions about modifying the linear system in SNES In-Reply-To: <727d0149.1ba5.19ba078a9df.Coremail.logic_imagination@163.com> References: <12c7e067.9b71.19b9d951f80.Coremail.logic_imagination@163.com> <727d0149.1ba5.19ba078a9df.Coremail.logic_imagination@163.com> Message-ID: <4c7ce9cb.2b84.19ba0a5b31e.Coremail.logic_imagination@163.com> $\quad$`-pc_type composite -pc_composite_type multiplicative -pc_composite_pcs jacobi,ilu -sub_0_pc_jacobi_type rowl1` can set the type, but there is still no way to set the left preconditioner or right preconditioner. Is there a command line such as `-sub_0_ksp_pc_side left -sub_1_ksp_pc_side right`? At 2026-01-09 09:57:01, "??? " wrote: $\quad$ Thanks. What I want is to use `rowl1` as scaling and then apply other preconditioner, but when I use `-pc_type composite -pc_composite_type multiplicative -pc_composite_pcs jacobi,...`, I can't set `-pc_jacobi_type rowl1`. Can `-sub_pc_type` or `PCCOMPOSITE` achieve the above requirements? $\quad$ I couldn't find the relevant command line. so I manually scale the linear system in SNES. But it seems that when the preconditioning matrix is constructed by the finite difference method based on the residual, the use of a row scaling similar to the left preconditioner will cause the residual to change and lead to repeated scaling. Is that so? $\quad$ Then how does the left preconditioner in petsc do not affect the residuals used to construct the preconditioning matrix? At 2026-01-09 03:00:52, "Mark Adams" wrote: You don't want to scale the matrix manually. -pc_jacobi_type rowl1 do what you want. And Jacobi is a more common choice. Mark On Thu, Jan 8, 2026 at 1:02?PM ??? wrote: $\quad$ Hello, I use `SNESComputeJacobianDefault` to build a preconditioning matrix for JFNK and use `MatGetRowSumAbs` to obtain the vector for scaling. When I modify the preconditioning matrix in formJacobian[`SNESSetJacobian`] and modify rhs and the solution variables in preSolve[`KSPSetPreSolve`] and postSolve[`KSPSetPostSolve`] as scaling, I encounter the following questions. 1. If I need to scale the solution variables, do I need to call `SNESGetSolutionUpdate` to scale the increment vector instead of the solution vector in `postSolve(KSP /*ksp*/, Vec rhs, Vec x, void * ctx)`? 2. Whether this can achieve the same scaling effect as `-pc_jacobi_type rowl1`, so that the built-in preconditioner scheme of petsc can be applied on the basis of the above modified linear system? Or whether this will affect the residual used to construct the matrix through the finite difference and then lead to the wrong scaling effect? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at petsc.dev Thu Jan 8 22:19:20 2026 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 8 Jan 2026 23:19:20 -0500 Subject: [petsc-users] Some questions about modifying the linear system in SNES In-Reply-To: <727d0149.1ba5.19ba078a9df.Coremail.logic_imagination@163.com> References: <12c7e067.9b71.19b9d951f80.Coremail.logic_imagination@163.com> <727d0149.1ba5.19ba078a9df.Coremail.logic_imagination@163.com> Message-ID: I told you how to try out your idea by modifying the PETSc source code in KSP to implement the scaling you want in KSPSolve. You are wasting a lot of your time and other people's time trying to do it other ways that will not work. It appears you may be intimidated by the thought of actually changing the PETSc source code to accomplish what you would like to try. Don't be, it is just source code you can modify to try your idea. Doing it will give you the scaling you want; there is no other way to achieve this scaling using any other part of PETSc that is not designed for this purpose. > On Jan 8, 2026, at 8:57?PM, ??? wrote: > > $\quad$ Thanks. What I want is to use `rowl1` as scaling and then apply other preconditioner, but when I use `-pc_type composite -pc_composite_type multiplicative -pc_composite_pcs jacobi,...`, I can't set `-pc_jacobi_type rowl1`. Can `-sub_pc_type` or `PCCOMPOSITE` achieve the above requirements? > $\quad$ I couldn't find the relevant command line. so I manually scale the linear system in SNES. But it seems that when the preconditioning matrix is constructed by the finite difference method based on the residual, the use of a row scaling similar to the left preconditioner will cause the residual to change and lead to repeated scaling. Is that so? > $\quad$ Then how does the left preconditioner in petsc do not affect the residuals used to construct the preconditioning matrix? > > At 2026-01-09 03:00:52, "Mark Adams" wrote: > > You don't want to scale the matrix manually. -pc_jacobi_type rowl1 > do what you want. And Jacobi is a more common choice. > > Mark > > On Thu, Jan 8, 2026 at 1:02?PM ??? > wrote: >> $\quad$ Hello, I use `SNESComputeJacobianDefault` to build a preconditioning matrix for JFNK and use `MatGetRowSumAbs` to obtain the vector for scaling. When I modify the preconditioning matrix in formJacobian[`SNESSetJacobian`] and modify rhs and the solution variables in preSolve[`KSPSetPreSolve`] and postSolve[`KSPSetPostSolve`] as scaling, I encounter the following questions. >> 1. If I need to scale the solution variables, do I need to call `SNESGetSolutionUpdate` to scale the increment vector instead of the solution vector in `postSolve(KSP /*ksp*/, Vec rhs, Vec x, void * ctx)`? >> 2. Whether this can achieve the same scaling effect as `-pc_jacobi_type rowl1`, so that the built-in preconditioner scheme of petsc can be applied on the basis of the above modified linear system? Or whether this will affect the residual used to construct the matrix through the finite difference and then lead to the wrong scaling effect? >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Tue Jan 13 09:18:34 2026 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 13 Jan 2026 09:18:34 -0600 Subject: [petsc-users] Last Call for User Presentations: PETSc Online BoF (Feb. 11, 2026) Message-ID: Dear PETSc/TAO Community, Thank you to those who have already signed up to present their work at the upcoming PETSc Birds-of-a-Feather (BoF) session. 
We are still seeking additional user presentations. The PETSc BoF will be held online via Zoom on *February 11, 2026, from 11:00 AM to 12:30 PM U.S. Eastern Time*. Please note that this session will not be recorded. We are seeking approximately five user presentations, each consisting of a 5-minute talk followed by 2 minutes of questions. We currently have four and can still accept one or two presentations. If you are interested, please contact petsc-maint at mcs.anl.gov with your talk title and a brief abstract. The BoF is hosted by the Consortium for the Advancement of Scientific Software (CASS ) and led by the PESO (Partnering for Scientific Software Ecosystem Stewardship) project. The CASS BoF events will take place from February 10?12, 2026. In addition to user presentations, during the PETSc BoF, PETSc developers will highlight recent advances developed following the Exascale Computing Project, including the new PETSc Fortran bindings, PetscRegressor, TaoTerm, updates to PETSc GPU backends, mixed-precision support in PETSc/MUMPS, and integration with OpenFOAM, among other topics. The program will also include an open discussion of emerging PETSc research directions, such as leveraging agentic artificial intelligence to enhance and exploit the PETSc knowledge base. The BoF will provide insight into PETSc?s near-term development roadmap and offer a forum for user feedback on desired features and improvements. Active participation and questions from the audience are strongly encouraged, enabling the PETSc team to better align future development with community needs. The agenda and Zoom link will be shared once the program is finalized. Thank you, and we look forward to your participation. Junchao Zhang On behalf of the PETSc Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From malaboeuf at cines.fr Tue Jan 13 11:17:54 2026 From: malaboeuf at cines.fr (malaboeuf at cines.fr) Date: Tue, 13 Jan 2026 18:17:54 +0100 (CET) Subject: [petsc-users] =?utf-8?q?Confusion_about_MatSetValuesStencil_batc?= =?utf-8?q?hing_over_x=E2=80=93dimension_with_DMDA?= Message-ID: <1796418512.622531.1768324674144.JavaMail.zimbra@cines.fr> Hi all, First, I have to apologize for I lack some experience on this subject, I have tried looking around to solve my issue without much success. I am assembling a 3D Laplacian matrix on a DMDA using MatSetValuesStencil. The following code, which inserts one row at a time, works as expected: for (PetscInt iz = info.zs; iz < info.zs + info.zm; ++iz) { for (PetscInt iy = info.ys; iy < info.ys + info.ym; ++iy) { for (PetscInt ix = info.xs; ix < info.xs + info.xm; ++ix) { const std::array rows{MatStencil{iz, iy, ix, 0}}; const std::array cols{{ {iz, iy, ix, 0}, {iz, iy, ix - 1, 0}, {iz, iy, ix + 1, 0}, {iz, iy - 1, ix, 0}, {iz, iy + 1, ix, 0}, {iz - 1, iy, ix, 0}, {iz + 1, iy, ix, 0} }}; CHKERRQ(MatSetValuesStencil(matrix_A, rows.size(), rows.data(), cols.size(), cols.data(), laplacian.data(), INSERT_VALUES)); } } } How would you batch the whole x dimension as a single MatSetValuesStencil call ? Each time I try, I endup getting errors such as: Inserting a new nonzero at global row/column (..., ...) 
into matrix Regards, -- Etienne Malaboeuf Ing?nieur de recherche HPC, D?partement Calcul Intensif (DCI) Centre Informatique National de l'Enseignement Sup?rieur (CINES) 950 rue de Saint Priest, 34097 Montpellier tel : (334) 67 14 14 02 web : https://urldefense.us/v3/__https://www.cines.fr__;!!G_uCfscf7eWS!dEs8i8p3qAHap5UIsgKxq7Nfj5KU_Mvp9mYj2fNWI5QwZEW8qEuMqPCDekMeZYZHxKcIdmfpIl-BxaWBE-6j-BIpOA$ | https://urldefense.us/v3/__https://dci.dci-gitlab.cines.fr/webextranet__;!!G_uCfscf7eWS!dEs8i8p3qAHap5UIsgKxq7Nfj5KU_Mvp9mYj2fNWI5QwZEW8qEuMqPCDekMeZYZHxKcIdmfpIl-BxaWBE-6NNu-h_Q$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From u1472208 at utah.edu Tue Jan 13 12:44:43 2026 From: u1472208 at utah.edu (Alberto Cattaneo) Date: Tue, 13 Jan 2026 18:44:43 +0000 Subject: [petsc-users] PETSc GPU matrix creation Message-ID: Greetings I hope this email reaches you all well. I was wondering whether it was possible to create PETSc mat objects directly from data that exists on the GPU in AIJ format without copying? For example, either via DLPack or just an assurance that the pointer provided to a creation method is in the needed AIJ format? Ideally, I'd like to be able to build a PETSc AIJCUSPARSE object out of data created by another program. I know there are a few builder methods and paradigms, but I'm a bit confused as to which would be ideal in this circumstance since in some sense the matrix is already created in memory, just not as a PETSc object. Thank you for your assistance, please let me know if I should provide more information. Respectfully: Alberto Cattaneo -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Jan 13 15:53:04 2026 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 13 Jan 2026 16:53:04 -0500 Subject: [petsc-users] PETSc GPU matrix creation In-Reply-To: References: Message-ID: <035AF3A3-173A-4AEB-BA0F-946A40ECE9BB@petsc.dev> Alberto, We don't have such a routine yet, but we should and it would not be terribly difficult to implement (using pieces of PETSc that already exist and cut and pasting them into a new function MatCreateSeqAICUSPARSEWithArrays()). Barry > On Jan 13, 2026, at 1:44?PM, Alberto Cattaneo via petsc-users wrote: > > Greetings > I hope this email reaches you all well. I was wondering whether it was possible to create PETSc mat objects directly from data that exists on the GPU in AIJ format without copying? For example, either via DLPack or just an assurance that the pointer provided to a creation method is in the needed AIJ format? Ideally, I'd like to be able to build a PETSc AIJCUSPARSE object out of data created by another program. I know there are a few builder methods and paradigms, but I'm a bit confused as to which would be ideal in this circumstance since in some sense the matrix is already created in memory, just not as a PETSc object. > Thank you for your assistance, please let me know if I should provide more information. > Respectfully: > Alberto Cattaneo -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Jan 13 15:58:59 2026 From: jed at jedbrown.org (Jed Brown) Date: Tue, 13 Jan 2026 14:58:59 -0700 Subject: [petsc-users] PETSc GPU matrix creation In-Reply-To: <035AF3A3-173A-4AEB-BA0F-946A40ECE9BB@petsc.dev> References: <035AF3A3-173A-4AEB-BA0F-946A40ECE9BB@petsc.dev> Message-ID: <87jyxl8998.fsf@jedbrown.org> Note that petsc4py does have DLPack interfaces for Vec. 
So if you like DLPack, you could extend those interfaces.

Also, MatSetValuesCOO can assemble matrices in which all the data is provided on the device. If you're generating sparse matrices on device, that routine is likely to be useful.
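For concreteness, here is a minimal sketch (mine, not from the thread) of the MatSetValuesCOO assembly path described above. The helper name and sizes are illustrative, and whether the value array may be a device pointer depends on the matrix type (e.g. MATAIJCUSPARSE); treat that as an assumption to check against the MatSetValuesCOO documentation.

    #include <petscmat.h>

    /* Sketch: build an AIJ matrix from COO triplets (coo_i, coo_j, coo_v).
       AssembleFromCOO is an illustrative name, not a PETSc routine. */
    static PetscErrorCode AssembleFromCOO(MPI_Comm comm, PetscInt n, PetscCount nnz,
                                          PetscInt coo_i[], PetscInt coo_j[],
                                          const PetscScalar coo_v[], Mat *A)
    {
      PetscFunctionBeginUser;
      PetscCall(MatCreate(comm, A));
      PetscCall(MatSetSizes(*A, PETSC_DECIDE, PETSC_DECIDE, n, n));
      PetscCall(MatSetType(*A, MATAIJCUSPARSE));               /* or MATAIJ for a host matrix */
      PetscCall(MatSetPreallocationCOO(*A, nnz, coo_i, coo_j)); /* declare the nonzero pattern once */
      PetscCall(MatSetValuesCOO(*A, coo_v, INSERT_VALUES));     /* supply the values */
      PetscFunctionReturn(PETSC_SUCCESS);
    }

The pattern is declared once with MatSetPreallocationCOO(); MatSetValuesCOO() can then be called repeatedly with fresh values, which is convenient when the matrix entries are regenerated each step.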
From junchao.zhang at gmail.com  Tue Jan 13 17:14:45 2026
From: junchao.zhang at gmail.com (Junchao Zhang)
Date: Tue, 13 Jan 2026 17:14:45 -0600
Subject: Re: [petsc-users] PETSc GPU matrix creation
In-Reply-To: <87jyxl8998.fsf@jedbrown.org>
References: <035AF3A3-173A-4AEB-BA0F-946A40ECE9BB@petsc.dev> <87jyxl8998.fsf@jedbrown.org>
Message-ID: 

Note we have

  MatCreateSeqAIJKokkosWithKokkosViews(MPI_Comm, PetscInt, PetscInt, Kokkos::View<PetscInt *> &, Kokkos::View<PetscInt *> &, Kokkos::View<PetscScalar *> &, Mat *);

Yes, I think we could provide

  MatCreateSeqAIJCUSPARSEWithArrays(MPI_Comm comm, PetscInt m, PetscInt n, PetscInt i[], PetscInt j[], PetscScalar a[], Mat *mat)

with i, j, a on device. Note we already have

  MatCreateMPIAIJWithSeqAIJ(MPI_Comm comm, Mat A, Mat B, const PetscInt garray[], Mat *mat).

With that, users are able to create MPIAIJ matrices on device with their own data, if they can manage the complexity.

--Junchao Zhang

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From micol.bassanini at epfl.ch  Wed Jan 14 08:23:28 2026
From: micol.bassanini at epfl.ch (Micol Bassanini)
Date: Wed, 14 Jan 2026 14:23:28 +0000
Subject: [petsc-users] Question on Matrix Product with DMDA Matrix
Message-ID: 

Dear User Support Team,

I have a question regarding my project. I would like to compute a matrix product between a DMDA matrix that is distributed across multiple ranks and a dense matrix that is available on only a single processor. What is the recommended or standard approach for implementing this kind of operation?

Thank you in advance for your help.

Best regards,
Micol

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com  Wed Jan 14 11:49:44 2026
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 14 Jan 2026 12:49:44 -0500
Subject: Re: [petsc-users] Question on Matrix Product with DMDA Matrix
In-Reply-To: 
References: 
Message-ID: 

It sounds like you should distribute your dense matrix. You might do it using MatSetValues() or MatCreateSubmatrix().

  Thanks,

     Matt

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
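As one possible illustration of the first suggestion above (a sketch under assumptions, not an endorsed recipe): create a parallel dense matrix and let rank 0 insert the entries it holds with MatSetValues(); the assembly calls then move each entry to its owning rank. The helper name, the column-major layout of the rank-0 array, and the entry-at-a-time loop are all assumptions made for illustration.

    #include <petscmat.h>

    /* Sketch: scatter a dense M x N matrix held only on rank 0 (array a0,
       assumed column-major) into a parallel MATDENSE matrix B.
       DistributeDense is an illustrative name, not a PETSc routine. */
    static PetscErrorCode DistributeDense(MPI_Comm comm, PetscInt M, PetscInt N,
                                          const PetscScalar *a0, Mat *B)
    {
      PetscMPIInt rank;

      PetscFunctionBeginUser;
      PetscCallMPI(MPI_Comm_rank(comm, &rank));
      PetscCall(MatCreateDense(comm, PETSC_DECIDE, PETSC_DECIDE, M, N, NULL, B));
      if (rank == 0) {
        for (PetscInt i = 0; i < M; i++) {
          for (PetscInt j = 0; j < N; j++) {
            PetscScalar v = a0[i + j * M];   /* column-major access; adjust if row-major */
            PetscCall(MatSetValues(*B, 1, &i, 1, &j, &v, INSERT_VALUES));
          }
        }
      }
      /* Assembly communicates the entries set on rank 0 to their owning ranks */
      PetscCall(MatAssemblyBegin(*B, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(*B, MAT_FINAL_ASSEMBLY));
      PetscFunctionReturn(PETSC_SUCCESS);
    }

For large matrices, the traffic out of rank 0 during assembly can become a bottleneck, which is one reason the MatCreateSubmatrix() route mentioned above may be worth considering instead.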
From junchao.zhang at gmail.com  Wed Jan 14 13:32:51 2026
From: junchao.zhang at gmail.com (Junchao Zhang)
Date: Wed, 14 Jan 2026 13:32:51 -0600
Subject: Re: [petsc-users] PETSc GPU matrix creation
In-Reply-To: 
References: 
Message-ID: 

BTW, are the i[], j[] arrays of the CSR matrix on the GPU, or merely the value array a[]?

--Junchao Zhang

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mwhitte6 at jhu.edu  Fri Jan 16 08:40:48 2026
From: mwhitte6 at jhu.edu (Michael Whitten)
Date: Fri, 16 Jan 2026 14:40:48 +0000
Subject: [petsc-users] petsc-users Digest, Vol 205, Issue 2
In-Reply-To: 
References: 
Message-ID: 

I hope that I am replying in the correct manner to respond to the "Having trouble with basic Fortran-PETSc interoperability" thread I previously started. If not, please correct me.

Thank you, everyone, for your replies; they were very helpful.

I have what appears to be a working example code using the suggested updates, specifically VecGetArray() and VecRestoreArray(), and getting the sequential vector back from PETSc using the information from the FAQ. <>

I have a question about this example code, to make sure I am writing reasonably efficient code. It seems like I have to create an additional PETSc vector 'out_seq', which will essentially be a copy of the PETSc vector 'v1p', and that is not the most efficient use of memory. It also seems to me like there isn't a way around this additional 'out_seq' vector, because there needs to be a place to aggregate the data from the various processes. Is this a reasonable use of PETSc, or is there a more efficient way? Note that I am trying to interface my existing code base with PETSc to use the solvers, and this may be the performance trade-off for not developing my program fully within the PETSc ecosystem.

I have another question only tangentially related to this topic. Should I ask it as part of this thread or create a new topic?

Michael

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test2.F90
Type: text/x-fortran
Size: 1917 bytes
Desc: test2.F90
URL: 
From bsmith at petsc.dev  Fri Jan 16 08:51:29 2026
From: bsmith at petsc.dev (Barry Smith)
Date: Fri, 16 Jan 2026 08:51:29 -0600
Subject: Re: [petsc-users] petsc-users Digest, Vol 205, Issue 2
In-Reply-To: 
References: 
Message-ID: 

1) For the code

      v1 = 1.0d0
      ....
      PetscCallA(VecCreate(PETSC_COMM_WORLD,v1p,ierr))
      ...
      PetscCallA(VecGetArray(v1p,v1ptr,ierr))
      v1ptr = v1
      PetscCallA(VecRestoreArray(v1p,v1ptr,ierr))

   this produces two copies of the array, plus the time to copy the values over. Instead, drop v1 completely and just use

      PetscCallA(VecGetArray(v1p,v1ptr,ierr))
      v1ptr = 1.0d0   ! or fill it up with a loop, etc.
      PetscCallA(VecRestoreArray(v1p,v1ptr,ierr))

2) If you want/need PETSc's entire parallel vector on each process, then you have no choice but to use the code you wrote with VecScatter. Normally, when writing a new MPI code from scratch, one designs it so the entire vector is never needed together on a single process (because that is not scalable), but if you have a current code you need to work with, this is an ok way to start.

   Barry
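For reference, a minimal sketch (mine, not from the thread) of the scatter-to-rank-0 pattern referred to in point 2, written in C for brevity; the Fortran interface mirrors these calls, and the helper name is made up.

    #include <petscvec.h>

    /* Sketch: gather a parallel Vec x into a sequential Vec on rank 0.
       GatherToRankZero is an illustrative name, not a PETSc routine. */
    static PetscErrorCode GatherToRankZero(Vec x, Vec *xseq)
    {
      VecScatter scatter;

      PetscFunctionBeginUser;
      PetscCall(VecScatterCreateToZero(x, &scatter, xseq));
      PetscCall(VecScatterBegin(scatter, x, *xseq, INSERT_VALUES, SCATTER_FORWARD));
      PetscCall(VecScatterEnd(scatter, x, *xseq, INSERT_VALUES, SCATTER_FORWARD));
      PetscCall(VecScatterDestroy(&scatter));
      PetscFunctionReturn(PETSC_SUCCESS);
    }

On rank 0 the resulting sequential vector holds a copy of the entire parallel vector and can be handed back to a serial code; on the other ranks it has length zero.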
From mwhitte6 at jhu.edu  Fri Jan 16 10:03:43 2026
From: mwhitte6 at jhu.edu (Michael Whitten)
Date: Fri, 16 Jan 2026 16:03:43 +0000
Subject: Re: [petsc-users] petsc-users Digest, Vol 205, Issue 12
In-Reply-To: 
References: 
Message-ID: 

Both of your points allude to my other question, so I am going to ask it in this thread.

Is it possible/recommended/reasonable to run my current code sequentially, call a function that interfaces with the PETSc solver, which runs in parallel, and return a sequential vector to my code?
1. I understand your point about the additional memory usage. However, I am imagining v1 as my Fortran vector passed into a function. It is possible that my current plan is quite inefficient, as I am becoming more aware of how much of a novice I am in parallel computing.
2. I don't need PETSc's entire parallel vector on each process; that was not my understanding of that part of the code. I do need the entire vector returned to my sequential code.

I think I am trying to do something intermediate between completely sequential and completely parallel.

Michael
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at petsc.dev  Fri Jan 16 10:38:45 2026
From: bsmith at petsc.dev (Barry Smith)
Date: Fri, 16 Jan 2026 10:38:45 -0600
Subject: Re: [petsc-users] petsc-users Digest, Vol 205, Issue 12
In-Reply-To: 
References: 
Message-ID: <61B106C0-F550-432C-B39B-3FF25F789A3D@petsc.dev>

   Michael,

   Yes, I misunderstood the question.

   Are you specifically wanting to use PETSc's KSP linear solver in parallel while your code runs sequentially (and you do not plan to use SNES or TS or Tao)? If so, then we have code just for that purpose, explained in the section "Using PETSc's MPI parallel linear solvers from a non-MPI program" in the users manual available at petsc.org. The PETSc code manages all the parallelism and the spreading out of the matrix and vectors to the multiple processes.
You just write a sequential program that uses KSP and calls KSPSolve(), make sure it works correctly, and then add the options listed in the users manual to run the linear solve in parallel.

   If you want to use SNES or TS, it will require you to manage moving the vectors and matrices in your code from sequential to parallel using PETSc calls. But again, you would just get your code running correctly with PETSc sequentially and then add some extra boilerplate to use the PETSc parallel solvers with it.

   Barry
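As an illustration of the structure Barry describes, a minimal sketch (mine, not from the thread) of a purely sequential KSP driver follows. Nothing in it is parallel-aware; per the manual section cited above, the parallel linear-solver server is activated through run-time options (if I recall correctly, -mpi_linear_solver_server together with -pc_type mpi, but the manual is the authority on the exact options).

    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      Mat         A;
      Vec         b, x;
      KSP         ksp;
      PetscInt    i, n = 10, col[3];
      PetscScalar v[3];

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      /* Assemble a small sequential 1D Laplacian as a stand-in for the user's matrix */
      PetscCall(MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 3, NULL, &A));
      for (i = 0; i < n; i++) {
        PetscInt ncols = 0;
        if (i > 0)     { col[ncols] = i - 1; v[ncols++] = -1.0; }
        col[ncols] = i; v[ncols++] = 2.0;
        if (i < n - 1) { col[ncols] = i + 1; v[ncols++] = -1.0; }
        PetscCall(MatSetValues(A, 1, &i, ncols, col, v, INSERT_VALUES));
      }
      PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatCreateVecs(A, &x, &b));
      PetscCall(VecSet(b, 1.0));
      PetscCall(KSPCreate(PETSC_COMM_SELF, &ksp));
      PetscCall(KSPSetOperators(ksp, A, A));
      PetscCall(KSPSetFromOptions(ksp)); /* run-time options select the solver configuration */
      PetscCall(KSPSolve(ksp, b, x));
      PetscCall(KSPDestroy(&ksp));
      PetscCall(VecDestroy(&x));
      PetscCall(VecDestroy(&b));
      PetscCall(MatDestroy(&A));
      PetscCall(PetscFinalize());
      return 0;
    }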
From mwhitte6 at jhu.edu  Fri Jan 16 11:18:00 2026
From: mwhitte6 at jhu.edu (Michael Whitten)
Date: Fri, 16 Jan 2026 17:18:00 +0000
Subject: [petsc-users] Having trouble with basic Fortran-PETSc interoperability
In-Reply-To: <61B106C0-F550-432C-B39B-3FF25F789A3D@petsc.dev>
References: <61B106C0-F550-432C-B39B-3FF25F789A3D@petsc.dev>
Message-ID: 

I realized the subject line was incorrect, so I fixed it.

I want to use SNES for sure, and possibly TS and Tao. I would also like to be able to use the matrix-vector multiplication in a similar way. Is there similar guidance for SNES (and TS and Tao) as there is for KSP somewhere?

I had started reading the PETSc user manual while simultaneously experimenting with some code and hadn't made it to that section. It's a large ecosystem and I am trying to get oriented. Your help is much appreciated.
Michael

________________________________
From: Barry Smith
Sent: Friday, January 16, 2026 11:38 AM
To: Michael Whitten
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] petsc-users Digest, Vol 205, Issue 12

   Michael,

   Yes, I misunderstood the question.

   Are you specifically wanting to use PETSc's KSP linear solver in parallel while your code runs sequentially (and you do not plan to use SNES or TS or Tao)? If so, we have code for exactly that purpose, explained in the section "Using PETSc's MPI parallel linear solvers from a non-MPI program" in the users manual available at petsc.org.

   The PETSc code manages all the parallelism and the distribution of the matrix and vectors across the processes. You just write a sequential program that uses KSP and calls KSPSolve(), make sure it works correctly, and then add the options listed in the users manual to run the linear solve in parallel.
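   A bare-bones sequential driver is all you need at that point. Here is a sketch only; the program name, sizes, and matrix entries below are made up, it only fills the diagonal, and a real code would assemble its own matrix and right-hand side:

      program seqksp
#include <petsc/finclude/petscksp.h>
      use petscksp
      implicit none

      Mat            A
      Vec            b, x
      KSP            ksp
      PetscInt       i, n
      PetscScalar    one, two
      PetscErrorCode ierr

      n   = 10
      one = 1.0d0
      two = 2.0d0

      PetscCallA(PetscInitialize(ierr))

      ! assemble a toy diagonal system (placeholder for the real matrix)
      PetscCallA(MatCreate(PETSC_COMM_WORLD, A, ierr))
      PetscCallA(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n, ierr))
      PetscCallA(MatSetFromOptions(A, ierr))
      PetscCallA(MatSetUp(A, ierr))
      do i = 0, n-1
         ! row/column indices are 0-based, even from Fortran
         PetscCallA(MatSetValue(A, i, i, two, INSERT_VALUES, ierr))
      end do
      PetscCallA(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr))
      PetscCallA(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr))

      ! right-hand side and solution vectors that match the matrix layout
      PetscCallA(MatCreateVecs(A, x, b, ierr))
      PetscCallA(VecSet(b, one, ierr))

      ! the usual KSP calls; all solver choices come from the options database
      PetscCallA(KSPCreate(PETSC_COMM_WORLD, ksp, ierr))
      PetscCallA(KSPSetOperators(ksp, A, A, ierr))
      PetscCallA(KSPSetFromOptions(ksp, ierr))
      PetscCallA(KSPSolve(ksp, b, x, ierr))

      PetscCallA(KSPDestroy(ksp, ierr))
      PetscCallA(VecDestroy(x, ierr))
      PetscCallA(VecDestroy(b, ierr))
      PetscCallA(MatDestroy(A, ierr))
      PetscCallA(PetscFinalize(ierr))
      end program seqksp

   Once something like this runs correctly on one process, the manual section lists the exact options to make the KSPSolve() run on multiple processes without further changes to the code.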
   If you want to use SNES or TS, it will require you to manage moving the vectors and matrices in your code from sequential to parallel using PETSc calls. But again, you would just get your code running correctly with PETSc sequentially and then add some extra boilerplate to use the PETSc parallel solvers with it.

   Barry

On Jan 16, 2026, at 10:03 AM, Michael Whitten via petsc-users wrote:

Both of your points allude to my other question, so I am going to ask it in this thread.

Is it possible/recommended/reasonable to run my current code sequentially, call a function that interfaces with the PETSc solver, which runs in parallel, and return a sequential vector to my code?

1. I understand your point about the additional memory usage. However, I am imagining v1 as my Fortran vector passed into a function. It is possible that my current plan is quite inefficient, as I am becoming more aware of how much of a novice I am in parallel computing.
2. I don't need PETSc's entire parallel vector on each process; that was not my understanding of that part of the code. I need the entire vector returned to my sequential code.

I think I am trying to do something intermediate between completely sequential and completely parallel.

Michael
________________________________
From: petsc-users on behalf of petsc-users-request at mcs.anl.gov
Sent: Friday, January 16, 2026 9:52 AM
To: petsc-users at mcs.anl.gov
Subject: petsc-users Digest, Vol 205, Issue 12

Message: 1
Date: Fri, 16 Jan 2026 08:51:29 -0600
From: Barry Smith
To: Michael Whitten
Cc: "petsc-users at mcs.anl.gov"
Subject: Re: [petsc-users] petsc-users Digest, Vol 205, Issue 2

1) For the code

      v1 = 1.0d0
      ....
      PetscCallA(VecCreate(PETSC_COMM_WORLD,v1p,ierr))
      ...
      PetscCallA(VecGetArray(v1p,v1ptr,ierr))
      v1ptr = v1
      PetscCallA(VecRestoreArray(v1p,v1ptr,ierr))

This produces two copies of the array plus the time to copy the values over. Instead, drop v1 completely and just use

      PetscCallA(VecGetArray(v1p,v1ptr,ierr))
      v1ptr = 1.0d0     ! or fill it up with a loop etc.
      PetscCallA(VecRestoreArray(v1p,v1ptr,ierr))

2) If you want/need PETSc's entire parallel vector on each process, then you have no choice but to use the code you wrote with VecScatter. Normally, when writing a new MPI code from scratch, one designs it so the entire vector is never needed together on a single process (because that is not scalable), but if you have an existing code you need to work with, this is an OK way to start.
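For reference, the gather-to-rank-0 pattern that the FAQ entry describes is roughly the following (a sketch, reusing the names from your code: 'v1p' is the parallel vector and 'out_seq' is the sequential copy, which has the full length on rank 0 and length zero on every other rank):

      VecScatter     ctx
      Vec            out_seq

      ! create the rank-0 sequential vector and the scatter that fills it
      PetscCallA(VecScatterCreateToZero(v1p, ctx, out_seq, ierr))
      PetscCallA(VecScatterBegin(ctx, v1p, out_seq, INSERT_VALUES, SCATTER_FORWARD, ierr))
      PetscCallA(VecScatterEnd(ctx, v1p, out_seq, INSERT_VALUES, SCATTER_FORWARD, ierr))

      ! ... use out_seq on rank 0 ...

      PetscCallA(VecScatterDestroy(ctx, ierr))
      PetscCallA(VecDestroy(out_seq, ierr))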
   Barry

On Jan 16, 2026, at 8:40 AM, Michael Whitten via petsc-users wrote:

I hope that I am replying in the correct manner to respond to the "Having trouble with basic Fortran-PETSc interoperability" thread I previously started. If not, please correct me.

Thank you everyone for your replies; they were very helpful.

I have what appears to be a working example code using the suggested updates, specifically VecGetArray() and VecRestoreArray(), and getting the sequential vector back from PETSc using the information from the FAQ.
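In rough terms, the end of my test does the following (a simplified sketch, not the exact attached file; 'sol' and 'n' are placeholders for my existing Fortran array and its global length, and 'out_seq' is the gathered sequential vector):

      PetscScalar, pointer :: xv(:)
      PetscMPIInt           rank

      call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierr)
      if (rank == 0) then
         ! out_seq holds the full vector only on rank 0
         PetscCallA(VecGetArray(out_seq, xv, ierr))
         sol(1:n) = xv(1:n)
         PetscCallA(VecRestoreArray(out_seq, xv, ierr))
      end if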
I have a question about this example code, to make sure I am writing reasonably efficient code. It seems like I have to create an additional PETSc vector 'out_seq', which is essentially a copy of the PETSc vector 'v1p', and that is not the most efficient use of memory. It also seems like there isn't a way around this additional 'out_seq' vector, because there needs to be a place to aggregate the data from the various processes. Is this a reasonable use of PETSc, or is there a more efficient way? Note, I am trying to interface my existing code base with PETSc to use the solvers, and this may be the performance trade-off for not developing my program fully within the PETSc ecosystem.

I have another question only tangentially related to this topic. Should I ask it as part of this thread or create a new topic?

Michael
From bsmith at petsc.dev  Fri Jan 16 11:26:30 2026
From: bsmith at petsc.dev (Barry Smith)
Date: Fri, 16 Jan 2026 11:26:30 -0600
Subject: [petsc-users] Having trouble with basic Fortran-PETSc interoperability
In-Reply-To: 
References: <61B106C0-F550-432C-B39B-3FF25F789A3D@petsc.dev>
Message-ID: <8B16E9E9-775D-4589-886A-D2829A4B0494@petsc.dev>

> On Jan 16, 2026, at 11:18 AM, Michael Whitten wrote:
>
> I realized the subject line was incorrect so I fixed it.
>
> I want to use SNES for sure and possibly TS and Tao. I would also like to be able to use the matrix-vector multiplication in a similar way.

   It doesn't make sense to use MatMult in this way. The granularity is too small and the performance will suck. Just convert your program to use PETSc sequentially; get it debugged and optimized for your problem family. Then, and only then, come back to us and ask how to add the boilerplate to do some things in parallel.
(Of course, you can ask questions about converting your program to use PETSc at any time.)

   Reason: making it parallel in the way you desire is completely orthogonal to getting it working with PETSc, so you can put off all of those issues until it works well with PETSc. There is NO advantage in trying to get it working with PETSc and simultaneously parallel in the way you describe; it will just be much harder for you to understand, write, and debug. Honestly, it would be a nightmare doing it all at once.

   Barry

> Is there similar guidance for SNES (and TS and Tao) as there is for KSP somewhere?
>
> I had started reading the PETSc user manual while simultaneously experimenting with some code and hadn't made it to that section. It's a large ecosystem and I am trying to get oriented. Your help is much appreciated.
>
> Michael
From knepley at gmail.com  Fri Jan 16 11:47:52 2026
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 16 Jan 2026 12:47:52 -0500
Subject: [petsc-users] Having trouble with basic Fortran-PETSc interoperability
In-Reply-To: 
References: <61B106C0-F550-432C-B39B-3FF25F789A3D@petsc.dev>
Message-ID: 

On Fri, Jan 16, 2026 at 12:18 PM Michael Whitten via petsc-users <petsc-users at mcs.anl.gov> wrote:

> I realized the subject line was incorrect so I fixed it.
>
> I want to use SNES for sure and possibly TS and Tao. I would also like to
> be able to use the matrix-vector multiplication in a similar way. Is there
> similar guidance for SNES (and TS and Tao) as there is for KSP somewhere?

It should work exactly the same way as it does for KSP. If you have a more specific question, please mail back.
  Thanks,

     Matt

> I had started reading the PETSc user manual while simultaneously
> experimenting with some code and hadn't made it to that section. It's a
> large ecosystem and I am trying to get oriented. Your help is much
> appreciated.
>
> Michael
> > Michael > ------------------------------ > *From:* petsc-users on behalf of > petsc-users-request at mcs.anl.gov > *Sent:* Friday, January 16, 2026 9:52 AM > *To:* petsc-users at mcs.anl.gov > *Subject:* petsc-users Digest, Vol 205, Issue 12 > > > External Email - Use Caution > > > > Send petsc-users mailing list submissions to > petsc-users at mcs.anl.gov > > To subscribe or unsubscribe via the World Wide Web, visit > > https://urldefense.us/v3/__https://nam02.safelinks.protection.outlook.com/?url=https*3A*2F*2Flists.mcs.anl.gov*2Fmailman*2Flistinfo*2Fpetsc-users&data=05*7C02*7Cmwhitte6*40jhu.edu*7C24e75a940a1946f44a3608de550f304c*7C9fa4f438b1e6473b803f86f8aedf0dec*7C0*7C0*7C639041720893354257*7CUnknown*7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ*3D*3D*7C0*7C*7C*7C&sdata=XiW8wenO4*2B2NKhgAqOP9ew9u31fdVug9JZmSw3WCqio*3D&reserved=0__;JSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUl!!G_uCfscf7eWS!fCGSZ-K1lcfkVVEE95E8-TrDnok3eRJazqaX6N_DdWheUklbAIZiBVeslnlaotQT6iOErmawK4OEvz6ygtHv$ > > or, via email, send a message with subject or body 'help' to > petsc-users-request at mcs.anl.gov > > You can reach the person managing the list at > petsc-users-owner at mcs.anl.gov > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of petsc-users digest..." > > > Today's Topics: > > 1. Re: petsc-users Digest, Vol 205, Issue 2 (Barry Smith) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 16 Jan 2026 08:51:29 -0600 > From: Barry Smith > To: Michael Whitten > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] petsc-users Digest, Vol 205, Issue 2 > Message-ID: > Content-Type: text/plain; charset="utf-8" > > > > 1) For the code > v1 = 1.0d0 > > .... > PetscCallA(VecCreate(PETSC_COMM_WORLD,v1p,ierr)) > ... > PetscCallA(VecGetArray(v1p,v1ptr,ierr)) > v1ptr = v1 > PetscCallA(VecRestoreArray(v1p,v1ptr,ierr)) > > This produces two copies of the array plus time to copy values over. > Instead drop the v1 completely and just use > PetscCallA(VecGetArray(v1p,v1ptr,ierr)) > v1ptr = 1.0d0 ! or fill it up with a loop etc > PetscCallA(VecRestoreArray(v1p,v1ptr,ierr)) > > 2) If you want/need PETSc's entire parallel vector on each process then > you have no choice but to use the code you wrote with VecScatter. Normally > when writing a new MPI code from scratch one designs it so the entire > vector is never needed together on a single process (because that is not > scalable) but if you have a current code you need to work with this is an > ok way to start. > > Barry > > > > > On Jan 16, 2026, at 8:40?AM, Michael Whitten via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > > > I hope that I am replying in the correct manner to respond to the > "Having trouble with basic Fortran-PETSc interoperability" thread I > previously started. If not, please correct me. > > > > Thank you everyone for your replies, they were very helpful. > > > > I have what appears to be a working example code using the suggested > updates, specifically the VecGetArray() and VecRestoreArray() and getting > the sequential vector back from PETSc using the information from the FAQ. > <> > > > > I have a question about this example code to make sure I am writing > reasonably efficient code. It seems like I have to create an additional > PETSc vector 'out_seq' which will essentially be a copy of the PETSc vector > 'v1p' which is not the most efficient use of memory. 
It also seems to me > like there isn't a way around this additional 'out_seq' vector because > there needs a place to aggregate the data from the various processes. Is > this a reasonable use of PETSc or is there a more efficient way? Note, I am > trying to interface my existing code base with PETSc to use the solvers and > this may be the performance trade-off for not developing my program fully > within the PETSc ecosystem. > > > > I have another question only tangentially related to this topic. Should > I ask it as part of this thread or create a new topic? > > > > Michael > > From: petsc-users mailto:petsc-users-bounces at mcs.anl.gov >> > on behalf of petsc-users-request at mcs.anl.gov< > mailto:petsc-users-request at mcs.anl.gov > > >> > > Sent: Saturday, January 3, 2026 1:00 PM > > To: petsc-users at mcs.anl.gov > mailto:petsc-users at mcs.anl.gov >> > > Subject: petsc-users Digest, Vol 205, Issue 2 > > > > > > External Email - Use Caution > > > > > > > > Send petsc-users mailing list submissions to > > petsc-users at mcs.anl.gov > > > > > To subscribe or unsubscribe via the World Wide Web, visit > > > https://urldefense.us/v3/__https://nam02.safelinks.protection.outlook.com/?url=https*3A*2F*2Furldefense.us*2Fv3*2F__https*3A*2F*2Fnam02.safelinks.protection.outlook.com*2F*3Furl*3Dhttps*3A*2F*2Flists.mcs.anl.gov*2Fmailman*2Flistinfo*2Fpetsc-users*26data*3D05*7C02*7Cmwhitte6*40jhu.edu*7C094639f3b5f5405cfc4e08de4af1fa67*7C9fa4f438b1e6473b803f86f8aedf0dec*7C0*7C0*7C639030600339697177*7CUnknown*7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ*3D*3D*7C0*7C*7C*7C*26sdata*3DJh2MHiTm*2Fj3*2Ft6gyDoGp295Ex3SzJAW0iyV7GN*2FDN0o*3D*26reserved*3D0__*3BJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSU!!G_uCfscf7eWS!bIMcmpMdXbyTPg56scE6PGSFLgQ9D07NeoK_UCdRbCrnEdwa1m3aLzbnLed2KLUNJJyHJeWiICB-ePUp58-iEhM*24&data=05*7C02*7Cmwhitte6*40jhu.edu*7C24e75a940a1946f44a3608de550f304c*7C9fa4f438b1e6473b803f86f8aedf0dec*7C0*7C0*7C639041720893387619*7CUnknown*7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ*3D*3D*7C0*7C*7C*7C&sdata=bJNbqOfO7jxYhemrhUYKP53ocQ*2BipAdFWeq8qtU9v2Q*3D&reserved=0__;JSUlJSUlJSUlJSUqKioqKiolJSoqKioqKioqKioqKioqKiolJSoqKiolJSUlJSUlJSUlJSUlJSUlJSUlJSUl!!G_uCfscf7eWS!fCGSZ-K1lcfkVVEE95E8-TrDnok3eRJazqaX6N_DdWheUklbAIZiBVeslnlaotQT6iOErmawK4OEv5_dAoo1$ > > < > https://urldefense.us/v3/__https://nam02.safelinks.protection.outlook.com/?url=https*3A*2F*2Flists.mcs.anl.gov*2Fmailman*2Flistinfo*2Fpetsc-users&data=05*7C02*7Cmwhitte6*40jhu.edu*7C24e75a940a1946f44a3608de550f304c*7C9fa4f438b1e6473b803f86f8aedf0dec*7C0*7C0*7C639041720893419172*7CUnknown*7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ*3D*3D*7C0*7C*7C*7C&sdata=oF1WV15F2HwHTXa1mMb95KKl6MGMxmN*2BINiJ25rqSto*3D&reserved=0__;JSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUl!!G_uCfscf7eWS!fCGSZ-K1lcfkVVEE95E8-TrDnok3eRJazqaX6N_DdWheUklbAIZiBVeslnlaotQT6iOErmawK4OEvyF9FaW2$ > > > > > or, via email, send a message with subject or body 'help' to > > petsc-users-request at mcs.anl.gov < > mailto:petsc-users-request at mcs.anl.gov > > > > > You can reach the person managing the list at > > petsc-users-owner at mcs.anl.gov < > mailto:petsc-users-owner at mcs.anl.gov > > > > > When replying, please edit your Subject line so it is more specific > > than "Re: Contents of petsc-users digest..." > > > > > > Today's Topics: > > > > 1. Re: Having trouble with basic Fortran-PETSc interoperability > > (Barry Smith) > > 2. 
Re: Having trouble with basic Fortran-PETSc interoperability > > (Matthew Knepley) > > > > > > ---------------------------------------------------------------------- > > > > Message: 1 > > Date: Fri, 2 Jan 2026 16:33:27 -0500 > > From: Barry Smith >> > > To: Michael Whitten >> > > Cc: "petsc-users at mcs.anl.gov >" mailto:petsc-users at mcs.anl.gov >> > > Subject: Re: [petsc-users] Having trouble with basic Fortran-PETSc > > interoperability > > Message-ID: <3B11AFD2-F767-4D6A-9CB6-D8D153BEA27E at petsc.dev < > mailto:3B11AFD2-F767-4D6A-9CB6-D8D153BEA27E at petsc.dev > <3B11AFD2-F767-4D6A-9CB6-D8D153BEA27E at petsc.dev>>> > > Content-Type: text/plain; charset="utf-8" > > > > > > VecGetValues() uses 0 based indexing in both Fortran and C. > > > > You don't want to use VecGetValues() and VecSetValues() usually since > they result in two copies of the arrays and copying entire arrays back and > forth. > > > > You can avoid copying between PETSc vectors and your arrays by using > VecGetArray(), VecGetArrayWrite(), and VecGetArrayRead(). You can also use > VecCreateMPIWithArray() to create a PETSc vector using your array; for > example for input to the right hand side of a KSP. These arrays start their > indexing with the Fortran default of 1. > > > > > > Barry > > > > > > > > > On Jan 2, 2026, at 2:42?PM, Michael Whitten via petsc-users < > petsc-users at mcs.anl.gov >> wrote: > > > > > > Hi PETSc mailing list users, > > > > > > I have managed to install PETSc and run some PETSc examples and little > test codes of my own in Fortran. I am now trying to make PETSc work with my > existing Fortran code. I have tried to build little test examples of the > functionality that I can then incorporate into my larger code base. > However, I am having trouble just passing vectors back and forth between > PETSc and Fortran. > > > > > > I have attached a minimum semi-working example that can be compiled > with the standard 'Makefile.user'. It throws an error when I try to copy > the PETSc vector back to a Fortran vector using VecGetValues(). I get that > it can only access values of the array on the local process but how do I > fix this? Is this even the right approach? > > > > > > In the final implementation I want to be able to assemble my matrix > and vector, convert them to PETSc data structures, use PETSc to solve, and > then convert the solution vector back to Fortran and return. I want to be > able to do this with both the linear and nonlinear solvers. It seems like > this is what PETSc is, in part, built to do. Is this a reasonable > expectation to achieve? Is this a reasonable use case for PETSc? > > > > > > Thanks in advance for any help you can offer. > > > > > > best, > > > Michael > > > > > > > -------------- next part -------------- > > An HTML attachment was scrubbed... 
> > URL: < > https://urldefense.us/v3/__https://nam02.safelinks.protection.outlook.com/?url=https*3A*2F*2Furldefense.us*2Fv3*2F__https*3A*2F*2Fnam02.safelinks.protection.outlook.com*2F*3Furl*3Dhttp*3A*2F*2Flists.mcs.anl.gov*2Fpipermail*2Fpetsc-users*2Fattachments*2F20260102*2Fedfefca2*2Fattachment-0001.html*26data*3D05*7C02*7Cmwhitte6*40jhu.edu*7C094639f3b5f5405cfc4e08de4af1fa67*7C9fa4f438b1e6473b803f86f8aedf0dec*7C0*7C0*7C639030600350658200*7CUnknown*7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ*3D*3D*7C0*7C*7C*7C*26sdata*3DrgN4n0umo9J7Jj50HJVWtGub3AKBXQljLjTM5c4u19I*3D*26reserved*3D0__*3BJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSU!!G_uCfscf7eWS!bIMcmpMdXbyTPg56scE6PGSFLgQ9D07NeoK_UCdRbCrnEdwa1m3aLzbnLed2KLUNJJyHJeWiICB-ePUp3Vo7fCQ*24&data=05*7C02*7Cmwhitte6*40jhu.edu*7C24e75a940a1946f44a3608de550f304c*7C9fa4f438b1e6473b803f86f8aedf0dec*7C0*7C0*7C639041720893442508*7CUnknown*7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ*3D*3D*7C0*7C*7C*7C&sdata=iG8rnruM8*2B6c2bnI*2FwtjoL7EpswYF6xJdZqraBAeCxE*3D&reserved=0__;JSUlJSUlJSUlJSUqKioqKioqKiolJSoqKioqKioqKioqKioqKiolJSolJSUlJSUlJSUlJSUlJSUlJSUlJSUlJQ!!G_uCfscf7eWS!fCGSZ-K1lcfkVVEE95E8-TrDnok3eRJazqaX6N_DdWheUklbAIZiBVeslnlaotQT6iOErmawK4OEvxhMpUKz$ > > < > https://urldefense.us/v3/__https://nam02.safelinks.protection.outlook.com/?url=http*3A*2F*2Flists.mcs.anl.gov*2Fpipermail*2Fpetsc-users*2Fattachments*2F20260102*2Fedfefca2*2Fattachment-0001.html&data=05*7C02*7Cmwhitte6*40jhu.edu*7C24e75a940a1946f44a3608de550f304c*7C9fa4f438b1e6473b803f86f8aedf0dec*7C0*7C0*7C639041720893465042*7CUnknown*7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ*3D*3D*7C0*7C*7C*7C&sdata=kFnDIwMbwkWWM6*2B4pnQcu2E17DR6qdaFQucWm5WleI0*3D&reserved=0__;JSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUl!!G_uCfscf7eWS!fCGSZ-K1lcfkVVEE95E8-TrDnok3eRJazqaX6N_DdWheUklbAIZiBVeslnlaotQT6iOErmawK4OEv1gI9bsE$ > > >> > > > > ------------------------------ > > > > Message: 2 > > Date: Fri, 2 Jan 2026 17:49:30 -0500 > > From: Matthew Knepley >> > > To: Michael Whitten >> > > Cc: "petsc-users at mcs.anl.gov >" mailto:petsc-users at mcs.anl.gov >> > > Subject: Re: [petsc-users] Having trouble with basic Fortran-PETSc > > interoperability > > Message-ID: > > < > CAMYG4GmKRM1=CoK6JWbo6iTWVPKksepEOH1q1CLLKgr2QhKKDw at mail.gmail.com< > mailto:CAMYG4GmKRM1=CoK6JWbo6iTWVPKksepEOH1q1CLLKgr2QhKKDw at mail.gmail.com > >> > > Content-Type: text/plain; charset="utf-8" > > > > On Fri, Jan 2, 2026 at 2:48?PM Michael Whitten via petsc-users < > > petsc-users at mcs.anl.gov >> wrote: > > > > > Hi PETSc mailing list users, > > > > > > I have managed to install PETSc and run some PETSc examples and little > > > test codes of my own in Fortran. I am now trying to make PETSc work > with my > > > existing Fortran code. I have tried to build little test examples of > the > > > functionality that I can then incorporate into my larger code base. > > > However, I am having trouble just passing vectors back and forth > between > > > PETSc and Fortran. > > > > > > I have attached a minimum semi-working example that can be compiled > with > > > the standard 'Makefile.user'. It throws an error when I try to copy the > > > PETSc vector back to a Fortran vector using VecGetValues(). I get that > it > > > can only access values of the array on the local process but how do I > fix > > > this? Is this even the right approach? > > > > > > > No. 
> > You should just call VecGetArray() to get back an F90 pointer to the
> > values. This is much more convenient.
> >
> > > In the final implementation I want to be able to assemble my matrix and
> > > vector, convert them to PETSc data structures, use PETSc to solve, and then
> > > convert the solution vector back to Fortran and return. I want to be able
> > > to do this with both the linear and nonlinear solvers. It seems like this
> > > is what PETSc is, in part, built to do. Is this a reasonable expectation to
> > > achieve? Is this a reasonable use case for PETSc?
> >
> > Yes, that should work getting the array directly.
> >
> >   Thanks,
> >
> >      Matt
> >
> > > Thanks in advance for any help you can offer.
> > >
> > > best,
> > > Michael
> >
> > --
> > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which their
> > experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
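Roughly, reading the solution back this way looks as follows in Fortran (a minimal sketch assuming a recent PETSc; older releases name these routines VecGetArrayReadF90()/VecRestoreArrayReadF90(); x is the PETSc solution vector and usol an illustrative local Fortran array):

      PetscErrorCode       :: ierr
      PetscInt             :: i, nlocal
      PetscScalar, pointer :: xx(:)

      PetscCallA(VecGetLocalSize(x, nlocal, ierr))
      PetscCallA(VecGetArrayRead(x, xx, ierr))
      do i = 1, nlocal
         usol(i) = xx(i)            ! copy into your own array, or use xx directly
      end do
      PetscCallA(VecRestoreArrayRead(x, xx, ierr))

Note that only the locally owned entries are visible this way, which is exactly the restriction VecGetValues() was complaining about.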
> >
> > -------------- next part --------------
> > An HTML attachment was scrubbed...
> > URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20260102/e60566c6/attachment-0001.html>
> >
> > ------------------------------
> >
> > Subject: Digest Footer
> >
> > _______________________________________________
> > petsc-users mailing list
> > petsc-users at mcs.anl.gov
> > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users
> >
> > ------------------------------
> >
> > End of petsc-users Digest, Vol 205, Issue 2
> > *******************************************
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20260116/43b99245/attachment.html>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> petsc-users mailing list
> petsc-users at mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/petsc-users
>
> ------------------------------
>
> End of petsc-users Digest, Vol 205, Issue 12
> ********************************************

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nicolas.tardieu at edf.fr Mon Jan 19 04:17:51 2026
From: nicolas.tardieu at edf.fr (TARDIEU Nicolas)
Date: Mon, 19 Jan 2026 10:17:51 +0000
Subject: [petsc-users] SLEPSc and spectrum slicing
Message-ID: 

Dear PETSc team,

I am using SLEPc to solve a generalized non-symmetric eigenvalue problem. The underlying physical problem (coupled vibrations of an acoustic fluid enclosed in an elastic container) is actually conservative, so all eigenvalues and eigenvectors are real. The literature shows that a change of basis exists that makes the problem symmetric, but this transformation is computationally expensive.
Since I need to compute a large number of eigenvalues, I would like to use spectrum slicing, which is only available for symmetric matrices. I am not familiar with the internal workings of spectrum slicing in SLEPc, and I am wondering whether it could be possible to force the algorithm to handle these (pseudo-)non-symmetric matrices, given that the actual spectrum is real.

Thanks,
Regards,
-- 
Nicolas Tardieu
Ing PhD Computational Mechanics
EDF - R&D Dpt ERMES
PARIS-SACLAY, FRANCE
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jroman at dsic.upv.es Mon Jan 19 04:28:26 2026
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Mon, 19 Jan 2026 10:28:26 +0000
Subject: [petsc-users] SLEPSc and spectrum slicing
In-Reply-To: 
References: 
Message-ID: <123A0ACE-B594-4A37-9F66-1E5F49D59ED4@dsic.upv.es>

To answer the question I would need more information.
- Is your B matrix symmetric? Is it symmetric positive-definite?
- Do your A and B matrices have a special block structure?
- Is your spectrum symmetric with respect to the origin?
- How many eigenvalues do you need? (in percentage)

Jose

> On 19 Jan 2026, at 11:17, TARDIEU Nicolas via petsc-users wrote:
> 
> Dear PETSc team,
> 
> I am using SLEPc to solve a generalized non-symmetric eigenvalue problem.
> 
> [...]
> 
> Since I need to compute a large number of eigenvalues, I would like to use spectrum slicing, which is only available for symmetric matrices.
> I am not familiar with the internal workings of spectrum slicing in SLEPc, and I am wondering whether it could be possible to force the algorithm to handle these (pseudo-)non-symmetric matrices, given that the actual spectrum is real.
> 
> Thanks,
> Regards,
> -- 
> Nicolas Tardieu

From nicolas.tardieu at edf.fr Mon Jan 19 05:16:56 2026
From: nicolas.tardieu at edf.fr (TARDIEU Nicolas)
Date: Mon, 19 Jan 2026 11:16:56 +0000
Subject: [petsc-users] SLEPSc and spectrum slicing
In-Reply-To: <123A0ACE-B594-4A37-9F66-1E5F49D59ED4@dsic.upv.es>
References: <123A0ACE-B594-4A37-9F66-1E5F49D59ED4@dsic.upv.es>
Message-ID: 

Here are some details:
 * A and B are nonsymmetric
 * They both have a block structure
 * Since the underlying problem is conservative, the spectrum is strictly positive
 * I want to compute a number of eigenvalues equal to about 1.e-2 % of the number of unknowns

Regards,
Nicolas
-- 
Nicolas Tardieu
Ing PhD Computational Mechanics
EDF - R&D Dpt ERMES
PARIS-SACLAY, FRANCE

________________________________
From: jroman at dsic.upv.es
Sent: Monday, 19 January 2026 11:28
To: TARDIEU Nicolas
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] SLEPSc and spectrum slicing

To answer the question I would need more information.
- Is your B matrix symmetric? Is it symmetric positive-definite?
- Do your A and B matrices have a special block structure?
- Is your spectrum symmetric with respect to the origin?
- How many eigenvalues do you need? (in percentage)

Jose

> On 19 Jan 2026, at 11:17, TARDIEU Nicolas via petsc-users wrote:
> 
> Dear PETSc team,
> 
> I am using SLEPc to solve a generalized non-symmetric eigenvalue problem.
> 
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jroman at dsic.upv.es Mon Jan 19 05:38:20 2026
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Mon, 19 Jan 2026 11:38:20 +0000
Subject: [petsc-users] SLEPSc and spectrum slicing
In-Reply-To: 
References: <123A0ACE-B594-4A37-9F66-1E5F49D59ED4@dsic.upv.es>
Message-ID: <2B9E936A-33C6-40E4-903C-F4FF87A63A5F@dsic.upv.es>

Is B^{-1}*A symmetric?

Computing 0.01% of the eigenvalues does not seem too much. For a problem of size 10 million that is 1000 eigenvalues, which is manageable provided that you set a reasonable value of mpd, e.g., -eps_mpd 600. I don't think spectrum slicing is necessary.

Are you seeing slow convergence? How are you currently solving the problem? Krylov-Schur with smallest_real? Or are you using shift-and-invert?

Jose

> On 19 Jan 2026, at 12:16, TARDIEU Nicolas wrote:
> 
> Here are some details:
> * A and B are nonsymmetric
> * They both have a block structure
> * Since the underlying problem is conservative, the spectrum is strictly positive
> * I want to compute a number of eigenvalues equal to about 1.e-2 % of the number of unknowns
> 
> [...]
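For reference, the shift-and-invert Krylov-Schur setup around target 0 with a moderate mpd can be written as a minimal Fortran sketch (assuming SLEPc's Fortran modules; the subroutine name and the nev/mpd values are illustrative):

#include <slepc/finclude/slepceps.h>
      subroutine solve_low_end(A, B)
      use slepceps
      implicit none
      Mat            :: A, B
      EPS            :: eps
      ST             :: st
      PetscErrorCode :: ierr
      PetscInt       :: nev, mpd
      PetscScalar    :: sigma

      nev   = 1000                   ! illustrative values
      mpd   = 600
      sigma = 0.0

      PetscCallA(EPSCreate(PETSC_COMM_WORLD, eps, ierr))
      PetscCallA(EPSSetOperators(eps, A, B, ierr))
      PetscCallA(EPSSetProblemType(eps, EPS_GNHEP, ierr))            ! generalized non-Hermitian
      PetscCallA(EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE, ierr))
      PetscCallA(EPSSetTarget(eps, sigma, ierr))                     ! eigenvalues closest to 0
      PetscCallA(EPSGetST(eps, st, ierr))
      PetscCallA(STSetType(st, STSINVERT, ierr))                     ! shift-and-invert
      PetscCallA(EPSSetDimensions(eps, nev, PETSC_DEFAULT_INTEGER, mpd, ierr))
      PetscCallA(EPSSetFromOptions(eps, ierr))
      PetscCallA(EPSSolve(eps, ierr))
      PetscCallA(EPSDestroy(eps, ierr))
      end subroutine solve_low_end

This corresponds roughly to the command-line options -eps_nev 1000 -eps_mpd 600 -eps_target 0 -eps_target_magnitude -st_type sinvert.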
From mmolinos at us.es Mon Jan 19 06:56:12 2026
From: mmolinos at us.es (MIGUEL MOLINOS PEREZ)
Date: Mon, 19 Jan 2026 12:56:12 +0000
Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm
Message-ID: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es>

Dear all,

I have a question about using DMSwarm data with Kokkos-enabled vectors.

My particle data (including ghost particles) are stored in a DMSwarm, and for solver purposes I generate PETSc vectors using VecCreateGhostWithArray. I would like to use VecMPIKOKKOS for MPI+GPU computations. Am I correct that DMSwarm field memory cannot be directly wrapped into a VecMPIKOKKOS using VecCreateGhostWithArray or VecPlaceArray? Any idea on how to proceed?

Thanks in advance.

Best,

Miguel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nicolas.tardieu at edf.fr Mon Jan 19 07:20:54 2026
From: nicolas.tardieu at edf.fr (TARDIEU Nicolas)
Date: Mon, 19 Jan 2026 13:20:54 +0000
Subject: [petsc-users] SLEPSc and spectrum slicing
In-Reply-To: <2B9E936A-33C6-40E4-903C-F4FF87A63A5F@dsic.upv.es>
References: <123A0ACE-B594-4A37-9F66-1E5F49D59ED4@dsic.upv.es> <2B9E936A-33C6-40E4-903C-F4FF87A63A5F@dsic.upv.es>
Message-ID: 

Thanks for your help Jose.

B^{-1}*A is not symmetric.

I am currently using Krylov-Schur with TARGET_MAGNITUDE set to 0 and shift-and-invert.

Regards,
Nicolas
-- 
Nicolas Tardieu
Ing PhD Computational Mechanics
EDF - R&D Dpt ERMES
PARIS-SACLAY, FRANCE

________________________________
From: jroman at dsic.upv.es
Sent: Monday, 19 January 2026 12:38
To: TARDIEU Nicolas
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] SLEPSc and spectrum slicing

Is B^{-1}*A symmetric?

[...]
From jroman at dsic.upv.es Mon Jan 19 07:38:58 2026
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Mon, 19 Jan 2026 13:38:58 +0000
Subject: [petsc-users] SLEPSc and spectrum slicing
In-Reply-To: 
References: <123A0ACE-B594-4A37-9F66-1E5F49D59ED4@dsic.upv.es> <2B9E936A-33C6-40E4-903C-F4FF87A63A5F@dsic.upv.es>
Message-ID: 

Ok, it is the right way to go. I don't see any algebraic structure that can be exploited.

What you can try is to compute the eigenvalues in chunks. For instance, start with target=0 and compute nev=100. Then call the eigensolver again with the target moved to the right (close to the last computed eigenvalue), passing the previously computed eigenvectors with EPSSetDeflationSpace(). Repeat this until you have all the wanted eigenvalues. This scheme requires a matrix factorization for each target, so the overall cost may be higher, depending on convergence and the sparsity pattern. Spectrum slicing performs a similar scheme, but in an automated way and with guarantees that no eigenvalue is missed.

Jose
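A rough Fortran sketch of this chunked scheme (illustrative only, assuming SLEPc's Fortran modules; the chunk size, the target update rule, and the subroutine name are made up, and cleanup is abbreviated):

#include <slepc/finclude/slepceps.h>
      subroutine solve_in_chunks(A, B, nwanted)
      use slepceps
      implicit none
      Mat              :: A, B
      PetscInt         :: nwanted
      EPS              :: eps
      ST               :: st
      PetscErrorCode   :: ierr
      PetscInt         :: i, nconv, ntotal, chunk
      PetscScalar      :: target, kr, ki
      Vec              :: vi
      Vec, allocatable :: defl(:)

      chunk  = 100
      target = 0.0
      ntotal = 0
      allocate(defl(nwanted))
      do i = 1, nwanted
         PetscCallA(MatCreateVecs(A, defl(i), PETSC_NULL_VEC, ierr))
      end do
      PetscCallA(MatCreateVecs(A, vi, PETSC_NULL_VEC, ierr))

      do while (ntotal < nwanted)
         PetscCallA(EPSCreate(PETSC_COMM_WORLD, eps, ierr))
         PetscCallA(EPSSetOperators(eps, A, B, ierr))
         PetscCallA(EPSSetProblemType(eps, EPS_GNHEP, ierr))
         PetscCallA(EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE, ierr))
         PetscCallA(EPSSetTarget(eps, target, ierr))
         PetscCallA(EPSGetST(eps, st, ierr))
         PetscCallA(STSetType(st, STSINVERT, ierr))
         PetscCallA(EPSSetDimensions(eps, chunk, PETSC_DEFAULT_INTEGER, PETSC_DEFAULT_INTEGER, ierr))
         if (ntotal > 0) then
            ! deflate everything computed so far, so it is not found again
            PetscCallA(EPSSetDeflationSpace(eps, ntotal, defl, ierr))
         end if
         PetscCallA(EPSSolve(eps, ierr))
         PetscCallA(EPSGetConverged(eps, nconv, ierr))
         do i = 0, nconv - 1
            if (ntotal >= nwanted) exit
            PetscCallA(EPSGetEigenpair(eps, i, kr, ki, defl(ntotal + 1), vi, ierr))
            ntotal = ntotal + 1
            ! move the target a bit past the last converged eigenvalue; the offset
            ! is problem dependent, and it should not land exactly on an eigenvalue
            target = 1.05 * kr
         end do
         PetscCallA(EPSDestroy(eps, ierr))
      end do
      end subroutine solve_in_chunks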
> On 19 Jan 2026, at 14:20, TARDIEU Nicolas wrote:
> 
> Thanks for your help Jose.
> 
> [...]
From nicolas.tardieu at edf.fr Mon Jan 19 07:49:35 2026
From: nicolas.tardieu at edf.fr (TARDIEU Nicolas)
Date: Mon, 19 Jan 2026 13:49:35 +0000
Subject: [petsc-users] SLEPSc and spectrum slicing
In-Reply-To: 
References: <123A0ACE-B594-4A37-9F66-1E5F49D59ED4@dsic.upv.es> <2B9E936A-33C6-40E4-903C-F4FF87A63A5F@dsic.upv.es>
Message-ID: 

Thank you so much for your help Jose.

________________________________
From: jroman at dsic.upv.es
Sent: Monday, 19 January 2026 14:38
To: TARDIEU Nicolas
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] SLEPSc and spectrum slicing

Ok, it is the right way to go. I don't see any algebraic structure that can be exploited.

[...]
-------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jan 19 08:30:07 2026 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 19 Jan 2026 09:30:07 -0500 Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm In-Reply-To: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> References: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> Message-ID: On Mon, Jan 19, 2026 at 7:56 AM MIGUEL MOLINOS PEREZ wrote: > Dear all, > > I have a question about using DMSwarm data with Kokkos-enabled vectors. > > My particle data (including ghost particles) are stored in a DMSwarm and > for solver purposes I generate PETSc vectors using VecCreateGhostWithArray. > I would like to use VecMPIKOKKOS for MPI+GPU computations. Am I correct > that DMSwarm field memory cannot be directly wrapped into a VecMPIKOKKOS > using VecCreateGhostWithArray or VecPlaceArray? Any idea on how to proceed? > You can use https://urldefense.us/v3/__https://petsc.org/main/manualpages/DMSwarm/DMSwarmCreateGlobalVectorFromField/__;!!G_uCfscf7eWS!YsdXvBMzbpnlSNPvPGsxFWG7blJSqc3ONTjAhREcp8h21m_QRZA5cWREcJ03KatOzxUieJcpbM1Js_m4B4AN$ if you have a single field, which is no-copy. However, if you want multiple fields in the Vec, then you need a copy. We do this in our PIC code, and the copy time is in the noise, so I would measure it before deciding you do not want a copy.
I still do not think it would work with ghosted Vecs, because those demand that the shared particles are at the end of the Vec, but Swarm has no idea about this division. Thanks, Matt > Thanks in advance. > > Best, > > Miguel > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YsdXvBMzbpnlSNPvPGsxFWG7blJSqc3ONTjAhREcp8h21m_QRZA5cWREcJ03KatOzxUieJcpbM1Js57SijbR$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Mon Jan 19 10:44:28 2026 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Mon, 19 Jan 2026 10:44:28 -0600 Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm In-Reply-To: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> References: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> Message-ID: With VecCreateGhostWithArray, do you already have the array ready on device? --Junchao Zhang On Mon, Jan 19, 2026 at 6:56?AM MIGUEL MOLINOS PEREZ wrote: > Dear all, > > I have a question about using DMSwarm data with Kokkos-enabled vectors. > > My particle data (including ghost particles) are stored in a DMSwarm and > for solver purposes I generate PETSc vectors using VecCreateGhostWithArray. > I would like to use VecMPIKOKKOS for MPI+GPU computations. Am I correct > that DMSwarm field memory cannot be directly wrapped into a VecMPIKOKKOS > using VecCreateGhostWithArray or VecPlaceArray? Any idea on how to proceed? > > Thanks in advance. > > Best, > > Miguel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmolinos at us.es Mon Jan 19 11:17:40 2026 From: mmolinos at us.es (MIGUEL MOLINOS PEREZ) Date: Mon, 19 Jan 2026 17:17:40 +0000 Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm In-Reply-To: References: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> Message-ID: <45E755E7-13A4-45D6-B454-33B8E330B364@us.es> I still do not think it would work with ghosted Vecs, because those demand that the shared particles are at the end of the Vec, but Swarm has no idea about this division. That?s my concern. What I?m doing is a MD code, so I need access to ghost values at vectors such as position to evaluate the potential. Miguel On Jan 19, 2026, at 3:30?PM, Matthew Knepley wrote: On Mon, Jan 19, 2026 at 7:56?AM MIGUEL MOLINOS PEREZ > wrote: Dear all, I have a question about using DMSwarm data with Kokkos-enabled vectors. My particle data (including ghost particles) are stored in a DMSwarm and for solver purposes I generate PETSc vectors using VecCreateGhostWithArray. I would like to use VecMPIKOKKOS for MPI+GPU computations. Am I correct that DMSwarm field memory cannot be directly wrapped into a VecMPIKOKKOS using VecCreateGhostWithArray or VecPlaceArray? Any idea on how to proceed? You can use https://urldefense.us/v3/__https://petsc.org/main/manualpages/DMSwarm/DMSwarmCreateGlobalVectorFromField/__;!!G_uCfscf7eWS!bQ47FGy_iSJU8ODIRPz-W5bpnPyoL-_CCMluodw_v8xbtgX1pbylov4V0-B3eea4bFiyEKjKMQJyYf-t3YvaIQ$ if you have a single field, which is no-copy. However, if you want multiple fields in the Vec, then you need a copy. We do this in our PIC code, and the copy time is in the noise, so I would measure it before deciding you do not want a copy. 
I still do not think it would work with ghosted Vecs, because those demand that the shared particles are at the end of the Vec, but Swarm has no idea about this division. Thanks, Matt Thanks in advance. Best, Miguel -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bQ47FGy_iSJU8ODIRPz-W5bpnPyoL-_CCMluodw_v8xbtgX1pbylov4V0-B3eea4bFiyEKjKMQJyYf-gbn1yfA$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmolinos at us.es Mon Jan 19 11:19:18 2026 From: mmolinos at us.es (MIGUEL MOLINOS PEREZ) Date: Mon, 19 Jan 2026 17:19:18 +0000 Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm In-Reply-To: References: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> Message-ID: I don?t think so because DMSwarm data is allocated on host. This is right Matt? Thanks, Miguel On Jan 19, 2026, at 5:44?PM, Junchao Zhang wrote: With VecCreateGhostWithArray, do you already have the array ready on device? --Junchao Zhang On Mon, Jan 19, 2026 at 6:56?AM MIGUEL MOLINOS PEREZ > wrote: Dear all, I have a question about using DMSwarm data with Kokkos-enabled vectors. My particle data (including ghost particles) are stored in a DMSwarm and for solver purposes I generate PETSc vectors using VecCreateGhostWithArray. I would like to use VecMPIKOKKOS for MPI+GPU computations. Am I correct that DMSwarm field memory cannot be directly wrapped into a VecMPIKOKKOS using VecCreateGhostWithArray or VecPlaceArray? Any idea on how to proceed? Thanks in advance. Best, Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jan 19 11:41:40 2026 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 19 Jan 2026 12:41:40 -0500 Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm In-Reply-To: <45E755E7-13A4-45D6-B454-33B8E330B364@us.es> References: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> <45E755E7-13A4-45D6-B454-33B8E330B364@us.es> Message-ID: On Mon, Jan 19, 2026 at 12:17?PM MIGUEL MOLINOS PEREZ wrote: > I still do not think it would work with ghosted Vecs, because those demand > that the shared particles are at the end of the Vec, but Swarm has no idea > about this division. > > > That?s my concern. What I?m doing is a MD code, so I need access to ghost > values at vectors such as position to evaluate the potential. > That would work, just not the naie local-to-global maps. Thanks, Matt > Miguel > > On Jan 19, 2026, at 3:30?PM, Matthew Knepley wrote: > > On Mon, Jan 19, 2026 at 7:56?AM MIGUEL MOLINOS PEREZ > wrote: > >> Dear all, >> >> I have a question about using DMSwarm data with Kokkos-enabled vectors. >> >> My particle data (including ghost particles) are stored in a DMSwarm and >> for solver purposes I generate PETSc vectors using VecCreateGhostWithArray. >> I would like to use VecMPIKOKKOS for MPI+GPU computations. Am I correct >> that DMSwarm field memory cannot be directly wrapped into a VecMPIKOKKOS >> using VecCreateGhostWithArray or VecPlaceArray? Any idea on how to proceed? >> > You can use > > > https://urldefense.us/v3/__https://petsc.org/main/manualpages/DMSwarm/DMSwarmCreateGlobalVectorFromField/__;!!G_uCfscf7eWS!a_jPC7NylpspVjXQgq9aDBNQ_XOXmSByj-wvc6qzC4vcK8m5RpIj3HzaoAz9upKM3JB97V5HvtwTpdPcz1u9$ > > if you have a single field, which is no-copy. 
However, if you want > multiple fields in the Vec, then you need a copy. We > do this in our PIC code, and the copy time is in the noise, so I would > measure it before deciding you do not want a copy. > > I still do not think it would work with ghosted Vecs, because those demand > that the shared particles are at the end of the Vec, but Swarm has no idea > about this division. > > Thanks, > > Matt > > >> Thanks in advance. >> >> Best, >> >> Miguel >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!a_jPC7NylpspVjXQgq9aDBNQ_XOXmSByj-wvc6qzC4vcK8m5RpIj3HzaoAz9upKM3JB97V5HvtwTpQM9O25F$ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!a_jPC7NylpspVjXQgq9aDBNQ_XOXmSByj-wvc6qzC4vcK8m5RpIj3HzaoAz9upKM3JB97V5HvtwTpQM9O25F$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jan 19 11:42:25 2026 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 19 Jan 2026 12:42:25 -0500 Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm In-Reply-To: References: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> Message-ID: On Mon, Jan 19, 2026 at 12:19?PM MIGUEL MOLINOS PEREZ wrote: > I don?t think so because DMSwarm data is allocated on host. This is right > Matt? > I believe there is code in there to move it if you give a device type. Thanks, Matt > Thanks, > Miguel > > On Jan 19, 2026, at 5:44?PM, Junchao Zhang > wrote: > > With VecCreateGhostWithArray, do you already have the array ready on > device? > > --Junchao Zhang > > > On Mon, Jan 19, 2026 at 6:56?AM MIGUEL MOLINOS PEREZ > wrote: > >> Dear all, >> >> I have a question about using DMSwarm data with Kokkos-enabled vectors. >> >> My particle data (including ghost particles) are stored in a DMSwarm and >> for solver purposes I generate PETSc vectors using VecCreateGhostWithArray. >> I would like to use VecMPIKOKKOS for MPI+GPU computations. Am I correct >> that DMSwarm field memory cannot be directly wrapped into a VecMPIKOKKOS >> using VecCreateGhostWithArray or VecPlaceArray? Any idea on how to proceed? >> >> Thanks in advance. >> >> Best, >> >> Miguel >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fcofSjhiWlSYjIf4sf-6DsjzjYknd0cPQ7LrMJfgK3_iWKKISDkxbRWBR76wRWIBxM8fbVOSRfnG9r4xO5G6$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmolinos at us.es Mon Jan 19 12:21:13 2026 From: mmolinos at us.es (MIGUEL MOLINOS PEREZ) Date: Mon, 19 Jan 2026 18:21:13 +0000 Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm In-Reply-To: References: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> <45E755E7-13A4-45D6-B454-33B8E330B364@us.es> Message-ID: I see. Then I assume ?VecGhostUpdateBegin?or ?VecGhostUpdateEnd? won?t work neither, I?m right? Thanks, Miguel On Jan 19, 2026, at 6:41?PM, Matthew Knepley wrote: local-to-global maps. 
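(For reference, a minimal sketch of the no-copy path Matt points to above, DMSwarmCreateGlobalVectorFromField. The swarm handle sw and the field name "position" are assumptions for illustration; as discussed later in the thread, the wrapper Vec takes its type from the DM, and it covers the locally owned particles only, not a ghosted layout.)

  Vec x;
  PetscCall(DMSwarmCreateGlobalVectorFromField(sw, "position", &x));  /* wraps the field data, no copy */
  /* ... hand x to a solver, VecNorm, VecAXPY, ... */
  PetscCall(DMSwarmDestroyGlobalVectorFromField(sw, "position", &x)); /* release the wrapper */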
-------------- next part -------------- An HTML attachment was scrubbed... URL: From mmolinos at us.es Mon Jan 19 12:22:16 2026 From: mmolinos at us.es (MIGUEL MOLINOS PEREZ) Date: Mon, 19 Jan 2026 18:22:16 +0000 Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm In-Reply-To: References: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> Message-ID: <194CF8DA-5EA2-4B6D-9716-3CB36C50E9E1@us.es> I?ll check the documentation more carefully to see if I can find this. Thanks, Miguel On Jan 19, 2026, at 6:42?PM, Matthew Knepley wrote: On Mon, Jan 19, 2026 at 12:19?PM MIGUEL MOLINOS PEREZ > wrote: I don?t think so because DMSwarm data is allocated on host. This is right Matt? I believe there is code in there to move it if you give a device type. Thanks, Matt Thanks, Miguel On Jan 19, 2026, at 5:44?PM, Junchao Zhang > wrote: With VecCreateGhostWithArray, do you already have the array ready on device? --Junchao Zhang On Mon, Jan 19, 2026 at 6:56?AM MIGUEL MOLINOS PEREZ > wrote: Dear all, I have a question about using DMSwarm data with Kokkos-enabled vectors. My particle data (including ghost particles) are stored in a DMSwarm and for solver purposes I generate PETSc vectors using VecCreateGhostWithArray. I would like to use VecMPIKOKKOS for MPI+GPU computations. Am I correct that DMSwarm field memory cannot be directly wrapped into a VecMPIKOKKOS using VecCreateGhostWithArray or VecPlaceArray? Any idea on how to proceed? Thanks in advance. Best, Miguel -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dRaNaXfldNFi0stMY3wHDRXGeDjeFBpWj7Um9N-tDX7Z8BYqIaU08uflkdGL08hjG3x6PFSmxQ_smEbZ2nSgjA$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jan 19 12:23:21 2026 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 19 Jan 2026 13:23:21 -0500 Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm In-Reply-To: References: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> <45E755E7-13A4-45D6-B454-33B8E330B364@us.es> Message-ID: On Mon, Jan 19, 2026 at 1:21?PM MIGUEL MOLINOS PEREZ wrote: > I see. Then I assume ?VecGhostUpdateBegin?or ?VecGhostUpdateEnd? won?t > work neither, I?m right? > Yes, thats right. Thanks, Matt > Thanks, > Miguel > > On Jan 19, 2026, at 6:41?PM, Matthew Knepley wrote: > > local-to-global maps. > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bxw9a2vx0JerYitzmmBNoWhYLMLp8fWha3dlwEVYl9ijgwCbJcAFZhutbmTmrOpA2vhiwKImc65PxcvR4XBK$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jan 19 12:26:37 2026 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 19 Jan 2026 13:26:37 -0500 Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm In-Reply-To: <194CF8DA-5EA2-4B6D-9716-3CB36C50E9E1@us.es> References: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> <194CF8DA-5EA2-4B6D-9716-3CB36C50E9E1@us.es> Message-ID: The code is the right place to look. 
For example https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/swarm/swarm.c?ref_type=heads*L307__;Iw!!G_uCfscf7eWS!emacIq_C5LrJ9eKK6yorz6jJyo8M_O8C6rorvSoed1o5-YpahunkVoQsWH597yayOS4SQZA4lpT4aU_jvqTs$ So the Vec will get the right type. This means that when the initial data comes in on the host, it will be copied to the device when we execute an operation. We (really Joe) are working on an update that allows swarm data to be held on the device. Thanks, Matt On Mon, Jan 19, 2026 at 1:22?PM MIGUEL MOLINOS PEREZ wrote: > I?ll check the documentation more carefully to see if I can find this. > > Thanks, > Miguel > > On Jan 19, 2026, at 6:42?PM, Matthew Knepley wrote: > > On Mon, Jan 19, 2026 at 12:19?PM MIGUEL MOLINOS PEREZ > wrote: > >> I don?t think so because DMSwarm data is allocated on host. This is right >> Matt? >> > > I believe there is code in there to move it if you give a device type. > > Thanks, > > Matt > > >> Thanks, >> Miguel >> >> On Jan 19, 2026, at 5:44?PM, Junchao Zhang >> wrote: >> >> With VecCreateGhostWithArray, do you already have the array ready on >> device? >> >> --Junchao Zhang >> >> >> On Mon, Jan 19, 2026 at 6:56?AM MIGUEL MOLINOS PEREZ >> wrote: >> >>> Dear all, >>> >>> I have a question about using DMSwarm data with Kokkos-enabled vectors. >>> >>> My particle data (including ghost particles) are stored in a DMSwarm and >>> for solver purposes I generate PETSc vectors using VecCreateGhostWithArray. >>> I would like to use VecMPIKOKKOS for MPI+GPU computations. Am I correct >>> that DMSwarm field memory cannot be directly wrapped into a VecMPIKOKKOS >>> using VecCreateGhostWithArray or VecPlaceArray? Any idea on how to proceed? >>> >>> Thanks in advance. >>> >>> Best, >>> >>> Miguel >>> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!emacIq_C5LrJ9eKK6yorz6jJyo8M_O8C6rorvSoed1o5-YpahunkVoQsWH597yayOS4SQZA4lpT4acYVZicG$ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!emacIq_C5LrJ9eKK6yorz6jJyo8M_O8C6rorvSoed1o5-YpahunkVoQsWH597yayOS4SQZA4lpT4acYVZicG$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmolinos at us.es Mon Jan 19 13:16:17 2026 From: mmolinos at us.es (MIGUEL MOLINOS PEREZ) Date: Mon, 19 Jan 2026 19:16:17 +0000 Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm In-Reply-To: References: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> <194CF8DA-5EA2-4B6D-9716-3CB36C50E9E1@us.es> Message-ID: <56F34AF7-8E7F-4BA6-8D37-8ABF2A27EC15@us.es> Understood! I think I can make it work because the way I create ghost particles in my code is with: (i)DMSwarmAddNPoints, (ii) two auxiliar variables named idx_ghost and rank_ghost, (iii) and DMSWARM_MIGRATE_BASIC for the migration. In summary, shared particles are indeed at the end of the Vec. Actually you helped me with this hack months ago :-). I think I can use the code you mentioned as an inspiration? let's see! Miguel On Jan 19, 2026, at 7:26?PM, Matthew Knepley wrote: The code is the right place to look. 
For example https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/swarm/swarm.c?ref_type=heads*L307__;Iw!!G_uCfscf7eWS!dB-a-Zi9Mzr_7h5tsaTEWhXfpl8M-dAWMu3CsQ4axsXQNCXMO3XGub6G8G8OaSjDdg5zakXL6G1r8et7--tHHA$ So the Vec will get the right type. This means that when the initial data comes in on the host, it will be copied to the device when we execute an operation. We (really Joe) are working on an update that allows swarm data to be held on the device. Thanks, Matt On Mon, Jan 19, 2026 at 1:22?PM MIGUEL MOLINOS PEREZ > wrote: I?ll check the documentation more carefully to see if I can find this. Thanks, Miguel On Jan 19, 2026, at 6:42?PM, Matthew Knepley > wrote: On Mon, Jan 19, 2026 at 12:19?PM MIGUEL MOLINOS PEREZ > wrote: I don?t think so because DMSwarm data is allocated on host. This is right Matt? I believe there is code in there to move it if you give a device type. Thanks, Matt Thanks, Miguel On Jan 19, 2026, at 5:44?PM, Junchao Zhang > wrote: With VecCreateGhostWithArray, do you already have the array ready on device? --Junchao Zhang On Mon, Jan 19, 2026 at 6:56?AM MIGUEL MOLINOS PEREZ > wrote: Dear all, I have a question about using DMSwarm data with Kokkos-enabled vectors. My particle data (including ghost particles) are stored in a DMSwarm and for solver purposes I generate PETSc vectors using VecCreateGhostWithArray. I would like to use VecMPIKOKKOS for MPI+GPU computations. Am I correct that DMSwarm field memory cannot be directly wrapped into a VecMPIKOKKOS using VecCreateGhostWithArray or VecPlaceArray? Any idea on how to proceed? Thanks in advance. Best, Miguel -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dB-a-Zi9Mzr_7h5tsaTEWhXfpl8M-dAWMu3CsQ4axsXQNCXMO3XGub6G8G8OaSjDdg5zakXL6G1r8etBf76ZFw$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dB-a-Zi9Mzr_7h5tsaTEWhXfpl8M-dAWMu3CsQ4axsXQNCXMO3XGub6G8G8OaSjDdg5zakXL6G1r8etBf76ZFw$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jan 19 13:23:19 2026 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 19 Jan 2026 14:23:19 -0500 Subject: [petsc-users] Ghost vector with MPIKOKKOS under DMSwarm In-Reply-To: <56F34AF7-8E7F-4BA6-8D37-8ABF2A27EC15@us.es> References: <0D80B9B0-45A5-4724-8609-F2B395B274F2@us.es> <194CF8DA-5EA2-4B6D-9716-3CB36C50E9E1@us.es> <56F34AF7-8E7F-4BA6-8D37-8ABF2A27EC15@us.es> Message-ID: Great! Let me know if I need to do anything. Thanks, Matt On Mon, Jan 19, 2026 at 2:16?PM MIGUEL MOLINOS PEREZ wrote: > Understood! > > I think I can make it work because the way I create ghost particles in my > code is with: > (i)DMSwarmAddNPoints, > (ii) two auxiliar variables named idx_ghost and rank_ghost, > (iii) and DMSWARM_MIGRATE_BASIC for the migration. > > In summary, shared particles are indeed at the end of the Vec. > > Actually you helped me with this hack months ago :-). > > I think I can use the code you mentioned as an inspiration? let's see! > > Miguel > > On Jan 19, 2026, at 7:26?PM, Matthew Knepley wrote: > > The code is the right place to look. 
For example > > > https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/swarm/swarm.c?ref_type=heads*L307__;Iw!!G_uCfscf7eWS!eE960jNqKVw0srPisAtE4Y8NlgL2aPXto1rNFpISsZTuFMb2PEIeR2IZFPLH_Jg0MfO8sFXiX8OhusFg-MMo$ > > So the Vec will get the right type. This means that when the initial data > comes in on the host, it will be copied to the device when we execute an > operation. > > We (really Joe) are working on an update that allows swarm data to be held > on the device. > > Thanks, > > Matt > > On Mon, Jan 19, 2026 at 1:22?PM MIGUEL MOLINOS PEREZ > wrote: > >> I?ll check the documentation more carefully to see if I can find this. >> >> Thanks, >> Miguel >> >> On Jan 19, 2026, at 6:42?PM, Matthew Knepley wrote: >> >> On Mon, Jan 19, 2026 at 12:19?PM MIGUEL MOLINOS PEREZ >> wrote: >> >>> I don?t think so because DMSwarm data is allocated on host. This is >>> right Matt? >>> >> >> I believe there is code in there to move it if you give a device type. >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> Miguel >>> >>> On Jan 19, 2026, at 5:44?PM, Junchao Zhang >>> wrote: >>> >>> With VecCreateGhostWithArray, do you already have the array ready on >>> device? >>> >>> --Junchao Zhang >>> >>> >>> On Mon, Jan 19, 2026 at 6:56?AM MIGUEL MOLINOS PEREZ >>> wrote: >>> >>>> Dear all, >>>> >>>> I have a question about using DMSwarm data with Kokkos-enabled vectors. >>>> >>>> My particle data (including ghost particles) are stored in a DMSwarm >>>> and for solver purposes I generate PETSc vectors using >>>> VecCreateGhostWithArray. I would like to use VecMPIKOKKOS for MPI+GPU >>>> computations. Am I correct that DMSwarm field memory cannot be directly >>>> wrapped into a VecMPIKOKKOS using VecCreateGhostWithArray or VecPlaceArray? >>>> Any idea on how to proceed? >>>> >>>> Thanks in advance. >>>> >>>> Best, >>>> >>>> Miguel >>>> >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eE960jNqKVw0srPisAtE4Y8NlgL2aPXto1rNFpISsZTuFMb2PEIeR2IZFPLH_Jg0MfO8sFXiX8Ohusvaptrj$ >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eE960jNqKVw0srPisAtE4Y8NlgL2aPXto1rNFpISsZTuFMb2PEIeR2IZFPLH_Jg0MfO8sFXiX8Ohusvaptrj$ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eE960jNqKVw0srPisAtE4Y8NlgL2aPXto1rNFpISsZTuFMb2PEIeR2IZFPLH_Jg0MfO8sFXiX8Ohusvaptrj$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Mon Jan 19 13:31:07 2026 From: liufield at gmail.com (neil liu) Date: Mon, 19 Jan 2026 14:31:07 -0500 Subject: [petsc-users] From explicit matrix using MatSchurComplementComputeExplicitOperator Message-ID: Dear Petsc developers and users, I am exploring Schur complement to reuse the inverse of A, for a block matrix [ A B; C D]. Here A's size is much bigger than D. Therefore it is desirable to reuse the inverse of A. The code is listed below. 
It works well for 10k tetrahedra elements on 4 ranks . But it shows some error for a coarse mesh (260 tetrahedra cells) on 4 ranks. Do i have to sort *isNonPort and isPort? * Thanks a lot, Xiaodong *MatCreateSubMatrix(A, isNonPort, isNonPort, MAT_INITIAL_MATRIX, &A11);* *MatCreateSubMatrix(A, isNonPort, isPorts, MAT_INITIAL_MATRIX, &A12);* * MatCreateSubMatrix(A, isPorts, isNonPort, MAT_INITIAL_MATRIX, &A21); * * MatCreateSubMatrix(A, isPorts, isPorts, MAT_INITIAL_MATRIX, &A22);* * Mat S; * * MatCreateSchurComplement(A11, A11, A12, A21, A22, &S); * * MatSchurComplementSetKSP(S, kspA11);* * Mat Sdense; * *MatSchurComplementComputeExplicitOperator(S, &Sdense); * * MatAssemblyBegin(Sdense, MAT_FINAL_ASSEMBLY); * * MatAssemblyEnd(Sdense, MAT_FINAL_ASSEMBLY);* [1]PETSC ERROR: Argument out of range [1]PETSC ERROR: Column entry number 2 (actual column 0) in row 74 is not sorted [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!az-Ao1BxQdotvJlqia0i_iZ5sCx1AZNlDZwT2d1YUzJ4TE_esLV_bwZ-lyIBrsVlK3JJu36cOohpqoohAStoSQ$ for trouble shooting. [1]PETSC ERROR: PETSc Release Version 3.23.3, May 30, 2025 [1]PETSC ERROR: ./app with 4 MPI process(es) and PETSC_ARCH arch-linux-c-debug [1]PETSC ERROR: Configure options: -with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack --download-mpich --with-scalar-type=complex --with-debugging=yes --download-parmetis --download-metis --download-hdf5 --download-scalapack --download-mumps - [1]PETSC ERROR: #1 MatCreateSeqAIJWithArrays() at /Documents/petsc-3.23.3/src/mat/impls/aij/seq/aij.c:5275 [1]PETSC ERROR: #2 MatMPIAIJGetLocalMat() at /Documents/petsc-3.23.3/src/mat/impls/aij/mpi/mpiaij.c:5246 [1]PETSC ERROR: #3 MatConvert_MPIAIJ_MPIDense() at /Documents/petsc-3.23.3/src/mat/impls/dense/mpi/mpidense.c:1393 [1]PETSC ERROR: #4 MatConvert() at /Documents/petsc-3.23.3/src/mat/interface/matrix.c:4485 [1]PETSC ERROR: #5 MatSchurComplementComputeExplicitOperator() at /Documents/petsc-3.23.3/src/ksp/ksp/utils/schurm/schurm.c:518 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Jan 19 13:34:44 2026 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 19 Jan 2026 13:34:44 -0600 Subject: [petsc-users] From explicit matrix using MatSchurComplementComputeExplicitOperator In-Reply-To: References: Message-ID: <64F95BDB-E90C-4966-86A9-B7F82687E593@petsc.dev> Apparently. Just call ISSort() on the IS after forming it. > On Jan 19, 2026, at 1:31?PM, neil liu wrote: > > Dear Petsc developers and users, > > I am exploring Schur complement to reuse the inverse of A, for a block matrix [ A B; C D]. > Here A's size is much bigger than D. Therefore it is desirable to reuse the inverse of A. The code is listed below. It works well for 10k tetrahedra elements on 4 ranks . But it shows some error for a coarse mesh (260 tetrahedra cells) on 4 ranks. > Do i have to sort isNonPort and isPort? 
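(The fix Barry suggests, sketched against the index sets named in this thread; two extra calls before the submatrix extraction quoted below:)

  PetscCall(ISSort(isNonPort));
  PetscCall(ISSort(isPorts));
  /* the MatCreateSubMatrix() calls then produce submatrices with sorted column indices,
     which MatSchurComplementComputeExplicitOperator() expects */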
> Thanks a lot, > Xiaodong > MatCreateSubMatrix(A, isNonPort, isNonPort, MAT_INITIAL_MATRIX, &A11); > > MatCreateSubMatrix(A, isNonPort, isPorts, MAT_INITIAL_MATRIX, &A12); > > MatCreateSubMatrix(A, isPorts, isNonPort, MAT_INITIAL_MATRIX, &A21); > > MatCreateSubMatrix(A, isPorts, isPorts, MAT_INITIAL_MATRIX, &A22); > > Mat S; > > MatCreateSchurComplement(A11, A11, A12, A21, A22, &S); > > MatSchurComplementSetKSP(S, kspA11); > > Mat Sdense; > > MatSchurComplementComputeExplicitOperator(S, &Sdense); > > MatAssemblyBegin(Sdense, MAT_FINAL_ASSEMBLY); > > MatAssemblyEnd(Sdense, MAT_FINAL_ASSEMBLY); > > [1]PETSC ERROR: Argument out of range > [1]PETSC ERROR: Column entry number 2 (actual column 0) in row 74 is not sorted > [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!fXUit4Ume0pJ59Ww3NtLFk7wqiN1KvrU3kRM15Vb0xZ_2zuoO91UEuvSW3QPW4ieEuamNnC1uYRjn4xD126Bbzs$ for trouble shooting. > [1]PETSC ERROR: PETSc Release Version 3.23.3, May 30, 2025 > [1]PETSC ERROR: ./app with 4 MPI process(es) and PETSC_ARCH arch-linux-c-debug > [1]PETSC ERROR: Configure options: -with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack --download-mpich --with-scalar-type=complex --with-debugging=yes --download-parmetis --download-metis --download-hdf5 --download-scalapack --download-mumps - > [1]PETSC ERROR: #1 MatCreateSeqAIJWithArrays() at /Documents/petsc-3.23.3/src/mat/impls/aij/seq/aij.c:5275 > [1]PETSC ERROR: #2 MatMPIAIJGetLocalMat() at /Documents/petsc-3.23.3/src/mat/impls/aij/mpi/mpiaij.c:5246 > [1]PETSC ERROR: #3 MatConvert_MPIAIJ_MPIDense() at /Documents/petsc-3.23.3/src/mat/impls/dense/mpi/mpidense.c:1393 > [1]PETSC ERROR: #4 MatConvert() at /Documents/petsc-3.23.3/src/mat/interface/matrix.c:4485 > [1]PETSC ERROR: #5 MatSchurComplementComputeExplicitOperator() at /Documents/petsc-3.23.3/src/ksp/ksp/utils/schurm/schurm.c:518 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Mon Jan 19 13:53:28 2026 From: liufield at gmail.com (neil liu) Date: Mon, 19 Jan 2026 14:53:28 -0500 Subject: [petsc-users] From explicit matrix using MatSchurComplementComputeExplicitOperator In-Reply-To: <64F95BDB-E90C-4966-86A9-B7F82687E593@petsc.dev> References: <64F95BDB-E90C-4966-86A9-B7F82687E593@petsc.dev> Message-ID: Thanks a lot, it works. On Mon, Jan 19, 2026 at 2:34?PM Barry Smith wrote: > > Apparently. Just call ISSort() on the IS after forming it. > > On Jan 19, 2026, at 1:31?PM, neil liu wrote: > > Dear Petsc developers and users, > > I am exploring Schur complement to reuse the inverse of A, for a block > matrix [ A B; C D]. > Here A's size is much bigger than D. Therefore it is desirable to reuse > the inverse of A. The code is listed below. It works well for 10k > tetrahedra elements on 4 ranks . But it shows some error for a coarse mesh > (260 tetrahedra cells) on 4 ranks. > Do i have to sort *isNonPort and isPort? 
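(On the reuse goal stated at the top of this thread: the KSP attached to the Schur complement is what applies the inverse of A11, so configuring it once with a direct factorization lets every application of S reuse that factorization. A hedged sketch, assuming the A11 and S objects from the code above and a direct solver available in the build:)

  KSP kspA11;
  PC  pcA11;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &kspA11));
  PetscCall(KSPSetOperators(kspA11, A11, A11));
  PetscCall(KSPSetType(kspA11, KSPPREONLY));
  PetscCall(KSPGetPC(kspA11, &pcA11));
  PetscCall(PCSetType(pcA11, PCLU));    /* e.g. MUMPS via -pc_factor_mat_solver_type mumps */
  PetscCall(KSPSetUp(kspA11));          /* factor A11 once; reused by every MatMult with S */
  PetscCall(MatSchurComplementSetKSP(S, kspA11));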
* > Thanks a lot, > Xiaodong > > *MatCreateSubMatrix(A, isNonPort, isNonPort, MAT_INITIAL_MATRIX, &A11);* > > *MatCreateSubMatrix(A, isNonPort, isPorts, MAT_INITIAL_MATRIX, &A12);* > > * MatCreateSubMatrix(A, isPorts, isNonPort, MAT_INITIAL_MATRIX, &A21); * > > * MatCreateSubMatrix(A, isPorts, isPorts, MAT_INITIAL_MATRIX, &A22);* > > * Mat S; * > > * MatCreateSchurComplement(A11, A11, A12, A21, A22, &S); * > > * MatSchurComplementSetKSP(S, kspA11);* > > * Mat Sdense; * > > *MatSchurComplementComputeExplicitOperator(S, &Sdense); * > > * MatAssemblyBegin(Sdense, MAT_FINAL_ASSEMBLY); * > > * MatAssemblyEnd(Sdense, MAT_FINAL_ASSEMBLY);* > [1]PETSC ERROR: Argument out of range > [1]PETSC ERROR: Column entry number 2 (actual column 0) in row 74 is not > sorted > [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!fevDf1_FSrxchHFOB2OyR662Jc32SOxgbUK8NZB-aekRvxw9KZRaAmfsQcGnrmPlP-GpdDJGrsvpzTD_mHJHSA$ > for > trouble shooting. > [1]PETSC ERROR: PETSc Release Version 3.23.3, May 30, 2025 > [1]PETSC ERROR: ./app with 4 MPI process(es) and PETSC_ARCH > arch-linux-c-debug > [1]PETSC ERROR: Configure options: -with-cc=gcc --with-fc=gfortran > --with-cxx=g++ --download-fblaslapack --download-mpich > --with-scalar-type=complex --with-debugging=yes --download-parmetis > --download-metis --download-hdf5 --download-scalapack --download-mumps - > [1]PETSC ERROR: #1 MatCreateSeqAIJWithArrays() at > /Documents/petsc-3.23.3/src/mat/impls/aij/seq/aij.c:5275 > [1]PETSC ERROR: #2 MatMPIAIJGetLocalMat() at > /Documents/petsc-3.23.3/src/mat/impls/aij/mpi/mpiaij.c:5246 > [1]PETSC ERROR: #3 MatConvert_MPIAIJ_MPIDense() at > /Documents/petsc-3.23.3/src/mat/impls/dense/mpi/mpidense.c:1393 > [1]PETSC ERROR: #4 MatConvert() at > /Documents/petsc-3.23.3/src/mat/interface/matrix.c:4485 > [1]PETSC ERROR: #5 MatSchurComplementComputeExplicitOperator() at > /Documents/petsc-3.23.3/src/ksp/ksp/utils/schurm/schurm.c:518 > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jan 19 13:53:49 2026 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 19 Jan 2026 14:53:49 -0500 Subject: [petsc-users] From explicit matrix using MatSchurComplementComputeExplicitOperator In-Reply-To: <64F95BDB-E90C-4966-86A9-B7F82687E593@petsc.dev> References: <64F95BDB-E90C-4966-86A9-B7F82687E593@petsc.dev> Message-ID: Note that you can do everything above with options alone. You do not need the code. Thanks, Matt On Mon, Jan 19, 2026 at 2:35?PM Barry Smith wrote: > > Apparently. Just call ISSort() on the IS after forming it. > > On Jan 19, 2026, at 1:31?PM, neil liu wrote: > > Dear Petsc developers and users, > > I am exploring Schur complement to reuse the inverse of A, for a block > matrix [ A B; C D]. > Here A's size is much bigger than D. Therefore it is desirable to reuse > the inverse of A. The code is listed below. It works well for 10k > tetrahedra elements on 4 ranks . But it shows some error for a coarse mesh > (260 tetrahedra cells) on 4 ranks. > Do i have to sort *isNonPort and isPort? 
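(One possible reading of Matt's "options alone" remark, sketched: attach the two index sets to a PCFIELDSPLIT preconditioner and select the Schur factorization from the command line. The split names and the particular option combination below are assumptions, not something spelled out in the thread:)

  KSP ksp;
  PC  pc;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCFIELDSPLIT));
  PetscCall(PCFieldSplitSetIS(pc, "nonport", isNonPort));
  PetscCall(PCFieldSplitSetIS(pc, "port", isPorts));
  PetscCall(KSPSetFromOptions(ksp));
  /* run with e.g. -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type full */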
* > Thanks a lot, > Xiaodong > > *MatCreateSubMatrix(A, isNonPort, isNonPort, MAT_INITIAL_MATRIX, &A11);* > > *MatCreateSubMatrix(A, isNonPort, isPorts, MAT_INITIAL_MATRIX, &A12);* > > * MatCreateSubMatrix(A, isPorts, isNonPort, MAT_INITIAL_MATRIX, &A21); * > > * MatCreateSubMatrix(A, isPorts, isPorts, MAT_INITIAL_MATRIX, &A22);* > > * Mat S; * > > * MatCreateSchurComplement(A11, A11, A12, A21, A22, &S); * > > * MatSchurComplementSetKSP(S, kspA11);* > > * Mat Sdense; * > > *MatSchurComplementComputeExplicitOperator(S, &Sdense); * > > * MatAssemblyBegin(Sdense, MAT_FINAL_ASSEMBLY); * > > * MatAssemblyEnd(Sdense, MAT_FINAL_ASSEMBLY);* > [1]PETSC ERROR: Argument out of range > [1]PETSC ERROR: Column entry number 2 (actual column 0) in row 74 is not > sorted > [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!YH0t05h5r0tMJy6-pB5RzN_AwWN0iQY4UAOIQVHEfpxNqymQUvBLWIJYW1am1lI2_cmGA7muK6572KeG1ch8$ > for > trouble shooting. > [1]PETSC ERROR: PETSc Release Version 3.23.3, May 30, 2025 > [1]PETSC ERROR: ./app with 4 MPI process(es) and PETSC_ARCH > arch-linux-c-debug > [1]PETSC ERROR: Configure options: -with-cc=gcc --with-fc=gfortran > --with-cxx=g++ --download-fblaslapack --download-mpich > --with-scalar-type=complex --with-debugging=yes --download-parmetis > --download-metis --download-hdf5 --download-scalapack --download-mumps - > [1]PETSC ERROR: #1 MatCreateSeqAIJWithArrays() at > /Documents/petsc-3.23.3/src/mat/impls/aij/seq/aij.c:5275 > [1]PETSC ERROR: #2 MatMPIAIJGetLocalMat() at > /Documents/petsc-3.23.3/src/mat/impls/aij/mpi/mpiaij.c:5246 > [1]PETSC ERROR: #3 MatConvert_MPIAIJ_MPIDense() at > /Documents/petsc-3.23.3/src/mat/impls/dense/mpi/mpidense.c:1393 > [1]PETSC ERROR: #4 MatConvert() at > /Documents/petsc-3.23.3/src/mat/interface/matrix.c:4485 > [1]PETSC ERROR: #5 MatSchurComplementComputeExplicitOperator() at > /Documents/petsc-3.23.3/src/ksp/ksp/utils/schurm/schurm.c:518 > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YH0t05h5r0tMJy6-pB5RzN_AwWN0iQY4UAOIQVHEfpxNqymQUvBLWIJYW1am1lI2_cmGA7muK6572PNFanx7$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Mon Jan 19 13:59:27 2026 From: liufield at gmail.com (neil liu) Date: Mon, 19 Jan 2026 14:59:27 -0500 Subject: [petsc-users] From explicit matrix using MatSchurComplementComputeExplicitOperator In-Reply-To: References: <64F95BDB-E90C-4966-86A9-B7F82687E593@petsc.dev> Message-ID: Thanks a lot, I will explore that. On Mon, Jan 19, 2026 at 2:54?PM Matthew Knepley wrote: > Note that you can do everything above with options alone. You do not need > the code. > > Thanks, > > Matt > > On Mon, Jan 19, 2026 at 2:35?PM Barry Smith wrote: > >> >> Apparently. Just call ISSort() on the IS after forming it. >> >> On Jan 19, 2026, at 1:31?PM, neil liu wrote: >> >> Dear Petsc developers and users, >> >> I am exploring Schur complement to reuse the inverse of A, for a block >> matrix [ A B; C D]. >> Here A's size is much bigger than D. Therefore it is desirable to reuse >> the inverse of A. The code is listed below. It works well for 10k >> tetrahedra elements on 4 ranks . But it shows some error for a coarse mesh >> (260 tetrahedra cells) on 4 ranks. >> Do i have to sort *isNonPort and isPort? 
* >> Thanks a lot, >> Xiaodong >> >> *MatCreateSubMatrix(A, isNonPort, isNonPort, MAT_INITIAL_MATRIX, &A11);* >> >> *MatCreateSubMatrix(A, isNonPort, isPorts, MAT_INITIAL_MATRIX, &A12);* >> >> * MatCreateSubMatrix(A, isPorts, isNonPort, MAT_INITIAL_MATRIX, &A21); * >> >> * MatCreateSubMatrix(A, isPorts, isPorts, MAT_INITIAL_MATRIX, &A22);* >> >> * Mat S; * >> >> * MatCreateSchurComplement(A11, A11, A12, A21, A22, &S); * >> >> * MatSchurComplementSetKSP(S, kspA11);* >> >> * Mat Sdense; * >> >> *MatSchurComplementComputeExplicitOperator(S, &Sdense); * >> >> * MatAssemblyBegin(Sdense, MAT_FINAL_ASSEMBLY); * >> >> * MatAssemblyEnd(Sdense, MAT_FINAL_ASSEMBLY);* >> [1]PETSC ERROR: Argument out of range >> [1]PETSC ERROR: Column entry number 2 (actual column 0) in row 74 is not >> sorted >> [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!eGEwH40wLcvl22nLXTHwekjljXvdmU7eygoskE54RmzbBZbiVlu_BTk2iVuD8jBoJnMUWOPmjwsUAuxQ6jtp7A$ >> for >> trouble shooting. >> [1]PETSC ERROR: PETSc Release Version 3.23.3, May 30, 2025 >> [1]PETSC ERROR: ./app with 4 MPI process(es) and PETSC_ARCH >> arch-linux-c-debug >> [1]PETSC ERROR: Configure options: -with-cc=gcc --with-fc=gfortran >> --with-cxx=g++ --download-fblaslapack --download-mpich >> --with-scalar-type=complex --with-debugging=yes --download-parmetis >> --download-metis --download-hdf5 --download-scalapack --download-mumps - >> [1]PETSC ERROR: #1 MatCreateSeqAIJWithArrays() at >> /Documents/petsc-3.23.3/src/mat/impls/aij/seq/aij.c:5275 >> [1]PETSC ERROR: #2 MatMPIAIJGetLocalMat() at >> /Documents/petsc-3.23.3/src/mat/impls/aij/mpi/mpiaij.c:5246 >> [1]PETSC ERROR: #3 MatConvert_MPIAIJ_MPIDense() at >> /Documents/petsc-3.23.3/src/mat/impls/dense/mpi/mpidense.c:1393 >> [1]PETSC ERROR: #4 MatConvert() at >> /Documents/petsc-3.23.3/src/mat/interface/matrix.c:4485 >> [1]PETSC ERROR: #5 MatSchurComplementComputeExplicitOperator() at >> /Documents/petsc-3.23.3/src/ksp/ksp/utils/schurm/schurm.c:518 >> >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eGEwH40wLcvl22nLXTHwekjljXvdmU7eygoskE54RmzbBZbiVlu_BTk2iVuD8jBoJnMUWOPmjwsUAuyuoAxY_g$ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Gabriele.Penazzi at synopsys.com Mon Jan 19 05:52:34 2026 From: Gabriele.Penazzi at synopsys.com (Gabriele Penazzi) Date: Mon, 19 Jan 2026 11:52:34 +0000 Subject: [petsc-users] Performance with GPU and multiple MPI processes per GPU Message-ID: Hi. I am using PETSc conjugate gradient liner solver with GPU acceleration (CUDA), on multiple GPUs and multiple MPI processes. I noticed that the performances degrade significantly when using multiple MPI processes per GPU, compared to using a single process per GPU. For example, 2 GPUs with 2 MPI processes will be about 40% faster than running the same calculation with 2 GPUs and 16 MPI processes. I would assume the natural MPI/GPU affinity would be 1-1, however the rest of my application can benefit from multiple MPI processes driving GPU via nvidia MPS, therefore I am trying to understand if this is expected, if I am possibly missing something in the initialization/setup, or if my best choice is to constrain 1-1 MPI/GPU access especially for the PETSc linear solver step. 
I could not find explicit information about it in the manual. Is there any user or maintainer who can tell me more about this use case? Best Regards, Gabriele Penazzi -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Jan 20 09:14:11 2026 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 20 Jan 2026 09:14:11 -0600 Subject: [petsc-users] Performance with GPU and multiple MPI processes per GPU In-Reply-To: References: Message-ID: Let me try to understand your setup. You have two physical GPUs and a CPU with at least 16 physical cores? You run with 16 MPI processes, each using its own "virtual" GPU (via MPS). Thus, a single physical GPU is shared by 8 MPI processes? What happens if you run with 4 MPI processes, compared with 2? Can you run with -log_view and send the output when using 2, 4, and 8 MPI processes? Barry > On Jan 19, 2026, at 5:52 AM, Gabriele Penazzi via petsc-users wrote: > > Hi. > > I am using PETSc conjugate gradient liner solver with GPU acceleration (CUDA), on multiple GPUs and multiple MPI processes. > > I noticed that the performances degrade significantly when using multiple MPI processes per GPU, compared to using a single process per GPU. > For example, 2 GPUs with 2 MPI processes will be about 40% faster than running the same calculation with 2 GPUs and 16 MPI processes. > > I would assume the natural MPI/GPU affinity would be 1-1, however the rest of my application can benefit from multiple MPI processes driving GPU via nvidia MPS, therefore I am trying to understand if this is expected, if I am possibly missing something in the initialization/setup, or if my best choice is to constrain 1-1 MPI/GPU access especially for the PETSc linear solver step. I could not find explicit information about it in the manual. > > Is there any user or maintainer who can tell me more about this use case? > > Best Regards, > Gabriele Penazzi -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Tue Jan 20 10:17:21 2026 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 20 Jan 2026 10:17:21 -0600 Subject: [petsc-users] Performance with GPU and multiple MPI processes per GPU In-Reply-To: References: Message-ID: Hello Gabriele, Maybe you can try the CUDA MPS service, to effectively map multiple processes to one GPU. First, I would create a directory $HOME/tmp/nvidia-mps (by default, CUDA will use /tmp/nvidia-mps), then use these steps:

export CUDA_MPS_PIPE_DIRECTORY=$HOME/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=$HOME/tmp/nvidia-mps
# Start MPS
nvidia-cuda-mps-control -d
# run the test
mpiexec -n 16 ./test
# shut down MPS
echo quit | nvidia-cuda-mps-control

I would also like to block-map MPI processes to GPUs manually via manipulating the env var CUDA_VISIBLE_DEVICES. So I have this bash script set_gpu_device.sh on my PATH (assume you use OpenMPI):

#!/bin/bash
GPUS_PER_NODE=2
export CUDA_VISIBLE_DEVICES=$((OMPI_COMM_WORLD_LOCAL_RANK/(OMPI_COMM_WORLD_LOCAL_SIZE/GPUS_PER_NODE)))
exec $*

In other words, to run the test, I use: mpiexec -n 16 set_gpu_device.sh ./test Let us know if it helps so that we can add the instructions to the PETSc doc. Thanks. --Junchao Zhang On Tue, Jan 20, 2026 at 8:21 AM Gabriele Penazzi via petsc-users < petsc-users at mcs.anl.gov> wrote: > Hi. > > I am using PETSc conjugate gradient liner solver with GPU acceleration > (CUDA), on multiple GPUs and multiple MPI processes.
> > I noticed that the performances degrade significantly when using multiple > MPI processes per GPU, compared to using a single process per GPU. > For example, 2 GPUs with 2 MPI processes will be about 40% faster than > running the same calculation with 2 GPUs and 16 MPI processes. > > I would assume the natural MPI/GPU affinity would be 1-1, however the rest > of my application can benefit from multiple MPI processes driving GPU via > nvidia MPS, therefore I am trying to understand if this is expected, if I > am possibly missing something in the initialization/setup, or if my best > choice is to constrain 1-1 MPI/GPU access especially for the PETSc linear > solver step. I could not find explicit information about it in the manual. > > Is there any user or maintainer who can tell me more about this use case? > > Best Regards, > Gabriele Penazzi > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Gabriele.Penazzi at synopsys.com Thu Jan 22 09:18:26 2026 From: Gabriele.Penazzi at synopsys.com (Gabriele Penazzi) Date: Thu, 22 Jan 2026 15:18:26 +0000 Subject: [petsc-users] Performance with GPU and multiple MPI processes per GPU In-Reply-To: References: Message-ID: Hi Barry, yes, that's exactly the setup: multiple processes share a single physical GPU via MPS, and the GPUs are assigned upfront to guarantee fair balance. I've looked further into this, and the behavior seems to be related to the problem size in my application. When I increase the number of DOFs, I no longer observe any slowdown with multiple MPI processes per GPU. I should also mention that I'm compiling PETSc without GPU-aware MPI. I know this is not recommended, so my results may not be fully representative. Unfortunately, due to constraints in the toolchain I can use, this is the only way I can compile PETSc for the time being. I can also reproduce the issue on a single GPU, but only for relatively small problems. For example, with about 2e6 DOFs, going from 4 to 8 MPI processes introduces a noticeable performance penalty on the GPU (while the same configuration still scales reasonably well on the CPU). I've attached the -log_view outputs for the 1-, 4-, and 8-process cases for this setup. Since this degradation only shows up for smaller DOF counts, it sounds more like I'm misusing the library (or operating in a regime where overheads dominate). Based on this, my tentative conclusion is that, in general, using a communicator that maps one MPI process per GPU is a better approach. Would you consider that a fair statement? Thanks, Gabriele ________________________________ From: Barry Smith Sent: Tuesday, January 20, 2026 4:14 PM To: Gabriele Penazzi Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Performance with GPU and multiple MPI processes per GPU Let me try to understand your setup. You have two physical GPUs and a CPU with at least 16 physical cores? You run with 16 MPI processes, each using its own "virtual" GPU (via MPS). Thus, a single physical GPU is shared by 8 MPI processes? What happens if you run with 4 MPI processes, compared with 2? Can you run with -log_view and send the output when using 2, 4, and 8 MPI processes? Barry On Jan 19, 2026, at 5:52 AM, Gabriele Penazzi via petsc-users wrote: Hi. I am using PETSc conjugate gradient liner solver with GPU acceleration (CUDA), on multiple GPUs and multiple MPI processes. I noticed that the performances degrade significantly when using multiple MPI processes per GPU, compared to using a single process per GPU. For example, 2 GPUs with 2 MPI processes will be about 40% faster than running the same calculation with 2 GPUs and 16 MPI processes.
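(On the one-rank-per-GPU communicator idea above, a hypothetical sketch of how such a sub-communicator could be built with plain MPI; gpus_per_node is an assumed value, and the application would still have to move its matrix and vectors onto the ranks of solvecomm before the solve, which is the real cost to weigh:)

  MPI_Comm    nodecomm, solvecomm;
  PetscMPIInt grank, lrank;
  const PetscMPIInt gpus_per_node = 2;  /* assumption */
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &grank));
  PetscCallMPI(MPI_Comm_split_type(PETSC_COMM_WORLD, MPI_COMM_TYPE_SHARED, grank, MPI_INFO_NULL, &nodecomm));
  PetscCallMPI(MPI_Comm_rank(nodecomm, &lrank));
  /* keep node-local ranks 0..gpus_per_node-1; the remaining ranks get MPI_COMM_NULL and sit out the solve */
  PetscCallMPI(MPI_Comm_split(PETSC_COMM_WORLD, lrank < gpus_per_node ? 0 : MPI_UNDEFINED, grank, &solvecomm));
  /* create the solver-side Mat/Vec/KSP on solvecomm only */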
For example, 2 GPUs with 2 MPI processes will be about 40% faster than running the same calculation with 2 GPUs and 16 MPI processes. I would assume the natural MPI/GPU affinity would be 1-1, however the rest of my application can benefit from multiple MPI processes driving GPU via nvidia MPS, therefore I am trying to understand if this is expected, if I am possibly missing something in the initialization/setup, or if my best choice is to constrain 1-1 MPI/GPU access especially for the PETSc linear solver step. I could not find explicit information about it in the manual. Is there any user or maintainer who can tell me more about this use case? Best Regards, Gabriele Penazzi -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 1proc_gpu.log Type: application/octet-stream Size: 12507 bytes Desc: 1proc_gpu.log URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 4proc_gpu.log Type: application/octet-stream Size: 16433 bytes Desc: 4proc_gpu.log URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 8proc_gpu.log Type: application/octet-stream Size: 16433 bytes Desc: 8proc_gpu.log URL: From Gabriele.Penazzi at synopsys.com Thu Jan 22 09:21:02 2026 From: Gabriele.Penazzi at synopsys.com (Gabriele Penazzi) Date: Thu, 22 Jan 2026 15:21:02 +0000 Subject: [petsc-users] Performance with GPU and multiple MPI processes per GPU In-Reply-To: References: Message-ID: Hi Junchao, I am already using MPS, but thanks for the suggestion. It does make a large difference indeed, I think in general it'd be a very useful documentation entry Thank you, Gabriele ________________________________ From: Junchao Zhang Sent: Tuesday, January 20, 2026 5:17 PM To: Gabriele Penazzi Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Performance with GPU and multiple MPI processes per GPU Hello Babriele, Maybe you can try CUDA MPS service, to effectively map multiple processes to one GPU. First, I would create a directory $HOME/tmp/nvidia-mps (by default, cuda will use /tmp/nvidia-mps), then use these steps export CUDA_MPS_PIPE_DIRECTORY=$HOME/tmp/nvidia-mps export CUDA_MPS_LOG_DIRECTORY=$HOME/tmp/nvidia-mps # Start MPS nvidia-cuda-mps-control -d # run the test mpiexec -n 16 ./test # shut down MPS echo quit | nvidia-cuda-mps-control I would also like to block-map MPI processes to GPUs manually via manipulating the env var CUDA_VISIBLE_DEVICES. So I have this bash script set_gpu_device.sh on my PATH (assume you use OpenMPI) #!/bin/bash GPUS_PER_NODE=2 export CUDA_VISIBLE_DEVICES=$((OMPI_COMM_WORLD_LOCAL_RANK/(OMPI_COMM_WORLD_LOCAL_SIZE/GPUS_PER_NODE))) exec $* In other words, to run the test, I use mpiexec -n 16 set_gpu_device.sh ./test Let us know if it helps so that we can add the instructions to the PETSc doc. Thanks. --Junchao Zhang On Tue, Jan 20, 2026 at 8:21?AM Gabriele Penazzi via petsc-users > wrote: Hi. I am using PETSc conjugate gradient liner solver with GPU acceleration (CUDA), on multiple GPUs and multiple MPI processes. I noticed that the performances degrade significantly when using multiple MPI processes per GPU, compared to using a single process per GPU. For example, 2 GPUs with 2 MPI processes will be about 40% faster than running the same calculation with 2 GPUs and 16 MPI processes. 
I would assume the natural MPI/GPU affinity would be 1-1, however the rest of my application can benefit from multiple MPI processes driving GPU via nvidia MPS, therefore I am trying to understand if this is expected, if I am possibly missing something in the initialization/setup, or if my best choice is to constrain 1-1 MPI/GPU access especially for the PETSc linear solver step. I could not find explicit information about it in the manual. Is there any user or maintainer who can tell me more about this use case? Best Regards, Gabriele Penazzi -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Tue Jan 27 14:33:52 2026 From: liufield at gmail.com (neil liu) Date: Tue, 27 Jan 2026 15:33:52 -0500 Subject: [petsc-users] Mapping between local, partition and global with dmplex. Message-ID: Dear Petsc users and developers, I am exploring the mapping between local, partition and global dofs. The following is my pseudo code, dof2Partitionmapping denotes the mapping between the local dof (20 local dofs each tetrahedra) and partition dof. Is this mapping determined by Petsc itself under the hood (PetscSectionGetOffset)? For now, I am coding this mapping (local to partition) myself just based on the edge and face number in the partition. It seems the results are reasonable. But with this kind of self-defined mapping, the owned dofs and ghost dofs are interleaved. Will this bring bad results for the communication of MatStash? Thanks, Xiaodong *1. set 2 dofs for each edge and 2 dofs for each edge face respectively* PetscSectionSetChart(s, faceStart, edgeEnd); PetscSectionSetDof(s, faceIndex, 2); PetscSectionSetFieldDof(s, faceIndex, 0, 1); PetscSectionSetDof(s, edgeIndex, 2); PetscSectionSetFieldDof(s, edgeIndex, 0, 1); PetscSectionsetup(s) *2. Create matrix based on DMPlex* DMSetMatType(dm, MATAIJ); DMCreateMatrix(dm, &A); *3. loop over elements to set local values for Matrix* MatSetValuesLocal(A, dof2Partitionmapping.size(), dof2Partitionmapping.data(), dof2Partitionmapping.size(), dof2Partitionmapping.data(), Matrix_Local.data(), ADD_VALUES); -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jan 27 17:13:47 2026 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 27 Jan 2026 18:13:47 -0500 Subject: [petsc-users] Mapping between local, partition and global with dmplex. In-Reply-To: References: Message-ID: On Tue, Jan 27, 2026 at 3:34?PM neil liu wrote: > Dear Petsc users and developers, > > I am exploring the mapping between local, partition and global dofs. > The following is my pseudo code, dof2Partitionmapping denotes the mapping > between the local dof (20 local dofs each tetrahedra) and partition dof. > We usually say cell, local, and global dofs. > Is this mapping determined by Petsc itself under the hood > (PetscSectionGetOffset)? > Plex just iterates over the points in the canonical numbering (cells, vertices, faces, edges). You can change the iteration order using https://urldefense.us/v3/__https://petsc.org/main/manualpages/PetscSection/PetscSectionSetPermutation/__;!!G_uCfscf7eWS!d7O-UstRpLu6SVXJg9iUKL61tBvbPTcV2U07ko4ptaMdZ71evm1SJ5h_BdOA30VCeeLZU1IXr3PKorHeZBEJ$ You can use that, for example, to place all ghost dofs at the end. Thanks, Matt > For now, I am coding this mapping (local to partition) myself just based > on the edge and face number in the partition. It seems the results are > reasonable. 
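(A hedged sketch of the permutation approach Matt points to above: list the chart points in the desired order, for example points whose dofs are owned first and points carrying ghost dofs last, and attach that ordering to the section before it is set up. The names s, pStart, pEnd and the way points[] is filled are assumptions; see the PetscSectionSetPermutation man page for the exact ordering convention:)

  IS        perm;
  PetscInt *points, pStart, pEnd;
  PetscCall(PetscSectionGetChart(s, &pStart, &pEnd));
  PetscCall(PetscMalloc1(pEnd - pStart, &points));
  /* ... fill points[] with the desired ordering of the chart points, e.g. ghosted points last ... */
  PetscCall(ISCreateGeneral(PETSC_COMM_SELF, pEnd - pStart, points, PETSC_OWN_POINTER, &perm));
  PetscCall(PetscSectionSetPermutation(s, perm));
  PetscCall(ISDestroy(&perm));
  PetscCall(PetscSectionSetUp(s));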
But with this kind of self-defined mapping, the owned dofs and > ghost dofs are interleaved. Will this bring bad results for the > communication of MatStash? > > Thanks, > Xiaodong > > *1. set 2 dofs for each edge and 2 dofs for each edge face respectively* > PetscSectionSetChart(s, faceStart, edgeEnd); > PetscSectionSetDof(s, faceIndex, 2); > PetscSectionSetFieldDof(s, faceIndex, 0, 1); > PetscSectionSetDof(s, edgeIndex, 2); > PetscSectionSetFieldDof(s, edgeIndex, 0, 1); > PetscSectionsetup(s) > > *2. Create matrix based on DMPlex* > DMSetMatType(dm, MATAIJ); > DMCreateMatrix(dm, &A); > > *3. loop over elements to set local values for Matrix* > MatSetValuesLocal(A, dof2Partitionmapping.size(), > dof2Partitionmapping.data(), dof2Partitionmapping.size(), > dof2Partitionmapping.data(), Matrix_Local.data(), ADD_VALUES); > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!d7O-UstRpLu6SVXJg9iUKL61tBvbPTcV2U07ko4ptaMdZ71evm1SJ5h_BdOA30VCeeLZU1IXr3PKotlKhVX5$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Tue Jan 27 22:39:55 2026 From: liufield at gmail.com (neil liu) Date: Tue, 27 Jan 2026 23:39:55 -0500 Subject: [petsc-users] Mapping between local, partition and global with dmplex. In-Reply-To: References: Message-ID: Thanks a lot, Matt. I assemble a matrix using MatSetValuesLocal(), where the cell-to-local mapping comes from PetscSectionGetOffset(). With 2 MPI ranks, Valgrind (rank 0) reports uninitialized data being passed to MPI_Isend during matrix assembly (inside matstash.c). Two observations: - If I hard-code the local mapping to always be 1, the Valgrind warning disappears. - The warning occurs with PETSc *3.23.3*, but disappears when switching to *3.24.3* under the same setup. This suggests either uninitialized local indices/values on my side, or a change in PETSc between these versions affecting stash packing or index handling. Does this type of Valgrind warning usually point to user-provided local indices being uninitialized, and were there relevant changes in this area between 3.23.3 and 3.24.x? Thanks, Xiaodong ==683913== Syscall param write(buf) points to uninitialised byte(s) ==683913== at 0xCAD15A8: write (in /usr/lib64/libc-2.28.so) ==683913== by 0xBB9B0A9: MPIDI_CH3I_Sock_write (sock.c:2622) ==683913== by 0xBBA12DF: MPIDI_CH3_iStartMsg (ch3_istartmsg.c:68) ==683913== by 0xBB55F82: MPIDI_CH3_RndvSend (ch3u_rndv.c:48) ==683913== by 0xBB70905: MPID_Isend (mpid_isend.c:159) ==683913== by 0xB83EDB7: internal_Isend_c (isend.c:273) ==683913== by 0xB83EFE9: PMPI_Isend_c (isend.c:363) ==683913== by 0x82CAC1A: MatStashBTSSend_Private (matstash.c:793) ==683913== by 0x6AA2BF8: PetscCommBuildTwoSidedFReq_Reference (mpits.c:311) ==683913== by 0x6AA9DAD: PetscCommBuildTwoSidedFReq (mpits.c:515) ==683913== by 0x82CEB46: MatStashScatterBegin_BTS (matstash.c:920) ==683913== by 0x82C009D: MatStashScatterBegin_Private (matstash.c:440) ==683913== by 0x73E77BF: MatAssemblyBegin_MPIAIJ (mpiaij.c:768) ==683913== by 0x81F0777: MatAssemblyBegin (matrix.c:5807) On Tue, Jan 27, 2026 at 6:13?PM Matthew Knepley wrote: > On Tue, Jan 27, 2026 at 3:34?PM neil liu wrote: > >> Dear Petsc users and developers, >> >> I am exploring the mapping between local, partition and global dofs. 
>> The following is my pseudo code, dof2Partitionmapping denotes the mapping >> between the local dof (20 local dofs each tetrahedra) and partition dof. >> > > We usually say cell, local, and global dofs. > > >> Is this mapping determined by Petsc itself under the hood >> (PetscSectionGetOffset)? >> > > Plex just iterates over the points in the canonical numbering (cells, > vertices, faces, edges). You can change the iteration order using > > > https://urldefense.us/v3/__https://petsc.org/main/manualpages/PetscSection/PetscSectionSetPermutation/__;!!G_uCfscf7eWS!Z7m5oJFaTt_KCIUQJxJx-4SEoHIv5k1rWuDb3ClOur8MISuVMaOzs6WTo6lqZ2OWUBvBAKwK4hyKszJ1HUAjOw$ > > You can use that, for example, to place all ghost dofs at the end. > > Thanks, > > Matt > > >> For now, I am coding this mapping (local to partition) myself just based >> on the edge and face number in the partition. It seems the results are >> reasonable. But with this kind of self-defined mapping, the owned dofs and >> ghost dofs are interleaved. Will this bring bad results for the >> communication of MatStash? >> >> Thanks, >> Xiaodong >> >> *1. set 2 dofs for each edge and 2 dofs for each edge face respectively* >> PetscSectionSetChart(s, faceStart, edgeEnd); >> PetscSectionSetDof(s, faceIndex, 2); >> PetscSectionSetFieldDof(s, faceIndex, 0, 1); >> PetscSectionSetDof(s, edgeIndex, 2); >> PetscSectionSetFieldDof(s, edgeIndex, 0, 1); >> PetscSectionsetup(s) >> >> *2. Create matrix based on DMPlex* >> DMSetMatType(dm, MATAIJ); >> DMCreateMatrix(dm, &A); >> >> *3. loop over elements to set local values for Matrix* >> MatSetValuesLocal(A, dof2Partitionmapping.size(), >> dof2Partitionmapping.data(), dof2Partitionmapping.size(), >> dof2Partitionmapping.data(), Matrix_Local.data(), ADD_VALUES); >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z7m5oJFaTt_KCIUQJxJx-4SEoHIv5k1rWuDb3ClOur8MISuVMaOzs6WTo6lqZ2OWUBvBAKwK4hyKszJveN_t0Q$ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jan 28 07:54:04 2026 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 28 Jan 2026 08:54:04 -0500 Subject: [petsc-users] Mapping between local, partition and global with dmplex. In-Reply-To: References: Message-ID: On Tue, Jan 27, 2026 at 11:40?PM neil liu wrote: > Thanks a lot, Matt. > > I assemble a matrix using MatSetValuesLocal(), where the cell-to-local > mapping comes from PetscSectionGetOffset(). > > With 2 MPI ranks, Valgrind (rank 0) reports uninitialized data being > passed to MPI_Isend during matrix assembly (inside matstash.c). > > Two observations: > > - > > If I hard-code the local mapping to always be 1, the Valgrind warning > disappears. > - > > The warning occurs with PETSc *3.23.3*, but disappears when switching > to *3.24.3* under the same setup. > > MPICH plays games with memory for efficiency reasons. We often see Valgrind warnings. Could you check if the default MPICH version changed between 3.23.3 and 3.24.3. That would be my first guess. Thanks, Matt > This suggests either uninitialized local indices/values on my side, or a > change in PETSc between these versions affecting stash packing or index > handling. 
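One way to rule out the first possibility (a small debugging sketch, not from the thread; A is the assembled Mat from the surrounding code): inspect the local-to-global mapping that MatSetValuesLocal() consults, and check that every local index used during assembly is within its size.

    ISLocalToGlobalMapping rmap, cmap;
    PetscInt               n;

    PetscCall(MatGetLocalToGlobalMapping(A, &rmap, &cmap));
    PetscCall(ISLocalToGlobalMappingGetSize(rmap, &n));   /* every local index passed to MatSetValuesLocal() must lie in [0, n) */
    PetscCall(ISLocalToGlobalMappingView(rmap, PETSC_VIEWER_STDOUT_WORLD));

If all indices are in range and initialized, the MPICH explanation above is the more likely one; which MPICH version each --download-mpich build used is recorded in its configure.log.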
> > Does this type of Valgrind warning usually point to user-provided local > indices being uninitialized, and were there relevant changes in this area > between 3.23.3 and 3.24.x? > > Thanks, > Xiaodong > > ==683913== Syscall param write(buf) points to uninitialised byte(s) > ==683913== at 0xCAD15A8: write (in /usr/lib64/libc-2.28.so) > ==683913== by 0xBB9B0A9: MPIDI_CH3I_Sock_write (sock.c:2622) > ==683913== by 0xBBA12DF: MPIDI_CH3_iStartMsg (ch3_istartmsg.c:68) > ==683913== by 0xBB55F82: MPIDI_CH3_RndvSend (ch3u_rndv.c:48) > ==683913== by 0xBB70905: MPID_Isend (mpid_isend.c:159) > ==683913== by 0xB83EDB7: internal_Isend_c (isend.c:273) > ==683913== by 0xB83EFE9: PMPI_Isend_c (isend.c:363) > ==683913== by 0x82CAC1A: MatStashBTSSend_Private (matstash.c:793) > ==683913== by 0x6AA2BF8: PetscCommBuildTwoSidedFReq_Reference > (mpits.c:311) > ==683913== by 0x6AA9DAD: PetscCommBuildTwoSidedFReq (mpits.c:515) > ==683913== by 0x82CEB46: MatStashScatterBegin_BTS (matstash.c:920) > ==683913== by 0x82C009D: MatStashScatterBegin_Private (matstash.c:440) > ==683913== by 0x73E77BF: MatAssemblyBegin_MPIAIJ (mpiaij.c:768) > ==683913== by 0x81F0777: MatAssemblyBegin (matrix.c:5807) > > On Tue, Jan 27, 2026 at 6:13?PM Matthew Knepley wrote: > >> On Tue, Jan 27, 2026 at 3:34?PM neil liu wrote: >> >>> Dear Petsc users and developers, >>> >>> I am exploring the mapping between local, partition and global dofs. >>> The following is my pseudo code, dof2Partitionmapping denotes the >>> mapping between the local dof (20 local dofs each tetrahedra) and partition >>> dof. >>> >> >> We usually say cell, local, and global dofs. >> >> >>> Is this mapping determined by Petsc itself under the hood >>> (PetscSectionGetOffset)? >>> >> >> Plex just iterates over the points in the canonical numbering (cells, >> vertices, faces, edges). You can change the iteration order using >> >> >> https://urldefense.us/v3/__https://petsc.org/main/manualpages/PetscSection/PetscSectionSetPermutation/__;!!G_uCfscf7eWS!axRnfOFmHcGL3iKKS0tSlsif0QKDXBdDwXWrPhWj9HPh6Td-Xx10T5eHPX4YORD_ITemssQl-KasmwS3V3kG$ >> >> You can use that, for example, to place all ghost dofs at the end. >> >> Thanks, >> >> Matt >> >> >>> For now, I am coding this mapping (local to partition) myself just based >>> on the edge and face number in the partition. It seems the results are >>> reasonable. But with this kind of self-defined mapping, the owned dofs and >>> ghost dofs are interleaved. Will this bring bad results for the >>> communication of MatStash? >>> >>> Thanks, >>> Xiaodong >>> >>> *1. set 2 dofs for each edge and 2 dofs for each edge face respectively* >>> PetscSectionSetChart(s, faceStart, edgeEnd); >>> PetscSectionSetDof(s, faceIndex, 2); >>> PetscSectionSetFieldDof(s, faceIndex, 0, 1); >>> PetscSectionSetDof(s, edgeIndex, 2); >>> PetscSectionSetFieldDof(s, edgeIndex, 0, 1); >>> PetscSectionsetup(s) >>> >>> *2. Create matrix based on DMPlex* >>> DMSetMatType(dm, MATAIJ); >>> DMCreateMatrix(dm, &A); >>> >>> *3. loop over elements to set local values for Matrix* >>> MatSetValuesLocal(A, dof2Partitionmapping.size(), >>> dof2Partitionmapping.data(), dof2Partitionmapping.size(), >>> dof2Partitionmapping.data(), Matrix_Local.data(), ADD_VALUES); >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. 
>> -- Norbert Wiener >> >> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!axRnfOFmHcGL3iKKS0tSlsif0QKDXBdDwXWrPhWj9HPh6Td-Xx10T5eHPX4YORD_ITemssQl-KasmzDTmp34$ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!axRnfOFmHcGL3iKKS0tSlsif0QKDXBdDwXWrPhWj9HPh6Td-Xx10T5eHPX4YORD_ITemssQl-KasmzDTmp34$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From srinivas.kirthy at gmail.com Fri Jan 30 23:13:56 2026 From: srinivas.kirthy at gmail.com (Srinivas Kirthy K) Date: Sat, 31 Jan 2026 10:43:56 +0530 Subject: [petsc-users] Problem running ex50 after compilation Message-ID: Hi, I'm new to PETSc. I was able to compile ex50.c using: *cd petsc/src/ksp/ksp/tutorials* *make ex50* When I run: *mpiexec -n 1 ./ex50 -da_grid_x 4 -da_grid_y 4 -mat_view* I get the error message in *PETSC_mpiexec_error.txt *(attached) I've defined these in the ~/.bashrc: *export PETSC_DIR=$HOME/petsc # added on 31/01/2026 to define PETSC locationexport PETSC_ARCH=arch-linux-c-debug* Can you help me fix this? I've attached the *configure.log* file as well. Thanks in advance. -- Regards, Srinivas Kirthy K -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- [unset]: launcher not compatible with PMI1 client Fatal error in internal_Init_thread: Other MPI error, error stack: internal_Init_thread(71): MPI_Init_thread(argc=0x7fff90afe3dc, argv=0x7fff90afe3d0, required=1, provided=0x7fff90afdf7c) failed MPII_Init_thread(203)...: MPIR_pmi_init(150)......: pmi1_init(14)...........: PMI_Init returned -1 [unset]: PMIU_write error; fd=-1 buf=:cmd=abort exitcode=807004687 message=abort : system msg for write_line failure : Bad file descriptor -------------------------------------------------------------------------- Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted. -------------------------------------------------------------------------- -------------------------------------------------------------------------- mpiexec detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was: Process name: [[29885,1],0] Exit code: 15 -------------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 3074401 bytes Desc: not available URL: From balay.anl at fastmail.org Fri Jan 30 23:24:36 2026 From: balay.anl at fastmail.org (Satish Balay) Date: Fri, 30 Jan 2026 23:24:36 -0600 (CST) Subject: [petsc-users] Problem running ex50 after compilation In-Reply-To: References: Message-ID: <7171dd06-da39-d4cd-aeaf-b3b42db7be88@fastmail.org> >>>> Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --download-mpich --download-fblaslapack <<<< Ok - so you've installed MPI via --download-mpich - so you need to use: >>> mpiexec: /home/skirthy/petsc/arch-linux-c-debug/bin/mpiexec <<< [likely mpiexec used below is a different one] Satish On Sat, 31 Jan 2026, Srinivas Kirthy K wrote: > Hi, > > I'm new to PETSc. 
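A quick runtime check related to Satish's point above (an editor's sketch, not from the thread; argc and argv are assumed from main): have the executable print which MPI library it was linked against, and compare it with the mpiexec used to launch it. PMI errors like the one above typically appear when the launcher and the linked MPI library come from different installations.

    char mpiver[MPI_MAX_LIBRARY_VERSION_STRING];
    int  len;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCallMPI(MPI_Get_library_version(mpiver, &len));
    PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Linked MPI: %s\n", mpiver));
    PetscCall(PetscFinalize());

If the reported library is not the MPICH installed under $PETSC_DIR/$PETSC_ARCH, the fix is the one Satish gives: launch with /home/skirthy/petsc/arch-linux-c-debug/bin/mpiexec.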
> > I was able to compile ex50.c using: > *cd petsc/src/ksp/ksp/tutorials* > *make ex50* > > When I run: *mpiexec -n 1 ./ex50 -da_grid_x 4 -da_grid_y 4 -mat_view* > > I get the error message in *PETSC_mpiexec_error.txt *(attached) > > I've defined these in the ~/.bashrc: > > *export PETSC_DIR=$HOME/petsc # added on 31/01/2026 to define PETSC > locationexport PETSC_ARCH=arch-linux-c-debug* > > Can you help me fix this? > > I've attached the *configure.log* file as well. > > Thanks in advance. > > From srinivas.kirthy at gmail.com Sat Jan 31 01:48:23 2026 From: srinivas.kirthy at gmail.com (Srinivas Kirthy K) Date: Sat, 31 Jan 2026 13:18:23 +0530 Subject: [petsc-users] Problem running ex50 after compilation In-Reply-To: <7171dd06-da39-d4cd-aeaf-b3b42db7be88@fastmail.org> References: <7171dd06-da39-d4cd-aeaf-b3b42db7be88@fastmail.org> Message-ID: Hi Satish, I included this line in the makefile: *mpiexec: "${PETSC_DIR}/${PETSC_ARCH}/bin/mpiexec" * Was able to compile, still getting the same error when I run *mpiexec -n 1 ./ex50 -da_grid_x 4 -da_grid_y 4 -mat_view* Anything that I'm doing wrong? Thanks. On Sat, Jan 31, 2026 at 10:54?AM Satish Balay wrote: > >>>> > Configure Options: --configModules=PETSc.Configure > --optionsModule=config.compilerOptions --download-mpich > --download-fblaslapack > <<<< > > Ok - so you've installed MPI via --download-mpich - so you need to use: > > >>> > mpiexec: /home/skirthy/petsc/arch-linux-c-debug/bin/mpiexec > <<< > > [likely mpiexec used below is a different one] > > Satish > > On Sat, 31 Jan 2026, Srinivas Kirthy K wrote: > > > Hi, > > > > I'm new to PETSc. > > > > I was able to compile ex50.c using: > > *cd petsc/src/ksp/ksp/tutorials* > > *make ex50* > > > > When I run: *mpiexec -n 1 ./ex50 -da_grid_x 4 -da_grid_y 4 -mat_view* > > > > I get the error message in *PETSC_mpiexec_error.txt *(attached) > > > > I've defined these in the ~/.bashrc: > > > > *export PETSC_DIR=$HOME/petsc # added on 31/01/2026 to define PETSC > > locationexport PETSC_ARCH=arch-linux-c-debug* > > > > Can you help me fix this? > > > > I've attached the *configure.log* file as well. > > > > Thanks in advance. > > > > > > -- Regards, Srinivas Kirthy K -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Sat Jan 31 01:59:30 2026 From: balay.anl at fastmail.org (Satish Balay) Date: Sat, 31 Jan 2026 01:59:30 -0600 (CST) Subject: [petsc-users] Problem running ex50 after compilation In-Reply-To: References: <7171dd06-da39-d4cd-aeaf-b3b42db7be88@fastmail.org> Message-ID: <55ee8018-c2b9-54c0-85e6-28ec18109241@fastmail.org> On Sat, 31 Jan 2026, Srinivas Kirthy K wrote: > Hi Satish, > > I included this line in the makefile: > > *mpiexec: "${PETSC_DIR}/${PETSC_ARCH}/bin/mpiexec" * > > Was able to compile, still getting the same error when I run *mpiexec -n 1 > ./ex50 -da_grid_x 4 -da_grid_y 4 -mat_view* > > Anything that I'm doing wrong? Use: /home/skirthy/petsc/arch-linux-c-debug/bin/mpiexec -n 1 ./ex50 -da_grid_x 4 -da_grid_y 4 -mat_view Satish > > Thanks. 
> > > On Sat, Jan 31, 2026 at 10:54?AM Satish Balay > wrote: > > > >>>> > > Configure Options: --configModules=PETSc.Configure > > --optionsModule=config.compilerOptions --download-mpich > > --download-fblaslapack > > <<<< > > > > Ok - so you've installed MPI via --download-mpich - so you need to use: > > > > >>> > > mpiexec: /home/skirthy/petsc/arch-linux-c-debug/bin/mpiexec > > <<< > > > > [likely mpiexec used below is a different one] > > > > Satish > > > > On Sat, 31 Jan 2026, Srinivas Kirthy K wrote: > > > > > Hi, > > > > > > I'm new to PETSc. > > > > > > I was able to compile ex50.c using: > > > *cd petsc/src/ksp/ksp/tutorials* > > > *make ex50* > > > > > > When I run: *mpiexec -n 1 ./ex50 -da_grid_x 4 -da_grid_y 4 -mat_view* > > > > > > I get the error message in *PETSC_mpiexec_error.txt *(attached) > > > > > > I've defined these in the ~/.bashrc: > > > > > > *export PETSC_DIR=$HOME/petsc # added on 31/01/2026 to define PETSC > > > locationexport PETSC_ARCH=arch-linux-c-debug* > > > > > > Can you help me fix this? > > > > > > I've attached the *configure.log* file as well. > > > > > > Thanks in advance. > > > > > > > > > > > > From srinivas.kirthy at gmail.com Sat Jan 31 02:04:12 2026 From: srinivas.kirthy at gmail.com (Srinivas Kirthy K) Date: Sat, 31 Jan 2026 13:34:12 +0530 Subject: [petsc-users] Problem running ex50 after compilation In-Reply-To: <55ee8018-c2b9-54c0-85e6-28ec18109241@fastmail.org> References: <7171dd06-da39-d4cd-aeaf-b3b42db7be88@fastmail.org> <55ee8018-c2b9-54c0-85e6-28ec18109241@fastmail.org> Message-ID: Works now. Thank you. On Sat, Jan 31, 2026 at 1:29?PM Satish Balay wrote: > On Sat, 31 Jan 2026, Srinivas Kirthy K wrote: > > > Hi Satish, > > > > I included this line in the makefile: > > > > *mpiexec: "${PETSC_DIR}/${PETSC_ARCH}/bin/mpiexec" * > > > > Was able to compile, still getting the same error when I run *mpiexec -n > 1 > > ./ex50 -da_grid_x 4 -da_grid_y 4 -mat_view* > > > > Anything that I'm doing wrong? > > Use: > > /home/skirthy/petsc/arch-linux-c-debug/bin/mpiexec -n 1 ./ex50 -da_grid_x > 4 -da_grid_y 4 -mat_view > > Satish > > > > > Thanks. > > > > > > On Sat, Jan 31, 2026 at 10:54?AM Satish Balay > > wrote: > > > > > >>>> > > > Configure Options: --configModules=PETSc.Configure > > > --optionsModule=config.compilerOptions --download-mpich > > > --download-fblaslapack > > > <<<< > > > > > > Ok - so you've installed MPI via --download-mpich - so you need to use: > > > > > > >>> > > > mpiexec: /home/skirthy/petsc/arch-linux-c-debug/bin/mpiexec > > > <<< > > > > > > [likely mpiexec used below is a different one] > > > > > > Satish > > > > > > On Sat, 31 Jan 2026, Srinivas Kirthy K wrote: > > > > > > > Hi, > > > > > > > > I'm new to PETSc. > > > > > > > > I was able to compile ex50.c using: > > > > *cd petsc/src/ksp/ksp/tutorials* > > > > *make ex50* > > > > > > > > When I run: *mpiexec -n 1 ./ex50 -da_grid_x 4 -da_grid_y 4 > -mat_view* > > > > > > > > I get the error message in *PETSC_mpiexec_error.txt *(attached) > > > > > > > > I've defined these in the ~/.bashrc: > > > > > > > > *export PETSC_DIR=$HOME/petsc # added on 31/01/2026 to define PETSC > > > > locationexport PETSC_ARCH=arch-linux-c-debug* > > > > > > > > Can you help me fix this? > > > > > > > > I've attached the *configure.log* file as well. > > > > > > > > Thanks in advance. > > > > > > > > > > > > > > > > > > -- Regards, Srinivas Kirthy K -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From srinivas.kirthy at gmail.com  Sat Jan 31 03:02:19 2026
From: srinivas.kirthy at gmail.com (Srinivas Kirthy K)
Date: Sat, 31 Jan 2026 14:32:19 +0530
Subject: [petsc-users] Problem running ex50 after compilation
In-Reply-To:
References: <7171dd06-da39-d4cd-aeaf-b3b42db7be88@fastmail.org> <55ee8018-c2b9-54c0-85e6-28ec18109241@fastmail.org>
Message-ID:

Hi,

I'm trying to run src/ts/tutorials/ex2.c using:

/home/skirthy/petsc/arch-linux-c-debug/bin/mpiexec -n 1 ./ex2 -ts_max_steps 10 -ts_monitor

Instead of the expected output, I get this:

Norm of error 0.000156044 iterations 6
WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
There are 2 unused database options. They are:
Option left: name:-ts_max_steps value: 10 source: command line
Option left: name:-ts_monitor (no value) source: command line

Trying to troubleshoot this, can anyone help?

Thanks.

On Sat, Jan 31, 2026 at 1:34 PM Srinivas Kirthy K wrote:
> Works now. Thank you.
>
> On Sat, Jan 31, 2026 at 1:29 PM Satish Balay wrote:
>
>> On Sat, 31 Jan 2026, Srinivas Kirthy K wrote:
>>
>> > Hi Satish,
>> >
>> > I included this line in the makefile:
>> >
>> > mpiexec: "${PETSC_DIR}/${PETSC_ARCH}/bin/mpiexec"
>> >
>> > Was able to compile, still getting the same error when I run mpiexec -n 1
>> > ./ex50 -da_grid_x 4 -da_grid_y 4 -mat_view
>> >
>> > Anything that I'm doing wrong?
>>
>> Use:
>>
>> /home/skirthy/petsc/arch-linux-c-debug/bin/mpiexec -n 1 ./ex50 -da_grid_x 4 -da_grid_y 4 -mat_view
>>
>> Satish
>>
>> > Thanks.
>> >
>> > On Sat, Jan 31, 2026 at 10:54 AM Satish Balay wrote:
>> >
>> > > >>>>
>> > > Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --download-mpich --download-fblaslapack
>> > > <<<<
>> > >
>> > > Ok - so you've installed MPI via --download-mpich - so you need to use:
>> > >
>> > > >>>
>> > > mpiexec: /home/skirthy/petsc/arch-linux-c-debug/bin/mpiexec
>> > > <<<
>> > >
>> > > [likely mpiexec used below is a different one]
>> > >
>> > > Satish
>> > >
>> > > On Sat, 31 Jan 2026, Srinivas Kirthy K wrote:
>> > >
>> > > > Hi,
>> > > >
>> > > > I'm new to PETSc.
>> > > >
>> > > > I was able to compile ex50.c using:
>> > > > cd petsc/src/ksp/ksp/tutorials
>> > > > make ex50
>> > > >
>> > > > When I run: mpiexec -n 1 ./ex50 -da_grid_x 4 -da_grid_y 4 -mat_view
>> > > >
>> > > > I get the error message in PETSC_mpiexec_error.txt (attached)
>> > > >
>> > > > I've defined these in the ~/.bashrc:
>> > > >
>> > > > export PETSC_DIR=$HOME/petsc   # added on 31/01/2026 to define PETSC location
>> > > > export PETSC_ARCH=arch-linux-c-debug
>> > > >
>> > > > Can you help me fix this?
>> > > >
>> > > > I've attached the configure.log file as well.
>> > > >
>> > > > Thanks in advance.
>> > > >
>> > >
>> >
>
> --
> Regards,
>
> Srinivas Kirthy K

--
Regards,

Srinivas Kirthy K
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
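For context on the unused-option warnings above (an aside, not a reply from the list): options such as -ts_max_steps and -ts_monitor are consumed only if the program creates a TS and calls TSSetFromOptions(). The "Norm of error ... iterations" line is the kind of output the KSP tutorials print, so one possibility worth checking is that the ./ex2 binary in the current directory was built from a different tutorial than src/ts/tutorials/ex2.c. A minimal sketch of the pattern that does consume those options:

    TS ts;

    PetscCall(TSCreate(PETSC_COMM_WORLD, &ts));
    PetscCall(TSSetFromOptions(ts));   /* this is the call that reads -ts_max_steps, -ts_monitor, ... */
    /* ... set the ODE right-hand side, the initial condition, and call TSSolve() here ... */
    PetscCall(TSDestroy(&ts));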