From david at coreform.com Thu Jan 2 12:24:42 2025 From: david at coreform.com (David Kamensky) Date: Thu, 2 Jan 2025 10:24:42 -0800 Subject: [petsc-users] Reproducibility when restarting generalized-alpha Message-ID: Hi, I've recently been helping some co-workers with restarting PETSc time integrators from saved solution data. It looks like the only supported path for restarting the generalized-alpha integrator for 2nd-order-in-time systems (`TSALPHA2`) is to follow the same procedure as initialization, in which two first-order-accurate half-steps are used to estimate an acceleration from the given displacement and velocity. However, the resulting acceleration is not exactly equivalent to the intermediate one that would have been used by the integrator if the integration simply proceeded without restarting. This prevents exact reproducibility of computations from saved intermediate results. (An analogous issue would also affect `TSALPHA` for first-order-in-time problems, where velocity is estimated on initialization/restart.) Am I misunderstanding this, or missing a better method of restarting the 2nd-order generalized-alpha integrator? If not, would there be interest in adding an alternate initialization/restart option to the `TSALPHA2` integrator that takes a user-provided `Vec` for the initial/intermediate acceleration, and skips over the half-step estimation procedure? Thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Thu Jan 2 12:41:45 2025 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Thu, 2 Jan 2025 19:41:45 +0100 Subject: [petsc-users] Reproducibility when restarting generalized-alpha In-Reply-To: References: Message-ID: Note that BDF has the same issue. I think the correct way to handle this is to support storing/loading these extra vectors via TSView()/TSLoad(). How are you currently restarting the simulation? 
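[Editor's note: the mechanism David describes can be illustrated with a toy scalar model. The integrator below is a generalized-alpha-style scheme for u'' = -u that carries an acceleration state between steps; the parameters are the Chung-Hulbert choices for an assumed spectral radius rho = 0.2, and the balance-equation convention is one common textbook form, not PETSc's actual TSALPHA2 code. The point is only the mechanism: restarting from (u, v) with any re-estimated acceleration breaks bitwise reproducibility, while restarting with the carried acceleration reproduces the run exactly.]

```python
# Toy generalized-alpha-style integrator for the scalar ODE u'' = -u.
# Chung-Hulbert parameters for an (assumed, illustrative) spectral
# radius rho = 0.2; this is a sketch of the mechanism, not of PETSc.
RHO = 0.2
ALPHA_M = (2.0 * RHO - 1.0) / (RHO + 1.0)
ALPHA_F = RHO / (RHO + 1.0)
GAMMA = 0.5 - ALPHA_M + ALPHA_F
BETA = 0.25 * (1.0 - ALPHA_M + ALPHA_F) ** 2

def step(u, v, a, h):
    """One step: enforce the balance at an intermediate level,
    (1-am)*a1 + am*a0 + (1-af)*u1 + af*u0 = 0, with the Newmark
    update u1 = u0 + h*v0 + h^2*((1/2-b)*a0 + b*a1)."""
    num = -(ALPHA_M * a
            + (1.0 - ALPHA_F) * (u + h * v + h * h * (0.5 - BETA) * a)
            + ALPHA_F * u)
    den = (1.0 - ALPHA_M) + (1.0 - ALPHA_F) * BETA * h * h
    a1 = num / den
    u1 = u + h * v + h * h * ((0.5 - BETA) * a + BETA * a1)
    v1 = v + h * ((1.0 - GAMMA) * a + GAMMA * a1)
    return u1, v1, a1

def run(u, v, a, h, nsteps):
    for _ in range(nsteps):
        u, v, a = step(u, v, a, h)
    return u, v, a

h = 0.1
mid = run(1.0, 0.0, -1.0, h, 5)    # saved state after 5 steps
full = run(1.0, 0.0, -1.0, h, 10)  # uninterrupted 10-step run
# Restart carrying the saved acceleration: bitwise identical.
carried = run(mid[0], mid[1], mid[2], h, 5)
# Restart re-estimating acceleration from (u, v) alone (here the
# "consistent" estimate a = -u; any re-estimation has this problem):
reestimated = run(mid[0], mid[1], -mid[0], h, 5)
```

The carried acceleration mid[2] is not equal to the consistent value -mid[0], so any restart that re-estimates it (including the half-step procedure) diverges slightly from the uninterrupted trajectory, while passing the stored acceleration back in reproduces it exactly.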
On Thu, Jan 2, 2025 at 19:25 David Kamensky via petsc-users < petsc-users at mcs.anl.gov> wrote: > Hi, > > I've recently been helping some co-workers with restarting PETSc time > integrators from saved solution data. > > It looks like the only supported path for restarting the generalized-alpha > integrator for 2nd-order-in-time systems (`TSALPHA2`) is to follow the same > procedure as initialization, in which two first-order-accurate half-steps > are used to estimate an acceleration from the given displacement and > velocity. However, the resulting acceleration is not exactly equivalent to > the intermediate one that would have been used by the integrator if the > integration simply proceeded without restarting. This prevents exact > reproducibility of computations from saved intermediate results. (An > analogous issue would also affect `TSALPHA` for first-order-in-time > problems, where velocity is estimated on initialization/restart.) > > Am I misunderstanding this, or missing a better method of restarting the > 2nd-order generalized-alpha integrator? If not, would there be interest in > adding an alternate initialization/restart option to the `TSALPHA2` > integrator that takes a user-provided `Vec` for the initial/intermediate > acceleration, and skips over the half-step estimation procedure? > > Thanks, David > -- Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at coreform.com Thu Jan 2 13:34:19 2025 From: david at coreform.com (David Kamensky) Date: Thu, 2 Jan 2025 11:34:19 -0800 Subject: [petsc-users] Reproducibility when restarting generalized-alpha In-Reply-To: References: Message-ID: > > How are you currently restarting the simulation? I just reviewed the code, and we're not currently using the `TSView/Load` functions. 
We're just manually (de)serializing displacement, velocity, and acceleration data using a neutral format, populating PETSc `Vec`s with this data, and associating them with a new `TS` object via `TS2SetSolution` (and setting other relevant data, like time, time step size, etc.). However, `TS2SetSolution` only accepts displacement and velocity. I think the correct way to handle this is to support storing/loading these > extra vectors via TSView()/TSLoad(). I took a quick look at the implementations of `TSView/Load`, and it looks like the "base class" (to borrow some OOP terminology) implementation in `ts/interface/ts.c` only saves/loads the solution vector, while the subclass-specific logic from `TSView_Alpha` in `ts/impls/implicit/alpha/alpha2.c` only adds some additional output writing the generalized-alpha parameters to ASCII viewers (and similar for BDF). So, following the `TSView/Load` path, I don't see where it would even save/load the velocity vector for 2nd-order-in-time integrators. Is it the case that this functionality is known to be incomplete, and you're suggesting that the best path forward would be to update it? Thanks, David On Thu, Jan 2, 2025 at 10:41?AM Stefano Zampini wrote: > Note that BDF has the same issue. I think the correct way to handle this > is to support storing/loading these extra vectors via TSView()/TSLoad(). > How are you currently restarting the simulation? > > Il giorno gio 2 gen 2025 alle ore 19:25 David Kamensky via petsc-users < > petsc-users at mcs.anl.gov> ha scritto: > >> Hi, >> >> I've recently been helping some co-workers with restarting PETSc time >> integrators from saved solution data. >> >> It looks like the only supported path for restarting the >> generalized-alpha integrator for 2nd-order-in-time systems (`TSALPHA2`) is >> to follow the same procedure as initialization, in which two >> first-order-accurate half-steps are used to estimate an acceleration from >> the given displacement and velocity. 
However, the resulting acceleration >> is not exactly equivalent to the intermediate one that would have been used >> by the integrator if the integration simply proceeded without restarting. >> This prevents exact reproducibility of computations from saved intermediate >> results. (An analogous issue would also affect `TSALPHA` for >> first-order-in-time problems, where velocity is estimated on >> initialization/restart.) >> >> Am I misunderstanding this, or missing a better method of restarting the >> 2nd-order generalized-alpha integrator? If not, would there be interest in >> adding an alternate initialization/restart option to the `TSALPHA2` >> integrator that takes a user-provided `Vec` for the initial/intermediate >> acceleration, and skips over the half-step estimation procedure? >> >> Thanks, David >> > > > -- > Stefano > -------------- next part -------------- An HTML attachment was scrubbed... URL: From glenn.hammond at pnnl.gov Thu Jan 2 15:28:04 2025 From: glenn.hammond at pnnl.gov (Hammond, Glenn E) Date: Thu, 2 Jan 2025 21:28:04 +0000 Subject: [petsc-users] Element by element comparison of matrices Message-ID: PETSc Users, I want to compare two Jacobian matrices (one with derivatives calculated analytically; the other numerically). I want to apply relative and/or absolute tolerances. Does anyone know if such capability is built into PETSc? I cannot find anything other than MatEqual(), which compares down to the bit. Otherwise, I plan to use MatGetValues() and compare the elements individually. Just hoping there is something more convenient hidden somewhere. Thanks, Glenn -------------- next part -------------- An HTML attachment was scrubbed... 
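[Editor's note: the tolerance test Glenn describes can be sketched in a few lines. The version below uses plain Python lists for clarity; with actual PETSc matrices one would pull the entries via MatGetValues() (or MatGetRow()) and apply the same |a - b| <= atol + rtol * max(|a|, |b|) rule, reporting the (row, col) of each mismatch.]

```python
def compare_matrices(A, B, rtol=1e-6, atol=1e-12):
    """Return a list of (row, col, a, b) for entries of two dense
    matrices (lists of rows) that differ beyond the mixed
    absolute/relative tolerance |a-b| <= atol + rtol*max(|a|,|b|)."""
    mismatches = []
    for i, (rowA, rowB) in enumerate(zip(A, B)):
        for j, (a, b) in enumerate(zip(rowA, rowB)):
            if abs(a - b) > atol + rtol * max(abs(a), abs(b)):
                mismatches.append((i, j, a, b))
    return mismatches

# Hypothetical analytic vs. finite-difference Jacobians: only the
# genuinely wrong cross-term (1,1) should be flagged, not the
# finite-difference noise in (0,0).
analytic = [[2.0, -1.0], [-1.0, 2.0]]
numeric = [[2.0 + 1e-9, -1.0], [-1.0, 2.5]]
bad = compare_matrices(analytic, numeric)
```

Reporting the full (row, col, a, b) tuple is what makes this useful for pinpointing a defective cross-term rather than just detecting that the matrices differ.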
URL: From stefano.zampini at gmail.com Thu Jan 2 15:51:22 2025 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Thu, 2 Jan 2025 22:51:22 +0100 Subject: [petsc-users] Element by element comparison of matrices In-Reply-To: References: Message-ID: MatAXPY for the difference, MatNorm for the relative error On Thu, Jan 2, 2025, 22:32 Hammond, Glenn E via petsc-users < petsc-users at mcs.anl.gov> wrote: > PETSc Users, > > > > I want to compare two Jacobians matrices (one with derivatives calculated > analytically; the other numerically). I want to apply relatives and/or > absolute tolerances. Does anyone know if such capability is built into > PETSc? I cannot find anything other the MatEqual() with compares down to > the bit. Otherwise, I plan to use MatGetValues() and compare the elements > individually. Just hoping there is something more convenient hidden > somewhere. > > > > Thanks, > > > > Glenn > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jan 2 16:16:27 2025 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 2 Jan 2025 17:16:27 -0500 Subject: [petsc-users] Reproducibility when restarting generalized-alpha In-Reply-To: References: Message-ID: David, I think Stefano was saying the TSView/Load approach should be improved to save the additional vector(s) and use them in the restart. Are you up to trying this by adding this functionality to TSView_*/TSLoad_*, or should we try to fit in time to add this (needed) support? Barry > On Jan 2, 2025, at 2:34?PM, David Kamensky via petsc-users wrote: > >> How are you currently restarting the simulation? > > I just reviewed the code, and we're not currently using the `TSView/Load` functions. 
We're just manually (de)serializing displacement, velocity, and acceleration data using a neutral format, populating PETSc `Vec`s with this data, and associating them with a new `TS` object via `TS2SetSolution` (and setting other relevant data, like time, time step size, etc.). However, `TS2SetSolution` only accepts displacement and velocity. > >> I think the correct way to handle this is to support storing/loading these extra vectors via TSView()/TSLoad(). > > I took a quick look at the implementations of `TSView/Load`, and it looks like the "base class" (to borrow some OOP terminology) implementation in `ts/interface/ts.c` only saves/loads the solution vector, while the subclass-specific logic from `TSView_Alpha` in `ts/impls/implicit/alpha/alpha2.c` only adds some additional output writing the generalized-alpha parameters to ASCII viewers (and similar for BDF). So, following the `TSView/Load` path, I don't see where it would even save/load the velocity vector for 2nd-order-in-time integrators. Is it the case that this functionality is known to be incomplete, and you're suggesting that the best path forward would be to update it? > > Thanks, David > > > On Thu, Jan 2, 2025 at 10:41?AM Stefano Zampini > wrote: >> Note that BDF has the same issue. I think the correct way to handle this is to support storing/loading these extra vectors via TSView()/TSLoad(). >> How are you currently restarting the simulation? >> >> Il giorno gio 2 gen 2025 alle ore 19:25 David Kamensky via petsc-users > ha scritto: >>> Hi, >>> >>> I've recently been helping some co-workers with restarting PETSc time integrators from saved solution data. >>> >>> It looks like the only supported path for restarting the generalized-alpha integrator for 2nd-order-in-time systems (`TSALPHA2`) is to follow the same procedure as initialization, in which two first-order-accurate half-steps are used to estimate an acceleration from the given displacement and velocity. 
However, the resulting acceleration is not exactly equivalent to the intermediate one that would have been used by the integrator if the integration simply proceeded without restarting. This prevents exact reproducibility of computations from saved intermediate results. (An analogous issue would also affect `TSALPHA` for first-order-in-time problems, where velocity is estimated on initialization/restart.) >>> >>> Am I misunderstanding this, or missing a better method of restarting the 2nd-order generalized-alpha integrator? If not, would there be interest in adding an alternate initialization/restart option to the `TSALPHA2` integrator that takes a user-provided `Vec` for the initial/intermediate acceleration, and skips over the half-step estimation procedure? >>> >>> Thanks, David >> >> >> >> -- >> Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jan 2 16:24:51 2025 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 2 Jan 2025 17:24:51 -0500 Subject: [petsc-users] Element by element comparison of matrices In-Reply-To: References: Message-ID: We also have support for this built into SNES. For example, you provide the analytic Jacobian to SNES, which then computes one via differencing, mostly to check whether the analytic implementation is correct. You can run an entire set of SNESSolve calls with this turned on, and it will check how closely the analytic and differenced Jacobians match at every vector where the Jacobian is computed (that is, it does not just compare the two at a single vector). -snes_test_jacobian See the routine SNESTestJacobian(), which has the code that compares the matrices element by element. 
See also https://petsc.org/release/manual/snes/#checking-accuracy-of-derivatives Barry > On Jan 2, 2025, at 4:51 PM, Stefano Zampini wrote: > > MatAXPY for the difference, MatNorm for the relative error > > > On Thu, Jan 2, 2025, 22:32 Hammond, Glenn E via petsc-users > wrote: >> PETSc Users, >> >> >> >> I want to compare two Jacobians matrices (one with derivatives calculated analytically; the other numerically). I want to apply relatives and/or absolute tolerances. Does anyone know if such capability is built into PETSc? I cannot find anything other the MatEqual() with compares down to the bit. Otherwise, I plan to use MatGetValues() and compare the elements individually. Just hoping there is something more convenient hidden somewhere. >> >> >> >> Thanks, >> >> >> >> Glenn >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From glenn.hammond at pnnl.gov Thu Jan 2 16:34:35 2025 From: glenn.hammond at pnnl.gov (Hammond, Glenn E) Date: Thu, 2 Jan 2025 22:34:35 +0000 Subject: [petsc-users] Element by element comparison of matrices In-Reply-To: References: Message-ID: <2C3C252D-3B39-4D9B-BCCA-9FB23C6778FA@pnnl.gov> I need the element info (which row/col combo) to pinpoint which cross-term may be defective. This is important when debugging analytical derivatives for chemical reactions. I will look into the SNES approach. Thanks, Glenn On Jan 2, 2025, at 2:25 PM, Barry Smith wrote: Check twice before you click! This email originated from outside PNNL. We also have support for this built into SNES. For example, you provide the analytic to SNES which then compute via differencing, mostly to check if the analytic implementation was correct. 
You can run an entire set of SNESSolve with this turned on and it will check at all vectors the Jacobian is computed how closely the match is (that is it does not just compare the two for a single vector). -snes_test_jacobian see the routine SNESTestJacobian() which as the code that compares the matrices element by element etc. See also https://urldefense.us/v3/__https://petsc.org/release/manual/snes/*checking-accuracy-of-derivatives__;Iw!!G_uCfscf7eWS!ZoaRDcJH_dSH7xTDVlbvY-63elK9LdoGNbUNQJjBmvefqNO0zhpqzpbfTb6TDGKWr_omqNnkE359z37VW1z4usPPRD5q2Ls$ Barry On Jan 2, 2025, at 4:51?PM, Stefano Zampini wrote: MatAXPY for the difference, MatNorm for the relative error On Thu, Jan 2, 2025, 22:32 Hammond, Glenn E via petsc-users > wrote: PETSc Users, I want to compare two Jacobians matrices (one with derivatives calculated analytically; the other numerically). I want to apply relatives and/or absolute tolerances. Does anyone know if such capability is built into PETSc? I cannot find anything other the MatEqual() with compares down to the bit. Otherwise, I plan to use MatGetValues() and compare the elements individually. Just hoping there is something more convenient hidden somewhere. Thanks, Glenn -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at coreform.com Thu Jan 2 16:47:38 2025 From: david at coreform.com (David Kamensky) Date: Thu, 2 Jan 2025 14:47:38 -0800 Subject: [petsc-users] Reproducibility when restarting generalized-alpha In-Reply-To: References: Message-ID: > > Are you up to trying this by adding this functionality to > TSView_*/TSLoad_*, or should we try to fit in time to add this (needed) > support? I'll have to consult with the team to decide what direction we want to go. 
We may want to keep our restart data in a neutral format to support other uses of it (e.g., solution postprocessing), or to maintain consistency and interoperability with some of our non-PETSc time steppers (so that we can, e.g., restart from an intermediate step of a PETSc `TS` using a non-PETSc integrator). If we do stick with our more manual re-initialization of the `TS`, it might be preferable for us to implement an initialization option that allows us to provide an acceleration. (This would be useful even for purposes other than restarting, if a user has a cheaper problem-specific method for computing the initial acceleration, e.g., inverting a mass matrix against the initial configuration's force vector.) In any case, getting consistent view/load behavior across all time integrators and testing it thoroughly would be significantly larger in scope than what we need. Best, David On Thu, Jan 2, 2025 at 2:16?PM Barry Smith wrote: > > David, > > I think Stefano was saying the TSView/Load approach should be improved > to save the additional vector(s) and use them in the restart. > > Are you up to trying this by adding this functionality to > TSView_*/TSLoad_*, or should we try to fit in time to add this (needed) > support? > > Barry > > > On Jan 2, 2025, at 2:34?PM, David Kamensky via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > How are you currently restarting the simulation? > > > I just reviewed the code, and we're not currently using the `TSView/Load` > functions. We're just manually (de)serializing displacement, velocity, and > acceleration data using a neutral format, populating PETSc `Vec`s with this > data, and associating them with a new `TS` object via `TS2SetSolution` (and > setting other relevant data, like time, time step size, etc.). However, > `TS2SetSolution` only accepts displacement and velocity. > > I think the correct way to handle this is to support storing/loading these >> extra vectors via TSView()/TSLoad(). 
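[Editor's note: the initialization route David mentions in passing above, computing the initial acceleration by inverting the mass matrix against the initial force, is a single linear solve. For a semi-discrete system M u'' + K u = f(t), the exact initial acceleration satisfies M a0 = f(0) - K u0 (plus a damping term if present). The 2-dof numbers below are made up for illustration; with PETSc one would assemble M and the right-hand side as a Mat and Vec and use KSPSolve.]

```python
def solve2x2(M, b):
    """Solve a 2x2 linear system M x = b by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    x0 = (b[0] * M[1][1] - M[0][1] * b[1]) / det
    x1 = (M[0][0] * b[1] - b[0] * M[1][0]) / det
    return [x0, x1]

def initial_acceleration(M, K, u0, f0):
    """a0 = M^{-1} (f0 - K u0) for a 2-dof undamped model problem."""
    rhs = [f0[i] - sum(K[i][j] * u0[j] for j in range(2)) for i in range(2)]
    return solve2x2(M, rhs)

# Hypothetical 2-dof mass and stiffness matrices and initial state:
M = [[2.0, 0.0], [0.0, 1.0]]
K = [[3.0, -1.0], [-1.0, 3.0]]
u0 = [1.0, 0.0]
f0 = [0.0, 0.0]
a0 = initial_acceleration(M, K, u0, f0)
```

By construction a0 satisfies the balance M a0 + K u0 = f0 exactly at t = 0, which is why such a solve can replace the half-step estimation when the user has the mass matrix at hand.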
> > > I took a quick look at the implementations of `TSView/Load`, and it looks > like the "base class" (to borrow some OOP terminology) implementation in > `ts/interface/ts.c` only saves/loads the solution vector, while the > subclass-specific logic from `TSView_Alpha` in > `ts/impls/implicit/alpha/alpha2.c` only adds some additional output writing > the generalized-alpha parameters to ASCII viewers (and similar for BDF). > So, following the `TSView/Load` path, I don't see where it would even > save/load the velocity vector for 2nd-order-in-time integrators. Is it the > case that this functionality is known to be incomplete, and you're > suggesting that the best path forward would be to update it? > > Thanks, David > > > On Thu, Jan 2, 2025 at 10:41?AM Stefano Zampini > wrote: > >> Note that BDF has the same issue. I think the correct way to handle this >> is to support storing/loading these extra vectors via TSView()/TSLoad(). >> How are you currently restarting the simulation? >> >> Il giorno gio 2 gen 2025 alle ore 19:25 David Kamensky via petsc-users < >> petsc-users at mcs.anl.gov> ha scritto: >> >>> Hi, >>> >>> I've recently been helping some co-workers with restarting PETSc time >>> integrators from saved solution data. >>> >>> It looks like the only supported path for restarting the >>> generalized-alpha integrator for 2nd-order-in-time systems (`TSALPHA2`) is >>> to follow the same procedure as initialization, in which two >>> first-order-accurate half-steps are used to estimate an acceleration from >>> the given displacement and velocity. However, the resulting acceleration >>> is not exactly equivalent to the intermediate one that would have been used >>> by the integrator if the integration simply proceeded without restarting. >>> This prevents exact reproducibility of computations from saved intermediate >>> results. 
(An analogous issue would also affect `TSALPHA` for >>> first-order-in-time problems, where velocity is estimated on >>> initialization/restart.) >>> >>> Am I misunderstanding this, or missing a better method of restarting the >>> 2nd-order generalized-alpha integrator? If not, would there be interest in >>> adding an alternate initialization/restart option to the `TSALPHA2` >>> integrator that takes a user-provided `Vec` for the initial/intermediate >>> acceleration, and skips over the half-step estimation procedure? >>> >>> Thanks, David >>> >> >> >> -- >> Stefano >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jan 2 21:02:32 2025 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 2 Jan 2025 22:02:32 -0500 Subject: [petsc-users] Reproducibility when restarting generalized-alpha In-Reply-To: References: Message-ID: Sure, I was suggesting you might implement it for your one-needed TSView_*/TSLoad_*. With a single template done we can easily add the rest. I agree having an API to start with multiple solutions is a good idea and would be needed for any TSView_*/TSLoad_*. so that may be the simplest way for you to get started. Or we can look at providing the new API if it is out of scope for you. Barry > On Jan 2, 2025, at 5:47?PM, David Kamensky wrote: > >> Are you up to trying this by adding this functionality to TSView_*/TSLoad_*, or should we try to fit in time to add this (needed) support? > > I'll have to consult with the team to decide what direction we want to go. We may want to keep our restart data in a neutral format to support other uses of it (e.g., solution postprocessing), or to maintain consistency and interoperability with some of our non-PETSc time steppers (so that we can, e.g., restart from an intermediate step of a PETSc `TS` using a non-PETSc integrator). 
If we do stick with our more manual re-initialization of the `TS`, it might be preferable for us to implement an initialization option that allows us to provide an acceleration. (This would be useful even for purposes other than restarting, if a user has a cheaper problem-specific method for computing the initial acceleration, e.g., inverting a mass matrix against the initial configuration's force vector.) > > In any case, getting consistent view/load behavior across all time integrators and testing it thoroughly would be significantly larger in scope than what we need. > > Best, David > > On Thu, Jan 2, 2025 at 2:16?PM Barry Smith > wrote: >> >> David, >> >> I think Stefano was saying the TSView/Load approach should be improved to save the additional vector(s) and use them in the restart. >> >> Are you up to trying this by adding this functionality to TSView_*/TSLoad_*, or should we try to fit in time to add this (needed) support? >> >> Barry >> >> >>> On Jan 2, 2025, at 2:34?PM, David Kamensky via petsc-users > wrote: >>> >>>> How are you currently restarting the simulation? >>> >>> I just reviewed the code, and we're not currently using the `TSView/Load` functions. We're just manually (de)serializing displacement, velocity, and acceleration data using a neutral format, populating PETSc `Vec`s with this data, and associating them with a new `TS` object via `TS2SetSolution` (and setting other relevant data, like time, time step size, etc.). However, `TS2SetSolution` only accepts displacement and velocity. >>> >>>> I think the correct way to handle this is to support storing/loading these extra vectors via TSView()/TSLoad(). 
>>> >>> I took a quick look at the implementations of `TSView/Load`, and it looks like the "base class" (to borrow some OOP terminology) implementation in `ts/interface/ts.c` only saves/loads the solution vector, while the subclass-specific logic from `TSView_Alpha` in `ts/impls/implicit/alpha/alpha2.c` only adds some additional output writing the generalized-alpha parameters to ASCII viewers (and similar for BDF). So, following the `TSView/Load` path, I don't see where it would even save/load the velocity vector for 2nd-order-in-time integrators. Is it the case that this functionality is known to be incomplete, and you're suggesting that the best path forward would be to update it? >>> >>> Thanks, David >>> >>> >>> On Thu, Jan 2, 2025 at 10:41?AM Stefano Zampini > wrote: >>>> Note that BDF has the same issue. I think the correct way to handle this is to support storing/loading these extra vectors via TSView()/TSLoad(). >>>> How are you currently restarting the simulation? >>>> >>>> Il giorno gio 2 gen 2025 alle ore 19:25 David Kamensky via petsc-users > ha scritto: >>>>> Hi, >>>>> >>>>> I've recently been helping some co-workers with restarting PETSc time integrators from saved solution data. >>>>> >>>>> It looks like the only supported path for restarting the generalized-alpha integrator for 2nd-order-in-time systems (`TSALPHA2`) is to follow the same procedure as initialization, in which two first-order-accurate half-steps are used to estimate an acceleration from the given displacement and velocity. However, the resulting acceleration is not exactly equivalent to the intermediate one that would have been used by the integrator if the integration simply proceeded without restarting. This prevents exact reproducibility of computations from saved intermediate results. (An analogous issue would also affect `TSALPHA` for first-order-in-time problems, where velocity is estimated on initialization/restart.) 
>>>>> >>>>> Am I misunderstanding this, or missing a better method of restarting the 2nd-order generalized-alpha integrator? If not, would there be interest in adding an alternate initialization/restart option to the `TSALPHA2` integrator that takes a user-provided `Vec` for the initial/intermediate acceleration, and skips over the half-step estimation procedure? >>>>> >>>>> Thanks, David >>>> >>>> >>>> >>>> -- >>>> Stefano >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jan 2 21:07:05 2025 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 2 Jan 2025 22:07:05 -0500 Subject: [petsc-users] Element by element comparison of matrices In-Reply-To: <2C3C252D-3B39-4D9B-BCCA-9FB23C6778FA@pnnl.gov> References: <2C3C252D-3B39-4D9B-BCCA-9FB23C6778FA@pnnl.gov> Message-ID: > -snes_test_jacobian -snes_test_jacobian_view > On Jan 2, 2025, at 5:34?PM, Hammond, Glenn E wrote: > > I need to element info (which row/col combo) to pinpoint which crossterm may be defective. This is important when debugging analytical derivatives for chemical reactions. I will look into thr SNES approach. > > Thanks, > > Glenn > >> On Jan 2, 2025, at 2:25?PM, Barry Smith wrote: >> >> ? >> Check twice before you click! This email originated from outside PNNL. >> >> >> We also have support for this built into SNES. For example, you provide the analytic to SNES which then compute via differencing, mostly to check if the analytic implementation was correct. You can run an entire set of SNESSolve with this turned on and it will check at all vectors the Jacobian is computed how closely the match is (that is it does not just compare the two for a single vector). >> >> -snes_test_jacobian >> >> see the routine SNESTestJacobian() which as the code that compares the matrices element by element etc. 
See also https://petsc.org/release/manual/snes/#checking-accuracy-of-derivatives >> >> >> Barry >> >> >> >> >>> On Jan 2, 2025, at 4:51 PM, Stefano Zampini wrote: >>> >>> MatAXPY for the difference, MatNorm for the relative error >>> >>> >>> On Thu, Jan 2, 2025, 22:32 Hammond, Glenn E via petsc-users > wrote: >>>> PETSc Users, >>>> >>>> >>>> >>>> I want to compare two Jacobians matrices (one with derivatives calculated analytically; the other numerically). I want to apply relatives and/or absolute tolerances. Does anyone know if such capability is built into PETSc? I cannot find anything other the MatEqual() with compares down to the bit. Otherwise, I plan to use MatGetValues() and compare the elements individually. Just hoping there is something more convenient hidden somewhere. >>>> >>>> >>>> >>>> Thanks, >>>> >>>> >>>> >>>> Glenn >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From yc17470 at connect.um.edu.mo Fri Jan 3 09:11:04 2025 From: yc17470 at connect.um.edu.mo (Gong Yujie) Date: Fri, 3 Jan 2025 15:11:04 +0000 Subject: [petsc-users] Inquiry about PetscDS for discretization of second order system Message-ID: Dear PETSc developer group, I'd like to ask whether PetscDSSetResidual can be used for a second-order system. Is there a DS function for solving a system that contains a second-order time derivative, such as a time-dependent linear elasticity problem? I found there is a tutorial for an elastostatics problem using PetscDS, but it doesn't involve time discretization. In TS, there is the alpha method for solving second-order systems, but I haven't found a tutorial using DS yet. Is there a way to combine PetscDS with the alpha method in TS? Best Regards, Yujie -------------- next part -------------- An HTML attachment was scrubbed... 
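[Editor's note: the standard workaround for the second-order time derivative asked about above, also the route suggested in the reply that follows, is to introduce v = u' and integrate the first-order system (u' = v, v' = ...). Below is a minimal scalar sketch for u'' = -u; backward Euler is used only to keep it self-contained. With PETSc one would instead supply the first-order residual (u' - v, v' + u) through TSSetIFunction and pick any TS scheme.]

```python
import math

def backward_euler(u, v, h, nsteps):
    """Integrate the first-order system u' = v, v' = -u
    (i.e., u'' = -u rewritten via v = u') with backward Euler."""
    for _ in range(nsteps):
        # Implicit step: u1 = u + h*v1, v1 = v - h*u1.
        # For this linear 2x2 system the solve is closed-form.
        v1 = (v - h * u) / (1.0 + h * h)
        u1 = u + h * v1
        u, v = u1, v1
    return u, v

# Integrate from (u, v) = (1, 0), whose exact solution is
# u(t) = cos(t), v(t) = -sin(t), up to t = 0.5.
u_end, v_end = backward_euler(1.0, 0.0, 1e-3, 500)
```

The same rewriting works for PDE systems: the DS residual is posed on the doubled field (u, v), at the cost of storing and solving for the extra velocity field.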
URL: From knepley at gmail.com Fri Jan 3 13:09:21 2025 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 3 Jan 2025 09:09:21 -1000 Subject: [petsc-users] Inquiry about PetscDS for discretization of second order system In-Reply-To: References: Message-ID: On Fri, Jan 3, 2025 at 5:11 AM Gong Yujie wrote: > Dear PETSc developer group, > > I'd like to inquire that if PetscDSSetResidual can be used for second > order system? Are there a DS function that is related to solve a system > that contains second order time derivative such as time dependent linear > elasticity problem. > > I found there is tutorial for elastostatics problem using PetscDS but this > doesn't involve time discretization. In TS, there is alpha method for > solving second order system, but I haven't found a tutorial using DS yet. > Is there a way to combine PetscDS with the alpha method in TS? > I did not include the second derivative in the interface. Instead, what I do is rewrite it as a first-order system using v = \dot u. I hope to augment the interface sometime to add the second derivative, but it really only applies to that alpha integrator. Thanks, Matt > Best Regards, > Yujie > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at coreform.com Fri Jan 3 18:46:23 2025 From: david at coreform.com (David Kamensky) Date: Fri, 3 Jan 2025 16:46:23 -0800 Subject: [petsc-users] Reproducibility when restarting generalized-alpha In-Reply-To: References: Message-ID: Just to provide an update here, the conclusion of our internal discussion was to backlog this on our end for now, in favor of more urgent tasks. 
Perhaps we can open an issue for this on the PETSc issue tracker (or comment on an existing issue for BDF, if it exists), and I can check in there if/when we come back to it. Thanks, David On Thu, Jan 2, 2025 at 7:02?PM Barry Smith wrote: > > Sure, I was suggesting you might implement it for your one-needed > TSView_*/TSLoad_*. With a single template done we can easily add the rest. > > I agree having an API to start with multiple solutions is a good idea > and would be needed for any TSView_*/TSLoad_*. so that may be the simplest > way for you to get started. Or we can look at providing the new API if it > is out of scope for you. > > Barry > > On Jan 2, 2025, at 5:47?PM, David Kamensky wrote: > > Are you up to trying this by adding this functionality to >> TSView_*/TSLoad_*, or should we try to fit in time to add this (needed) >> support? > > > I'll have to consult with the team to decide what direction we want to > go. We may want to keep our restart data in a neutral format to support > other uses of it (e.g., solution postprocessing), or to maintain > consistency and interoperability with some of our non-PETSc time steppers > (so that we can, e.g., restart from an intermediate step of a PETSc `TS` > using a non-PETSc integrator). If we do stick with our more manual > re-initialization of the `TS`, it might be preferable for us to implement > an initialization option that allows us to provide an acceleration. (This > would be useful even for purposes other than restarting, if a user has a > cheaper problem-specific method for computing the initial acceleration, > e.g., inverting a mass matrix against the initial configuration's force > vector.) > > In any case, getting consistent view/load behavior across all time > integrators and testing it thoroughly would be significantly larger in > scope than what we need. 
> > Best, David > > On Thu, Jan 2, 2025 at 2:16?PM Barry Smith wrote: > >> >> David, >> >> I think Stefano was saying the TSView/Load approach should be >> improved to save the additional vector(s) and use them in the restart. >> >> Are you up to trying this by adding this functionality to >> TSView_*/TSLoad_*, or should we try to fit in time to add this (needed) >> support? >> >> Barry >> >> >> On Jan 2, 2025, at 2:34?PM, David Kamensky via petsc-users < >> petsc-users at mcs.anl.gov> wrote: >> >> How are you currently restarting the simulation? >> >> >> I just reviewed the code, and we're not currently using the `TSView/Load` >> functions. We're just manually (de)serializing displacement, velocity, and >> acceleration data using a neutral format, populating PETSc `Vec`s with this >> data, and associating them with a new `TS` object via `TS2SetSolution` (and >> setting other relevant data, like time, time step size, etc.). However, >> `TS2SetSolution` only accepts displacement and velocity. >> >> I think the correct way to handle this is to support storing/loading >>> these extra vectors via TSView()/TSLoad(). >> >> >> I took a quick look at the implementations of `TSView/Load`, and it looks >> like the "base class" (to borrow some OOP terminology) implementation in >> `ts/interface/ts.c` only saves/loads the solution vector, while the >> subclass-specific logic from `TSView_Alpha` in >> `ts/impls/implicit/alpha/alpha2.c` only adds some additional output writing >> the generalized-alpha parameters to ASCII viewers (and similar for BDF). >> So, following the `TSView/Load` path, I don't see where it would even >> save/load the velocity vector for 2nd-order-in-time integrators. Is it the >> case that this functionality is known to be incomplete, and you're >> suggesting that the best path forward would be to update it? 
>> >> Thanks, David >> >> >> On Thu, Jan 2, 2025 at 10:41?AM Stefano Zampini < >> stefano.zampini at gmail.com> wrote: >> >>> Note that BDF has the same issue. I think the correct way to handle this >>> is to support storing/loading these extra vectors via TSView()/TSLoad(). >>> How are you currently restarting the simulation? >>> >>> Il giorno gio 2 gen 2025 alle ore 19:25 David Kamensky via petsc-users < >>> petsc-users at mcs.anl.gov> ha scritto: >>> >>>> Hi, >>>> >>>> I've recently been helping some co-workers with restarting PETSc time >>>> integrators from saved solution data. >>>> >>>> It looks like the only supported path for restarting the >>>> generalized-alpha integrator for 2nd-order-in-time systems (`TSALPHA2`) is >>>> to follow the same procedure as initialization, in which two >>>> first-order-accurate half-steps are used to estimate an acceleration from >>>> the given displacement and velocity. However, the resulting acceleration >>>> is not exactly equivalent to the intermediate one that would have been used >>>> by the integrator if the integration simply proceeded without restarting. >>>> This prevents exact reproducibility of computations from saved intermediate >>>> results. (An analogous issue would also affect `TSALPHA` for >>>> first-order-in-time problems, where velocity is estimated on >>>> initialization/restart.) >>>> >>>> Am I misunderstanding this, or missing a better method of restarting >>>> the 2nd-order generalized-alpha integrator? If not, would there be >>>> interest in adding an alternate initialization/restart option to the >>>> `TSALPHA2` integrator that takes a user-provided `Vec` for the >>>> initial/intermediate acceleration, and skips over the half-step estimation >>>> procedure? >>>> >>>> Thanks, David >>>> >>> >>> >>> -- >>> Stefano >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
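The reproducibility point in the thread above can be illustrated with a small pure-Python toy. The update and estimation formulas below are made up (they are NOT PETSc's actual generalized-alpha or half-step initialization formulas); the point is only that an integrator whose step depends on an internal vector — here the scalar `a` — reproduces its trajectory across a restart only if that vector is saved and restored, not re-estimated from the solution alone.

```python
def step(u, v, a, dt):
    # one step of a contrived (u, v, a)-dependent update
    u_new = u + dt * v + 0.5 * dt * dt * a
    v_new = v + dt * a
    a_new = -u_new          # e.g. harmonic oscillator u'' = -u
    return u_new, v_new, a_new

def estimate_accel(u, v, dt):
    # crude restart-time re-estimation of a from (u, v) alone, standing in
    # for the half-step estimation procedure used at initialization
    return -(u + 0.5 * dt * v)

dt = 0.1
u, v, a = 1.0, 0.0, -1.0
for _ in range(5):
    u, v, a = step(u, v, a, dt)
saved = (u, v, a)

# continue without restarting
cont = step(*saved, dt)

# "restart": keep (u, v) but re-estimate a instead of restoring it
restart = step(saved[0], saved[1], estimate_accel(saved[0], saved[1], dt), dt)

# "restart" that also restores the internal vector a
exact = step(saved[0], saved[1], saved[2], dt)

print(cont == restart)   # False: re-estimation changes the trajectory
print(cont == exact)     # True: restoring a reproduces it exactly
```

The same logic applies to the first-order `TSALPHA` case mentioned in the thread, with velocity playing the role of `a`.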
URL: From sblondel at utk.edu Wed Jan 8 15:32:43 2025 From: sblondel at utk.edu (Blondel, Sophie) Date: Wed, 8 Jan 2025 21:32:43 +0000 Subject: [petsc-users] "-ts_exact_final_time matchstep" leads to DIVERGED_STEP_REJECTED In-Reply-To: References: <1E4CF02D-32EB-482B-ACFC-9AE1C7F9102E@petsc.dev> <61974628-9044-45AB-9F9A-3E426D021C93@petsc.dev> Message-ID: Hi everyone and happy new year, I finally tracked down the issue and it is not related to PETSc: in that specific way of using Xolotl we set the max time to 0 s at the very beginning, so with the matchstep option that sets the dt to 0 s and it diverges. Thank you for your help, Sophie ________________________________ From: Barry Smith Sent: Tuesday, December 17, 2024 18:33 To: Blondel, Sophie Cc: Jed Brown ; Zhang, Hong ; Emil Constantinescu ; petsc-users at mcs.anl.gov ; xolotl-psi-development at lists.sourceforge.net Subject: Re: [petsc-users] "-ts_exact_final_time matchstep" leads to DIVERGED_STEP_REJECTED This output is odd and seems wrong 0 TS dt 0. time 0. This is printing the initial timestep and time before it does any timestepping. We expect dt to be 1e-12 as your other case does print 0 TS dt 1e-12 time 0. What the exact final time flag is set to shouldn't affect the timestep this early in the computation. You could try in the debugger to trace the variable ts->time_step to see when it is being set from 1.e-12 to 0. On Dec 13, 2024, at 4:40?PM, Blondel, Sophie wrote: Barry, The short output is "SNESSolve has not converged due to Nan or Inf norm", the full one is attached. 
Cheers, Sophie ________________________________ From: Barry Smith > Sent: Friday, December 13, 2024 14:56 To: Blondel, Sophie > Cc: Jed Brown >; Zhang, Hong >; Emil Constantinescu >; petsc-users at mcs.anl.gov >; xolotl-psi-development at lists.sourceforge.net > Subject: Re: [petsc-users] "-ts_exact_final_time matchstep" leads to DIVERGED_STEP_REJECTED There is a bit of complicated logic to determine the "adjusted" timestep in TSAdaptChoose() when if (*accept && ts->exact_final_time == TS_EXACTFINALTIME_MATCHSTEP) { Is it possible that hmax = tmax - t; is exactly zero, and the logic below does not correctly handle that case? 0 TS dt 0. time 0. 0 TS dt 0. time 0. 0 TS dt 0. time 0. 0 TS dt 0. time 0. TSAdapt basic step 0 stage rejected (SNES reason DIVERGED_FNORM_NAN) t=0 + 0.000e+00 retrying with dt=0.000e+00 TSAdapt basic step 0 stage rejected (SNES reason DIVERGED_FNORM_NAN) t=0 + 0.000e+00 retrying with dt=0.000e+00 TSAdapt basic step 0 stage rejected (SNES reason DIVERGED_FNORM_NAN) t=0 + 0.000e+00 retrying with dt=0.000e+00 TSAdapt basic step 0 stage rejected (SNES reason DIVERGED_FNORM_NAN) t=0 + 0.000e+00 retrying with dt=0.000e+00 TSAdapt basic step 0 stage rejected (SNES reason DIVERGED_FNORM_NAN) t=0 + 0.000e+00 retrying with dt=0.000e+00 TSAdapt basic step 0 stage rejected (SNES reason DIVERGED_FNORM_NAN) t=0 + 0.000e+00 retrying with dt=0.000e+00 TSAdapt basic step 0 stage rejected (SNES reason DIVERGED_FNORM_NAN) t=0 + 0.000e+00 retrying with dt=0.000e+00 TSAdapt basic step 0 stage rejected (SNES reason DIVERGED_FNORM_NAN) t=0 + 0.000e+00 retrying with dt=0.000e+00 TSAdapt basic step 0 stage rejected (SNES reason DIVERGED_FNORM_NAN) t=0 + 0.000e+00 retrying with dt=0.000e+00 TSAdapt basic step 0 stage rejected (SNES reason DIVERGED_FNORM_NAN) t=0 + 0.000e+00 retrying with dt=0.000e+00 TSAdapt basic step 0 stage rejected (SNES reason DIVERGED_FNORM_NAN) t=0 + 0.000e+00 retrying with dt=0.000e+00 Sophie, Any idea why SNES reason DIVERGED_FNORM_NAN? 
Could you run with -snes_error_if_not_converged? On Dec 13, 2024, at 2:34?PM, Blondel, Sophie > wrote: Hi everyone, The first max time it is trying to reach is 1.0e-12 s, and the initial dt is set to 1.0e-12 s from the commandline options. I believe it's not a formatting issue and that the dt is actually set somewhere to 0 s because that's why the step is rejected. Best, Sophie ________________________________ From: Barry Smith > Sent: Friday, December 13, 2024 14:21 To: Blondel, Sophie >; Jed Brown >; Zhang, Hong >; Emil Constantinescu > Cc: petsc-users at mcs.anl.gov >; xolotl-psi-development at lists.sourceforge.net > Subject: Re: [petsc-users] "-ts_exact_final_time matchstep" leads to DIVERGED_STEP_REJECTED Hm, what is the final time you are stepping towards in this run? There is something wrong with the adapt code since it seems to start with a dt of 0 but then tries "adapting" several times, but it could be the monitor function does not correctly format numbers smaller than 1.e-12 and it is just using truly small dt. Jed, Hong, Emil? Barry On Dec 10, 2024, at 11:08?AM, Blondel, Sophie > wrote: Good morning Barry, Attached are the updated files, there is more useful information in them. Cheers, Sophie ________________________________ From: Blondel, Sophie via Xolotl-psi-development > Sent: Monday, December 9, 2024 17:29 To: Barry Smith > Cc: petsc-users at mcs.anl.gov >; xolotl-psi-development at lists.sourceforge.net > Subject: Re: [Xolotl-psi-development] [petsc-users] "-ts_exact_final_time matchstep" leads to DIVERGED_STEP_REJECTED Hi Barry, I hope you are doing well. Attached are the output. To give a little more context, this is a "new" way of running the code where multiple instances are created and communicate together every few time steps (like coupling the code with itself in memory). 
Here there are 3 instances that each have a separate TS object, plus one "main" instance that doesn't solve anything but compute rates to exchange between the other instances. Cheers, Sophie ________________________________ From: Barry Smith > Sent: Monday, December 9, 2024 15:12 To: Blondel, Sophie > Cc: petsc-users at mcs.anl.gov >; xolotl-psi-development at lists.sourceforge.net > Subject: Re: [petsc-users] "-ts_exact_final_time matchstep" leads to DIVERGED_STEP_REJECTED On Dec 9, 2024, at 2:56?PM, Blondel, Sophie via petsc-users > wrote: Hi, I am trying to understand a strange behavior I'm encountering: when running my application with "-ts_exact_final_time stepover" everything goes well, but when I switch to "matchstep" I get DIVERGED_STEP_REJECTED before the first time step is finished. This is in the very first time-step in TSSolve? Please run with -ts_monitor and send all the output (best for a short time interval and do it twice once with -ts_exact_final_time stepover and once with exact. Barry I tried increasing the maximum number of rejections and it just takes longer to diverge, and if I set the value to "unlimited" it is basically an infinite loop. Is there a way to check why is the step rejected? Could the "matchstep" option change tolerances somewhere that would cause that behavior? Let me know if I should provide more information. Best, Sophie Blondel -------------- next part -------------- An HTML attachment was scrubbed... 
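Sophie's diagnosis in this thread can be sketched in a few lines of Python. This is assumed logic paraphrasing the `hmax = tmax - t` clamp Barry quoted from `TSAdaptChoose()`, not the PETSc source itself: when the max time equals the current time, MATCHSTEP clamps any proposed dt to zero, and every retry then starts from the same zero step.

```python
def clamp_matchstep(dt, t, tmax):
    # with TS_EXACTFINALTIME_MATCHSTEP, the accepted step may not
    # overshoot the final time, so it is clamped to hmax = tmax - t
    hmax = tmax - t
    return min(dt, hmax)

print(clamp_matchstep(1e-12, 0.0, 1.0))   # 1e-12: normal case
print(clamp_matchstep(1e-12, 0.0, 0.0))   # 0.0: max time == current time
```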
URL: From rlmackie862 at gmail.com Thu Jan 9 11:23:22 2025 From: rlmackie862 at gmail.com (Randall Mackie) Date: Thu, 9 Jan 2025 09:23:22 -0800 Subject: [petsc-users] How to pick up MPI implementation in PETSc Message-ID: <7F0A76D5-3BDF-4BEA-B8D4-D42F7DF802AE@gmail.com> Dear PETSc team: At the bottom of the configuration file, various things are printed out, like the MPI implementation: MPI: Version: 3 mpiexec: /state/std2/openmpi-5.0.3-oneapi/bin/mpiexec Implementation: openmpi OMPI_VERSION: 5.0.3 We would like to pick these up and write them to our own output files. What PETSc variables have this information? Especially the implementation. Thank you, Randy M. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Thu Jan 9 12:44:11 2025 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 9 Jan 2025 13:44:11 -0500 Subject: [petsc-users] How to pick up MPI implementation in PETSc In-Reply-To: <7F0A76D5-3BDF-4BEA-B8D4-D42F7DF802AE@gmail.com> References: <7F0A76D5-3BDF-4BEA-B8D4-D42F7DF802AE@gmail.com> Message-ID: I would probably add: 'grep Implementation: ${PETSC_DIR}/${PETSC_ARCH}/lib/petsc/conf/configure.log' to your run script. Or something that greps on MPI: and prints the next 4 lines. Would that work? Mark On Thu, Jan 9, 2025 at 12:23 PM Randall Mackie wrote: > Dear PETSc team: > > At the bottom of the configuration file, various things are printed out, > like the MPI implementation: > > MPI: > > Version: 3 > > mpiexec: /state/std2/openmpi-5.0.3-oneapi/bin/mpiexec > > Implementation: openmpi > > OMPI_VERSION: 5.0.3 > > > > We would like to pick these up and write them to our own output files. > > > What PETSc variables have this information? > > > Especially the implementation. > > > > Thank you, > > > Randy M. > -------------- next part -------------- An HTML attachment was scrubbed...
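A hypothetical Python equivalent of the grep Mark suggests: extract the indented "MPI:" block from configure.log text. The sample string below is made up to mirror the lines quoted in this thread; the real file lives at ${PETSC_DIR}/${PETSC_ARCH}/lib/petsc/conf/configure.log, and its exact layout may differ.

```python
sample = """\
MPI:
  Version:        3
  mpiexec: /state/std2/openmpi-5.0.3-oneapi/bin/mpiexec
  Implementation: openmpi
  OMPI_VERSION:   5.0.3
"""

def mpi_info(log_text):
    """Collect key/value pairs from the indented block following 'MPI:'."""
    info = {}
    in_block = False
    for line in log_text.splitlines():
        if line.strip() == "MPI:":
            in_block = True
            continue
        if in_block:
            if not line.startswith(" "):   # block ends at first unindented line
                break
            key, _, value = line.strip().partition(":")
            info[key] = value.strip()
    return info

print(mpi_info(sample)["Implementation"])   # openmpi
```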
URL: From rlmackie862 at gmail.com Thu Jan 9 12:58:31 2025 From: rlmackie862 at gmail.com (Randall Mackie) Date: Thu, 9 Jan 2025 10:58:31 -0800 Subject: [petsc-users] How to pick up MPI implementation in PETSc In-Reply-To: References: <7F0A76D5-3BDF-4BEA-B8D4-D42F7DF802AE@gmail.com> Message-ID: <93550984-96AA-424B-81FE-081C9D6FA86D@gmail.com> Thanks Mark - yes that works. Randy > On Jan 9, 2025, at 10:44 AM, Mark Adams wrote: > > I would probably add: 'grep Implementation: ${PETSC_DIR}/${PETSC_ARCH}/llib/petsc/conf/configure.log' > to your run script. > Or something that greps on MPI: and prints the next 4 lines. > > Would that work? > Mark > > On Thu, Jan 9, 2025 at 12:23 PM Randall Mackie > wrote: >> Dear PETSc team: >> >> At the bottom of the configuration file, various things are printed out, like the MPI implementation: >> >> MPI: >> Version: 3 >> mpiexec: /state/std2/openmpi-5.0.3-oneapi/bin/mpiexec >> Implementation: openmpi >> OMPI_VERSION: 5.0.3 >> >> We would like to pick these up and write them to our own output files. >> >> What PETSc variables have this information? >> >> Especially the implementation. >> >> >> Thank you, >> >> Randy M. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jan 9 13:15:20 2025 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 9 Jan 2025 14:15:20 -0500 Subject: [petsc-users] How to pick up MPI implementation in PETSc In-Reply-To: References: <7F0A76D5-3BDF-4BEA-B8D4-D42F7DF802AE@gmail.com> Message-ID: On Thu, Jan 9, 2025 at 1:44 PM Mark Adams wrote: > I would probably add: 'grep Implementation: > ${PETSC_DIR}/${PETSC_ARCH}/llib/petsc/conf/configure.log' > to your run script. > Or something that greps on MPI: and prints the next 4 lines. > Hi Randy, PETSc gets this info from ${PETSC_ARCH}/lib/petsc/conf/RDict.db which is a pickled Python dictionary of all the configure output. I wrote an example of pulling that data out, and _someone_ deleted it: https://gitlab.com/petsc/petsc/-/blob/v3.0.0/bin/configVars.py?ref_type=tags Thanks, Matt > Would that work?
> Mark > > On Thu, Jan 9, 2025 at 12:23?PM Randall Mackie > wrote: > >> Dear PETSc team: >> >> At the bottom of the configuration file, various things are printed out, >> like the MPI implementation: >> >> MPI: >> >> Version: 3 >> >> mpiexec: /state/std2/openmpi-5.0.3-oneapi/bin/mpiexec >> >> Implementation: openmpi >> >> OMPI_VERSION: 5.0.3 >> >> >> >> We would like to pick these up and write them to our own output files. >> >> >> What PETSc variables have this information? >> >> >> Especially the implementation. >> >> >> >> Thank you, >> >> >> Randy M. >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YHzfvx7POQpHGF3pPcyxIv7p4vYMjCf5LGpy_6AcyuuK8zKOIFBdFBi6ZTqOdqxqQiIA-tAS7HGoZeCnHwzx$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Thu Jan 9 13:26:50 2025 From: balay.anl at fastmail.org (Satish Balay) Date: Thu, 9 Jan 2025 13:26:50 -0600 (CST) Subject: [petsc-users] How to pick up MPI implementation in PETSc In-Reply-To: References: <7F0A76D5-3BDF-4BEA-B8D4-D42F7DF802AE@gmail.com> Message-ID: <3b338d09-0412-5226-0642-adb619701b25@fastmail.org> On Thu, 9 Jan 2025, Matthew Knepley wrote: > On Thu, Jan 9, 2025 at 1:44?PM Mark Adams wrote: > > > I would probably add: 'grep Implementation: > > ${PETSC_DIR}/${PETSC_ARCH}/llib/petsc/conf/configure.log' > > to your run script. > > Or something that greps on MPI: and prints the next 4 lines. > > > > Hi Randy, > > PETSc gets this info from > > ${PETSC_ARCH}/lib/petsc/conf/RDict.db > > which is a pickled Python dictionary of all the configure output. 
I wrote > an example > of pulling that data out, and _someone_ deleted it: > > > https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/v3.0.0/bin/configVars.py?ref_type=tags__;!!G_uCfscf7eWS!YHzfvx7POQpHGF3pPcyxIv7p4vYMjCf5LGpy_6AcyuuK8zKOIFBdFBi6ZTqOdqxqQiIA-tAS7HGoZaoqRNeq$ rdict might be broken due to changes for python-3.13 https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/7790__;!!G_uCfscf7eWS!c_-5Epc3IAK8_BPh93_6ptggEhLaET8BJuATxS4i93keUU9jR_OW7v_7lRCnwShL_NEe4FolruHscu0DiLNPvrt36aY$ Satish > > Thanks, > > Matt > > > > Would that work? > > Mark > > > > On Thu, Jan 9, 2025 at 12:23?PM Randall Mackie > > wrote: > > > >> Dear PETSc team: > >> > >> At the bottom of the configuration file, various things are printed out, > >> like the MPI implementation: > >> > >> MPI: > >> > >> Version: 3 > >> > >> mpiexec: /state/std2/openmpi-5.0.3-oneapi/bin/mpiexec > >> > >> Implementation: openmpi > >> > >> OMPI_VERSION: 5.0.3 > >> > >> > >> > >> We would like to pick these up and write them to our own output files. > >> > >> > >> What PETSc variables have this information? > >> > >> > >> Especially the implementation. > >> > >> > >> > >> Thank you, > >> > >> > >> Randy M. > >> > > > > From knepley at gmail.com Thu Jan 9 13:28:55 2025 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 9 Jan 2025 14:28:55 -0500 Subject: [petsc-users] How to pick up MPI implementation in PETSc In-Reply-To: <3b338d09-0412-5226-0642-adb619701b25@fastmail.org> References: <7F0A76D5-3BDF-4BEA-B8D4-D42F7DF802AE@gmail.com> <3b338d09-0412-5226-0642-adb619701b25@fastmail.org> Message-ID: On Thu, Jan 9, 2025 at 2:26?PM Satish Balay wrote: > On Thu, 9 Jan 2025, Matthew Knepley wrote: > > > On Thu, Jan 9, 2025 at 1:44?PM Mark Adams wrote: > > > > > I would probably add: 'grep Implementation: > > > ${PETSC_DIR}/${PETSC_ARCH}/llib/petsc/conf/configure.log' > > > to your run script. > > > Or something that greps on MPI: and prints the next 4 lines. 
> > > > > > > Hi Randy, > > > > PETSc gets this info from > > > > ${PETSC_ARCH}/lib/petsc/conf/RDict.db > > > > which is a pickled Python dictionary of all the configure output. I wrote > > an example > > of pulling that data out, and _someone_ deleted it: > > > > > > > https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/v3.0.0/bin/configVars.py?ref_type=tags__;!!G_uCfscf7eWS!YHzfvx7POQpHGF3pPcyxIv7p4vYMjCf5LGpy_6AcyuuK8zKOIFBdFBi6ZTqOdqxqQiIA-tAS7HGoZaoqRNeq$ > > > rdict might be broken due to changes for python-3.13 > https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/7790__;!!G_uCfscf7eWS!Zzxf5zgMg2pRxgrsOeSMwCQKTlQbgzDbbYpCJibNMnMkAJfww_66UBSQn0rv7ShPrpJG42hoXuHcfXHQRFsv$ Hopefully only busted on arches that need XDR. Python aesthetics stick again. Thanks, Matt > > Satish > > > > > Thanks, > > > > Matt > > > > > > > Would that work? > > > Mark > > > > > > On Thu, Jan 9, 2025 at 12:23?PM Randall Mackie > > > wrote: > > > > > >> Dear PETSc team: > > >> > > >> At the bottom of the configuration file, various things are printed > out, > > >> like the MPI implementation: > > >> > > >> MPI: > > >> > > >> Version: 3 > > >> > > >> mpiexec: /state/std2/openmpi-5.0.3-oneapi/bin/mpiexec > > >> > > >> Implementation: openmpi > > >> > > >> OMPI_VERSION: 5.0.3 > > >> > > >> > > >> > > >> We would like to pick these up and write them to our own output files. > > >> > > >> > > >> What PETSc variables have this information? > > >> > > >> > > >> Especially the implementation. > > >> > > >> > > >> > > >> Thank you, > > >> > > >> > > >> Randy M. > > >> > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Zzxf5zgMg2pRxgrsOeSMwCQKTlQbgzDbbYpCJibNMnMkAJfww_66UBSQn0rv7ShPrpJG42hoXuHcfRM3WGfE$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jan 9 14:05:54 2025 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 9 Jan 2025 15:05:54 -0500 Subject: [petsc-users] How to pick up MPI implementation in PETSc In-Reply-To: References: <7F0A76D5-3BDF-4BEA-B8D4-D42F7DF802AE@gmail.com> Message-ID: Matt, Don't pickle them up just json them up and they will be portable to all tools. Barry > On Jan 9, 2025, at 2:15?PM, Matthew Knepley wrote: > > On Thu, Jan 9, 2025 at 1:44?PM Mark Adams > wrote: >> I would probably add: 'grep Implementation: ${PETSC_DIR}/${PETSC_ARCH}/llib/petsc/conf/configure.log' >> to your run script. >> Or something that greps on MPI: and prints the next 4 lines. > > Hi Randy, > > PETSc gets this info from > > ${PETSC_ARCH}/lib/petsc/conf/RDict.db > > which is a pickled Python dictionary of all the configure output. I wrote an example > of pulling that data out, and _someone_ deleted it: > > https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/v3.0.0/bin/configVars.py?ref_type=tags__;!!G_uCfscf7eWS!b1k1pgJX4H1uOHy4TcFuL4oglueaBVrhpS6YZLQYhGVtTU70Ll3NTY9lm2x3WXvaRZDGg4Id3Aw2n3l_X6ZiKFo$ > > Thanks, > > Matt > >> Would that work? >> Mark >> >> On Thu, Jan 9, 2025 at 12:23?PM Randall Mackie > wrote: >>> Dear PETSc team: >>> >>> At the bottom of the configuration file, various things are printed out, like the MPI implementation: >>> >>> MPI: >>> Version: 3 >>> mpiexec: /state/std2/openmpi-5.0.3-oneapi/bin/mpiexec >>> Implementation: openmpi >>> OMPI_VERSION: 5.0.3 >>> >>> We would like to pick these up and write them to our own output files. >>> >>> What PETSc variables have this information? >>> >>> Especially the implementation. >>> >>> >>> Thank you, >>> >>> Randy M. 
> > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!b1k1pgJX4H1uOHy4TcFuL4oglueaBVrhpS6YZLQYhGVtTU70Ll3NTY9lm2x3WXvaRZDGg4Id3Aw2n3l_mlRHFe0$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jan 9 15:32:10 2025 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 9 Jan 2025 16:32:10 -0500 Subject: [petsc-users] How to pick up MPI implementation in PETSc In-Reply-To: References: <7F0A76D5-3BDF-4BEA-B8D4-D42F7DF802AE@gmail.com> Message-ID: On Thu, Jan 9, 2025 at 3:06?PM Barry Smith wrote: > > Matt, > > Don't pickle them up just json them up and they will be portable to > all tools. > Does Python have a JSON out for all objects? Thanks, Mat > Barry > > > On Jan 9, 2025, at 2:15?PM, Matthew Knepley wrote: > > On Thu, Jan 9, 2025 at 1:44?PM Mark Adams wrote: > >> I would probably add: 'grep Implementation: >> ${PETSC_DIR}/${PETSC_ARCH}/llib/petsc/conf/configure.log' >> to your run script. >> Or something that greps on MPI: and prints the next 4 lines. >> > > Hi Randy, > > PETSc gets this info from > > ${PETSC_ARCH}/lib/petsc/conf/RDict.db > > which is a pickled Python dictionary of all the configure output. I wrote > an example > of pulling that data out, and _someone_ deleted it: > > > https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/v3.0.0/bin/configVars.py?ref_type=tags__;!!G_uCfscf7eWS!e7hY7J6hbQd8pTXvi3W93uYZkk4uUudtP8x56TedYsjwq0UAInG3GEb0vy9q2KW576bEbzdvt9DwgwaX9idd$ > > > Thanks, > > Matt > > >> Would that work? 
>> Mark >> >> On Thu, Jan 9, 2025 at 12:23?PM Randall Mackie >> wrote: >> >>> Dear PETSc team: >>> >>> At the bottom of the configuration file, various things are printed out, >>> like the MPI implementation: >>> >>> MPI: >>> Version: 3 >>> mpiexec: /state/std2/openmpi-5.0.3-oneapi/bin/mpiexec >>> Implementation: openmpi >>> OMPI_VERSION: 5.0.3 >>> >>> >>> We would like to pick these up and write them to our own output files. >>> >>> What PETSc variables have this information? >>> >>> Especially the implementation. >>> >>> >>> Thank you, >>> >>> Randy M. >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!e7hY7J6hbQd8pTXvi3W93uYZkk4uUudtP8x56TedYsjwq0UAInG3GEb0vy9q2KW576bEbzdvt9Dwg4W1BK-J$ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!e7hY7J6hbQd8pTXvi3W93uYZkk4uUudtP8x56TedYsjwq0UAInG3GEb0vy9q2KW576bEbzdvt9Dwg4W1BK-J$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jan 9 15:56:07 2025 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 9 Jan 2025 16:56:07 -0500 Subject: [petsc-users] How to pick up MPI implementation in PETSc In-Reply-To: References: <7F0A76D5-3BDF-4BEA-B8D4-D42F7DF802AE@gmail.com> Message-ID: <01C04277-6F1C-40D2-82E4-5CC07227048A@petsc.dev> There is a python JSON package that accepts many things, so I think it depends on what particular object you want to save. Note that the JSON thing could be in addition to pickle and not save strange internal PETSc BuildSystem objects. 
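A rough sketch of the pickle-plus-JSON arrangement Barry describes, with made-up dictionary contents standing in for what unpickling RDict.db would actually give: keep pickle for full fidelity, and emit a JSON side file containing only the entries JSON can represent, dropping the "strange internal BuildSystem objects".

```python
import json
import pickle

# stand-in for the configure dictionary; `object()` mimics an
# internal BuildSystem object that JSON cannot serialize
config = {"MPI_IMPLEMENTATION": "openmpi",
          "OMPI_VERSION": "5.0.3",
          "internal": object()}

blob = pickle.dumps(config)        # what RDict.db-style storage does
loaded = pickle.loads(blob)

# build the portable JSON view, skipping non-serializable values
portable = {}
for key, value in loaded.items():
    try:
        json.dumps(value)          # keep only JSON-representable entries
        portable[key] = value
    except TypeError:
        pass                       # drop non-portable internals

print(json.dumps(portable, sort_keys=True))
```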
Barry > On Jan 9, 2025, at 4:32?PM, Matthew Knepley wrote: > > On Thu, Jan 9, 2025 at 3:06?PM Barry Smith > wrote: >> >> Matt, >> >> Don't pickle them up just json them up and they will be portable to all tools. > > Does Python have a JSON out for all objects? > > Thanks, > > Mat > >> Barry >> >> >>> On Jan 9, 2025, at 2:15?PM, Matthew Knepley > wrote: >>> >>> On Thu, Jan 9, 2025 at 1:44?PM Mark Adams > wrote: >>>> I would probably add: 'grep Implementation: ${PETSC_DIR}/${PETSC_ARCH}/llib/petsc/conf/configure.log' >>>> to your run script. >>>> Or something that greps on MPI: and prints the next 4 lines. >>> >>> Hi Randy, >>> >>> PETSc gets this info from >>> >>> ${PETSC_ARCH}/lib/petsc/conf/RDict.db >>> >>> which is a pickled Python dictionary of all the configure output. I wrote an example >>> of pulling that data out, and _someone_ deleted it: >>> >>> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/v3.0.0/bin/configVars.py?ref_type=tags__;!!G_uCfscf7eWS!fi0gF2MPDJUgBdXt9ScHTRZ1K4OaHjMjj-bel1fwKstSXS8fJ1eHShbIMQsYPGRi4AEw1XcJjh2B0G-Al1GO918$ >>> >>> Thanks, >>> >>> Matt >>> >>>> Would that work? >>>> Mark >>>> >>>> On Thu, Jan 9, 2025 at 12:23?PM Randall Mackie > wrote: >>>>> Dear PETSc team: >>>>> >>>>> At the bottom of the configuration file, various things are printed out, like the MPI implementation: >>>>> >>>>> MPI: >>>>> Version: 3 >>>>> mpiexec: /state/std2/openmpi-5.0.3-oneapi/bin/mpiexec >>>>> Implementation: openmpi >>>>> OMPI_VERSION: 5.0.3 >>>>> >>>>> We would like to pick these up and write them to our own output files. >>>>> >>>>> What PETSc variables have this information? >>>>> >>>>> Especially the implementation. >>>>> >>>>> >>>>> Thank you, >>>>> >>>>> Randy M. >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
>>> -- Norbert Wiener >>> >>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fi0gF2MPDJUgBdXt9ScHTRZ1K4OaHjMjj-bel1fwKstSXS8fJ1eHShbIMQsYPGRi4AEw1XcJjh2B0G-A-j1Vbqs$ >> > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fi0gF2MPDJUgBdXt9ScHTRZ1K4OaHjMjj-bel1fwKstSXS8fJ1eHShbIMQsYPGRi4AEw1XcJjh2B0G-A-j1Vbqs$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhavala at udel.edu Tue Jan 14 07:20:19 2025 From: dhavala at udel.edu (Venkata Narayana Sarma Dhavala) Date: Tue, 14 Jan 2025 18:50:19 +0530 Subject: [petsc-users] Question Regarding Duplicate Indices in VecSetValues(). Message-ID: Dear PETSc Development Team, I hope this message finds you well. I am currently working with the VecSetValues() function in PETSc, and I have a question regarding its behavior when there are duplicate indices in the input array. PetscErrorCode VecSetValues(Vec x, PetscInt ni, const PetscInt ix[], const PetscScalar y[], InsertMode iora) Specifically, what is the expected behavior if the ix[] array passed to VecSetValues() contains duplicate indices? Does the function overwrite the value at those indices, or does it handle duplicates differently depending on the insert mode (INSERT_VALUES or ADD_VALUES)? I would greatly appreciate any clarification on this matter, as it will help me better understand how to manage values when working with potentially repeated indices in the input. Thank you for your time and assistance. Best regards, Narayana Dhavala. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mfadams at lbl.gov Tue Jan 14 08:05:57 2025 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 14 Jan 2025 09:05:57 -0500 Subject: [petsc-users] Question Regarding Duplicate Indices in VecSetValues(). In-Reply-To: References: Message-ID: INSERT_VALUES is clearly an error, but we can't check that easily so you will probably just get the last value, and ADD_VALUES should work. Mark On Tue, Jan 14, 2025 at 8:20?AM Venkata Narayana Sarma Dhavala < dhavala at udel.edu> wrote: > Dear PETSc Development Team, > > I hope this message finds you well. I am currently working with the > VecSetValues() function in PETSc, and I have a question regarding its > behavior when there are duplicate indices in the input array. > > PetscErrorCode VecSetValues(Vec x, PetscInt ni, const PetscInt ix[], const > PetscScalar y[], InsertMode iora) > > Specifically, what is the expected behavior if the ix[] array passed to > VecSetValues() contains duplicate indices? Does the function overwrite the > value at those indices, or does it handle duplicates differently depending > on the insert mode (INSERT_VALUES or ADD_VALUES)? > > I would greatly appreciate any clarification on this matter, as it will > help me better understand how to manage values when working with > potentially repeated indices in the input. > > Thank you for your time and assistance. > > Best regards, > > Narayana Dhavala. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jan 14 08:31:39 2025 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 14 Jan 2025 09:31:39 -0500 Subject: [petsc-users] Question Regarding Duplicate Indices in VecSetValues(). In-Reply-To: References: Message-ID: On Tue, Jan 14, 2025 at 9:06?AM Mark Adams wrote: > INSERT_VALUES is clearly an error, but we can't check that easily so you > will probably just get the last value, and ADD_VALUES should work. > For INSERT_VALUES, it will overwrite with the last value. 
For ADD_VALUES, it will add. Thanks, Matt > Mark > > On Tue, Jan 14, 2025 at 8:20 AM Venkata Narayana Sarma Dhavala < > dhavala at udel.edu> wrote: > >> Dear PETSc Development Team, >> >> I hope this message finds you well. I am currently working with the >> VecSetValues() function in PETSc, and I have a question regarding its >> behavior when there are duplicate indices in the input array. >> >> PetscErrorCode VecSetValues(Vec x, PetscInt ni, const PetscInt ix[], >> const PetscScalar y[], InsertMode iora) >> >> Specifically, what is the expected behavior if the ix[] array passed to >> VecSetValues() contains duplicate indices? Does the function overwrite the >> value at those indices, or does it handle duplicates differently depending >> on the insert mode (INSERT_VALUES or ADD_VALUES)? >> >> I would greatly appreciate any clarification on this matter, as it will >> help me better understand how to manage values when working with >> potentially repeated indices in the input. >> >> Thank you for your time and assistance. >> >> Best regards, >> >> Narayana Dhavala. >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZrWqm3ouVJY_jjPcpNhmzIsTmWA-OY60MzKAlKcDJ5o_jUPAhxhaQ3iueKObVE-ZlFN2Au1qH1z2PtGbINMn$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Tue Jan 14 09:10:56 2025 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Tue, 14 Jan 2025 09:10:56 -0600 Subject: [petsc-users] Question Regarding Duplicate Indices in VecSetValues(). In-Reply-To: References: Message-ID: As Matt mentioned, with ADD_VALUES, the values will be added.
But with INSERT_VALUES, if it is a sequential vector (VECSEQ), it will overwrite with the last value; but if it is an MPI parallel vector (VECMPI), it is undetermined, as you could not even define "which is the last" (assuming duplicate indices over processes) On Tue, Jan 14, 2025 at 8:32?AM Matthew Knepley wrote: > On Tue, Jan 14, 2025 at 9:06?AM Mark Adams wrote: > >> INSERT_VALUES is clearly an error, but we can't check that easily so you >> will probably just get the last value, and ADD_VALUES should work. >> > > For INSERT_VALUES, it will overwrite with the last value. For ADD_VALUES, > it will add. > > Thanks, > > Matt > > >> Mark >> >> On Tue, Jan 14, 2025 at 8:20?AM Venkata Narayana Sarma Dhavala < >> dhavala at udel.edu> wrote: >> >>> Dear PETSc Development Team, >>> >>> I hope this message finds you well. I am currently working with the >>> VecSetValues() function in PETSc, and I have a question regarding its >>> behavior when there are duplicate indices in the input array. >>> >>> PetscErrorCode VecSetValues(Vec x, PetscInt ni, const PetscInt ix[], >>> const PetscScalar y[], InsertMode iora) >>> >>> Specifically, what is the expected behavior if the ix[] array passed to >>> VecSetValues() contains duplicate indices? Does the function overwrite the >>> value at those indices, or does it handle duplicates differently depending >>> on the insert mode (INSERT_VALUES or ADD_VALUES)? >>> >>> I would greatly appreciate any clarification on this matter, as it will >>> help me better understand how to manage values when working with >>> potentially repeated indices in the input. >>> >>> Thank you for your time and assistance. >>> >>> Best regards, >>> >>> Narayana Dhavala. >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
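The semantics described in these replies can be modeled in a few lines (plain Python, not PETSc; as noted above, with INSERT_VALUES on an MPI vector the "last" writer is not even well defined across processes):

```python
# Toy model (plain Python, not PETSc) of VecSetValues semantics with
# duplicate indices: ADD_VALUES accumulates every contribution, while
# INSERT_VALUES keeps whichever value is written last.

def set_values(vec, ix, y, mode):
    for i, v in zip(ix, y):
        if mode == "INSERT_VALUES":
            vec[i] = v        # later duplicates overwrite earlier ones
        elif mode == "ADD_VALUES":
            vec[i] += v       # duplicates accumulate

vec = [0.0, 0.0, 0.0]
set_values(vec, [1, 1], [2.0, 3.0], "ADD_VALUES")      # vec[1] -> 5.0
set_values(vec, [2, 2], [2.0, 3.0], "INSERT_VALUES")   # vec[2] -> 3.0
print(vec)  # [0.0, 5.0, 3.0]
```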
> -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z95-0RaOBkv4OPWEhDVpjsxVHdbBYAaeEveINBQxauPWYvZFu10IaTecVDMfntqt_aYIsy095Bdr3LGU8PE44g8AsyS3$ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From baagaard at usgs.gov Tue Jan 14 09:47:25 2025 From: baagaard at usgs.gov (Aagaard, Brad T) Date: Tue, 14 Jan 2025 15:47:25 +0000 Subject: [petsc-users] configure: Include urlopen exception string in download failure message Message-ID: The current implementation of Retrival.tarballRetrieve() does not capture the error string from an exception raised in a call to urlopen(). This makes it difficult to diagnose download failures. Can you add the error string from the exception to the download failure message (maybe as an optional argument to getDownloadFailureMessage())? This is at lines 205-208 of config/BuildSystem/retrieval.py. Thanks, Brad -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Tue Jan 14 09:51:18 2025 From: balay.anl at fastmail.org (Satish Balay) Date: Tue, 14 Jan 2025 09:51:18 -0600 (CST) Subject: [petsc-users] configure: Include urlopen exception string in download failure message In-Reply-To: References: Message-ID: Last I visited this issue - its a python version issue. i.e newer versions [3.8+? 3.10+?] don't suppress this message - so it comes up in configure printed message. Perhaps you can verify if this true for the issue you are seeing. Satish On Tue, 14 Jan 2025, Aagaard, Brad T via petsc-users wrote: > The current implementation of Retrival.tarballRetrieve() does not capture the error string from an exception raised in a call to urlopen(). This makes it difficult to diagnose download failures. Can you add the error string from the exception to the download failure message (maybe as an optional argument to getDownloadFailureMessage())? This is at lines 205-208 of config/BuildSystem/retrieval.py. 
> > Thanks, > Brad > > From baagaard at usgs.gov Tue Jan 14 10:09:50 2025 From: baagaard at usgs.gov (Aagaard, Brad T) Date: Tue, 14 Jan 2025 16:09:50 +0000 Subject: [petsc-users] configure: Include urlopen exception string in download failure message In-Reply-To: References: Message-ID: I am using Python 3.12. The exception (SSL certificate issue) error string is not showing up in stdout (below) or the configure.log (attached). Relying on the error string to just show up seems fragile. ============================================================================================= Trying to download https://web.cels.anl.gov/projects/petsc/download/externalpackages/f2cblaslapack-3.8.0.q2.tar.gz for F2CBLASLAPACK ============================================================================================= ********************************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): --------------------------------------------------------------------------------------------- Error during download/extract/detection of F2CBLASLAPACK: Unable to download package F2CBLASLAPACK from: https://web.cels.anl.gov/projects/petsc/download/externalpackages/f2cblaslapack-3.8.0.q2.tar.gz * If URL specified manually - perhaps there is a typo? 
* If your network is disconnected - please reconnect and rerun ./configure * Or perhaps you have a firewall blocking the download * You can run with --with-packages-download-dir=/adirectory and ./configure will instruct you what packages to download manually * or you can download the above URL manually, to /yourselectedlocation/f2cblaslapack-3.8.0.q2.tar.gz and use the configure option: --download-f2cblaslapack=/yourselectedlocation/f2cblaslapack-3.8.0.q2.tar.gz ********************************************************************************************* From: Satish Balay Date: Tuesday, January 14, 2025 at 8:51 AM To: Aagaard, Brad T Cc: petsc-users at mcs.anl.gov Subject: [EXTERNAL] Re: [petsc-users] configure: Include urlopen exception string in download failure message This email has been received from outside of DOI - Use caution before clicking on links, opening attachments, or responding. Last I visited this issue - its a python version issue. i.e newer versions [3.8+? 3.10+?] don't suppress this message - so it comes up in configure printed message. Perhaps you can verify if this true for the issue you are seeing. Satish On Tue, 14 Jan 2025, Aagaard, Brad T via petsc-users wrote: > The current implementation of Retrival.tarballRetrieve() does not capture the error string from an exception raised in a call to urlopen(). This makes it difficult to diagnose download failures. Can you add the error string from the exception to the download failure message (maybe as an optional argument to getDownloadFailureMessage())? This is at lines 205-208 of config/BuildSystem/retrieval.py. > > Thanks, > Brad > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
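The change Brad is asking for can be sketched as follows (a minimal sketch with hypothetical helper names, not PETSc's actual BuildSystem code): capture the `urlopen` exception and fold its text into the user-facing failure message rather than discarding it.

```python
import urllib.error
import urllib.request

def download_failure_message(url, exc):
    # Hypothetical helper (not the real BuildSystem API): include the
    # urlopen exception text, e.g. an SSL certificate failure, in the
    # message shown to the user.
    reason = getattr(exc, "reason", exc)
    return (f"Unable to download package from: {url}\n"
            f"  urlopen error: {reason}")

def fetch(url, timeout=30):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except urllib.error.URLError as exc:
        raise RuntimeError(download_failure_message(url, exc)) from exc
```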
Name: configure.log.gz Type: application/x-gzip Size: 244681 bytes Desc: configure.log.gz URL: From mmolinos at us.es Wed Jan 15 06:08:13 2025 From: mmolinos at us.es (MIGUEL MOLINOS PEREZ) Date: Wed, 15 Jan 2025 12:08:13 +0000 Subject: [petsc-users] Update DMDA attached to DMSWARM Message-ID: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> Dear all, It seems that we can not modify the coordinates of the background mesh attached to a DMSwarm simulation. This is a feature? Best, Miguel From mfadams at lbl.gov Wed Jan 15 09:08:39 2025 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Jan 2025 10:08:39 -0500 Subject: [petsc-users] Update DMDA attached to DMSWARM In-Reply-To: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> Message-ID: Are you getting the "CellDM" from the DMSwarm? Mark On Wed, Jan 15, 2025 at 7:09?AM MIGUEL MOLINOS PEREZ wrote: > Dear all, > > It seems that we can not modify the coordinates of the background mesh > attached to a DMSwarm simulation. This is a feature? > > Best, > Miguel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jan 15 09:15:17 2025 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Jan 2025 10:15:17 -0500 Subject: [petsc-users] Update DMDA attached to DMSWARM In-Reply-To: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> Message-ID: On Wed, Jan 15, 2025 at 7:09?AM MIGUEL MOLINOS PEREZ wrote: > Dear all, > > It seems that we can not modify the coordinates of the background mesh > attached to a DMSwarm simulation. This is a feature? > 1. I have recently overhauled this functionality because it was not flexible enough for the plasma simulation we do. Thus main and release work differently. 2. I assume you are using release 3. In both main and release, if you change the coordinates of your CellDM mesh, you need to rebin the particles. 
The easiest way to do this is to call DMSwarmMigrate(sw, PETSC_FALSE). Thanks, Matt > Best, > Miguel > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cdgp5NQ-b9QT06ZmtiTG_RF4aHEh-BbhF98ki0OxlmokLpj-BhS7z-T2SPNseiak-j0ONZu1GltJZO82s0KR$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmolinos at us.es Wed Jan 15 09:41:46 2025 From: mmolinos at us.es (MIGUEL MOLINOS PEREZ) Date: Wed, 15 Jan 2025 15:41:46 +0000 Subject: [petsc-users] Update DMDA attached to DMSWARM In-Reply-To: References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> Message-ID: Thank you Matt. Yes, I am getting the "CellDM" from the DMSwarm. 1. I have recently overhauled this functionality because it was not flexible enough for the plasma simulation we do. Thus main and release work differently. Nice to hear that. Should I move to main? 2. I assume you are using release You are correct. 3. In both main and release, if you change the coordinates of your CellDM mesh, you need to rebin the particles. The easiest way to do this is to call DMSwarmMigrate(sw, PETSC_FALSE). What do you mean by rebin? Miguel Thanks, Matt Best, Miguel -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eUTInZmcoFy9twTFZpjoHI_oclLg6kxFkxESVFuz7ksTiA5wLrEyRiDRDm6z991kmoCDYegFCzfkcXOWBI6zXw$ -------------- next part -------------- An HTML attachment was scrubbed... 
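The "rebinning" in step 3 can be pictured with a toy 1-D model (plain Python, not the DMSwarm API; in PETSc the particle-to-cell map is rebuilt for you when DMSwarmMigrate is called):

```python
import bisect

def bin_particles(coords, cell_edges):
    """Map each particle coordinate to the index of the mesh cell containing it."""
    last_cell = len(cell_edges) - 2
    return [min(max(bisect.bisect_right(cell_edges, x) - 1, 0), last_cell)
            for x in coords]

edges = [0.0, 0.25, 0.5, 0.75, 1.0]       # 4 cells
particles = [0.1, 0.3, 0.6, 0.9]
before = bin_particles(particles, edges)   # [0, 1, 2, 3]

moved = [e ** 2 for e in edges]            # mesh coordinates change...
after = bin_particles(particles, moved)    # ...so the map must be rebuilt: [1, 2, 3, 3]
```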
URL: From mmolinos at us.es Wed Jan 15 09:42:29 2025 From: mmolinos at us.es (MIGUEL MOLINOS PEREZ) Date: Wed, 15 Jan 2025 15:42:29 +0000 Subject: [petsc-users] Update DMDA attached to DMSWARM In-Reply-To: References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> Message-ID: <7710B837-542F-4B10-A8B2-A304914621DE@us.es> Yes, I am getting the "CellDM" from the DMSwarm. Miguel On 15 Jan 2025, at 16:08, Mark Adams wrote: Are you getting the "CellDM" from the DMSwarm? Mark On Wed, Jan 15, 2025 at 7:09?AM MIGUEL MOLINOS PEREZ > wrote: Dear all, It seems that we can not modify the coordinates of the background mesh attached to a DMSwarm simulation. This is a feature? Best, Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jan 15 09:48:52 2025 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Jan 2025 10:48:52 -0500 Subject: [petsc-users] Update DMDA attached to DMSWARM In-Reply-To: References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> Message-ID: On Wed, Jan 15, 2025 at 10:41?AM MIGUEL MOLINOS PEREZ wrote: > Thank you Matt. > > Yes, I am getting the "CellDM" from the DMSwarm. > > 1. I have recently overhauled this functionality because it was not > flexible enough for the plasma simulation we do. Thus main and release work > differently. > > > Nice to hear that. Should I move to main? > The changes allow you to have several cell DMs. I want to bin particles in space, but also in velocity, and then in the tensor product of space and velocity. Moreover, sometimes I want to use different Swarm fields as the DM field for the solver. You can do all that with main now. If you just need a single DM with the same DM fields, release is fine. > 2. I assume you are using release > > > You are correct. > > 3. In both main and release, if you change the coordinates of your CellDM > mesh, you need to rebin the particles. The easiest way to do this is to > call DMSwarmMigrate(sw, PETSC_FALSE). 
> > What do you mean by rebin? > When you provide the cell DM, Swarm makes a "sort context" that bins the particles into DM cells. If you change the coordinates, this binning will change, so you need to "rebin", that is, recreate the sort context. Thanks, Matt > Miguel > > > Thanks, > > Matt > > >> Best, >> Miguel >> > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cs7VQs2Vvv3qqKf-bK_QjlnDN84TOjBUPDxytpXpB_UrrHMrPPxBGKpOPYbAimaxa64C4W_zvOSRIFCoE9Uo$ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cs7VQs2Vvv3qqKf-bK_QjlnDN84TOjBUPDxytpXpB_UrrHMrPPxBGKpOPYbAimaxa64C4W_zvOSRIFCoE9Uo$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmolinos at us.es Wed Jan 15 09:56:13 2025 From: mmolinos at us.es (MIGUEL MOLINOS PEREZ) Date: Wed, 15 Jan 2025 15:56:13 +0000 Subject: [petsc-users] Update DMDA attached to DMSWARM In-Reply-To: References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> Message-ID: <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es> Thank you Matt for the useful info. I'll try your idea. Miguel On 15 Jan 2025, at 16:48, Matthew Knepley wrote: On Wed, Jan 15, 2025 at 10:41 AM MIGUEL MOLINOS PEREZ > wrote: Thank you Matt. Yes, I am getting the "CellDM" from the DMSwarm. 1. I have recently overhauled this functionality because it was not flexible enough for the plasma simulation we do. Thus main and release work differently. Nice to hear that. Should I move to main? The changes allow you to have several cell DMs.
I want to bin particles in space, but also in velocity, and then in the tensor product of space and velocity. Moreover, sometimes I want to use different Swarm fields as the DM field for the solver. You can do all that with main now. If you just need a single DM with the same DM fields, release is fine. 2. I assume you are using release You are correct. 3. In both main and release, if you change the coordinates of your CellDM mesh, you need to rebin the particles. The easiest way to do this is to call DMSwarmMigrate(sw, PETSC_FALSE). What do you mean by rebin? When you provide the cell DM, Swrm makes a "sort context" that bins the particles into DM cells. If you change the coordinates, this binning will change, so you need it to "rebin" or recreate the sort context. Thanks, Matt Miguel Thanks, Matt Best, Miguel -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ftMLihgZiKVFAR-BRnPC50J7_Albtps8mwCGwL9BAp2zqgrP8u4UmHO3nzVtXrHq-OZuegO2vaFRFL2XAB3JcA$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ftMLihgZiKVFAR-BRnPC50J7_Albtps8mwCGwL9BAp2zqgrP8u4UmHO3nzVtXrHq-OZuegO2vaFRFL2XAB3JcA$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From donaldrexplanalpjr at outlook.com Thu Jan 16 10:00:21 2025 From: donaldrexplanalpjr at outlook.com (Donald Planalp) Date: Thu, 16 Jan 2025 16:00:21 +0000 Subject: [petsc-users] KSP with large sparse kronecker product Message-ID: Hello, I am inquiring to see what the best approach for solving a problem I am encountering in my quantum mechanics research. 
For some context, the structure of my problem is solving a linear system Ax=b where b=By in parallel. In this case A and B can be written as the sum of 4-5 kronecker products. Specifically, A and B are formed by the same terms but with a few flipped signs in the sum, and some of the scalings are time dependent so currently the sum is performed at each step. The issue I'm having is balancing GMRES solving speed versus memory usage. The sparse structure of my matrices is such that A and B (after summing) is equivalent in structure to a kronecker product between a matrix of size 1000x1000 with 5 nonzeros per row (concentrated along diagonal), and a matrix of 2000x2000 with 13 nonzeros per row (banded along diagonal). In this case, the memory usage for explicitly storing the kronecker product can be quite large. This seems inefficient since the kronecker product contains a lot of repeated information. Further, adding the matrices together at each step due to the scaling of time dependence as well as dimensionality makes the solver quickly become much slower. I've looked into various matrix types. MATMPIKAIJ seems close, but I would require more terms, and all of my matrices are sparse not some of them. I've also looked into MATCOMPOSITE to avoid explicit sums at least, however the solver time actually becomes slower. I was just curious if there is a better way of handling this using petsc that can achieve the best of both worlds, low memory usage and avoiding explicit kronecker product evaluation while keeping similar speeds of matrix vector products for GMRES. Thank you for your time. -------------- next part -------------- An HTML attachment was scrubbed... 
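For the sizes quoted above, the memory argument is easy to make concrete: the nonzero count of a Kronecker product is the product of the factors' nonzero counts (the byte figures below assume roughly one scalar plus one 32-bit column index per stored nonzero, AIJ-style; the exact overhead differs):

```python
nnz_A = 1000 * 5        # 1000 x 1000, ~5 nonzeros per row
nnz_B = 2000 * 13       # 2000 x 2000, ~13 nonzeros per row

nnz_kron = nnz_A * nnz_B                 # nnz(A kron B) = nnz(A) * nnz(B)
bytes_per_nnz = 8 + 4                    # double + 32-bit column index (rough)
explicit_gb = nnz_kron * bytes_per_nnz / 1e9
factors_mb = (nnz_A + nnz_B) * bytes_per_nnz / 1e6

print(nnz_kron, round(explicit_gb, 2), round(factors_mb, 2))
# 130,000,000 nonzeros: ~1.56 GB assembled vs ~0.37 MB for the two factors
```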
URL: From knepley at gmail.com Thu Jan 16 11:15:31 2025 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 16 Jan 2025 12:15:31 -0500 Subject: [petsc-users] configure: Include urlopen exception string in download failure message In-Reply-To: References: Message-ID: I have it now in https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/8084__;!!G_uCfscf7eWS!cKrZN0dJCyXrvXadbuLpdlFXwX7jY7azDUF16w935Yv2owAqaOnimvOe-1N7eQJERKCYiStKgwzXrG-B_wbg$ Thanks, Matt On Tue, Jan 14, 2025 at 11:10?AM Aagaard, Brad T via petsc-users < petsc-users at mcs.anl.gov> wrote: > I am using Python 3.12. The exception (SSL certificate issue) error string > is not showing up in stdout (below) or the configure.log (attached). > Relying on the error string to just show up seems fragile. > > > > > > > ============================================================================================= > > Trying to download > > > https://web.cels.anl.gov/projects/petsc/download/externalpackages/f2cblaslapack-3.8.0.q2.tar.gz > > for F2CBLASLAPACK > > > ============================================================================================= > > > > > ********************************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > > > --------------------------------------------------------------------------------------------- > > Error during download/extract/detection of F2CBLASLAPACK: > > Unable to download package F2CBLASLAPACK from: > > > https://web.cels.anl.gov/projects/petsc/download/externalpackages/f2cblaslapack-3.8.0.q2.tar.gz > > * If URL specified manually - perhaps there is a typo? 
> > * If your network is disconnected - please reconnect and rerun > ./configure > > * Or perhaps you have a firewall blocking the download > > * You can run with --with-packages-download-dir=/adirectory and > ./configure will instruct > > you what packages to download manually > > * or you can download the above URL manually, to > > /yourselectedlocation/f2cblaslapack-3.8.0.q2.tar.gz > > and use the configure option: > > > --download-f2cblaslapack=/yourselectedlocation/f2cblaslapack-3.8.0.q2.tar.gz > > > ********************************************************************************************* > > > > > > > > > > *From: *Satish Balay > *Date: *Tuesday, January 14, 2025 at 8:51 AM > *To: *Aagaard, Brad T > *Cc: *petsc-users at mcs.anl.gov > *Subject: *[EXTERNAL] Re: [petsc-users] configure: Include urlopen > exception string in download failure message > > > > This email has been received from outside of DOI - Use caution before > clicking on links, opening attachments, or responding. > > > > Last I visited this issue - its a python version issue. i.e newer versions > [3.8+? 3.10+?] don't suppress this message - so it comes up in configure > printed message. > > Perhaps you can verify if this true for the issue you are seeing. > > Satish > > On Tue, 14 Jan 2025, Aagaard, Brad T via petsc-users wrote: > > > The current implementation of Retrival.tarballRetrieve() does not > capture the error string from an exception raised in a call to urlopen(). > This makes it difficult to diagnose download failures. Can you add the > error string from the exception to the download failure message (maybe as > an optional argument to getDownloadFailureMessage())? This is at lines > 205-208 of config/BuildSystem/retrieval.py. > > > > Thanks, > > Brad > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cKrZN0dJCyXrvXadbuLpdlFXwX7jY7azDUF16w935Yv2owAqaOnimvOe-1N7eQJERKCYiStKgwzXrM0RqqSx$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jan 16 17:25:40 2025 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 16 Jan 2025 18:25:40 -0500 Subject: [petsc-users] KSP with large sparse kronecker product In-Reply-To: References: Message-ID: I don't think we have good code for this case. But it is a good case and we should definitely provide support so it would be great to talk about. Possibly start with the name :-) MATSKAIJ :-) Barry > On Jan 16, 2025, at 11:00?AM, Donald Planalp wrote: > > > Hello, > > I am inquiring to see what the best approach for solving a problem I am encountering in my quantum mechanics research. > > For some context, the structure of my problem is solving a linear system Ax=b where b=By in parallel. In this case A and B can be written as the sum of 4-5 kronecker products. Specifically, A and B are formed by the same terms but with a few flipped signs in the sum, and some of the scalings are time dependent so currently the sum is performed at each step. > > The issue I'm having is balancing GMRES solving speed versus memory usage. The sparse structure of my matrices is such that A and B (after summing) is equivalent in structure to a kronecker product between a matrix of size 1000x1000 with 5 nonzeros per row (concentrated along diagonal), and a matrix of 2000x2000 with 13 nonzeros per row (banded along diagonal). > > In this case, the memory usage for explicitly storing the kronecker product can be quite large. This seems inefficient since the kronecker product contains a lot of repeated information. Further, adding the matrices together at each step due to the scaling of time dependence as well as dimensionality makes the solver quickly become much slower. 
> > I've looked into various matrix types. MATMPIKAIJ seems close, but I would require more terms, and all of my matrices are sparse not some of them. I've also looked into MATCOMPOSITE to avoid explicit sums at least, however the solver time actually becomes slower. > > I was just curious if there is a better way of handling this using petsc that can achieve the best of both worlds, low memory usage and avoiding explicit kronecker product evaluation while keeping similar speeds of matrix vector products for GMRES. > > Thank you for your time. > > From knepley at gmail.com Thu Jan 16 17:50:10 2025 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 16 Jan 2025 18:50:10 -0500 Subject: [petsc-users] KSP with large sparse kronecker product In-Reply-To: References: Message-ID: On Thu, Jan 16, 2025 at 6:26?PM Barry Smith wrote: > > I don't think we have good code for this case. But it is a good case > and we should definitely provide support so it would be great to talk > about. > > Possibly start with the name :-) MATSKAIJ :-) > Are you using any preconditioner? If not, you could just use a MATSHELL, and compute the action of your Kronecker matrix. Thanks, Matt > Barry > > > > On Jan 16, 2025, at 11:00?AM, Donald Planalp < > donaldrexplanalpjr at outlook.com> wrote: > > > > > > Hello, > > > > I am inquiring to see what the best approach for solving a problem I am > encountering in my quantum mechanics research. > > > > For some context, the structure of my problem is solving a linear system > Ax=b where b=By in parallel. In this case A and B can be written as the sum > of 4-5 kronecker products. Specifically, A and B are formed by the same > terms but with a few flipped signs in the sum, and some of the scalings are > time dependent so currently the sum is performed at each step. > > > > The issue I'm having is balancing GMRES solving speed versus memory > usage. 
The sparse structure of my matrices is such that A and B (after > summing) is equivalent in structure to a kronecker product between a matrix > of size 1000x1000 with 5 nonzeros per row (concentrated along diagonal), > and a matrix of 2000x2000 with 13 nonzeros per row (banded along diagonal). > > > > In this case, the memory usage for explicitly storing the kronecker > product can be quite large. This seems inefficient since the kronecker > product contains a lot of repeated information. Further, adding the > matrices together at each step due to the scaling of time dependence as > well as dimensionality makes the solver quickly become much slower. > > > > I've looked into various matrix types. MATMPIKAIJ seems close, but I > would require more terms, and all of my matrices are sparse not some of > them. I've also looked into MATCOMPOSITE to avoid explicit sums at least, > however the solver time actually becomes slower. > > > > I was just curious if there is a better way of handling this using petsc > that can achieve the best of both worlds, low memory usage and avoiding > explicit kronecker product evaluation while keeping similar speeds of > matrix vector products for GMRES. > > > > Thank you for your time. > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YFNC8KWLCPgn6pqKdbfqPr-QioeqYFGrRkTUQJh5BIA8qMaVG9chS49OzvBQRTw12XJfzE_FJIa365FfOxig$ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jed at jedbrown.org Thu Jan 16 18:10:54 2025 From: jed at jedbrown.org (Jed Brown) Date: Thu, 16 Jan 2025 17:10:54 -0700 Subject: [petsc-users] KSP with large sparse kronecker product In-Reply-To: References: Message-ID: <87msfqwb8x.fsf@jedbrown.org> Matthew Knepley writes: > On Thu, Jan 16, 2025 at 6:26?PM Barry Smith wrote: > >> >> I don't think we have good code for this case. But it is a good case >> and we should definitely provide support so it would be great to talk >> about. >> >> Possibly start with the name :-) MATSKAIJ :-) >> > > Are you using any preconditioner? If not, you could just use a MATSHELL, > and compute the action of your Kronecker matrix. Also note that if each panel of the Kronecker product is small enough to solve an eigenproblem, you may want to use fast diagonalization to compute an exact inverse. The eigenproblem will have a dense solution (even though the matrix is sparse), but a few dense eigenproblems of size a hundred or thousand could be faster than solving iteratively with systems in the millions. From donaldrexplanalpjr at outlook.com Thu Jan 16 19:33:55 2025 From: donaldrexplanalpjr at outlook.com (Donald Planalp) Date: Fri, 17 Jan 2025 01:33:55 +0000 Subject: [petsc-users] KSP with large sparse kronecker product In-Reply-To: <87msfqwb8x.fsf@jedbrown.org> References: <87msfqwb8x.fsf@jedbrown.org> Message-ID: Hello, Currently I am using the block Jacobi preconditioner, and it's the only one which seems to give fast convergence for this problem so far. I actually did implement a matshell which redundantly stored each matrix on each rank (which was far less memory than explicitly storing the Kronecker product) and computed the action of the Kronecker product in a semi-matrix-free way, however without the preconditioner the solver time was far too slow. Unfortunately, I forgot to make a backup of that matshell otherwise I would provide the code for it. 
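The fast-diagonalization idea can be made concrete for a hypothetical operator of the form S = A kron I + I kron B with symmetric positive-definite factors (whether the application's actual sum of terms admits this structure is an assumption here): only the small dense eigenproblems are solved, and S itself is never inverted or even assembled for the solve.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 5
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD factor
B = rng.standard_normal((m, m)); B = B @ B.T + m * np.eye(m)   # SPD factor

lam, V = np.linalg.eigh(A)               # A = V diag(lam) V^T
mu, W = np.linalg.eigh(B)                # B = W diag(mu) W^T
D = lam[:, None] + mu[None, :]           # eigenvalues of S, shape (n, m)

def solve_S(b):
    """Return S^{-1} b using only the factor eigendecompositions."""
    Bm = b.reshape(n, m)
    Y = V.T @ Bm @ W                     # transform to the eigenbasis
    return (V @ (Y / D) @ W.T).reshape(n * m)

# cross-check against the explicitly assembled operator
S = np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)
x = rng.standard_normal(n * m)
assert np.allclose(S @ solve_S(x), x)
```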
As far as the inverse goes, do you mean inverting the small matrices and then embedding them in the larger Kronecker structure such that we have the inverse matrix of the linear system? I'm a bit confused because wouldn't the inversion of the entire large matrix not only break the sparsity pattern inside the blocks, but also the block sparsity? I imagine this would also still use quite a bit of memory. However, if I am mistaken, I apologize. I appreciate the quick replies, Rex Planalp ________________________________ From: Jed Brown Sent: Friday, January 17, 2025 12:10 AM To: Matthew Knepley ; Barry Smith Cc: petsc-users at mcs.anl.gov ; Donald Planalp Subject: Re: [petsc-users] KSP with large sparse kronecker product Matthew Knepley writes: > On Thu, Jan 16, 2025 at 6:26 PM Barry Smith wrote: > >> >> I don't think we have good code for this case. But it is a good case >> and we should definitely provide support so it would be great to talk >> about. >> >> Possibly start with the name :-) MATSKAIJ :-) >> > > Are you using any preconditioner? If not, you could just use a MATSHELL, > and compute the action of your Kronecker matrix. Also note that if each panel of the Kronecker product is small enough to solve an eigenproblem, you may want to use fast diagonalization to compute an exact inverse. The eigenproblem will have a dense solution (even though the matrix is sparse), but a few dense eigenproblems of size a hundred or thousand could be faster than solving iteratively with systems in the millions. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lzou at anl.gov Thu Jan 16 20:50:34 2025 From: lzou at anl.gov (Zou, Ling) Date: Fri, 17 Jan 2025 02:50:34 +0000 Subject: [petsc-users] Auto sparsity detection? Message-ID: Hi all, Does PETSc have some automatic matrix sparsity detection algorithm available?
Something like:
https://docs.sciml.ai/NonlinearSolve/stable/basics/sparsity_detection/

The background is that I use finite differencing plus matrix coloring to (efficiently) get the Jacobian. For the matrix coloring part, I color the matrix based on mesh connectivity and variable dependencies, which is not bad, but I am just trying to be lazy and eliminate even this part.

A related but different question: how much does PETSc support automatic differentiation? I see some old paper:

https://ftp.mcs.anl.gov/pub/tech_reports/reports/P922.pdf

and discussion in the roadmap:

https://petsc.org/release/community/roadmap/

I am thinking that if AD works, I don't even need to do the finite differencing Jacobian, or could have it as another option.

Best,

-Ling

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com Thu Jan 16 21:01:26 2025
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 16 Jan 2025 22:01:26 -0500
Subject: [petsc-users] Auto sparsity detection?
In-Reply-To: 
References: 
Message-ID: 

On Thu, Jan 16, 2025 at 9:50 PM Zou, Ling via petsc-users <
petsc-users at mcs.anl.gov> wrote:

> Hi all,
>
> Does PETSc have some automatic matrix sparsity detection algorithm
> available?
>
> Something like:
> https://docs.sciml.ai/NonlinearSolve/stable/basics/sparsity_detection/

Sparsity detection would rely on introspection of the user code for ComputeFunction(), which is not possible in C (unless you were to code up your evaluation in some symbolic framework).
> The background is that I use finite differencing plus matrix coloring to
> (efficiently) get the Jacobian.
>
> For the matrix coloring part, I color the matrix based on mesh
> connectivity and variable dependencies, which is not bad, but I am just
> trying to be lazy and eliminate even this part.

This is how the automatic frameworks also work. This is how we compute the sparsity pattern for PetscFE and PetscFV.

> A related but different question: how much does PETSc support automatic
> differentiation?
>
> I see some old paper:
>
> https://ftp.mcs.anl.gov/pub/tech_reports/reports/P922.pdf
>
> and discussion in the roadmap:
>
> https://petsc.org/release/community/roadmap/
>
> I am thinking that if AD works, I don't even need to do the finite
> differencing Jacobian, or could have it as another option.

Other people understand that better than I do.

Thanks, Matt

> Best,
>
> -Ling

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lzou at anl.gov Thu Jan 16 21:43:14 2025
From: lzou at anl.gov (Zou, Ling)
Date: Fri, 17 Jan 2025 03:43:14 +0000
Subject: [petsc-users] Auto sparsity detection?
In-Reply-To: 
References: 
Message-ID: 

Thank you, Matt. Seems that at least for the matrix coloring part I am following the "best practice".

-Ling

From: Matthew Knepley
Date: Thursday, January 16, 2025 at 9:01 PM
To: Zou, Ling
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] Auto sparsity detection?
On Thu, Jan 16, 2025 at 9:50 PM Zou, Ling via petsc-users <petsc-users at mcs.anl.gov> wrote:

Hi all,

Does PETSc have some automatic matrix sparsity detection algorithm available?

Something like:
https://docs.sciml.ai/NonlinearSolve/stable/basics/sparsity_detection/

Sparsity detection would rely on introspection of the user code for ComputeFunction(), which is not possible in C (unless you were to code up your evaluation in some symbolic framework).

The background is that I use finite differencing plus matrix coloring to (efficiently) get the Jacobian. For the matrix coloring part, I color the matrix based on mesh connectivity and variable dependencies, which is not bad, but I am just trying to be lazy and eliminate even this part.

This is how the automatic frameworks also work. This is how we compute the sparsity pattern for PetscFE and PetscFV.

A related but different question: how much does PETSc support automatic differentiation? I see some old paper:

https://ftp.mcs.anl.gov/pub/tech_reports/reports/P922.pdf

and discussion in the roadmap:

https://petsc.org/release/community/roadmap/

I am thinking that if AD works, I don't even need to do the finite differencing Jacobian, or could have it as another option.

Other people understand that better than I do.
Thanks, Matt

Best,

-Ling

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jed at jedbrown.org Fri Jan 17 01:07:40 2025
From: jed at jedbrown.org (Jed Brown)
Date: Fri, 17 Jan 2025 00:07:40 -0700
Subject: [petsc-users] KSP with large sparse kronecker product
In-Reply-To: 
References: <87msfqwb8x.fsf@jedbrown.org>
Message-ID: <87jzatx6ir.fsf@jedbrown.org>

The "fast diagonalization" or tensor product approach has been heavily used over the years. It's described algebraically in Lynch, Rice, and Thomas (1964), https://doi.org/10.1007/BF01386067 (see eq 3.8). Inverses of anything interesting are dense, and scalable preconditioners generally need to provide a dense action (though they can be constructed from sparse parts). If the pieces of the Kronecker product are of size 1000, then the dense matrix of eigenvectors is only a few MB. If yours are lots bigger, dense matrices of eigenvectors become intractable, though sometimes you can approximate such things or use fast transforms.

Kronecker products are fun and powerful. Forming them into a naive sparse matrix can be useful for checking correctness and investigating properties, but I think it's rarely the most efficient algorithm.

Donald Planalp writes:

> Hello,
>
> Currently I am using the block Jacobi preconditioner, and it's the only one which seems to give fast convergence for this problem so far.
I actually did implement a matshell which redundantly stored each matrix on each rank (which was far less memory than explicitly storing the Kronecker product) and computed the action of the Kronecker product in a semi-matrix-free way, however without the preconditioner the solver time was far too slow. Unfortunately, I forgot to make a backup of that matshell otherwise I would provide the code for it. > > As far as the inverse goes, do you mean inverting the small matrices and then embed them in the larger Kronecker structure such that we have the inverse matrix of the linear system? I'm a bit confused because wouldn't the inversion of the entire large matrix not only break the sparsity pattern inside the blocks, but also the block sparsity? I imagine this would also still use quite a bit of memory. However if I am mistaken I apologize. > > I appreciate the quick replies, > Rex Planalp > ________________________________ > From: Jed Brown > Sent: Friday, January 17, 2025 12:10 AM > To: Matthew Knepley ; Barry Smith > Cc: petsc-users at mcs.anl.gov ; Donald Planalp > Subject: Re: [petsc-users] KSP with large sparse kronecker product > > Matthew Knepley writes: > >> On Thu, Jan 16, 2025 at 6:26?PM Barry Smith wrote: >> >>> >>> I don't think we have good code for this case. But it is a good case >>> and we should definitely provide support so it would be great to talk >>> about. >>> >>> Possibly start with the name :-) MATSKAIJ :-) >>> >> >> Are you using any preconditioner? If not, you could just use a MATSHELL, >> and compute the action of your Kronecker matrix. > > Also note that if each panel of the Kronecker product is small enough to solve an eigenproblem, you may want to use fast diagonalization to compute an exact inverse. The eigenproblem will have a dense solution (even though the matrix is sparse), but a few dense eigenproblems of size a hundred or thousand could be faster than solving iteratively with systems in the millions. 
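[Editor's note: the tensor-product ("fast diagonalization") solve Jed describes can be sketched in a few lines. The NumPy sketch below is illustrative only, not PETSc code; it assumes the operator has the Sylvester form kron(A, I) + kron(I, B) with symmetric A and B, and relies on the Kronecker identity (A ⊗ B) vec(X) = vec(B X Aᵀ) so that the Kronecker product is never formed.]

```python
import numpy as np

def kron_apply(A, B, x):
    """Action of (kron(A, I_m) + kron(I_n, B)) on x without forming the
    Kronecker product, via (A (x) B) vec(X) = vec(B X A^T).
    x is the column-major (Fortran-order) vec of an m-by-n matrix X."""
    n, m = A.shape[0], B.shape[0]
    X = x.reshape((m, n), order="F")
    return (X @ A.T + B @ X).flatten(order="F")

def kron_solve_fast_diag(A, B, b):
    """Solve (kron(A, I_m) + kron(I_n, B)) x = b for symmetric A, B by
    fast diagonalization (cf. Lynch, Rice & Thomas 1964): two dense
    eigenproblems of the panel sizes n and m replace one huge sparse solve."""
    n, m = A.shape[0], B.shape[0]
    la, Va = np.linalg.eigh(A)   # A = Va diag(la) Va^T
    lb, Vb = np.linalg.eigh(B)   # B = Vb diag(lb) Vb^T
    C = b.reshape((m, n), order="F")
    # In the eigenbases the operator is diagonal: Y_ij scaled by (lb_i + la_j)
    Y = (Vb.T @ C @ Va) / (lb[:, None] + la[None, :])
    X = Vb @ Y @ Va.T
    return X.flatten(order="F")
```

Each eigenproblem here is dense but only of the panel size, which is Jed's point above: a few dense eigenproblems of size a hundred or a thousand can beat iterating on a coupled system in the millions.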
From knepley at gmail.com Fri Jan 17 06:22:46 2025
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 17 Jan 2025 07:22:46 -0500
Subject: [petsc-users] Auto sparsity detection?
In-Reply-To: 
References: 
Message-ID: 

On Thu, Jan 16, 2025 at 10:43 PM Zou, Ling wrote:

> Thank you, Matt.
>
> Seems that at least for the matrix coloring part I am following the "best
> practice".

Yes, for FD approximations of the Jacobian. If you have a stencil operation (like FEM or FVM), then AD can be very useful because you only have to differentiate the kernel to get the Jacobian kernel.

Thanks, Matt

> -Ling
>
> *From: *Matthew Knepley
> *Date: *Thursday, January 16, 2025 at 9:01 PM
> *To: *Zou, Ling
> *Cc: *petsc-users at mcs.anl.gov
> *Subject: *Re: [petsc-users] Auto sparsity detection?
>
> On Thu, Jan 16, 2025 at 9:50 PM Zou, Ling via petsc-users <
> petsc-users at mcs.anl.gov> wrote:
>
> Hi all,
>
> Does PETSc have some automatic matrix sparsity detection algorithm
> available?
>
> Something like:
> https://docs.sciml.ai/NonlinearSolve/stable/basics/sparsity_detection/
>
> Sparsity detection would rely on introspection of the user code for
> ComputeFunction(), which is not
> possible in C (unless you were to code up your evaluation in some symbolic
> framework).
>
> The background is that I use finite differencing plus matrix coloring to
> (efficiently) get the Jacobian.
> For the matrix coloring part, I color the matrix based on mesh
> connectivity and variable dependencies, which is not bad, but I am just
> trying to be lazy and eliminate even this part.
>
> This is how the automatic frameworks also work. This is how we compute the
> sparsity pattern for PetscFE and PetscFV.
>
> A related but different question: how much does PETSc support automatic
> differentiation?
>
> I see some old paper:
>
> https://ftp.mcs.anl.gov/pub/tech_reports/reports/P922.pdf
>
> and discussion in the roadmap:
>
> https://petsc.org/release/community/roadmap/
>
> I am thinking that if AD works, I don't even need to do the finite
> differencing Jacobian, or could have it as another option.
>
> Other people understand that better than I do.
>
> Thanks,
>
> Matt
>
> Best,
>
> -Ling
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
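[Editor's note: the "finite differencing plus matrix coloring" workflow discussed in the thread above, which PETSc automates through MatFDColoring, can be sketched as follows. This is an illustrative NumPy sketch assuming the sparsity pattern is known up front; the greedy coloring and the finite-difference recovery are simplified stand-ins for what PETSc's MatColoring implementations do.]

```python
import numpy as np

def greedy_column_coloring(pattern):
    """Color columns so that columns sharing a nonzero row get different
    colors; same-colored columns are structurally orthogonal and can be
    perturbed together in a single function evaluation."""
    n = pattern.shape[1]
    rows_of = [set(np.nonzero(pattern[:, j])[0]) for j in range(n)]
    colors = np.full(n, -1, dtype=int)
    for j in range(n):
        forbidden = {colors[k] for k in range(j) if rows_of[j] & rows_of[k]}
        c = 0
        while c in forbidden:
            c += 1
        colors[j] = c
    return colors

def colored_fd_jacobian(F, x, pattern, eps=1e-7):
    """Approximate the Jacobian of F at x with one finite-difference
    evaluation per color instead of one per column."""
    colors = greedy_column_coloring(pattern)
    J = np.zeros(pattern.shape)
    F0 = F(x)
    for c in range(colors.max() + 1):
        d = np.where(colors == c, eps, 0.0)   # perturb all columns of color c at once
        dF = (F(x + d) - F0) / eps
        for j in np.nonzero(colors == c)[0]:
            rows = np.nonzero(pattern[:, j])[0]
            J[rows, j] = dF[rows]             # rows are disjoint within a color
    return J
```

For a tridiagonal pattern this needs only three evaluations of F regardless of problem size, which is the payoff of coloring the matrix first.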
URL: 

From mmolinos at us.es Fri Jan 17 08:45:38 2025
From: mmolinos at us.es (MIGUEL MOLINOS PEREZ)
Date: Fri, 17 Jan 2025 14:45:38 +0000
Subject: [petsc-users] Update DMDA attached to DMSWARM
In-Reply-To: <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es>
References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es>
Message-ID: 

I tried what you suggested, but I still get this error message. Maybe I should use the main branch instead of release?

Miguel

[4]PETSC ERROR: ------------------------------------------------------------------------
[4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[4]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
[4]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[4]PETSC ERROR: The line numbers in the error traceback are not always exact.
[4]PETSC ERROR: #1 Pack_PetscReal_1_0() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:373 [4]PETSC ERROR: #2 PetscSFLinkPackRootData_Private() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:932 [4]PETSC ERROR: #3 PetscSFLinkPackRootData() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:966 [4]PETSC ERROR: #4 PetscSFBcastBegin_Basic() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfbasic.c:357 [4]PETSC ERROR: #5 PetscSFBcastWithMemTypeBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/sf.c:1513 [4]PETSC ERROR: #6 VecScatterBegin_Internal() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:70 [4]PETSC ERROR: #7 VecScatterBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:1316 [4]PETSC ERROR: #8 DMGlobalToLocalBegin_DA() at /Users/migmolper/petsc/src/dm/impls/da/dagtol.c:15 [4]PETSC ERROR: #9 DMGlobalToLocalBegin() at /Users/migmolper/petsc/src/dm/interface/dm.c:2844 [4]PETSC ERROR: #10 DMGetCoordinatesLocalSetUp() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:565 [4]PETSC ERROR: #11 DMGetCoordinatesLocal() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:599 [4]PETSC ERROR: #12 _DMLocatePoints_DMDARegular_IS() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:531 [4]PETSC ERROR: #13 DMLocatePoints_DMDARegular() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:586 [4]PETSC ERROR: #14 DMLocatePoints() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:1194 [4]PETSC ERROR: #15 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:219 [4]PETSC ERROR: #16 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349 [4]PETSC ERROR: #17 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41 On Jan 15, 2025, at 4:56?PM, MIGUEL MOLINOS PEREZ wrote: Thank you Matt for the useful info. I?ll try your idea. 
Miguel

On 15 Jan 2025, at 16:48, Matthew Knepley wrote:

On Wed, Jan 15, 2025 at 10:41 AM MIGUEL MOLINOS PEREZ wrote:

Thank you Matt.

Yes, I am getting the "CellDM" from the DMSwarm.

1. I have recently overhauled this functionality because it was not flexible enough for the plasma simulation we do. Thus main and release work differently.

Nice to hear that. Should I move to main?

The changes allow you to have several cell DMs. I want to bin particles in space, but also in velocity, and then in the tensor product of space and velocity. Moreover, sometimes I want to use different Swarm fields as the DM field for the solver. You can do all that with main now. If you just need a single DM with the same DM fields, release is fine.

2. I assume you are using release

You are correct.

3. In both main and release, if you change the coordinates of your CellDM mesh, you need to rebin the particles. The easiest way to do this is to call DMSwarmMigrate(sw, PETSC_FALSE).

What do you mean by rebin?

When you provide the cell DM, Swarm makes a "sort context" that bins the particles into DM cells. If you change the coordinates, this binning will change, so you need to "rebin", i.e. recreate the sort context.

Thanks, Matt

Miguel

Thanks, Matt

Best, Miguel

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YrUHr1PU20w6z6VkKBOf7TNjPjt8NekVtn7ijZJTiIOQ8yK4I75rVaVVkvpz8we0M_2KTIrM8W_8MDmMQFOU4Q$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jan 17 09:00:46 2025 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 17 Jan 2025 10:00:46 -0500 Subject: [petsc-users] Update DMDA attached to DMSWARM In-Reply-To: References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es> Message-ID: On Fri, Jan 17, 2025 at 9:45?AM MIGUEL MOLINOS PEREZ wrote: > I tried what you suggested, but still I got this error message. Maybe I > should use main release? > No. I suspect something is wrong with the way you are setting coordinates. Can you share the code? Thanks, Matt > Miguel > > [4]PETSC ERROR: > ------------------------------------------------------------------------ > [4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [4]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!fRsK-PFNSlripVuSruNIQ68cC07TBw84XzXS8GiZFt5Und2Bn4pktbwZWLLMUnYQSUJWt9KgT-B7uNoaftse$ and > https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!fRsK-PFNSlripVuSruNIQ68cC07TBw84XzXS8GiZFt5Und2Bn4pktbwZWLLMUnYQSUJWt9KgT-B7uPrfhqQG$ > [4]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [4]PETSC ERROR: The line numbers in the error traceback are not always > exact. 
> [4]PETSC ERROR: #1 Pack_PetscReal_1_0() at > /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:373 > [4]PETSC ERROR: #2 PetscSFLinkPackRootData_Private() at > /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:932 > [4]PETSC ERROR: #3 PetscSFLinkPackRootData() at > /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:966 > [4]PETSC ERROR: #4 PetscSFBcastBegin_Basic() at > /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfbasic.c:357 > [4]PETSC ERROR: #5 PetscSFBcastWithMemTypeBegin() at > /Users/migmolper/petsc/src/vec/is/sf/interface/sf.c:1513 > [4]PETSC ERROR: #6 VecScatterBegin_Internal() at > /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:70 > [4]PETSC ERROR: #7 VecScatterBegin() at > /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:1316 > [4]PETSC ERROR: #8 DMGlobalToLocalBegin_DA() at > /Users/migmolper/petsc/src/dm/impls/da/dagtol.c:15 > [4]PETSC ERROR: #9 DMGlobalToLocalBegin() at > /Users/migmolper/petsc/src/dm/interface/dm.c:2844 > [4]PETSC ERROR: #10 DMGetCoordinatesLocalSetUp() at > /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:565 > [4]PETSC ERROR: #11 DMGetCoordinatesLocal() at > /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:599 > [4]PETSC ERROR: #12 _DMLocatePoints_DMDARegular_IS() at > /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:531 > [4]PETSC ERROR: #13 DMLocatePoints_DMDARegular() at > /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:586 > [4]PETSC ERROR: #14 DMLocatePoints() at > /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:1194 > [4]PETSC ERROR: #15 DMSwarmMigrate_CellDMScatter() at > /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:219 > [4]PETSC ERROR: #16 DMSwarmMigrate() at > /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349 > [4]PETSC ERROR: #17 main() at > /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41 > > > > On Jan 15, 2025, at 4:56?PM, MIGUEL MOLINOS PEREZ wrote: > > Thank you Matt for the useful info. I?ll try your idea. 
> > Miguel > > On 15 Jan 2025, at 16:48, Matthew Knepley wrote: > > On Wed, Jan 15, 2025 at 10:41?AM MIGUEL MOLINOS PEREZ > wrote: > >> Thank you Matt. >> >> Yes, I am getting the "CellDM" from the DMSwarm. >> >> 1. I have recently overhauled this functionality because it was not >> flexible enough for the plasma simulation we do. Thus main and release work >> differently. >> >> >> Nice to hear that. Should I move to main? >> > > The changes allow you to have several cell DMs. I want to bin particles in > space, but also in velocity, and then in the tensor product of space and > velocity. Moreover, sometimes I want to use different Swarm fields as the > DM field for the solver. You can do all that with main now. If you just > need a single DM with the same DM fields, release is fine. > > >> 2. I assume you are using release >> >> >> You are correct. >> >> 3. In both main and release, if you change the coordinates of your CellDM >> mesh, you need to rebin the particles. The easiest way to do this is to >> call DMSwarmMigrate(sw, PETSC_FALSE). >> >> >> What do you mean by rebin? >> > > When you provide the cell DM, Swrm makes a "sort context" that bins the > particles into DM cells. If you change the coordinates, this binning will > change, so you need it to "rebin" or recreate the sort context. > > Thanks, > > Matt > > >> Miguel >> >> >> Thanks, >> >> Matt >> >> >>> Best, >>> Miguel >>> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fRsK-PFNSlripVuSruNIQ68cC07TBw84XzXS8GiZFt5Und2Bn4pktbwZWLLMUnYQSUJWt9KgT-B7uECqyoJk$ >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mmolinos at us.es Fri Jan 17 09:08:01 2025
From: mmolinos at us.es (MIGUEL MOLINOS PEREZ)
Date: Fri, 17 Jan 2025 15:08:01 +0000
Subject: [petsc-users] Update DMDA attached to DMSWARM
In-Reply-To: 
References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es>
Message-ID: <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es>

Thank you Matt, this is the piece of code I use to change the coordinates of the DM obtained using:

DMSwarmGetCellDM(Simulation.atomistic_data, &bounding_cell);
DMGetApplicationContext(bounding_cell, &background_mesh);

Thanks,
Miguel

/************************************************************************/

PetscErrorCode Volumetric_Expansion(DM dm, const Eigen::Matrix3d& F) {
  PetscErrorCode ierr;
  Vec coordinates;
  PetscScalar* coordArray;
  PetscInt xs, ys, zs, xm, ym, zm, i, j, k;
  PetscInt dim, M, N, P;

  PetscFunctionBegin;
  // Get DMDA information
  ierr = DMDAGetInfo(dm, &dim, &M, &N, &P, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
  CHKERRQ(ierr);
  ierr = DMDAGetCorners(dm, &xs, &ys, &zs, &xm, &ym, &zm);
  CHKERRQ(ierr);

  // Get the coordinates vector
  ierr = DMGetCoordinates(dm, &coordinates);
  CHKERRQ(ierr);
  ierr = VecGetArray(coordinates, &coordArray);
  CHKERRQ(ierr);

  // Update the coordinates based on the desired transformation
  for (k = zs; k < zs + zm; k++) {
    for (j = ys; j < ys + ym; j++) {
      for (i = xs; i < xs + xm; i++) {
        PetscInt idx = ((k * N + j) * M + i) * dim;         // Index for the i, j, k point
        coordArray[idx]     = coordArray[idx]     * F(0,0); // Update x-coordinate
        coordArray[idx + 1] = coordArray[idx + 1] * F(1,1); // Update y-coordinate
        coordArray[idx + 2] = coordArray[idx + 2] * F(2,2); // Update z-coordinate
      }
    }
  }

  // Restore the coordinates vector
  ierr = VecRestoreArray(coordinates, &coordArray);
  CHKERRQ(ierr);

  // Set the updated coordinates back to the DMDA
  ierr = DMSetCoordinates(dm, coordinates);
  CHKERRQ(ierr);

  PetscFunctionReturn(0);
}

/************************************************************************/

On 17 Jan 2025, at 16:00, Matthew Knepley wrote:

On Fri, Jan 17, 2025 at 9:45 AM MIGUEL MOLINOS PEREZ wrote:

I tried what you suggested, but I still get this error message. Maybe I should use the main branch instead of release?

No. I suspect something is wrong with the way you are setting coordinates. Can you share the code?

Thanks, Matt

Miguel

[4]PETSC ERROR: ------------------------------------------------------------------------
[4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[4]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
[4]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[4]PETSC ERROR: The line numbers in the error traceback are not always exact.
[4]PETSC ERROR: #1 Pack_PetscReal_1_0() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:373 [4]PETSC ERROR: #2 PetscSFLinkPackRootData_Private() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:932 [4]PETSC ERROR: #3 PetscSFLinkPackRootData() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:966 [4]PETSC ERROR: #4 PetscSFBcastBegin_Basic() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfbasic.c:357 [4]PETSC ERROR: #5 PetscSFBcastWithMemTypeBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/sf.c:1513 [4]PETSC ERROR: #6 VecScatterBegin_Internal() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:70 [4]PETSC ERROR: #7 VecScatterBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:1316 [4]PETSC ERROR: #8 DMGlobalToLocalBegin_DA() at /Users/migmolper/petsc/src/dm/impls/da/dagtol.c:15 [4]PETSC ERROR: #9 DMGlobalToLocalBegin() at /Users/migmolper/petsc/src/dm/interface/dm.c:2844 [4]PETSC ERROR: #10 DMGetCoordinatesLocalSetUp() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:565 [4]PETSC ERROR: #11 DMGetCoordinatesLocal() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:599 [4]PETSC ERROR: #12 _DMLocatePoints_DMDARegular_IS() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:531 [4]PETSC ERROR: #13 DMLocatePoints_DMDARegular() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:586 [4]PETSC ERROR: #14 DMLocatePoints() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:1194 [4]PETSC ERROR: #15 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:219 [4]PETSC ERROR: #16 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349 [4]PETSC ERROR: #17 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41 On Jan 15, 2025, at 4:56?PM, MIGUEL MOLINOS PEREZ > wrote: Thank you Matt for the useful info. I?ll try your idea. 
Miguel On 15 Jan 2025, at 16:48, Matthew Knepley > wrote: On Wed, Jan 15, 2025 at 10:41?AM MIGUEL MOLINOS PEREZ > wrote: Thank you Matt. Yes, I am getting the "CellDM" from the DMSwarm. 1. I have recently overhauled this functionality because it was not flexible enough for the plasma simulation we do. Thus main and release work differently. Nice to hear that. Should I move to main? The changes allow you to have several cell DMs. I want to bin particles in space, but also in velocity, and then in the tensor product of space and velocity. Moreover, sometimes I want to use different Swarm fields as the DM field for the solver. You can do all that with main now. If you just need a single DM with the same DM fields, release is fine. 2. I assume you are using release You are correct. 3. In both main and release, if you change the coordinates of your CellDM mesh, you need to rebin the particles. The easiest way to do this is to call DMSwarmMigrate(sw, PETSC_FALSE). What do you mean by rebin? When you provide the cell DM, Swrm makes a "sort context" that bins the particles into DM cells. If you change the coordinates, this binning will change, so you need it to "rebin" or recreate the sort context. Thanks, Matt Miguel Thanks, Matt Best, Miguel -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cq8-dbhGkAoUlRmeyBA4LZDR6RJ7QEwzEkPUeJPqWMgL6DOMmz8OhspQbEWoFVAqMqtOgxKgmlAbeVGDW8_YSw$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cq8-dbhGkAoUlRmeyBA4LZDR6RJ7QEwzEkPUeJPqWMgL6DOMmz8OhspQbEWoFVAqMqtOgxKgmlAbeVGDW8_YSw$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cq8-dbhGkAoUlRmeyBA4LZDR6RJ7QEwzEkPUeJPqWMgL6DOMmz8OhspQbEWoFVAqMqtOgxKgmlAbeVGDW8_YSw$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jan 17 09:22:50 2025 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 17 Jan 2025 10:22:50 -0500 Subject: [petsc-users] Update DMDA attached to DMSWARM In-Reply-To: <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es> References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es> <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es> Message-ID: On Fri, Jan 17, 2025 at 10:08?AM MIGUEL MOLINOS PEREZ wrote: > Thank you Matt, this the piece of code I use to change the coordinates of > the DM obtained using: > You do not need the call to DMSetCoordinates(). What happens when you remove it? 
Thanks,

Matt

> DMSwarmGetCellDM(Simulation.atomistic_data, &bounding_cell);
> DMGetApplicationContext(bounding_cell, &background_mesh);
>
> Thanks,
> Miguel
>
> /************************************************************************/
>
> PetscErrorCode Volumetric_Expansion(DM dm, const Eigen::Matrix3d& F) {
>   PetscErrorCode ierr;
>   Vec coordinates;
>   PetscScalar* coordArray;
>   PetscInt xs, ys, zs, xm, ym, zm, i, j, k;
>   PetscInt dim, M, N, P;
>
>   PetscFunctionBegin;
>   // Get DMDA information
>   ierr = DMDAGetInfo(dm, &dim, &M, &N, &P, NULL, NULL, NULL, NULL, NULL, NULL,
>                      NULL, NULL, NULL);
>   CHKERRQ(ierr);
>   ierr = DMDAGetCorners(dm, &xs, &ys, &zs, &xm, &ym, &zm);
>   CHKERRQ(ierr);
>
>   // Get the coordinates vector
>   ierr = DMGetCoordinates(dm, &coordinates);
>   CHKERRQ(ierr);
>   ierr = VecGetArray(coordinates, &coordArray);
>   CHKERRQ(ierr);
>
>   // Update the coordinates based on the desired transformation
>   for (k = zs; k < zs + zm; k++) {
>     for (j = ys; j < ys + ym; j++) {
>       for (i = xs; i < xs + xm; i++) {
>         PetscInt idx =
>             ((k * N + j) * M + i) * dim; // Index for the i, j, k point
>         coordArray[idx] = coordArray[idx] * F(0,0);         // Update x-coordinate
>         coordArray[idx + 1] = coordArray[idx + 1] * F(1,1); // Update y-coordinate
>         coordArray[idx + 2] = coordArray[idx + 2] * F(2,2); // Update z-coordinate
>       }
>     }
>   }
>
>   // Restore the coordinates vector
>   ierr = VecRestoreArray(coordinates, &coordArray);
>   CHKERRQ(ierr);
>
>   // Set the updated coordinates back to the DMDA
>   ierr = DMSetCoordinates(dm, coordinates);
>   CHKERRQ(ierr);
>
>   PetscFunctionReturn(0);
> }
>
> /************************************************************************/
>
> On 17 Jan 2025, at 16:00, Matthew Knepley wrote:
>
> On Fri, Jan 17, 2025 at 9:45 AM MIGUEL MOLINOS PEREZ wrote:
>
>> I tried what you suggested, but I still got this error message. Maybe I
>> should use main release?
>
> No. I suspect something is wrong with the way you are setting coordinates.
> Can you share the code? > > Thanks, > > Matt > > >> Miguel >> >> [4]PETSC ERROR: >> ------------------------------------------------------------------------ >> [4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> [4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [4]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!dJKWPsc1_mVgyNyxpcFQO1Nl87DPYNbTLxZ9_kNPDWyv49krXGZa3Oba51VGWtFGBsHYWP6pM9S2njbVhFRk$ and >> https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!dJKWPsc1_mVgyNyxpcFQO1Nl87DPYNbTLxZ9_kNPDWyv49krXGZa3Oba51VGWtFGBsHYWP6pM9S2nm1YDaq3$ >> [4]PETSC ERROR: --------------------- Stack Frames >> ------------------------------------ >> [4]PETSC ERROR: The line numbers in the error traceback are not always >> exact. >> [4]PETSC ERROR: #1 Pack_PetscReal_1_0() at >> /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:373 >> [4]PETSC ERROR: #2 PetscSFLinkPackRootData_Private() at >> /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:932 >> [4]PETSC ERROR: #3 PetscSFLinkPackRootData() at >> /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:966 >> [4]PETSC ERROR: #4 PetscSFBcastBegin_Basic() at >> /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfbasic.c:357 >> [4]PETSC ERROR: #5 PetscSFBcastWithMemTypeBegin() at >> /Users/migmolper/petsc/src/vec/is/sf/interface/sf.c:1513 >> [4]PETSC ERROR: #6 VecScatterBegin_Internal() at >> /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:70 >> [4]PETSC ERROR: #7 VecScatterBegin() at >> /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:1316 >> [4]PETSC ERROR: #8 DMGlobalToLocalBegin_DA() at >> /Users/migmolper/petsc/src/dm/impls/da/dagtol.c:15 >> [4]PETSC ERROR: #9 DMGlobalToLocalBegin() at >> /Users/migmolper/petsc/src/dm/interface/dm.c:2844 >> [4]PETSC ERROR: #10 DMGetCoordinatesLocalSetUp() at >> 
/Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:565
>> [4]PETSC ERROR: #11 DMGetCoordinatesLocal() at
>> /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:599
>> [4]PETSC ERROR: #12 _DMLocatePoints_DMDARegular_IS() at
>> /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:531
>> [4]PETSC ERROR: #13 DMLocatePoints_DMDARegular() at
>> /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:586
>> [4]PETSC ERROR: #14 DMLocatePoints() at
>> /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:1194
>> [4]PETSC ERROR: #15 DMSwarmMigrate_CellDMScatter() at
>> /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:219
>> [4]PETSC ERROR: #16 DMSwarmMigrate() at
>> /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349
>> [4]PETSC ERROR: #17 main() at
>> /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41
>>
>> On Jan 15, 2025, at 4:56 PM, MIGUEL MOLINOS PEREZ wrote:
>>
>> Thank you Matt for the useful info. I'll try your idea.
>>
>> Miguel

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

From mmolinos at us.es Fri Jan 17 09:49:56 2025
From: mmolinos at us.es (MIGUEL MOLINOS PEREZ)
Date: Fri, 17 Jan 2025 15:49:56 +0000
Subject: [petsc-users] Update DMDA attached to DMSWARM
In-Reply-To:
References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es> <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es>
Message-ID: <30099029-1BA9-45C5-A000-47A5178F53A1@us.es>

Now the error is in the call to DMSwarmMigrate

Miguel

[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!ad4_CYmqVLwmuIANP7o2NXvoqCPGFSZpW_gVLp8pQweudrYWPdqW4De1CaYkE5a1sPrs7JUO_n5fznRPbEaqpg$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!ad4_CYmqVLwmuIANP7o2NXvoqCPGFSZpW_gVLp8pQweudrYWPdqW4De1CaYkE5a1sPrs7JUO_n5fznTEGnkpAg$
[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[0]PETSC ERROR: The line numbers in the error traceback are not always exact.
[0]PETSC ERROR: #1 DMSwarmDataBucketGetSizes() at /Users/migmolper/petsc/src/dm/impls/swarm/data_bucket.c:297
[0]PETSC ERROR: #2 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:201
[0]PETSC ERROR: #3 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349
[0]PETSC ERROR: #4 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0

On Jan 17, 2025, at 4:22 PM, Matthew Knepley wrote:

On Fri, Jan 17, 2025 at 10:08 AM MIGUEL MOLINOS PEREZ wrote:

> Thank you Matt, this is the piece of code I use to change the coordinates of
> the DM obtained using:

You do not need the call to DMSetCoordinates(). What happens when you remove it?

Thanks,

Matt

From knepley at gmail.com Fri Jan 17 10:18:42 2025
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 17 Jan 2025 11:18:42 -0500
Subject: [petsc-users] Update DMDA attached to DMSWARM
In-Reply-To: <30099029-1BA9-45C5-A000-47A5178F53A1@us.es>
References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es> <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es> <30099029-1BA9-45C5-A000-47A5178F53A1@us.es>
Message-ID:

On Fri, Jan 17, 2025 at 10:49 AM MIGUEL MOLINOS PEREZ wrote:

> Now the error is in the call to DMSwarmMigrate

You have almost certainly overwritten memory somewhere. Can you use valgrind or Address Sanitizer?
Thanks,

Matt

From mmolinos at us.es Fri Jan 17 11:01:24 2025
From: mmolinos at us.es (MIGUEL MOLINOS PEREZ)
Date: Fri, 17 Jan 2025 17:01:24 +0000
Subject: [petsc-users] Update DMDA attached to DMSWARM
In-Reply-To:
References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es> <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es> <30099029-1BA9-45C5-A000-47A5178F53A1@us.es>
Message-ID: <596FE6D9-3200-4946-95CD-0C30BCD96238@us.es>

You are right!! Thank you again!

Miguel

On Jan 17, 2025, at 5:18 PM, Matthew Knepley wrote:

On Fri, Jan 17, 2025 at 10:49 AM MIGUEL MOLINOS PEREZ wrote:

> Now the error is in the call to DMSwarmMigrate

You have almost certainly overwritten memory somewhere. Can you use valgrind or Address Sanitizer?
Thanks, Matt Miguel [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!dIh3Nh36I2mFs0Uj6sO28k7GW_SuLyiCdc-r1AZdldlQ6OehqfmcIavRU26sN7GwTF9MIIATLntNNLLXtUq8jA$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!dIh3Nh36I2mFs0Uj6sO28k7GW_SuLyiCdc-r1AZdldlQ6OehqfmcIavRU26sN7GwTF9MIIATLntNNLKY4JucaQ$ [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: The line numbers in the error traceback are not always exact. [0]PETSC ERROR: #1 DMSwarmDataBucketGetSizes() at /Users/migmolper/petsc/src/dm/impls/swarm/data_bucket.c:297 [0]PETSC ERROR: #2 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:201 [0]PETSC ERROR: #3 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349 [0]PETSC ERROR: #4 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41 application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 On Jan 17, 2025, at 4:22?PM, Matthew Knepley > wrote: On Fri, Jan 17, 2025 at 10:08?AM MIGUEL MOLINOS PEREZ > wrote: Thank you Matt, this the piece of code I use to change the coordinates of the DM obtained using: You do not need the call to DMSetCoordinates(). What happens when you remove it? 
Thanks, Matt DMSwarmGetCellDM(Simulation.atomistic_data, &bounding_cell); DMGetApplicationContext(bounding_cell, &background_mesh); Thanks, Miguel /************************************************************************/ PetscErrorCode Volumetric_Expansion(DM dm, const Eigen::Matrix3d& F) { PetscErrorCode ierr; Vec coordinates; PetscScalar* coordArray; PetscInt xs, ys, zs, xm, ym, zm, i, j, k; PetscInt dim, M, N, P; PetscFunctionBegin; // Get DMDA information ierr = DMDAGetInfo(dm, &dim, &M, &N, &P, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL); CHKERRQ(ierr); ierr = DMDAGetCorners(dm, &xs, &ys, &zs, &xm, &ym, &zm); CHKERRQ(ierr); // Get the coordinates vector ierr = DMGetCoordinates(dm, &coordinates); CHKERRQ(ierr); ierr = VecGetArray(coordinates, &coordArray); CHKERRQ(ierr); // Update the coordinates based on the desired transformation for (k = zs; k < zs + zm; k++) { for (j = ys; j < ys + ym; j++) { for (i = xs; i < xs + xm; i++) { PetscInt idx = ((k * N + j) * M + i) * dim; // Index for the i, j, k point coordArray[idx] = coordArray[idx] * F(0,0); // Update x-coordinate coordArray[idx + 1] = coordArray[idx + 1] * F(1,1); // Update y-coordinate coordArray[idx + 2] = coordArray[idx + 2] * F(2,2); // Update z-coordinate } } } // Restore the coordinates vector ierr = VecRestoreArray(coordinates, &coordArray); CHKERRQ(ierr); // Set the updated coordinates back to the DMDA ierr = DMSetCoordinates(dm, coordinates); CHKERRQ(ierr); PetscFunctionReturn(0); } /************************************************************************/ On 17 Jan 2025, at 16:00, Matthew Knepley > wrote: On Fri, Jan 17, 2025 at 9:45?AM MIGUEL MOLINOS PEREZ > wrote: I tried what you suggested, but still I got this error message. Maybe I should use main release? No. I suspect something is wrong with the way you are setting coordinates. Can you share the code? 
Thanks, Matt

Miguel

[4]PETSC ERROR: ------------------------------------------------------------------------
[4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[4]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!dIh3Nh36I2mFs0Uj6sO28k7GW_SuLyiCdc-r1AZdldlQ6OehqfmcIavRU26sN7GwTF9MIIATLntNNLLXtUq8jA$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!dIh3Nh36I2mFs0Uj6sO28k7GW_SuLyiCdc-r1AZdldlQ6OehqfmcIavRU26sN7GwTF9MIIATLntNNLKY4JucaQ$
[4]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[4]PETSC ERROR: The line numbers in the error traceback are not always exact.
[4]PETSC ERROR: #1 Pack_PetscReal_1_0() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:373
[4]PETSC ERROR: #2 PetscSFLinkPackRootData_Private() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:932
[4]PETSC ERROR: #3 PetscSFLinkPackRootData() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:966
[4]PETSC ERROR: #4 PetscSFBcastBegin_Basic() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfbasic.c:357
[4]PETSC ERROR: #5 PetscSFBcastWithMemTypeBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/sf.c:1513
[4]PETSC ERROR: #6 VecScatterBegin_Internal() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:70
[4]PETSC ERROR: #7 VecScatterBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:1316
[4]PETSC ERROR: #8 DMGlobalToLocalBegin_DA() at /Users/migmolper/petsc/src/dm/impls/da/dagtol.c:15
[4]PETSC ERROR: #9 DMGlobalToLocalBegin() at /Users/migmolper/petsc/src/dm/interface/dm.c:2844
[4]PETSC ERROR: #10 DMGetCoordinatesLocalSetUp() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:565
[4]PETSC ERROR: #11 DMGetCoordinatesLocal() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:599
[4]PETSC ERROR: #12 _DMLocatePoints_DMDARegular_IS() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:531
[4]PETSC ERROR: #13 DMLocatePoints_DMDARegular() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:586
[4]PETSC ERROR: #14 DMLocatePoints() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:1194
[4]PETSC ERROR: #15 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:219
[4]PETSC ERROR: #16 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349
[4]PETSC ERROR: #17 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41

On Jan 15, 2025, at 4:56 PM, MIGUEL MOLINOS PEREZ wrote:

Thank you Matt for the useful info. I'll try your idea.

Miguel

On 15 Jan 2025, at 16:48, Matthew Knepley wrote:

On Wed, Jan 15, 2025 at 10:41 AM MIGUEL MOLINOS PEREZ wrote:

Thank you Matt. Yes, I am getting the "CellDM" from the DMSwarm.

1. I have recently overhauled this functionality because it was not flexible enough for the plasma simulation we do. Thus main and release work differently.

Nice to hear that. Should I move to main?

The changes allow you to have several cell DMs. I want to bin particles in space, but also in velocity, and then in the tensor product of space and velocity. Moreover, sometimes I want to use different Swarm fields as the DM field for the solver. You can do all that with main now. If you just need a single DM with the same DM fields, release is fine.

2. I assume you are using release

You are correct.

3. In both main and release, if you change the coordinates of your CellDM mesh, you need to rebin the particles. The easiest way to do this is to call DMSwarmMigrate(sw, PETSC_FALSE).

What do you mean by rebin?

When you provide the cell DM, Swarm makes a "sort context" that bins the particles into DM cells. If you change the coordinates, this binning will change, so you need it to "rebin" or recreate the sort context.
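[Editorial note: the rebin sequence Matt describes can be sketched as follows. This is an illustrative fragment, not the poster's code; `sw` is assumed to be the DMSwarm, and the coordinate-modification step is left as a placeholder.]

```c
#include <petscdmswarm.h>

/* Sketch: after changing the coordinates of the cell DM attached to a
 * DMSwarm, rebin the particles so the swarm's sort context matches the
 * new geometry. */
static PetscErrorCode UpdateMeshAndRebin(DM sw)
{
  DM celldm;

  PetscFunctionBeginUser;
  PetscCall(DMSwarmGetCellDM(sw, &celldm));
  /* ... modify celldm's coordinates here (application-specific) ... */
  /* Second argument PETSC_FALSE: do not remove sent points; this call
   * relocates particles and rebuilds the cell binning. */
  PetscCall(DMSwarmMigrate(sw, PETSC_FALSE));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```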
Thanks, Matt

Miguel

Thanks, Matt

Best, Miguel

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dIh3Nh36I2mFs0Uj6sO28k7GW_SuLyiCdc-r1AZdldlQ6OehqfmcIavRU26sN7GwTF9MIIATLntNNLLmNtn4Ow$

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mmolinos at us.es Fri Jan 17 11:01:24 2025
From: mmolinos at us.es (MIGUEL MOLINOS PEREZ)
Date: Fri, 17 Jan 2025 17:01:24 +0000
Subject: [petsc-users] Update DMDA attached to DMSWARM
In-Reply-To: References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es> <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es> <30099029-1BA9-45C5-A000-47A5178F53A1@us.es>
Message-ID: <596FE6D9-3200-4946-95CD-0C30BCD96238@us.es>

You are right!! Thank you again!

Miguel

On Jan 17, 2025, at 5:18 PM, Matthew Knepley wrote:

On Fri, Jan 17, 2025 at 10:49 AM MIGUEL MOLINOS PEREZ wrote:

Now the error is in the call to DMSwarmMigrate

You have almost certainly overwritten memory somewhere. Can you use valgrind or AddressSanitizer?

Thanks, Matt

Miguel

[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!ahaUFTZu1Gh6ze-4bNgeOlBClBJgNvM5pGtGRg_FTEb_3UPeCPJoFlvxIW902aEQoIdUfh_LT3d9ZkiOWrCxdA$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!ahaUFTZu1Gh6ze-4bNgeOlBClBJgNvM5pGtGRg_FTEb_3UPeCPJoFlvxIW902aEQoIdUfh_LT3d9ZkiorX5WCA$
[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[0]PETSC ERROR: The line numbers in the error traceback are not always exact.
[0]PETSC ERROR: #1 DMSwarmDataBucketGetSizes() at /Users/migmolper/petsc/src/dm/impls/swarm/data_bucket.c:297
[0]PETSC ERROR: #2 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:201
[0]PETSC ERROR: #3 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349
[0]PETSC ERROR: #4 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hongzhang at anl.gov Fri Jan 17 12:34:56 2025
From: hongzhang at anl.gov (Zhang, Hong)
Date: Fri, 17 Jan 2025 18:34:56 +0000
Subject: [petsc-users] Auto sparsity detection?
In-Reply-To: References: Message-ID:

We have an example in src/ts/tutorials/autodiff on using AD for reaction-diffusion equations. It does exactly what Matt said - differentiating the stencil kernel to get the Jacobian kernel. More information is available in this report: https://urldefense.us/v3/__https://arxiv.org/abs/1909.02836__;!!G_uCfscf7eWS!cMXnAlSzQJa8lo5JBEpmUizoHds-gACnH-ecvwbHQpvuta1pc1NbtArflhZa6Td7oV1qIFEndu5eX9P1yjvtqEGquw$

Hong (Mr.)
________________________________
From: petsc-users on behalf of Matthew Knepley
Sent: Friday, January 17, 2025 6:22 AM
To: Zou, Ling
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] Auto sparsity detection?

On Thu, Jan 16, 2025 at 10:43 PM Zou, Ling wrote:

Thank you, Matt. Seems that at least for the matrix coloring part I am following the "best practice".

Yes, for FD approximations of the Jacobian. If you have a stencil operation (like FEM or FVM), then AD can be very useful because you only have to differentiate the kernel to get the Jacobian kernel.

Thanks, Matt

-Ling

From: Matthew Knepley
Date: Thursday, January 16, 2025 at 9:01 PM
To: Zou, Ling
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] Auto sparsity detection?

On Thu, Jan 16, 2025 at 9:50 PM Zou, Ling via petsc-users wrote:

Hi all, Does PETSc have some automatic matrix sparsity detection algorithm available? Something like: https://urldefense.us/v3/__https://docs.sciml.ai/NonlinearSolve/stable/basics/sparsity_detection/__;!!G_uCfscf7eWS!cMXnAlSzQJa8lo5JBEpmUizoHds-gACnH-ecvwbHQpvuta1pc1NbtArflhZa6Td7oV1qIFEndu5eX9P1yjs2FwvpFg$

Sparsity detection would rely on introspection of the user code for ComputeFunction(), which is not possible in C (unless you were to code up your evaluation in some symbolic framework).

The background is that I use finite differencing plus matrix coloring to (efficiently) get the Jacobian. For the matrix coloring part, I color the matrix based on mesh connectivity and variable dependencies, which is not bad, but I am just trying to be lazy and eliminate even this part.
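[Editorial note: the finite-differencing-plus-coloring workflow discussed here can be wired up through PETSc's built-in machinery. A sketch, assuming the Jacobian matrix `J` has already been preallocated with the correct nonzero pattern (that pattern is exactly what the coloring is computed from); `snes` and the residual function are assumed to exist already.]

```c
#include <petscsnes.h>

/* Sketch: once the sparsity pattern of J is set, PETSc can color it and
 * assemble a finite-difference Jacobian automatically.
 * SNESComputeJacobianDefaultColor colors J from its nonzero pattern and
 * evaluates the Jacobian by finite differences, one color at a time. */
static PetscErrorCode UseColoredFDJacobian(SNES snes, Mat J)
{
  PetscFunctionBeginUser;
  PetscCall(SNESSetJacobian(snes, J, J, SNESComputeJacobianDefaultColor, NULL));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

Roughly the same behavior is available from the command line via the `-snes_fd_color` option.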
This is how the automatic frameworks also work. This is how we compute the sparsity pattern for PetscFE and PetscFV.

A related but different question: how much does PETSc support automatic differentiation? I see some old paper: https://ftp.mcs.anl.gov/pub/tech_reports/reports/P922.pdf and discussion in the roadmap: https://urldefense.us/v3/__https://petsc.org/release/community/roadmap/__;!!G_uCfscf7eWS!cMXnAlSzQJa8lo5JBEpmUizoHds-gACnH-ecvwbHQpvuta1pc1NbtArflhZa6Td7oV1qIFEndu5eX9P1yjtgW1TJuw$ I am thinking that if AD works, I won't even need to do the finite-differencing Jacobian, or could have it as another option.

Other people understand that better than I do.

Thanks, Matt

Best, -Ling

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cMXnAlSzQJa8lo5JBEpmUizoHds-gACnH-ecvwbHQpvuta1pc1NbtArflhZa6Td7oV1qIFEndu5eX9P1yjtIbKZadg$

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lzou at anl.gov Sat Jan 18 10:48:48 2025
From: lzou at anl.gov (Zou, Ling)
Date: Sat, 18 Jan 2025 16:48:48 +0000
Subject: [petsc-users] Auto sparsity detection?
In-Reply-To: References: Message-ID:

Thank you, both Hong and Matt.

-Ling

From: Zhang, Hong
Date: Friday, January 17, 2025 at 12:34 PM
To: Matthew Knepley, Zou, Ling
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] Auto sparsity detection?

We have an example in src/ts/tutorials/autodiff on using AD for reaction-diffusion equations.
It does exactly what Matt said - differentiating the stencil kernel to get the Jacobian kernel. More information is available in this report: https://urldefense.us/v3/__https://arxiv.org/abs/1909.02836__;!!G_uCfscf7eWS!YAjJAY_-Po-R_8afPXjkPNIMrtGRpuOyrIvGq4kmS_GIk9E44GkBWYmWA38JPV_6BtwsHl2enTCgdAxzrgQ$

Hong (Mr.)
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YAjJAY_-Po-R_8afPXjkPNIMrtGRpuOyrIvGq4kmS_GIk9E44GkBWYmWA38JPV_6BtwsHl2enTCg4JqU_GI$

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From daniel.stone at opengosim.com Mon Jan 20 06:54:41 2025
From: daniel.stone at opengosim.com (Daniel Stone)
Date: Mon, 20 Jan 2025 12:54:41 +0000
Subject: [petsc-users] Bug report: using a custom ksp convergence function
Message-ID:

Hello PETSc Community,

I think I've found a bug -

Go to $PETSC_DIR/src/ksp/ksp/tutorials/, open ex5f.F90, and add the following line in MyKSPConverged(), say around line 336:

call KSPConvergedDefault(ksp,n,rnorm,flag,dummy,ierr)

It should make sense why someone would want to do this - within a definition of custom convergence behaviour, get the default convergence flags, and, based on certain conditions, overwrite it.

Now, building and running the example, making sure to include the flag to use the custom convergence:

boston at boston-SYS-540A-TR:~/DATA_DRIVE/PETSC_DIRS/petsc_3.22/src/ksp/ksp/tutorials$ ./ex2f -my_ksp_convergence
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range[...]

From my debugging, the issue is at line 1520 of iterativ.c:

KSPConvergedDefaultCtx *cctx = (KSPConvergedDefaultCtx *)ctx;

Because a dummy is used for the context object, this cast fails (my debugger says "cannot access memory..." for cctx). Then the crash happens when trying to access members of cctx:

if (cctx->convmaxits && n >= ksp->max_it) {

I'm not sure how to fix it. Crucially, we must note that this did work in older versions of PETSc - I can repeat this test in 3.19.1, for example, and it works fine - a debugger shows the cast succeeding and producing some defaulted version of the ctx object, the subsequent crash does not happen, etc.
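[Editorial note: based on the analysis above (a dummy ctx being cast to KSPConvergedDefaultCtx), one way to make such a custom test safe on recent PETSc is to create a real default context instead of passing a dummy, so the cast inside KSPConvergedDefault() sees valid memory. A sketch in C rather than Fortran; the function names `MyKSPConverged` and `InstallMyConvergenceTest` and the override condition are hypothetical.]

```c
#include <petscksp.h>

/* Hypothetical custom convergence test that consults the default test
 * first and may override its verdict. The ctx passed in must be a genuine
 * KSPConvergedDefault context so the cast inside KSPConvergedDefault()
 * is valid. */
static PetscErrorCode MyKSPConverged(KSP ksp, PetscInt n, PetscReal rnorm, KSPConvergedReason *reason, void *ctx)
{
  PetscFunctionBeginUser;
  PetscCall(KSPConvergedDefault(ksp, n, rnorm, reason, ctx));
  /* Example override condition (hypothetical): force at least 2 iterations */
  if (*reason == KSP_CONVERGED_RTOL && n < 2) *reason = KSP_CONVERGED_ITERATING;
  PetscFunctionReturn(PETSC_SUCCESS);
}

static PetscErrorCode InstallMyConvergenceTest(KSP ksp)
{
  void *defaultctx;

  PetscFunctionBeginUser;
  /* Create the default context instead of passing 0/dummy */
  PetscCall(KSPConvergedDefaultCreate(&defaultctx));
  /* The destroy callback lets KSP free the context when it is done */
  PetscCall(KSPSetConvergenceTest(ksp, MyKSPConverged, defaultctx, KSPConvergedDefaultDestroy));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```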
Does anybody have any advice for working around this for the time being? A piece of software I work with uses a custom ksp convergence test in the manner described above, and is only functional with older versions of petsc because of this. Like in ex2f, I have no need of the ctx object, and just as in ex2f, I use 0 instead.

Things I'll try next: defining a proper ctx object (I use 0, like in ex2f), or looking for some PETSC_NULL_CTX definition somewhere.

Many Thanks,

Daniel

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at petsc.dev Mon Jan 20 09:19:51 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Mon, 20 Jan 2025 11:19:51 -0400
Subject: [petsc-users] Bug report: using a custom ksp convergence function
In-Reply-To: References: Message-ID: <94CE8AE8-9BA9-4D50-96FC-770F193452C2@petsc.dev>

David,

KSPConvergedDefault() now takes a non-NULL context that must be provided. I have attached a modified version of ex5f.F90 that demonstrates how it is constructed before being passed to KSPSetConvergenceTest().

Barry

-------------- next part --------------
A non-text attachment was scrubbed...
Name: ex5f.F90
Type: application/octet-stream
Size: 16786 bytes
Desc: not available
URL:
-------------- next part --------------

> On Jan 20, 2025, at 8:54 AM, Daniel Stone wrote:
>
> Hello PETSc Community,
>
> I think I've found a bug -
>
> Go to $PETSC_DIR/src/ksp/ksp/tutorials/
>
> open ex5f.F90, and add the following line in MyKSPConverged(), say around line 336:
>
> call KSPConvergedDefault(ksp,n,rnorm,flag,dummy,ierr)
>
> It should make sense why someone would want to do this - within a definition of custom convergence behaviour, get the default convergence flags, and, based on certain conditions, overwrite it.
> Now, building and running the example, making sure to include the flag to use the custom convergence:
>
> boston at boston-SYS-540A-TR:~/DATA_DRIVE/PETSC_DIRS/petsc_3.22/src/ksp/ksp/tutorials$ ./ex2f -my_ksp_convergence
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range[...]
>
> From my debugging, the issue is at line 1520 of iterativ.c:
>
> KSPConvergedDefaultCtx *cctx = (KSPConvergedDefaultCtx *)ctx;
>
> Because a dummy is used for the context object, this cast fails (my debugger says "cannot access memory..." for cctx). Then the crash happens when trying to access members of cctx:
>
> if (cctx->convmaxits && n >= ksp->max_it) {
>
> I'm not sure how to fix it. Crucially, we must note that this did work in older versions of PETSc - I can repeat this test in 3.19.1, for example, and it works fine - a debugger shows the cast succeeding and producing some defaulted version of the ctx object, the subsequent crash does not happen, etc.
>
> Does anybody have any advice for working around this for the time being? A piece of software I work with uses a custom ksp convergence test in the manner described above, and is only functional with older versions of petsc because of this.
> Like in ex2f, I have no need of the ctx object, and just as in ex2f, I use 0 instead.
>
> Things I'll try next: defining a proper ctx object (I use 0, like in ex2f), or looking for some PETSC_NULL_CTX definition somewhere.
>
> Many Thanks,
>
> Daniel

From lzou at anl.gov Mon Jan 20 09:27:22 2025
From: lzou at anl.gov (Zou, Ling)
Date: Mon, 20 Jan 2025 15:27:22 +0000
Subject: [petsc-users] Vector 'Vec_0x84000002_0' (argument #1) was locked for read-only access
Message-ID:

Hi all, I updated PETSc to 3.22.1, and now my code fails with the following error message. Any idea why this may happen?
In the entire code, VecGetArray() is used in the following places:

void PETScApp::setupPETScIC()
{
  PetscScalar *uu;
  VecGetArray(u, &uu);
  _sim->setupPETScIC(uu);
  VecRestoreArray(u, &uu);
}

PetscErrorCode snesFormFunction(SNES /*snes*/, Vec u, Vec f, void * appCtx)
{
  PETScApp * petscApp = (PETScApp *)appCtx;

  // zero out residuals
  VecZeroEntries(petscApp->res_tran);
  VecZeroEntries(petscApp->res_spatial);

  // get vectors
  PetscScalar *uu, *res_tran, *res_spatial;
  VecGetArray(u, &uu);
  VecGetArray(petscApp->res_tran, &res_tran);
  VecGetArray(petscApp->res_spatial, &res_spatial);

  // use the most updated solution vector to update solution, to compute RHS and transient residuals
  petscApp->_sim->updateSolutions(uu);
  petscApp->_sim->updateAuxVariables();
  petscApp->_sim->computeTranRes(res_tran);
  petscApp->_sim->computeSpatialRes(res_spatial);

  // restore vectors
  VecRestoreArray(u, &uu);
  VecRestoreArray(petscApp->res_tran, &res_tran);
  VecRestoreArray(petscApp->res_spatial, &res_spatial);

  // assemble final residuals: f = transient + spatial
  VecWAXPY(f, 1.0, petscApp->res_tran, petscApp->res_spatial);

  PetscFunctionReturn(PETSC_SUCCESS);
}

PS: PETSc is provided by the moose env.

-Ling

Time Step 1, time = 0.1, dt = 0.1
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Object is in wrong state
[0]PETSC ERROR: Vector 'Vec_0x84000002_0' (argument #1) was locked for read-only access in unknown_function() at unknown file:0 (line numbers only accurate to function begin)
[0]PETSC ERROR: WARNING! There are unused option(s) set! Could be the program crashed before usage or a spelling mistake, etc!
[0]PETSC ERROR: Option left: name:-i value: tests/hc1d.i source: command line
[0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!afeyp9qigjd3lpaTTPGFhRudAAw-yvYL974yeq5bR6xXsoGjkjeuvOHT8IxSXBtbGT-ULg-27kGV3D9K5Mo$ for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: unknown GIT Date: unknown [0]PETSC ERROR: ./open_phase-opt with 1 MPI process(es) and PETSC_ARCH on CSI365324.local by lingzou Mon Jan 20 09:20:20 2025 [0]PETSC ERROR: Configure options: --with-64-bit-indices --with-cxx-dialect=C++17 --with-debugging=no --with-fortran-bindings=0 --with-mpi=1 --with-openmp=1 --with-strict-petscerrorcode=1 --with-shared-libraries=1 --with-sowing=0 --download-fblaslapack=1 --download-hpddm=1 --download-hypre=1 --download-metis=1 --download-mumps=1 --download-ptscotch=1 --download-parmetis=1 --download-scalapack=1 --download-slepc=1 --download-strumpack=1 --download-superlu_dist=1 --with-hdf5-dir=/Users/lingzou/miniforge/envs/moose --with-make-np=8 --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --with-x=0 --with-ssl=0 --with-mpi-dir=/Users/lingzou/miniforge/envs/moose AR=arm64-apple-darwin20.0.0-ar RANLIB=arm64-apple-darwin20.0.0-ranlib CFLAGS="-ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -isystem /Users/lingzou/miniforge/envs/moose/include -fdebug-prefix-map=/opt/civet0/build_0/_env/conda-bld/moose-mpi_1735973237402/work=/usr/local/src/conda/moose-mpi-base-2024.12.23 -fdebug-prefix-map=/Users/lingzou/miniforge/envs/moose=/usr/local/src/conda-prefix " CXXFLAGS="-ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -stdlib=libc++ -fvisibility-inlines-hidden -fmessage-length=0 -isystem /Users/lingzou/miniforge/envs/moose/include -fdebug-prefix-map=/opt/civet0/build_0/_env/conda-bld/moose-mpi_1735973237402/work=/usr/local/src/conda/moose-mpi-base-2024.12.23 -fdebug-prefix-map=/Users/lingzou/miniforge/envs/moose=/usr/local/src/conda-prefix " CPPFLAGS="-D_FORTIFY_SOURCE=2 -isystem /Users/lingzou/miniforge/envs/moose/include -mmacosx-version-min=11.3 " FFLAGS="-march=armv8.3-a -ftree-vectorize -fPIC -fno-stack-protector -O2 -pipe -isystem /Users/lingzou/miniforge/envs/moose/include 
-fdebug-prefix-map=/opt/civet0/build_0/_env/conda-bld/moose-mpi_1735973237402/work=/usr/local/src/conda/moose-mpi-base-2024.12.23 -fdebug-prefix-map=/Users/lingzou/miniforge/envs/moose=/usr/local/src/conda-prefix " FCFLAGS="-march=armv8.3-a -ftree-vectorize -fPIC -fno-stack-protector -O2 -pipe -isystem /Users/lingzou/miniforge/envs/moose/include -fdebug-prefix-map=/opt/civet0/build_0/_env/conda-bld/moose-mpi_1735973237402/work=/usr/local/src/conda/moose-mpi-base-2024.12.23 -fdebug-prefix-map=/Users/lingzou/miniforge/envs/moose=/usr/local/src/conda-prefix " LDFLAGS="-Wl,-headerpad_max_install_names -Wl,-rpath,/Users/lingzou/miniforge/envs/moose/lib -L/Users/lingzou/miniforge/envs/moose/lib -Wl,-ld_classic -Wl,-commons,use_dylibs" --prefix=/Users/lingzou/miniforge/envs/moose/petsc [0]PETSC ERROR: #1 VecSetErrorIfLocked() at /opt/civet0/build_0/_env/conda-bld/moose-petsc_1735976555173/work/include/petscvec.h:649 [0]PETSC ERROR: #2 VecGetArray() at /opt/civet0/build_0/_env/conda-bld/moose-petsc_1735976555173/work/src/vec/vec/interface/rvector.c:2020 NL Step = 0, fnorm = 1.10935E+30 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lzou at anl.gov Mon Jan 20 12:02:04 2025 From: lzou at anl.gov (Zou, Ling) Date: Mon, 20 Jan 2025 18:02:04 +0000 Subject: [petsc-users] Vector 'Vec_0x84000002_0' (argument #1) was locked for read-only access In-Reply-To: References: Message-ID: I figure that it is most likely related to using VecGetArrayRead/VecRestoreArrayRead in one of the places. I will update later. -Ling From: petsc-users on behalf of Zou, Ling via petsc-users Date: Monday, January 20, 2025 at 9:27?AM To: petsc-users at mcs.anl.gov Subject: [petsc-users] Vector 'Vec_0x84000002_0' (argument #1) was locked for read-only access Hi all, I updated PETSc to 3.22.1, and now my code run with the following error message. Any idea this may happen? 
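Ling's diagnosis above is consistent with a behavior of recent PETSc releases: the solution vector handed to SNES residual callbacks is locked read-only, so calling VecGetArray() on it triggers VecSetErrorIfLocked(). A hedged sketch of the likely fix for the residual function shown above, switching to the read-only accessors (assumptions: `PETScApp` and `_sim` are the poster's own types, and `updateSolutions()` can be changed to accept a `const PetscScalar *`):

```c
PetscErrorCode snesFormFunction(SNES /*snes*/, Vec u, Vec f, void *appCtx)
{
  PETScApp          *petscApp = (PETScApp *)appCtx;
  const PetscScalar *uu;  /* read-only view of the locked input vector */
  PetscScalar       *res_tran, *res_spatial;

  /* zero out residuals */
  VecZeroEntries(petscApp->res_tran);
  VecZeroEntries(petscApp->res_spatial);

  /* was VecGetArray(u, &uu), which trips the read-only lock on u */
  VecGetArrayRead(u, &uu);
  VecGetArray(petscApp->res_tran, &res_tran);
  VecGetArray(petscApp->res_spatial, &res_spatial);

  petscApp->_sim->updateSolutions(uu);  /* callee must take const PetscScalar * */
  petscApp->_sim->updateAuxVariables();
  petscApp->_sim->computeTranRes(res_tran);
  petscApp->_sim->computeSpatialRes(res_spatial);

  VecRestoreArrayRead(u, &uu);
  VecRestoreArray(petscApp->res_tran, &res_tran);
  VecRestoreArray(petscApp->res_spatial, &res_spatial);

  /* assemble final residuals: f = transient + spatial */
  VecWAXPY(f, 1.0, petscApp->res_tran, petscApp->res_spatial);
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

The same substitution applies anywhere the code only reads a vector; `setupPETScIC()` writes into `u`, so it should keep VecGetArray(). (This sketch requires PETSc to compile and is not runnable standalone.)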
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From daniel.stone at opengosim.com Tue Jan 21 03:25:28 2025
From: daniel.stone at opengosim.com (Daniel Stone)
Date: Tue, 21 Jan 2025 09:25:28 +0000
Subject: [petsc-users] Bug report: using a custom ksp convergence function
In-Reply-To: <94CE8AE8-9BA9-4D50-96FC-770F193452C2@petsc.dev>
References: <94CE8AE8-9BA9-4D50-96FC-770F193452C2@petsc.dev>
Message-ID: That works wonderfully, thanks! Should be paired with call KSPConvergedDefaultDestroy(ctx,ierr) I think. Best Regards, Daniel On Mon, Jan 20, 2025 at 3:20?PM Barry Smith wrote: > > David, > > KSPConvergedDefault() now takes a non-NULL context that must be > provided. I have attached a modified version of ex5f.F90 that demonstrates > how it is constructured before being passed to KSPSetConvergenceTest().
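For reference, the pattern Barry describes, creating the default-convergence context once and handing it plus its destructor to the KSP, looks like the following in C (a hedged sketch, not the attached ex5f.F90: `MyKSPConverged` and its override condition are illustrative only):

```c
static PetscErrorCode MyKSPConverged(KSP ksp, PetscInt n, PetscReal rnorm, KSPConvergedReason *reason, void *ctx)
{
  PetscFunctionBeginUser;
  /* ctx must be a context from KSPConvergedDefaultCreate(); passing 0/NULL
     is what segfaults in newer PETSc when cctx members are dereferenced */
  PetscCall(KSPConvergedDefault(ksp, n, rnorm, reason, ctx));
  /* then override based on custom criteria, e.g.: */
  if (*reason == KSP_CONVERGED_ITERATING && n >= 30) *reason = KSP_CONVERGED_ITS;
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* at setup time: */
void *cctx;
PetscCall(KSPConvergedDefaultCreate(&cctx));
PetscCall(KSPSetConvergenceTest(ksp, MyKSPConverged, cctx, KSPConvergedDefaultDestroy));
/* the KSP now owns cctx and calls KSPConvergedDefaultDestroy() on it
   when the test is replaced or the KSP is destroyed */
```

(This sketch requires PETSc to compile and is not runnable standalone; the Fortran calls in ex5f.F90 mirror these with a trailing ierr argument.)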
> > Barry > > > > On Jan 20, 2025, at 8:54?AM, Daniel Stone > wrote: > > > > Hello PETSc Community, > > > > I think I've found a bug - > > > > Go to $PETSC_DIR/src/ksp/ksp/tutorials/ > > > > open ex5f.F90, and add the following line in MyKSPConverged(), say > around line 336: > > > > call KSPConvergedDefault(ksp,n,rnorm,flag,dummy,ierr) > > > > It should make sense why someone would want to do this - within a > definition of custom convergence behaviour, get the default convergence > flags, and, based on certain conditions, overwrite it. > > > > Now, building and running the exercise, making sure to include the flag > to use the custom convergence: > > > > boston at boston-SYS-540A-TR:~/DATA_DRIVE/PETSC_DIRS/petsc_3.22/src/ksp/ksp/tutorials$ > ./ex2f -my_ksp_convergence > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range[...] > > > > From my debugging, the issue is at line 1520 of iterativ.c: > > > > KSPConvergedDefaultCtx *cctx = (KSPConvergedDefaultCtx *)ctx; > > > > Because a dummy is used for the context object, this cast fails (my > debugger says "cannot access memory..." for cctx). Then the crash happens > when trying to access members of cctx: > > > > if (cctx->convmaxits && n >= ksp->max_it) { > > > > > > I'm not sure how to fix it. Crucially, we must note that that this did > work in older versions of PETSc - I can repeat this test in 3.19.1, for > example, and it works fine - a debugger shows the cast succeeding and > producing some > > defaulted version of the ctx object, the subsequent crash does not > happen, etc. > > > > Does anybody have any advice for working around this for the time being? > A piece of software I work with uses a custom ksp convergence test in the > manner described above, and is only functional with older versions of petsc > because of this. 
> > Like in ex2f, I have no need of the ctx object, and just as in ex2f, I > use 0 instead. > > > > Things I'll try next: defining a proper ctx object (I use 0, like in > ex2f), or looking for some PETSC_NULL_CTX definition somewhere. > > > > Many Thanks, > > > > Daniel > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Jan 21 07:16:13 2025 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 21 Jan 2025 08:16:13 -0500 Subject: [petsc-users] Bug report: using a custom ksp convergence function In-Reply-To: References: <94CE8AE8-9BA9-4D50-96FC-770F193452C2@petsc.dev> Message-ID: Yes, that can be passed in also with KSPSetConvergenceTest() > On Jan 21, 2025, at 4:25?AM, Daniel Stone wrote: > > That works wonderfully, thanks! Should be paired with > > call KSPConvergedDefaultDestroy(ctx,ierr) > > I think. > > Best Regards, > > Daniel > > > > On Mon, Jan 20, 2025 at 3:20?PM Barry Smith > wrote: >> >> David, >> >> KSPConvergedDefault() now takes a non-NULL context that must be provided. I have attached a modified version of ex5f.F90 that demonstrates how it is constructured before being passed to KSPSetConvergenceTest(). >> >> Barry >> >> >> > On Jan 20, 2025, at 8:54?AM, Daniel Stone > wrote: >> > >> > Hello PETSc Community, >> > >> > I think I've found a bug - >> > >> > Go to $PETSC_DIR/src/ksp/ksp/tutorials/ >> > >> > open ex5f.F90, and add the following line in MyKSPConverged(), say around line 336: >> > >> > call KSPConvergedDefault(ksp,n,rnorm,flag,dummy,ierr) >> > >> > It should make sense why someone would want to do this - within a definition of custom convergence behaviour, get the default convergence flags, and, based on certain conditions, overwrite it. 
>> > >> > Now, building and running the exercise, making sure to include the flag to use the custom convergence: >> > >> > boston at boston-SYS-540A-TR:~/DATA_DRIVE/PETSC_DIRS/petsc_3.22/src/ksp/ksp/tutorials$ ./ex2f -my_ksp_convergence >> > [0]PETSC ERROR: ------------------------------------------------------------------------ >> > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range[...] >> > >> > From my debugging, the issue is at line 1520 of iterativ.c: >> > >> > KSPConvergedDefaultCtx *cctx = (KSPConvergedDefaultCtx *)ctx; >> > >> > Because a dummy is used for the context object, this cast fails (my debugger says "cannot access memory..." for cctx). Then the crash happens when trying to access members of cctx: >> > >> > if (cctx->convmaxits && n >= ksp->max_it) { >> > >> > >> > I'm not sure how to fix it. Crucially, we must note that that this did work in older versions of PETSc - I can repeat this test in 3.19.1, for example, and it works fine - a debugger shows the cast succeeding and producing some >> > defaulted version of the ctx object, the subsequent crash does not happen, etc. >> > >> > Does anybody have any advice for working around this for the time being? A piece of software I work with uses a custom ksp convergence test in the manner described above, and is only functional with older versions of petsc because of this. >> > Like in ex2f, I have no need of the ctx object, and just as in ex2f, I use 0 instead. >> > >> > Things I'll try next: defining a proper ctx object (I use 0, like in ex2f), or looking for some PETSC_NULL_CTX definition somewhere. >> > >> > Many Thanks, >> > >> > Daniel >> > >> > >> > >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From daniel.stone at opengosim.com Tue Jan 21 08:45:48 2025
From: daniel.stone at opengosim.com (Daniel Stone)
Date: Tue, 21 Jan 2025 14:45:48 +0000
Subject: [petsc-users] Bug report: using a custom ksp convergence function
In-Reply-To: References: <94CE8AE8-9BA9-4D50-96FC-770F193452C2@petsc.dev>
Message-ID: Is there a quick way for me to find out which version of PETSc KSPConvergedDefaultCreate() was introduced with? I'll likely need to conditionally pragma-out these calls for back compatibility.
Thanks, Daniel On Tue, Jan 21, 2025 at 1:16?PM Barry Smith wrote: > > Yes, that can be passed in also with KSPSetConvergenceTest() > > > On Jan 21, 2025, at 4:25?AM, Daniel Stone > wrote: > > That works wonderfully, thanks! Should be paired with > > call KSPConvergedDefaultDestroy(ctx,ierr) > > I think. > > Best Regards, > > Daniel > > > > On Mon, Jan 20, 2025 at 3:20?PM Barry Smith wrote: > >> >> David, >> >> KSPConvergedDefault() now takes a non-NULL context that must be >> provided. I have attached a modified version of ex5f.F90 that demonstrates >> how it is constructured before being passed to KSPSetConvergenceTest(). >> >> Barry >> >> >> > On Jan 20, 2025, at 8:54?AM, Daniel Stone >> wrote: >> > >> > Hello PETSc Community, >> > >> > I think I've found a bug - >> > >> > Go to $PETSC_DIR/src/ksp/ksp/tutorials/ >> > >> > open ex5f.F90, and add the following line in MyKSPConverged(), say >> around line 336: >> > >> > call KSPConvergedDefault(ksp,n,rnorm,flag,dummy,ierr) >> > >> > It should make sense why someone would want to do this - within a >> definition of custom convergence behaviour, get the default convergence >> flags, and, based on certain conditions, overwrite it. >> > >> > Now, building and running the exercise, making sure to include the flag >> to use the custom convergence: >> > >> > boston at boston-SYS-540A-TR:~/DATA_DRIVE/PETSC_DIRS/petsc_3.22/src/ksp/ksp/tutorials$ >> ./ex2f -my_ksp_convergence >> > [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range[...] >> > >> > From my debugging, the issue is at line 1520 of iterativ.c: >> > >> > KSPConvergedDefaultCtx *cctx = (KSPConvergedDefaultCtx *)ctx; >> > >> > Because a dummy is used for the context object, this cast fails (my >> debugger says "cannot access memory..." for cctx). 
Then the crash happens >> when trying to access members of cctx: >> > >> > if (cctx->convmaxits && n >= ksp->max_it) { >> > >> > >> > I'm not sure how to fix it. Crucially, we must note that that this did >> work in older versions of PETSc - I can repeat this test in 3.19.1, for >> example, and it works fine - a debugger shows the cast succeeding and >> producing some >> > defaulted version of the ctx object, the subsequent crash does not >> happen, etc. >> > >> > Does anybody have any advice for working around this for the time >> being? A piece of software I work with uses a custom ksp convergence test >> in the manner described above, and is only functional with older versions >> of petsc because of this. >> > Like in ex2f, I have no need of the ctx object, and just as in ex2f, I >> use 0 instead. >> > >> > Things I'll try next: defining a proper ctx object (I use 0, like in >> ex2f), or looking for some PETSC_NULL_CTX definition somewhere. >> > >> > Many Thanks, >> > >> > Daniel >> > >> > >> > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jan 21 13:45:13 2025 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 21 Jan 2025 14:45:13 -0500 Subject: [petsc-users] Bug report: using a custom ksp convergence function In-Reply-To: References: <94CE8AE8-9BA9-4D50-96FC-770F193452C2@petsc.dev> Message-ID: On Tue, Jan 21, 2025 at 9:46?AM Daniel Stone wrote: > Is there a quick way for me to find out which version of > PETSc KSPConvergedDefaultCreate() was introduced with? > I'll likely need to conditionally pragma-out these calls for back > compatibility. 
> We have changes docs: https://urldefense.us/v3/__https://petsc.org/main/changes/__;!!G_uCfscf7eWS!cAwR4aLIcKH3Pn0R90bQMD51mNRpiPcM7Ic_X9IQC6ANOcmJIekcW1TYwAdodLUkR1pyNfHatMuaoZiRhrFA$ main *$:~/PETSc4/petsc/petsc-pylith$ find doc/changes/ | xargs grep KSPConvergedDefaultCreate doc/changes//35.rst: ``KSPConvergedDefaultCreate()``, Thanks, Matt Thanks, > > Daniel > > On Tue, Jan 21, 2025 at 1:16?PM Barry Smith wrote: > >> >> Yes, that can be passed in also with KSPSetConvergenceTest() >> >> >> On Jan 21, 2025, at 4:25?AM, Daniel Stone >> wrote: >> >> That works wonderfully, thanks! Should be paired with >> >> call KSPConvergedDefaultDestroy(ctx,ierr) >> >> I think. >> >> Best Regards, >> >> Daniel >> >> >> >> On Mon, Jan 20, 2025 at 3:20?PM Barry Smith wrote: >> >>> >>> David, >>> >>> KSPConvergedDefault() now takes a non-NULL context that must be >>> provided. I have attached a modified version of ex5f.F90 that demonstrates >>> how it is constructured before being passed to KSPSetConvergenceTest(). >>> >>> Barry >>> >>> >>> > On Jan 20, 2025, at 8:54?AM, Daniel Stone >>> wrote: >>> > >>> > Hello PETSc Community, >>> > >>> > I think I've found a bug - >>> > >>> > Go to $PETSC_DIR/src/ksp/ksp/tutorials/ >>> > >>> > open ex5f.F90, and add the following line in MyKSPConverged(), say >>> around line 336: >>> > >>> > call KSPConvergedDefault(ksp,n,rnorm,flag,dummy,ierr) >>> > >>> > It should make sense why someone would want to do this - within a >>> definition of custom convergence behaviour, get the default convergence >>> flags, and, based on certain conditions, overwrite it. 
>>> > >>> > Now, building and running the exercise, making sure to include the >>> flag to use the custom convergence: >>> > >>> > boston at boston-SYS-540A-TR:~/DATA_DRIVE/PETSC_DIRS/petsc_3.22/src/ksp/ksp/tutorials$ >>> ./ex2f -my_ksp_convergence >>> > [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>> probably memory access out of range[...] >>> > >>> > From my debugging, the issue is at line 1520 of iterativ.c: >>> > >>> > KSPConvergedDefaultCtx *cctx = (KSPConvergedDefaultCtx *)ctx; >>> > >>> > Because a dummy is used for the context object, this cast fails (my >>> debugger says "cannot access memory..." for cctx). Then the crash happens >>> when trying to access members of cctx: >>> > >>> > if (cctx->convmaxits && n >= ksp->max_it) { >>> > >>> > >>> > I'm not sure how to fix it. Crucially, we must note that that this did >>> work in older versions of PETSc - I can repeat this test in 3.19.1, for >>> example, and it works fine - a debugger shows the cast succeeding and >>> producing some >>> > defaulted version of the ctx object, the subsequent crash does not >>> happen, etc. >>> > >>> > Does anybody have any advice for working around this for the time >>> being? A piece of software I work with uses a custom ksp convergence test >>> in the manner described above, and is only functional with older versions >>> of petsc because of this. >>> > Like in ex2f, I have no need of the ctx object, and just as in ex2f, I >>> use 0 instead. >>> > >>> > Things I'll try next: defining a proper ctx object (I use 0, like in >>> ex2f), or looking for some PETSC_NULL_CTX definition somewhere. >>> > >>> > Many Thanks, >>> > >>> > Daniel >>> > >>> > >>> > >>> >>> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cAwR4aLIcKH3Pn0R90bQMD51mNRpiPcM7Ic_X9IQC6ANOcmJIekcW1TYwAdodLUkR1pyNfHatMuaoR6gbnMD$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bramkamp at nsc.liu.se Wed Jan 22 14:17:47 2025 From: bramkamp at nsc.liu.se (Frank Bramkamp) Date: Wed, 22 Jan 2025 21:17:47 +0100 Subject: [petsc-users] User Defined KSP Method in Fortran Message-ID: <5ED9CE0C-C8A9-4864-9092-5ECB4CAAC559@nsc.liu.se> Dear PETSc team, I was planning to program a custom KSP method, some modified GMRES. We mainly use PETSc from Fortran. Therefore I wonder it is possible to have an interface to a custom KSP solver that is written in fortran. I thought of using KSPRegister to register my own routine, but that seems only available in C. Or is it possible to have a fortran/C wrapper to do that ? Thanks, Frank Bramkamp From knepley at gmail.com Wed Jan 22 16:11:35 2025 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 22 Jan 2025 17:11:35 -0500 Subject: [petsc-users] User Defined KSP Method in Fortran In-Reply-To: <5ED9CE0C-C8A9-4864-9092-5ECB4CAAC559@nsc.liu.se> References: <5ED9CE0C-C8A9-4864-9092-5ECB4CAAC559@nsc.liu.se> Message-ID: On Wed, Jan 22, 2025 at 3:18?PM Frank Bramkamp wrote: > Dear PETSc team, > > I was planning to program a custom KSP method, some modified GMRES. > We mainly use PETSc from Fortran. Therefore I wonder it is possible > to have an interface to a custom KSP solver that is written in fortran. > > I thought of using KSPRegister to register my own routine, but that seems > only > available in C. Or is it possible to have a fortran/C wrapper to do that ? > We have wrappers for other functions that take callbacks, such as SNESSetFunction(). What we need to do is have a list of Fortran function pointers for this method. 
They when you register, we actually stick in a C wrapper that calls your Fortran function pointer that we have stored in our list. It should be straightforward looking at the implementation for something like SNESSetFunction(). We would help if you want to try :) Barry, is this impacted by your binding rewrite? Thanks, Matt > Thanks, Frank Bramkamp > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eRXZvRoV2JfqzeOdQlhP6UA71kWfULNX_F1C0-Fer5IItdUKkmstwIO3N1VrmApHJYGGisuS6EyybVTqmo4R$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Wed Jan 22 21:58:29 2025 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 22 Jan 2025 22:58:29 -0500 Subject: [petsc-users] User Defined KSP Method in Fortran In-Reply-To: References: <5ED9CE0C-C8A9-4864-9092-5ECB4CAAC559@nsc.liu.se> Message-ID: <3D93C752-F929-44E9-A630-356EF2F9A35E@petsc.dev> I think it is best to code the modified GMRES method in C; likely, much of the current GMRES code could be reused. We'd be happy to help incorporate it into PETSc. Barry > On Jan 22, 2025, at 5:11?PM, Matthew Knepley wrote: > > On Wed, Jan 22, 2025 at 3:18?PM Frank Bramkamp > wrote: >> Dear PETSc team, >> >> I was planning to program a custom KSP method, some modified GMRES. >> We mainly use PETSc from Fortran. Therefore I wonder it is possible >> to have an interface to a custom KSP solver that is written in fortran. >> >> I thought of using KSPRegister to register my own routine, but that seems only >> available in C. Or is it possible to have a fortran/C wrapper to do that ? > > We have wrappers for other functions that take callbacks, such as SNESSetFunction(). What > we need to do is have a list of Fortran function pointers for this method. 
They when you > register, we actually stick in a C wrapper that calls your Fortran function pointer that we have > stored in our list. It should be straightforward looking at the implementation for something like > SNESSetFunction(). We would help if you want to try :) > > Barry, is this impacted by your binding rewrite? > > Thanks, > > Matt > >> Thanks, Frank Bramkamp >> >> >> >> >> > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fZ-M6nCWDDSrX42HCG2Hd_eMWEdTWiNP56U7piFnAZP-_DkwvSartT6C_Ioe5Aay6Q2Ts_7l_vkJQSAUzZxWHPo$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Thu Jan 23 02:16:33 2025 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Thu, 23 Jan 2025 08:16:33 +0000 Subject: [petsc-users] fortran TYPE IS statement vs petsc index set IS Message-ID: In fortran I'm using the following structure to check the type of an incoming variable: SELECT TYPE (myvar) TYPE IS (mytype) ... END SELECT Here IS is a fortan intrinsic, so far so good. However, when I add a petsc index set as follows #include "petsc/finclude/petscksp.h" use petscksp, only: tIS IS :: myIS the compiler gets confused and thinks that the intrinsic fortran IS is the petsc index set IS, and errors-out on the SELECT TYPE: error #8245: SELECT TYPE statement must be immediately followed by CLASS IS, TYPE IS, CLASS DEFAULT or END SELECT statement. SELECT TYPE (myvar) ----^ error #6410: This name has not been declared as an array or a function. [TYPE] TYPE type(tIS) (mytype) ---------^ compilation aborted What would be the right way to deal with this problem? dr. ir. 
Christiaan Klaij | Senior Researcher | Research & Development
T +31 317 49 33 44 | C.Klaij at marin.nl | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!ci7RiI8WEqh81becsu6CMRqmK1It91JWMStWzWcFLARdy0n8d2WiqmINXWd-0992Ex6wcTfqupvy9nnMG_F6cHw$
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From bramkamp at nsc.liu.se Thu Jan 23 03:18:51 2025
From: bramkamp at nsc.liu.se (Frank Bramkamp)
Date: Thu, 23 Jan 2025 10:18:51 +0100
Subject: [petsc-users] User Defined KSP Method in Fortran
In-Reply-To: <3D93C752-F929-44E9-A630-356EF2F9A35E@petsc.dev>
References: <5ED9CE0C-C8A9-4864-9092-5ECB4CAAC559@nsc.liu.se> <3D93C752-F929-44E9-A630-356EF2F9A35E@petsc.dev>
Message-ID: <287AFB0E-58D6-415D-B93A-51619FA5D69B@nsc.liu.se>

Thanks for the quick response, so it seems that there is currently no straightforward way to add Fortran solvers to PETSc. I will also have a look at how to add a new solver in the PETSc source code.

I wanted to test some ideas about a modified approach to Gram-Schmidt orthogonalisation, called "windowed" orthogonalisation. The basic idea is as follows: one first runs Gram-Schmidt as usual until there are, e.g., 20 Krylov vectors. For the subsequent, more expensive iterations, the new vector is orthogonalised not against all previous vectors but only against the last 20. Since the first 20 vectors are built as usual, one has a good basis that mainly covers the lower frequencies. (Is it also your experience that the first Krylov vectors lie mostly in the low-frequency range? I still have to check this myself.) Because later iterations only use the last 20 vectors for Gram-Schmidt, the method becomes cheaper (that is the "window"). To keep the orthogonalisation error from growing too large, after another 20 iterations one restarts using deflation, extracting the dominant eigenvectors from the solution so far, so that after, say, 40 iterations there is again a fully orthogonal basis. For that one can probably use the approach from DGMRES. So one could simply modify DGMRES, or rather its Gram-Schmidt algorithm, to allow a more flexible choice of how many vectors to consider: essentially, one only has to change the start index in Gram-Schmidt so that just the last, say, 20 vectors are used for orthogonalisation. There is a paper and a MATLAB implementation discussing this approach (I have to look up where I found it). Does this approach sound like it has potential to make Gram-Schmidt cheaper?

The other thing I wanted to check is the least-squares Givens rotation. It seems that the Ginkgo linear solver reuses certain components within the Givens rotation, which could potentially make it a bit faster; I have to look into the details again. To find out how Ginkgo does it, I let claude.ai crawl the PETSc and Ginkgo code.
The AI can then explain the differences to me and show me in detail where the codes differ, so that I can take over the best parts from the different codes, and the AI can often explain quite well what the code does (at least that is the plan).

Greetings, Frank

> On 23 Jan 2025, at 04:58, Barry Smith wrote:
>
> I think it is best to code the modified GMRES method in C; likely, much of the current GMRES code could be reused. We'd be happy to help incorporate it into PETSc.
>
> Barry
>
>> On Jan 22, 2025, at 5:11 PM, Matthew Knepley wrote:
>>
>> On Wed, Jan 22, 2025 at 3:18 PM Frank Bramkamp wrote:
>>> Dear PETSc team,
>>>
>>> I was planning to program a custom KSP method, some modified GMRES. We mainly use PETSc from Fortran. Therefore I wonder whether it is possible to have an interface to a custom KSP solver that is written in Fortran.
>>>
>>> I thought of using KSPRegister to register my own routine, but that seems only available in C. Or is it possible to have a Fortran/C wrapper to do that?
>>
>> We have wrappers for other functions that take callbacks, such as SNESSetFunction(). What we need to do is have a list of Fortran function pointers for this method. Then when you register, we actually stick in a C wrapper that calls your Fortran function pointer that we have stored in our list. It should be straightforward looking at the implementation for something like SNESSetFunction(). We would help if you want to try :)
>>
>> Barry, is this impacted by your binding rewrite?
>>
>> Thanks,
>>
>> Matt
>>
>>> Thanks, Frank Bramkamp
>>
>> --
>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>> -- Norbert Wiener >> >> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZRnbvVdIrLWKaLXQZRVItrhdMLGS8sF3iOVpHxMU0Ho3S-cvD4g8uNCzryCDYgnJU1NqStZ6SLQxGHEhobo_mwWyYA$ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jan 23 07:05:29 2025 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 23 Jan 2025 08:05:29 -0500 Subject: [petsc-users] User Defined KSP Method in Fortran In-Reply-To: <287AFB0E-58D6-415D-B93A-51619FA5D69B@nsc.liu.se> References: <5ED9CE0C-C8A9-4864-9092-5ECB4CAAC559@nsc.liu.se> <3D93C752-F929-44E9-A630-356EF2F9A35E@petsc.dev> <287AFB0E-58D6-415D-B93A-51619FA5D69B@nsc.liu.se> Message-ID: On Thu, Jan 23, 2025 at 4:19?AM Frank Bramkamp wrote: > Thanks for the quick response, > > so it seems that there is currently not a straight forward way to add > fortran solvers to petsc. > > > I will also have a look how I to add a new solver in the petsc source code. > > I wanted to test some ideas about a modified approach for gram schmidt > orthogonalisation. > It is called ?windowed? orthogonalisation. The basic idea is as follows: > One first starts gram schmidt as usual, until we have e.g. 20 krylov > vectors. > > For the next iterations, where it gets more expensive, one does not > orthogonalize the new vector > against all previous vectors but only the last 20 ones. As we start with > the first 20 ones > as usual one has a good basis that mainly covers the lower frequencies. > (is that also your experience that the first krylov vectors are more in > the low frequency range ?! > I still have to check this myself) > > As for higher iterations we only consider the last 20 vectors for gam > schmidt, it makes it cheaper > (that is the ?window?). > At least the windowing part is in Saad's book. The restart is not, as far as I remember. When I tried it, it failed often enough that I did not keep using it. Putting effort into the PC was more effective for me. 
Certainly a smaller number of vectors is cheaper. We usually use Classical with selective reorthog, so we are not paying a sync penalty in parallel that scales with the number of vectors. Thanks, Matt > As we do not want to have too large orthogonalization errors, after > another 20 iterations > one makes a restart using deflation where we extract the main eigenvectors > from the existing solution so far, > so we have a new full orthogonal basis after lets say 40 iterations. > > For that one can probably use the approach from DGMRES. So one could > simply modify DMRES, > respectively the gram-schmidt algorithm to allow for a more flexible way > how many vectors to consider. > I think one basically just has to modify the start index in gram schmidt > to allow not to use all vectors > but just the last lets say 20 vectors for orthogonalization. > > There is a paper and matlab implementation where they discuss this > approach (I have to look up where I found it). > > Does that approach sound to have some potential to you to make gram > schmidt cheaper ? > > > The other thing that I wanted to check is the least squares givens > rotation. It seems that in the gingko linear solver > they are reusing certain components within the givens rotation that could > potentially make it a bit faster. > I have to look into the details again. > > What I do to determine how they do it in gingko is, that I let claude.ai > crawl petsc code and gingko code. > Then the AI can explain me the differences and show me in detail where are > the differences between the codes, > so I can take over the best from different codes and the AI can often > explain me quite well what the code does > (at least that is the plan) > > > Greetings, Frank > > > > > > > > > > > > > > > > > > On 23 Jan 2025, at 04:58, Barry Smith wrote: > > > I think it is best to code the modified GMRES method in C; likely, much > of the current GMRES code could be reused. We'd be happy to help > incorporate it into PETSc. 
> > Barry > > > On Jan 22, 2025, at 5:11?PM, Matthew Knepley wrote: > > On Wed, Jan 22, 2025 at 3:18?PM Frank Bramkamp > wrote: > >> Dear PETSc team, >> >> I was planning to program a custom KSP method, some modified GMRES. >> We mainly use PETSc from Fortran. Therefore I wonder it is possible >> to have an interface to a custom KSP solver that is written in fortran. >> >> I thought of using KSPRegister to register my own routine, but that seems >> only >> available in C. Or is it possible to have a fortran/C wrapper to do that >> ? >> > > We have wrappers for other functions that take callbacks, such as > SNESSetFunction(). What > we need to do is have a list of Fortran function pointers for this method. > They when you > register, we actually stick in a C wrapper that calls your Fortran > function pointer that we have > stored in our list. It should be straightforward looking at the > implementation for something like > SNESSetFunction(). We would help if you want to try :) > > Barry, is this impacted by your binding rewrite? > > Thanks, > > Matt > > >> Thanks, Frank Bramkamp >> >> >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!d5lyAoohIbFIbr0fMSRTmYKgfiy_v7tXjQZRCuNJAQt_RrI8wXRdX6HiTzSjsU1UQvGCoIqQ5AYlGowlIhRF$ > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!d5lyAoohIbFIbr0fMSRTmYKgfiy_v7tXjQZRCuNJAQt_RrI8wXRdX6HiTzSjsU1UQvGCoIqQ5AYlGowlIhRF$ -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From mmolinos at us.es Fri Jan 24 03:41:34 2025
From: mmolinos at us.es (MIGUEL MOLINOS PEREZ)
Date: Fri, 24 Jan 2025 09:41:34 +0000
Subject: [petsc-users] Update DMDA attached to DMSWARM
In-Reply-To: <596FE6D9-3200-4946-95CD-0C30BCD96238@us.es>
References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es> <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es> <30099029-1BA9-45C5-A000-47A5178F53A1@us.es> <596FE6D9-3200-4946-95CD-0C30BCD96238@us.es>
Message-ID: <2D7B88F9-D98F-4FAE-82C6-D48EA02CCCA1@us.es>

Dear Matt, the error was in the implementation of the volume expansion function. I updated it, and it works fine under finite domains. However, if I include periodic boundary conditions, the volume of the cell does not accommodate the volume expansion of the particles. The deformation gradient is not the identity? I guess I am missing the fine detail of how periodic BCs are implemented in the DMDA mesh, am I right?

Thanks,
Miguel

static PetscErrorCode Volumetric_Expansion_DMDA(DM *da, const Eigen::Matrix3d &F) {

  PetscInt   i, j, mstart, m, nstart, n, pstart, p, k;
  Vec        local, global;
  DMDACoor3d ***coors, ***coorslocal;
  DM         cda;

  PetscFunctionBeginUser;
  PetscCall(DMGetCoordinateDM(*da, &cda));
  PetscCall(DMGetCoordinates(*da, &global));
  PetscCall(DMGetCoordinatesLocal(*da, &local));
  PetscCall(DMDAVecGetArray(cda, global, &coors));
  PetscCall(DMDAVecGetArrayRead(cda, local, &coorslocal));
  PetscCall(DMDAGetCorners(cda, &mstart, &nstart, &pstart, &m, &n, &p));
  for (i = mstart; i < mstart + m; i++) {
    for (j = nstart; j < nstart + n; j++) {
      for (k = pstart; k < pstart + p; k++) {
        coors[k][j][i].x = coorslocal[k][j][i].x * F(0, 0);
        coors[k][j][i].y = coorslocal[k][j][i].y * F(1, 1);
        coors[k][j][i].z = coorslocal[k][j][i].z * F(2, 2);
      }
    }
  }
  PetscCall(DMDAVecRestoreArray(cda, global, &coors));
  PetscCall(DMDAVecRestoreArrayRead(cda, local, &coorslocal));

  PetscCall(DMGlobalToLocalBegin(cda, global, INSERT_VALUES, local));
  PetscCall(DMGlobalToLocalEnd(cda, global, INSERT_VALUES, local));

  PetscFunctionReturn(PETSC_SUCCESS);
}

On 17 Jan 2025, at 18:01, MIGUEL MOLINOS PEREZ wrote:

You are right!! Thank you again!

Miguel

On Jan 17, 2025, at 5:18 PM, Matthew Knepley wrote:

On Fri, Jan 17, 2025 at 10:49 AM MIGUEL MOLINOS PEREZ wrote:

Now the error is in the call to DMSwarmMigrate

You have almost certainly overwritten memory somewhere. Can you use valgrind or Address Sanitizer?

Thanks,

Matt

Miguel

[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!f-hZcABjffdf8jMIdOxot2T8D4VV1XAClLZxnyfsTbrVRjlBUUkRbNLi7voZJLJa_gqV_cpBush-QwbG452dLw$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!f-hZcABjffdf8jMIdOxot2T8D4VV1XAClLZxnyfsTbrVRjlBUUkRbNLi7voZJLJa_gqV_cpBush-Qwb_mCV03g$
[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[0]PETSC ERROR: The line numbers in the error traceback are not always exact.
[0]PETSC ERROR: #1 DMSwarmDataBucketGetSizes() at /Users/migmolper/petsc/src/dm/impls/swarm/data_bucket.c:297
[0]PETSC ERROR: #2 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:201
[0]PETSC ERROR: #3 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349
[0]PETSC ERROR: #4 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0

On Jan 17, 2025, at 4:22 PM, Matthew Knepley wrote:

On Fri, Jan 17, 2025 at 10:08 AM MIGUEL MOLINOS PEREZ wrote:

Thank you Matt, this is the piece of code I use to change the coordinates of the DM obtained using:

You do not need the call to DMSetCoordinates().
What happens when you remove it? Thanks, Matt DMSwarmGetCellDM(Simulation.atomistic_data, &bounding_cell); DMGetApplicationContext(bounding_cell, &background_mesh); Thanks, Miguel /************************************************************************/ PetscErrorCode Volumetric_Expansion(DM dm, const Eigen::Matrix3d& F) { PetscErrorCode ierr; Vec coordinates; PetscScalar* coordArray; PetscInt xs, ys, zs, xm, ym, zm, i, j, k; PetscInt dim, M, N, P; PetscFunctionBegin; // Get DMDA information ierr = DMDAGetInfo(dm, &dim, &M, &N, &P, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL); CHKERRQ(ierr); ierr = DMDAGetCorners(dm, &xs, &ys, &zs, &xm, &ym, &zm); CHKERRQ(ierr); // Get the coordinates vector ierr = DMGetCoordinates(dm, &coordinates); CHKERRQ(ierr); ierr = VecGetArray(coordinates, &coordArray); CHKERRQ(ierr); // Update the coordinates based on the desired transformation for (k = zs; k < zs + zm; k++) { for (j = ys; j < ys + ym; j++) { for (i = xs; i < xs + xm; i++) { PetscInt idx = ((k * N + j) * M + i) * dim; // Index for the i, j, k point coordArray[idx] = coordArray[idx] * F(0,0); // Update x-coordinate coordArray[idx + 1] = coordArray[idx + 1] * F(1,1); // Update y-coordinate coordArray[idx + 2] = coordArray[idx + 2] * F(2,2); // Update z-coordinate } } } // Restore the coordinates vector ierr = VecRestoreArray(coordinates, &coordArray); CHKERRQ(ierr); // Set the updated coordinates back to the DMDA ierr = DMSetCoordinates(dm, coordinates); CHKERRQ(ierr); PetscFunctionReturn(0); } /************************************************************************/ On 17 Jan 2025, at 16:00, Matthew Knepley > wrote: On Fri, Jan 17, 2025 at 9:45?AM MIGUEL MOLINOS PEREZ > wrote: I tried what you suggested, but still I got this error message. Maybe I should use main release? No. I suspect something is wrong with the way you are setting coordinates. Can you share the code? 
Thanks, Matt Miguel [4]PETSC ERROR: ------------------------------------------------------------------------ [4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [4]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!f-hZcABjffdf8jMIdOxot2T8D4VV1XAClLZxnyfsTbrVRjlBUUkRbNLi7voZJLJa_gqV_cpBush-QwbG452dLw$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!f-hZcABjffdf8jMIdOxot2T8D4VV1XAClLZxnyfsTbrVRjlBUUkRbNLi7voZJLJa_gqV_cpBush-Qwb_mCV03g$ [4]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [4]PETSC ERROR: The line numbers in the error traceback are not always exact. [4]PETSC ERROR: #1 Pack_PetscReal_1_0() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:373 [4]PETSC ERROR: #2 PetscSFLinkPackRootData_Private() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:932 [4]PETSC ERROR: #3 PetscSFLinkPackRootData() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:966 [4]PETSC ERROR: #4 PetscSFBcastBegin_Basic() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfbasic.c:357 [4]PETSC ERROR: #5 PetscSFBcastWithMemTypeBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/sf.c:1513 [4]PETSC ERROR: #6 VecScatterBegin_Internal() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:70 [4]PETSC ERROR: #7 VecScatterBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:1316 [4]PETSC ERROR: #8 DMGlobalToLocalBegin_DA() at /Users/migmolper/petsc/src/dm/impls/da/dagtol.c:15 [4]PETSC ERROR: #9 DMGlobalToLocalBegin() at /Users/migmolper/petsc/src/dm/interface/dm.c:2844 [4]PETSC ERROR: #10 DMGetCoordinatesLocalSetUp() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:565 [4]PETSC ERROR: #11 DMGetCoordinatesLocal() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:599 
[4]PETSC ERROR: #12 _DMLocatePoints_DMDARegular_IS() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:531 [4]PETSC ERROR: #13 DMLocatePoints_DMDARegular() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:586 [4]PETSC ERROR: #14 DMLocatePoints() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:1194 [4]PETSC ERROR: #15 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:219 [4]PETSC ERROR: #16 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349 [4]PETSC ERROR: #17 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41 On Jan 15, 2025, at 4:56?PM, MIGUEL MOLINOS PEREZ > wrote: Thank you Matt for the useful info. I?ll try your idea. Miguel On 15 Jan 2025, at 16:48, Matthew Knepley > wrote: On Wed, Jan 15, 2025 at 10:41?AM MIGUEL MOLINOS PEREZ > wrote: Thank you Matt. Yes, I am getting the "CellDM" from the DMSwarm. 1. I have recently overhauled this functionality because it was not flexible enough for the plasma simulation we do. Thus main and release work differently. Nice to hear that. Should I move to main? The changes allow you to have several cell DMs. I want to bin particles in space, but also in velocity, and then in the tensor product of space and velocity. Moreover, sometimes I want to use different Swarm fields as the DM field for the solver. You can do all that with main now. If you just need a single DM with the same DM fields, release is fine. 2. I assume you are using release You are correct. 3. In both main and release, if you change the coordinates of your CellDM mesh, you need to rebin the particles. The easiest way to do this is to call DMSwarmMigrate(sw, PETSC_FALSE). What do you mean by rebin? When you provide the cell DM, Swrm makes a "sort context" that bins the particles into DM cells. If you change the coordinates, this binning will change, so you need it to "rebin" or recreate the sort context. 
Thanks, Matt Miguel Thanks, Matt Best, Miguel -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!f-hZcABjffdf8jMIdOxot2T8D4VV1XAClLZxnyfsTbrVRjlBUUkRbNLi7voZJLJa_gqV_cpBush-QwawInSNVw$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!f-hZcABjffdf8jMIdOxot2T8D4VV1XAClLZxnyfsTbrVRjlBUUkRbNLi7voZJLJa_gqV_cpBush-QwawInSNVw$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!f-hZcABjffdf8jMIdOxot2T8D4VV1XAClLZxnyfsTbrVRjlBUUkRbNLi7voZJLJa_gqV_cpBush-QwawInSNVw$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!f-hZcABjffdf8jMIdOxot2T8D4VV1XAClLZxnyfsTbrVRjlBUUkRbNLi7voZJLJa_gqV_cpBush-QwawInSNVw$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!f-hZcABjffdf8jMIdOxot2T8D4VV1XAClLZxnyfsTbrVRjlBUUkRbNLi7voZJLJa_gqV_cpBush-QwawInSNVw$ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Fri Jan 24 07:20:31 2025 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 24 Jan 2025 08:20:31 -0500 Subject: [petsc-users] Update DMDA attached to DMSWARM In-Reply-To: <2D7B88F9-D98F-4FAE-82C6-D48EA02CCCA1@us.es> References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es> <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es> <30099029-1BA9-45C5-A000-47A5178F53A1@us.es> <596FE6D9-3200-4946-95CD-0C30BCD96238@us.es> <2D7B88F9-D98F-4FAE-82C6-D48EA02CCCA1@us.es> Message-ID: On Fri, Jan 24, 2025 at 4:41?AM MIGUEL MOLINOS PEREZ wrote: > Dear Matt, the error was in the implementation of the volume expansion > function. I updated it, and it works finte under finite domains. However, > if I include periodic boundary conditions the volume of the cell does not > accommodate the volume expansion of the particles. The deformation gradient > is not the identity? I guess I am missing the fine detail on how periodic > bcc are implemented in DMDA mesh, I?m right? > DMDA identifies vertices using a VecScatter to implement periodic BC. This should be insensitive to coordinates. However, I don't think the algorithm below is correct for local coordinates. You use GlobalToLocal(), which means that some global coordinate "wins" for each local cell, so cells on the periodic boundary can be wrong. I would set the local coordinates by hand as well. 
Thanks, Matt > Thanks, > Miguel > > static PetscErrorCode Volumetric_Expansion_DMDA(DM * da, > const Eigen::Matrix3d& F) { > > PetscInt i, j, mstart, m, nstart, n, pstart, p, k; > Vec local, global; > DMDACoor3d ***coors, ***coorslocal; > DM cda; > > PetscFunctionBeginUser; > PetscCall(DMGetCoordinateDM(*da, &cda)); > PetscCall(DMGetCoordinates(*da, &global)); > PetscCall(DMGetCoordinatesLocal(*da, &local)); > PetscCall(DMDAVecGetArray(cda, global, &coors)); > PetscCall(DMDAVecGetArrayRead(cda, local, &coorslocal)); > PetscCall(DMDAGetCorners(cda, &mstart, &nstart, &pstart, &m, &n, &p)); > for (i = mstart; i < mstart + m; i++) { > for (j = nstart; j < nstart + n; j++) { > for (k = pstart; k < pstart + p; k++) { > coors[k][j][i].x = coorslocal[k][j][i].x * F(0, 0); > coors[k][j][i].y = coorslocal[k][j][i].y * F(1, 1); > coors[k][j][i].z = coorslocal[k][j][i].z * F(2, 2); > } > } > } > PetscCall(DMDAVecRestoreArray(cda, global, &coors)); > PetscCall(DMDAVecRestoreArrayRead(cda, local, &coorslocal)); > > PetscCall(DMGlobalToLocalBegin(cda, global, INSERT_VALUES, local)); > PetscCall(DMGlobalToLocalEnd(cda, global, INSERT_VALUES, local)); > > PetscFunctionReturn(PETSC_SUCCESS); > } > > On 17 Jan 2025, at 18:01, MIGUEL MOLINOS PEREZ wrote: > > You are right!! Thank you again! > > Miguel > > On Jan 17, 2025, at 5:18?PM, Matthew Knepley wrote: > > On Fri, Jan 17, 2025 at 10:49?AM MIGUEL MOLINOS PEREZ > wrote: > >> Now the error is in the call to DMSwarmMigrate >> > > You have almost certainly overwritten memory somewhere. Can you use > vlagrind or Address Sanitizer? 
> > Thanks, > > Matt > > >> Miguel >> >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!d_kL76N30FqcKUQr7kFkiAg2Rt5XrsdTHF_opDlqKEu3B89-dlXJyQgH64brfGhgW5PJg0jmxzkTpQ8EGYjN$ and >> https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!d_kL76N30FqcKUQr7kFkiAg2Rt5XrsdTHF_opDlqKEu3B89-dlXJyQgH64brfGhgW5PJg0jmxzkTpcRZYRM5$ >> [0]PETSC ERROR: --------------------- Stack Frames >> ------------------------------------ >> [0]PETSC ERROR: The line numbers in the error traceback are not always >> exact. >> [0]PETSC ERROR: #1 DMSwarmDataBucketGetSizes() at >> /Users/migmolper/petsc/src/dm/impls/swarm/data_bucket.c:297 >> [0]PETSC ERROR: #2 DMSwarmMigrate_CellDMScatter() at >> /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:201 >> [0]PETSC ERROR: #3 DMSwarmMigrate() at >> /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349 >> [0]PETSC ERROR: #4 main() at >> /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41 >> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 >> >> On Jan 17, 2025, at 4:22?PM, Matthew Knepley wrote: >> >> On Fri, Jan 17, 2025 at 10:08?AM MIGUEL MOLINOS PEREZ >> wrote: >> >>> Thank you Matt, this the piece of code I use to change the coordinates >>> of the DM obtained using: >>> >> >> You do not need the call to DMSetCoordinates(). What happens when you >> remove it? 
>> >> Thanks, >> >> Matt >> >> >>> >>> DMSwarmGetCellDM(Simulation.atomistic_data, &bounding_cell); >>> DMGetApplicationContext(bounding_cell, &background_mesh); >>> >>> Thanks, >>> Miguel >>> >>> >>> /************************************************************************/ >>> >>> PetscErrorCode Volumetric_Expansion(DM dm, const Eigen::Matrix3d& F) { >>> PetscErrorCode ierr; >>> Vec coordinates; >>> PetscScalar* coordArray; >>> PetscInt xs, ys, zs, xm, ym, zm, i, j, k; >>> PetscInt dim, M, N, P; >>> >>> PetscFunctionBegin; >>> // Get DMDA information >>> ierr = DMDAGetInfo(dm, &dim, &M, &N, &P, NULL, NULL, NULL, NULL, NULL, >>> NULL, >>> NULL, NULL, NULL); >>> CHKERRQ(ierr); >>> ierr = DMDAGetCorners(dm, &xs, &ys, &zs, &xm, &ym, &zm); >>> CHKERRQ(ierr); >>> >>> // Get the coordinates vector >>> ierr = DMGetCoordinates(dm, &coordinates); >>> CHKERRQ(ierr); >>> ierr = VecGetArray(coordinates, &coordArray); >>> CHKERRQ(ierr); >>> >>> // Update the coordinates based on the desired transformation >>> for (k = zs; k < zs + zm; k++) { >>> for (j = ys; j < ys + ym; j++) { >>> for (i = xs; i < xs + xm; i++) { >>> PetscInt idx = >>> ((k * N + j) * M + i) * dim; // Index for the i, j, k point >>> coordArray[idx] = coordArray[idx] * F(0,0); // Update x-coordinate >>> coordArray[idx + 1] = coordArray[idx + 1] * F(1,1); // Update >>> y-coordinate >>> coordArray[idx + 2] = coordArray[idx + 2] * F(2,2); // Update >>> z-coordinate >>> } >>> } >>> } >>> >>> // Restore the coordinates vector >>> ierr = VecRestoreArray(coordinates, &coordArray); >>> CHKERRQ(ierr); >>> >>> // Set the updated coordinates back to the DMDA >>> ierr = DMSetCoordinates(dm, coordinates); >>> CHKERRQ(ierr); >>> >>> PetscFunctionReturn(0); >>> } >>> >>> >>> /************************************************************************/ >>> >>> On 17 Jan 2025, at 16:00, Matthew Knepley wrote: >>> >>> On Fri, Jan 17, 2025 at 9:45?AM MIGUEL MOLINOS PEREZ >>> wrote: >>> >>>> I tried what you suggested, but still I 
got this error message. Maybe I >>>> should use main release? >>>> >>> >>> No. I suspect something is wrong with the way you are setting >>> coordinates. Can you share the code? >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Miguel >>>> >>>> [4]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>>> probably memory access out of range >>>> [4]PETSC ERROR: Try option -start_in_debugger or >>>> -on_error_attach_debugger >>>> [4]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!d_kL76N30FqcKUQr7kFkiAg2Rt5XrsdTHF_opDlqKEu3B89-dlXJyQgH64brfGhgW5PJg0jmxzkTpQ8EGYjN$ and >>>> https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!d_kL76N30FqcKUQr7kFkiAg2Rt5XrsdTHF_opDlqKEu3B89-dlXJyQgH64brfGhgW5PJg0jmxzkTpcRZYRM5$ >>>> [4]PETSC ERROR: --------------------- Stack Frames >>>> ------------------------------------ >>>> [4]PETSC ERROR: The line numbers in the error traceback are not always >>>> exact. 
>>>> [4]PETSC ERROR: #1 Pack_PetscReal_1_0() at >>>> /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:373 >>>> [4]PETSC ERROR: #2 PetscSFLinkPackRootData_Private() at >>>> /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:932 >>>> [4]PETSC ERROR: #3 PetscSFLinkPackRootData() at >>>> /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:966 >>>> [4]PETSC ERROR: #4 PetscSFBcastBegin_Basic() at >>>> /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfbasic.c:357 >>>> [4]PETSC ERROR: #5 PetscSFBcastWithMemTypeBegin() at >>>> /Users/migmolper/petsc/src/vec/is/sf/interface/sf.c:1513 >>>> [4]PETSC ERROR: #6 VecScatterBegin_Internal() at >>>> /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:70 >>>> [4]PETSC ERROR: #7 VecScatterBegin() at >>>> /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:1316 >>>> [4]PETSC ERROR: #8 DMGlobalToLocalBegin_DA() at >>>> /Users/migmolper/petsc/src/dm/impls/da/dagtol.c:15 >>>> [4]PETSC ERROR: #9 DMGlobalToLocalBegin() at >>>> /Users/migmolper/petsc/src/dm/interface/dm.c:2844 >>>> [4]PETSC ERROR: #10 DMGetCoordinatesLocalSetUp() at >>>> /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:565 >>>> [4]PETSC ERROR: #11 DMGetCoordinatesLocal() at >>>> /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:599 >>>> [4]PETSC ERROR: #12 _DMLocatePoints_DMDARegular_IS() at >>>> /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:531 >>>> [4]PETSC ERROR: #13 DMLocatePoints_DMDARegular() at >>>> /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:586 >>>> [4]PETSC ERROR: #14 DMLocatePoints() at >>>> /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:1194 >>>> [4]PETSC ERROR: #15 DMSwarmMigrate_CellDMScatter() at >>>> /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:219 >>>> [4]PETSC ERROR: #16 DMSwarmMigrate() at >>>> /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349 >>>> [4]PETSC ERROR: #17 main() at >>>> /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41 >>>> >>>> >>>> >>>> On Jan 15, 2025, at 4:56?PM, MIGUEL 
MOLINOS PEREZ >>>> wrote: >>>> >>>> Thank you Matt for the useful info. I?ll try your idea. >>>> >>>> Miguel >>>> >>>> On 15 Jan 2025, at 16:48, Matthew Knepley wrote: >>>> >>>> On Wed, Jan 15, 2025 at 10:41?AM MIGUEL MOLINOS PEREZ >>>> wrote: >>>> >>>>> Thank you Matt. >>>>> >>>>> Yes, I am getting the "CellDM" from the DMSwarm. >>>>> >>>>> 1. I have recently overhauled this functionality because it was not >>>>> flexible enough for the plasma simulation we do. Thus main and release work >>>>> differently. >>>>> >>>>> >>>>> Nice to hear that. Should I move to main? >>>>> >>>> >>>> The changes allow you to have several cell DMs. I want to bin particles >>>> in space, but also in velocity, and then in the tensor product of space and >>>> velocity. Moreover, sometimes I want to use different Swarm fields as the >>>> DM field for the solver. You can do all that with main now. If you just >>>> need a single DM with the same DM fields, release is fine. >>>> >>>> >>>>> 2. I assume you are using release >>>>> >>>>> >>>>> You are correct. >>>>> >>>>> 3. In both main and release, if you change the coordinates of your >>>>> CellDM mesh, you need to rebin the particles. The easiest way to do this is >>>>> to call DMSwarmMigrate(sw, PETSC_FALSE). >>>>> >>>>> >>>>> What do you mean by rebin? >>>>> >>>> >>>> When you provide the cell DM, Swrm makes a "sort context" that bins the >>>> particles into DM cells. If you change the coordinates, this binning will >>>> change, so you need it to "rebin" or recreate the sort context. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Miguel >>>>> >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Best, >>>>>> Miguel >>>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. 
>>>>> -- Norbert Wiener >>>>> >>>>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!d_kL76N30FqcKUQr7kFkiAg2Rt5XrsdTHF_opDlqKEu3B89-dlXJyQgH64brfGhgW5PJg0jmxzkTpSDqciPi$ >>>>> >>>>> >>>>> >>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!d_kL76N30FqcKUQr7kFkiAg2Rt5XrsdTHF_opDlqKEu3B89-dlXJyQgH64brfGhgW5PJg0jmxzkTpSDqciPi$ >>>> >>>> >>>> >>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!d_kL76N30FqcKUQr7kFkiAg2Rt5XrsdTHF_opDlqKEu3B89-dlXJyQgH64brfGhgW5PJg0jmxzkTpSDqciPi$ >>> >>> >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!d_kL76N30FqcKUQr7kFkiAg2Rt5XrsdTHF_opDlqKEu3B89-dlXJyQgH64brfGhgW5PJg0jmxzkTpSDqciPi$ >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!d_kL76N30FqcKUQr7kFkiAg2Rt5XrsdTHF_opDlqKEu3B89-dlXJyQgH64brfGhgW5PJg0jmxzkTpSDqciPi$ > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mmolinos at us.es  Fri Jan 24 09:36:27 2025
From: mmolinos at us.es (MIGUEL MOLINOS PEREZ)
Date: Fri, 24 Jan 2025 15:36:27 +0000
Subject: [petsc-users] Update DMDA attached to DMSWARM
In-Reply-To: 
References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es>
 <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es>
 <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es>
 <30099029-1BA9-45C5-A000-47A5178F53A1@us.es>
 <596FE6D9-3200-4946-95CD-0C30BCD96238@us.es>
 <2D7B88F9-D98F-4FAE-82C6-D48EA02CCCA1@us.es>
Message-ID: <725862F5-8689-42E9-B496-C5088856C5FB@us.es>

Thanks Matt, I tried that too, and the problem remains. The box is updated only if I set no periodic BCs.

Miguel

On 24 Jan 2025, at 14:20, Matthew Knepley wrote:

On Fri, Jan 24, 2025 at 4:41 AM MIGUEL MOLINOS PEREZ wrote:

Dear Matt, the error was in the implementation of the volume expansion function. I updated it, and it works fine on finite domains. However, if I include periodic boundary conditions, the volume of the cell does not accommodate the volume expansion of the particles. The deformation gradient is not the identity. I guess I am missing some fine detail of how periodic BCs are implemented in a DMDA mesh, am I right?

DMDA identifies vertices using a VecScatter to implement periodic BCs. This should be insensitive to coordinates. However, I don't think the algorithm below is correct for local coordinates. You use GlobalToLocal(), which means that some global coordinate "wins" for each local cell, so cells on the periodic boundary can be wrong. I would set the local coordinates by hand as well.
Thanks,

  Matt

Thanks,
Miguel

static PetscErrorCode Volumetric_Expansion_DMDA(DM *da, const Eigen::Matrix3d &F) {

  PetscInt   i, j, mstart, m, nstart, n, pstart, p, k;
  Vec        local, global;
  DMDACoor3d ***coors, ***coorslocal;
  DM         cda;

  PetscFunctionBeginUser;
  PetscCall(DMGetCoordinateDM(*da, &cda));
  PetscCall(DMGetCoordinates(*da, &global));
  PetscCall(DMGetCoordinatesLocal(*da, &local));
  PetscCall(DMDAVecGetArray(cda, global, &coors));
  PetscCall(DMDAVecGetArrayRead(cda, local, &coorslocal));
  PetscCall(DMDAGetCorners(cda, &mstart, &nstart, &pstart, &m, &n, &p));
  for (i = mstart; i < mstart + m; i++) {
    for (j = nstart; j < nstart + n; j++) {
      for (k = pstart; k < pstart + p; k++) {
        coors[k][j][i].x = coorslocal[k][j][i].x * F(0, 0);
        coors[k][j][i].y = coorslocal[k][j][i].y * F(1, 1);
        coors[k][j][i].z = coorslocal[k][j][i].z * F(2, 2);
      }
    }
  }
  PetscCall(DMDAVecRestoreArray(cda, global, &coors));
  PetscCall(DMDAVecRestoreArrayRead(cda, local, &coorslocal));

  PetscCall(DMGlobalToLocalBegin(cda, global, INSERT_VALUES, local));
  PetscCall(DMGlobalToLocalEnd(cda, global, INSERT_VALUES, local));

  PetscFunctionReturn(PETSC_SUCCESS);
}

On 17 Jan 2025, at 18:01, MIGUEL MOLINOS PEREZ wrote:

You are right!! Thank you again!

Miguel

On Jan 17, 2025, at 5:18 PM, Matthew Knepley wrote:

On Fri, Jan 17, 2025 at 10:49 AM MIGUEL MOLINOS PEREZ wrote:

Now the error is in the call to DMSwarmMigrate

You have almost certainly overwritten memory somewhere. Can you use valgrind or Address Sanitizer?
Thanks,

  Matt

Miguel

[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[0]PETSC ERROR: The line numbers in the error traceback are not always exact.
[0]PETSC ERROR: #1 DMSwarmDataBucketGetSizes() at /Users/migmolper/petsc/src/dm/impls/swarm/data_bucket.c:297
[0]PETSC ERROR: #2 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:201
[0]PETSC ERROR: #3 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349
[0]PETSC ERROR: #4 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0

On Jan 17, 2025, at 4:22 PM, Matthew Knepley wrote:

On Fri, Jan 17, 2025 at 10:08 AM MIGUEL MOLINOS PEREZ wrote:

Thank you Matt, this is the piece of code I use to change the coordinates of the DM obtained using:

You do not need the call to DMSetCoordinates(). What happens when you remove it?
Thanks,

  Matt

DMSwarmGetCellDM(Simulation.atomistic_data, &bounding_cell);
DMGetApplicationContext(bounding_cell, &background_mesh);

Thanks,
Miguel

/************************************************************************/

PetscErrorCode Volumetric_Expansion(DM dm, const Eigen::Matrix3d& F) {
  PetscErrorCode ierr;
  Vec            coordinates;
  PetscScalar*   coordArray;
  PetscInt       xs, ys, zs, xm, ym, zm, i, j, k;
  PetscInt       dim, M, N, P;

  PetscFunctionBegin;
  // Get DMDA information
  ierr = DMDAGetInfo(dm, &dim, &M, &N, &P, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
  CHKERRQ(ierr);
  ierr = DMDAGetCorners(dm, &xs, &ys, &zs, &xm, &ym, &zm);
  CHKERRQ(ierr);

  // Get the coordinates vector
  ierr = DMGetCoordinates(dm, &coordinates);
  CHKERRQ(ierr);
  ierr = VecGetArray(coordinates, &coordArray);
  CHKERRQ(ierr);

  // Update the coordinates based on the desired transformation
  for (k = zs; k < zs + zm; k++) {
    for (j = ys; j < ys + ym; j++) {
      for (i = xs; i < xs + xm; i++) {
        PetscInt idx = ((k * N + j) * M + i) * dim;          // Index for the i, j, k point
        coordArray[idx]     = coordArray[idx] * F(0,0);      // Update x-coordinate
        coordArray[idx + 1] = coordArray[idx + 1] * F(1,1);  // Update y-coordinate
        coordArray[idx + 2] = coordArray[idx + 2] * F(2,2);  // Update z-coordinate
      }
    }
  }

  // Restore the coordinates vector
  ierr = VecRestoreArray(coordinates, &coordArray);
  CHKERRQ(ierr);

  // Set the updated coordinates back to the DMDA
  ierr = DMSetCoordinates(dm, coordinates);
  CHKERRQ(ierr);

  PetscFunctionReturn(0);
}

/************************************************************************/

On 17 Jan 2025, at 16:00, Matthew Knepley wrote:

On Fri, Jan 17, 2025 at 9:45 AM MIGUEL MOLINOS PEREZ wrote:

I tried what you suggested, but still I got this error message. Maybe I should use main instead of release?

No. I suspect something is wrong with the way you are setting coordinates. Can you share the code?
Thanks,

  Matt

Miguel

[4]PETSC ERROR: ------------------------------------------------------------------------
[4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[4]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
[4]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[4]PETSC ERROR: The line numbers in the error traceback are not always exact.
[4]PETSC ERROR: #1 Pack_PetscReal_1_0() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:373
[4]PETSC ERROR: #2 PetscSFLinkPackRootData_Private() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:932
[4]PETSC ERROR: #3 PetscSFLinkPackRootData() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:966
[4]PETSC ERROR: #4 PetscSFBcastBegin_Basic() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfbasic.c:357
[4]PETSC ERROR: #5 PetscSFBcastWithMemTypeBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/sf.c:1513
[4]PETSC ERROR: #6 VecScatterBegin_Internal() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:70
[4]PETSC ERROR: #7 VecScatterBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:1316
[4]PETSC ERROR: #8 DMGlobalToLocalBegin_DA() at /Users/migmolper/petsc/src/dm/impls/da/dagtol.c:15
[4]PETSC ERROR: #9 DMGlobalToLocalBegin() at /Users/migmolper/petsc/src/dm/interface/dm.c:2844
[4]PETSC ERROR: #10 DMGetCoordinatesLocalSetUp() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:565
[4]PETSC ERROR: #11 DMGetCoordinatesLocal() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:599
[4]PETSC ERROR: #12 _DMLocatePoints_DMDARegular_IS() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:531
[4]PETSC ERROR: #13 DMLocatePoints_DMDARegular() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:586
[4]PETSC ERROR: #14 DMLocatePoints() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:1194
[4]PETSC ERROR: #15 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:219
[4]PETSC ERROR: #16 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349
[4]PETSC ERROR: #17 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41

On Jan 15, 2025, at 4:56 PM, MIGUEL MOLINOS PEREZ wrote:

Thank you Matt for the useful info. I'll try your idea.

Miguel

On 15 Jan 2025, at 16:48, Matthew Knepley wrote:

On Wed, Jan 15, 2025 at 10:41 AM MIGUEL MOLINOS PEREZ wrote:

Thank you Matt.

Yes, I am getting the "CellDM" from the DMSwarm.

1. I have recently overhauled this functionality because it was not flexible enough for the plasma simulation we do. Thus main and release work differently.

Nice to hear that. Should I move to main?

The changes allow you to have several cell DMs. I want to bin particles in space, but also in velocity, and then in the tensor product of space and velocity. Moreover, sometimes I want to use different Swarm fields as the DM field for the solver. You can do all that with main now. If you just need a single DM with the same DM fields, release is fine.

2. I assume you are using release

You are correct.

3. In both main and release, if you change the coordinates of your CellDM mesh, you need to rebin the particles. The easiest way to do this is to call DMSwarmMigrate(sw, PETSC_FALSE).

What do you mean by rebin?

When you provide the cell DM, Swarm makes a "sort context" that bins the particles into DM cells. If you change the coordinates, this binning will change, so you need to "rebin", i.e. recreate the sort context.
Thanks,

  Matt

Miguel

Thanks,

  Matt

Best,
Miguel

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com  Fri Jan 24 09:50:24 2025
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 24 Jan 2025 10:50:24 -0500
Subject: [petsc-users] Update DMDA attached to DMSWARM
In-Reply-To: <725862F5-8689-42E9-B496-C5088856C5FB@us.es>
References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es>
 <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es>
 <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es>
 <30099029-1BA9-45C5-A000-47A5178F53A1@us.es>
 <596FE6D9-3200-4946-95CD-0C30BCD96238@us.es>
 <2D7B88F9-D98F-4FAE-82C6-D48EA02CCCA1@us.es>
 <725862F5-8689-42E9-B496-C5088856C5FB@us.es>
Message-ID: 

On Fri, Jan 24, 2025 at 10:36 AM MIGUEL MOLINOS PEREZ wrote:

> Thanks Matt, I tried that too, and the problem remains. The box is updated
> only if I set no periodic BCs.

What do you mean by "The box is updated"? I am trying to understand how you test things. Clearly the coordinates are updated, even in the periodic case. Thus, I need to understand the test. Once we do that, we can work backwards to the first broken thing.

Thanks,

  Matt

> Miguel
>
> On 24 Jan 2025, at 14:20, Matthew Knepley wrote:
>
> On Fri, Jan 24, 2025 at 4:41 AM MIGUEL MOLINOS PEREZ wrote:
>
>> Dear Matt, the error was in the implementation of the volume expansion
>> function. I updated it, and it works fine on finite domains. However,
>> if I include periodic boundary conditions, the volume of the cell does not
>> accommodate the volume expansion of the particles. The deformation gradient
>> is not the identity. I guess I am missing some fine detail of how periodic
>> BCs are implemented in a DMDA mesh, am I right?
>
> DMDA identifies vertices using a VecScatter to implement periodic BCs. This
> should be insensitive to coordinates.
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mmolinos at us.es  Fri Jan 24 09:56:10 2025
From: mmolinos at us.es (MIGUEL MOLINOS PEREZ)
Date: Fri, 24 Jan 2025 15:56:10 +0000
Subject: [petsc-users] Update DMDA attached to DMSWARM
In-Reply-To: 
References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es>
 <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es>
 <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es>
 <30099029-1BA9-45C5-A000-47A5178F53A1@us.es>
 <596FE6D9-3200-4946-95CD-0C30BCD96238@us.es>
 <2D7B88F9-D98F-4FAE-82C6-D48EA02CCCA1@us.es>
 <725862F5-8689-42E9-B496-C5088856C5FB@us.es>
Message-ID: 

Sorry I wasn't clear enough. By "the box is updated" I mean: I run DMGetBoundingBox and the resulting coordinates are updated according to the deformation gradient "F".

Thanks,
Miguel

On 24 Jan 2025, at 16:50, Matthew Knepley wrote:

On Fri, Jan 24, 2025 at 10:36 AM MIGUEL MOLINOS PEREZ wrote:

Thanks Matt, I tried that too, and the problem remains. The box is updated only if I set no periodic BCs.

What do you mean by "The box is updated"?
I am trying to understand how you test things. Clearly the coordinates are updated, even in the periodic case. Thus, I need to understand the test. Once we do that, we can work backwards to the first broken thing.

Thanks,

  Matt

Miguel

On 24 Jan 2025, at 14:20, Matthew Knepley wrote:

On Fri, Jan 24, 2025 at 4:41 AM MIGUEL MOLINOS PEREZ wrote:

Dear Matt, the error was in the implementation of the volume expansion function. I updated it, and it works fine on finite domains. However, if I include periodic boundary conditions, the volume of the cell does not accommodate the volume expansion of the particles. The deformation gradient is not the identity. I guess I am missing some fine detail of how periodic BCs are implemented in a DMDA mesh, am I right?

DMDA identifies vertices using a VecScatter to implement periodic BCs. This should be insensitive to coordinates. However, I don't think the algorithm below is correct for local coordinates. You use GlobalToLocal(), which means that some global coordinate "wins" for each local cell, so cells on the periodic boundary can be wrong. I would set the local coordinates by hand as well.
Thanks, Matt Thanks, Miguel static PetscErrorCode Volumetric_Expansion_DMDA(DM * da, const Eigen::Matrix3d& F) { PetscInt i, j, mstart, m, nstart, n, pstart, p, k; Vec local, global; DMDACoor3d ***coors, ***coorslocal; DM cda; PetscFunctionBeginUser; PetscCall(DMGetCoordinateDM(*da, &cda)); PetscCall(DMGetCoordinates(*da, &global)); PetscCall(DMGetCoordinatesLocal(*da, &local)); PetscCall(DMDAVecGetArray(cda, global, &coors)); PetscCall(DMDAVecGetArrayRead(cda, local, &coorslocal)); PetscCall(DMDAGetCorners(cda, &mstart, &nstart, &pstart, &m, &n, &p)); for (i = mstart; i < mstart + m; i++) { for (j = nstart; j < nstart + n; j++) { for (k = pstart; k < pstart + p; k++) { coors[k][j][i].x = coorslocal[k][j][i].x * F(0, 0); coors[k][j][i].y = coorslocal[k][j][i].y * F(1, 1); coors[k][j][i].z = coorslocal[k][j][i].z * F(2, 2); } } } PetscCall(DMDAVecRestoreArray(cda, global, &coors)); PetscCall(DMDAVecRestoreArrayRead(cda, local, &coorslocal)); PetscCall(DMGlobalToLocalBegin(cda, global, INSERT_VALUES, local)); PetscCall(DMGlobalToLocalEnd(cda, global, INSERT_VALUES, local)); PetscFunctionReturn(PETSC_SUCCESS); } On 17 Jan 2025, at 18:01, MIGUEL MOLINOS PEREZ > wrote: You are right!! Thank you again! Miguel On Jan 17, 2025, at 5:18?PM, Matthew Knepley > wrote: On Fri, Jan 17, 2025 at 10:49?AM MIGUEL MOLINOS PEREZ > wrote: Now the error is in the call to DMSwarmMigrate You have almost certainly overwritten memory somewhere. Can you use vlagrind or Address Sanitizer? 
Thanks, Matt Miguel [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!YBJNxMpqdAnpfP9vHgi1gmfo-WFNm3YYrsXC5UySv0wtvEJ1q1rYc4ekeCMs4GzMdy4KqT6QX8XCNoqip5EBaA$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!YBJNxMpqdAnpfP9vHgi1gmfo-WFNm3YYrsXC5UySv0wtvEJ1q1rYc4ekeCMs4GzMdy4KqT6QX8XCNooBisXS8g$ [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: The line numbers in the error traceback are not always exact. [0]PETSC ERROR: #1 DMSwarmDataBucketGetSizes() at /Users/migmolper/petsc/src/dm/impls/swarm/data_bucket.c:297 [0]PETSC ERROR: #2 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:201 [0]PETSC ERROR: #3 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349 [0]PETSC ERROR: #4 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41 application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 On Jan 17, 2025, at 4:22?PM, Matthew Knepley > wrote: On Fri, Jan 17, 2025 at 10:08?AM MIGUEL MOLINOS PEREZ > wrote: Thank you Matt, this the piece of code I use to change the coordinates of the DM obtained using: You do not need the call to DMSetCoordinates(). What happens when you remove it? 
Thanks, Matt

DMSwarmGetCellDM(Simulation.atomistic_data, &bounding_cell);
DMGetApplicationContext(bounding_cell, &background_mesh);

Thanks,
Miguel

/************************************************************************/

PetscErrorCode Volumetric_Expansion(DM dm, const Eigen::Matrix3d &F) {
  PetscErrorCode ierr;
  Vec            coordinates;
  PetscScalar   *coordArray;
  PetscInt       xs, ys, zs, xm, ym, zm, i, j, k;
  PetscInt       dim, M, N, P;

  PetscFunctionBegin;
  // Get DMDA information
  ierr = DMDAGetInfo(dm, &dim, &M, &N, &P, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
  CHKERRQ(ierr);
  ierr = DMDAGetCorners(dm, &xs, &ys, &zs, &xm, &ym, &zm);
  CHKERRQ(ierr);

  // Get the coordinates vector
  ierr = DMGetCoordinates(dm, &coordinates);
  CHKERRQ(ierr);
  ierr = VecGetArray(coordinates, &coordArray);
  CHKERRQ(ierr);

  // Update the coordinates based on the desired transformation
  for (k = zs; k < zs + zm; k++) {
    for (j = ys; j < ys + ym; j++) {
      for (i = xs; i < xs + xm; i++) {
        PetscInt idx = ((k * N + j) * M + i) * dim;          // Index for the i, j, k point
        coordArray[idx]     = coordArray[idx] * F(0, 0);     // Update x-coordinate
        coordArray[idx + 1] = coordArray[idx + 1] * F(1, 1); // Update y-coordinate
        coordArray[idx + 2] = coordArray[idx + 2] * F(2, 2); // Update z-coordinate
      }
    }
  }

  // Restore the coordinates vector
  ierr = VecRestoreArray(coordinates, &coordArray);
  CHKERRQ(ierr);

  // Set the updated coordinates back to the DMDA
  ierr = DMSetCoordinates(dm, coordinates);
  CHKERRQ(ierr);

  PetscFunctionReturn(0);
}

/************************************************************************/

On 17 Jan 2025, at 16:00, Matthew Knepley wrote:

On Fri, Jan 17, 2025 at 9:45 AM MIGUEL MOLINOS PEREZ wrote:

> I tried what you suggested, but still I got this error message. Maybe I should use main release?

No. I suspect something is wrong with the way you are setting coordinates. Can you share the code?
Thanks, Matt

Miguel

[4]PETSC ERROR: ------------------------------------------------------------------------
[4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[4]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
[4]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[4]PETSC ERROR: The line numbers in the error traceback are not always exact.
[4]PETSC ERROR: #1 Pack_PetscReal_1_0() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:373
[4]PETSC ERROR: #2 PetscSFLinkPackRootData_Private() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:932
[4]PETSC ERROR: #3 PetscSFLinkPackRootData() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:966
[4]PETSC ERROR: #4 PetscSFBcastBegin_Basic() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfbasic.c:357
[4]PETSC ERROR: #5 PetscSFBcastWithMemTypeBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/sf.c:1513
[4]PETSC ERROR: #6 VecScatterBegin_Internal() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:70
[4]PETSC ERROR: #7 VecScatterBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:1316
[4]PETSC ERROR: #8 DMGlobalToLocalBegin_DA() at /Users/migmolper/petsc/src/dm/impls/da/dagtol.c:15
[4]PETSC ERROR: #9 DMGlobalToLocalBegin() at /Users/migmolper/petsc/src/dm/interface/dm.c:2844
[4]PETSC ERROR: #10 DMGetCoordinatesLocalSetUp() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:565
[4]PETSC ERROR: #11 DMGetCoordinatesLocal() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:599
[4]PETSC ERROR: #12 _DMLocatePoints_DMDARegular_IS() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:531
[4]PETSC ERROR: #13 DMLocatePoints_DMDARegular() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:586
[4]PETSC ERROR: #14 DMLocatePoints() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:1194
[4]PETSC ERROR: #15 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:219
[4]PETSC ERROR: #16 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349
[4]PETSC ERROR: #17 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41

On Jan 15, 2025, at 4:56 PM, MIGUEL MOLINOS PEREZ wrote:

Thank you Matt for the useful info. I'll try your idea.

Miguel

On 15 Jan 2025, at 16:48, Matthew Knepley wrote:

On Wed, Jan 15, 2025 at 10:41 AM MIGUEL MOLINOS PEREZ wrote:

> Thank you Matt.
>
> Yes, I am getting the "CellDM" from the DMSwarm.
>
> 1. I have recently overhauled this functionality because it was not flexible enough for the plasma simulation we do. Thus main and release work differently.
>
> Nice to hear that. Should I move to main?

The changes allow you to have several cell DMs. I want to bin particles in space, but also in velocity, and then in the tensor product of space and velocity. Moreover, sometimes I want to use different Swarm fields as the DM field for the solver. You can do all that with main now. If you just need a single DM with the same DM fields, release is fine.

> 2. I assume you are using release
>
> You are correct.
>
> 3. In both main and release, if you change the coordinates of your CellDM mesh, you need to rebin the particles. The easiest way to do this is to call DMSwarmMigrate(sw, PETSC_FALSE).
>
> What do you mean by rebin?

When you provide the cell DM, Swarm makes a "sort context" that bins the particles into DM cells. If you change the coordinates, this binning will change, so you need it to "rebin" or recreate the sort context.
Thanks, Matt

Miguel

Thanks, Matt

Best,
Miguel

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com Fri Jan 24 10:00:20 2025
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 24 Jan 2025 11:00:20 -0500
Subject: [petsc-users] Update DMDA attached to DMSWARM
In-Reply-To: 
References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es>
 <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es>
 <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es>
 <30099029-1BA9-45C5-A000-47A5178F53A1@us.es>
 <596FE6D9-3200-4946-95CD-0C30BCD96238@us.es>
 <2D7B88F9-D98F-4FAE-82C6-D48EA02CCCA1@us.es>
 <725862F5-8689-42E9-B496-C5088856C5FB@us.es>
Message-ID: 

On Fri, Jan 24, 2025 at 10:56 AM MIGUEL MOLINOS PEREZ wrote:

> Sorry I wasn't clear enough. By "the box is updated" I mean: I run
> DMGetBoundingBox and the resulting coordinates are updated according to the
> deformation gradient "F".

Oh, if you change the periodic extent, which you are, you have to recall DMSetPeriodicity(), which is what DMGetBoundingBox() consults for the periodic extent (because the coordinates cannot be trusted).

  Thanks,

     Matt

> Thanks,
> Miguel
>
> On 24 Jan 2025, at 16:50, Matthew Knepley wrote:
>
> On Fri, Jan 24, 2025 at 10:36 AM MIGUEL MOLINOS PEREZ wrote:
>
>> Thanks Matt, I tried that too, and the problem remains. The box is
>> updated only if I set no periodic BCs.
>
> What do you mean by "The box is updated"? I am trying to understand how
> you test things. Clearly the coordinates are updated, even in the periodic
> case. Thus, I need to understand the test. Once we do that, we can work
> backwards to the first broken thing.
>
> Thanks,
>
>    Matt
>
>> Miguel
>>
>> On 24 Jan 2025, at 14:20, Matthew Knepley wrote:
>>
>> On Fri, Jan 24, 2025 at 4:41 AM MIGUEL MOLINOS PEREZ wrote:
>>
>>> Dear Matt, the error was in the implementation of the volume expansion
>>> function. I updated it, and it works fine under finite domains. However,
>>> if I include periodic boundary conditions, the volume of the cell does not
>>> accommodate the volume expansion of the particles. The deformation gradient
>>> is not the identity. I guess I am missing some fine detail of how periodic
>>> BCs are implemented in the DMDA mesh, am I right?
>>
>> DMDA identifies vertices using a VecScatter to implement periodic BCs.
>> This should be insensitive to coordinates. However, I don't think the
>> algorithm below is correct for local coordinates. You use GlobalToLocal(),
>> which means that some global coordinate "wins" for each local cell, so
>> cells on the periodic boundary can be wrong. I would set the local
>> coordinates by hand as well.
>>
>> Thanks,
>>
>>    Matt

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mmolinos at us.es Fri Jan 24 10:08:00 2025 From: mmolinos at us.es (MIGUEL MOLINOS PEREZ) Date: Fri, 24 Jan 2025 16:08:00 +0000 Subject: [petsc-users] Update DMDA attached to DMSWARM In-Reply-To: References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es> <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es> <30099029-1BA9-45C5-A000-47A5178F53A1@us.es> <596FE6D9-3200-4946-95CD-0C30BCD96238@us.es> <2D7B88F9-D98F-4FAE-82C6-D48EA02CCCA1@us.es> <725862F5-8689-42E9-B496-C5088856C5FB@us.es> Message-ID: <41D2201F-A290-4239-B57B-42DF1F8A32CB@us.es> Ohh, I wasn't aware of this function. Thank you Matt, I?ll see if that solves the problem. Miguel On 24 Jan 2025, at 17:00, Matthew Knepley wrote: On Fri, Jan 24, 2025 at 10:56?AM MIGUEL MOLINOS PEREZ > wrote: Sorry I wasn?t clear enough. By ?the box is updated? I mean: I run DMGetBoundingBox and the resulting coordinates are updated according to the deformation gradient ?F". Oh, if you change the periodic extent, which you are, you have to recall DMSetPeriodicity(), which is what DMGetBoundaingBox() consults for the periodic extent (because the coordinates cannot be trusted). Thanks, Matt Thanks, Miguel On 24 Jan 2025, at 16:50, Matthew Knepley > wrote: On Fri, Jan 24, 2025 at 10:36?AM MIGUEL MOLINOS PEREZ > wrote: Thanks Matt, I tried that too, and the problem remains. The box is updated only if I set no periodic bcc. What do you mean by "The box is updated"? I am trying to understand how you test things. Clearly the coordinates are updated, even in the periodic case. Thus, I need to understand the test. Once we do that, we can work backwards to the first broken thing. Thanks, Matt Miguel On 24 Jan 2025, at 14:20, Matthew Knepley > wrote: On Fri, Jan 24, 2025 at 4:41?AM MIGUEL MOLINOS PEREZ > wrote: Dear Matt, the error was in the implementation of the volume expansion function. I updated it, and it works finte under finite domains. 
However, if I include periodic boundary conditions the volume of the cell does not accommodate the volume expansion of the particles. The deformation gradient is not the identity? I guess I am missing the fine detail on how periodic bcc are implemented in DMDA mesh, I?m right? DMDA identifies vertices using a VecScatter to implement periodic BC. This should be insensitive to coordinates. However, I don't think the algorithm below is correct for local coordinates. You use GlobalToLocal(), which means that some global coordinate "wins" for each local cell, so cells on the periodic boundary can be wrong. I would set the local coordinates by hand as well. Thanks, Matt Thanks, Miguel static PetscErrorCode Volumetric_Expansion_DMDA(DM * da, const Eigen::Matrix3d& F) { PetscInt i, j, mstart, m, nstart, n, pstart, p, k; Vec local, global; DMDACoor3d ***coors, ***coorslocal; DM cda; PetscFunctionBeginUser; PetscCall(DMGetCoordinateDM(*da, &cda)); PetscCall(DMGetCoordinates(*da, &global)); PetscCall(DMGetCoordinatesLocal(*da, &local)); PetscCall(DMDAVecGetArray(cda, global, &coors)); PetscCall(DMDAVecGetArrayRead(cda, local, &coorslocal)); PetscCall(DMDAGetCorners(cda, &mstart, &nstart, &pstart, &m, &n, &p)); for (i = mstart; i < mstart + m; i++) { for (j = nstart; j < nstart + n; j++) { for (k = pstart; k < pstart + p; k++) { coors[k][j][i].x = coorslocal[k][j][i].x * F(0, 0); coors[k][j][i].y = coorslocal[k][j][i].y * F(1, 1); coors[k][j][i].z = coorslocal[k][j][i].z * F(2, 2); } } } PetscCall(DMDAVecRestoreArray(cda, global, &coors)); PetscCall(DMDAVecRestoreArrayRead(cda, local, &coorslocal)); PetscCall(DMGlobalToLocalBegin(cda, global, INSERT_VALUES, local)); PetscCall(DMGlobalToLocalEnd(cda, global, INSERT_VALUES, local)); PetscFunctionReturn(PETSC_SUCCESS); } On 17 Jan 2025, at 18:01, MIGUEL MOLINOS PEREZ > wrote: You are right!! Thank you again! 
Miguel On Jan 17, 2025, at 5:18?PM, Matthew Knepley > wrote: On Fri, Jan 17, 2025 at 10:49?AM MIGUEL MOLINOS PEREZ > wrote: Now the error is in the call to DMSwarmMigrate You have almost certainly overwritten memory somewhere. Can you use vlagrind or Address Sanitizer? Thanks, Matt Miguel [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!cH3zcBaa7wpMv_H0Ns0rbef6fTMzZem8vGh5BoujDstkv8nuSDZ9I6q019NeB9LwStpL7zWf9v8sKDHLFFGCsQ$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cH3zcBaa7wpMv_H0Ns0rbef6fTMzZem8vGh5BoujDstkv8nuSDZ9I6q019NeB9LwStpL7zWf9v8sKDGVkoQJsw$ [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: The line numbers in the error traceback are not always exact. [0]PETSC ERROR: #1 DMSwarmDataBucketGetSizes() at /Users/migmolper/petsc/src/dm/impls/swarm/data_bucket.c:297 [0]PETSC ERROR: #2 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:201 [0]PETSC ERROR: #3 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349 [0]PETSC ERROR: #4 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41 application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 On Jan 17, 2025, at 4:22?PM, Matthew Knepley > wrote: On Fri, Jan 17, 2025 at 10:08?AM MIGUEL MOLINOS PEREZ > wrote: Thank you Matt, this the piece of code I use to change the coordinates of the DM obtained using: You do not need the call to DMSetCoordinates(). What happens when you remove it? 
Thanks, Matt DMSwarmGetCellDM(Simulation.atomistic_data, &bounding_cell); DMGetApplicationContext(bounding_cell, &background_mesh); Thanks, Miguel /************************************************************************/ PetscErrorCode Volumetric_Expansion(DM dm, const Eigen::Matrix3d& F) { PetscErrorCode ierr; Vec coordinates; PetscScalar* coordArray; PetscInt xs, ys, zs, xm, ym, zm, i, j, k; PetscInt dim, M, N, P; PetscFunctionBegin; // Get DMDA information ierr = DMDAGetInfo(dm, &dim, &M, &N, &P, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL); CHKERRQ(ierr); ierr = DMDAGetCorners(dm, &xs, &ys, &zs, &xm, &ym, &zm); CHKERRQ(ierr); // Get the coordinates vector ierr = DMGetCoordinates(dm, &coordinates); CHKERRQ(ierr); ierr = VecGetArray(coordinates, &coordArray); CHKERRQ(ierr); // Update the coordinates based on the desired transformation for (k = zs; k < zs + zm; k++) { for (j = ys; j < ys + ym; j++) { for (i = xs; i < xs + xm; i++) { PetscInt idx = ((k * N + j) * M + i) * dim; // Index for the i, j, k point coordArray[idx] = coordArray[idx] * F(0,0); // Update x-coordinate coordArray[idx + 1] = coordArray[idx + 1] * F(1,1); // Update y-coordinate coordArray[idx + 2] = coordArray[idx + 2] * F(2,2); // Update z-coordinate } } } // Restore the coordinates vector ierr = VecRestoreArray(coordinates, &coordArray); CHKERRQ(ierr); // Set the updated coordinates back to the DMDA ierr = DMSetCoordinates(dm, coordinates); CHKERRQ(ierr); PetscFunctionReturn(0); } /************************************************************************/ On 17 Jan 2025, at 16:00, Matthew Knepley > wrote: On Fri, Jan 17, 2025 at 9:45?AM MIGUEL MOLINOS PEREZ > wrote: I tried what you suggested, but still I got this error message. Maybe I should use main release? No. I suspect something is wrong with the way you are setting coordinates. Can you share the code? 
Thanks, Matt

Miguel

[4]PETSC ERROR: ------------------------------------------------------------------------
[4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[4]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!cH3zcBaa7wpMv_H0Ns0rbef6fTMzZem8vGh5BoujDstkv8nuSDZ9I6q019NeB9LwStpL7zWf9v8sKDHLFFGCsQ$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cH3zcBaa7wpMv_H0Ns0rbef6fTMzZem8vGh5BoujDstkv8nuSDZ9I6q019NeB9LwStpL7zWf9v8sKDGVkoQJsw$
[4]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[4]PETSC ERROR: The line numbers in the error traceback are not always exact.
[4]PETSC ERROR: #1 Pack_PetscReal_1_0() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:373
[4]PETSC ERROR: #2 PetscSFLinkPackRootData_Private() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:932
[4]PETSC ERROR: #3 PetscSFLinkPackRootData() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:966
[4]PETSC ERROR: #4 PetscSFBcastBegin_Basic() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfbasic.c:357
[4]PETSC ERROR: #5 PetscSFBcastWithMemTypeBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/sf.c:1513
[4]PETSC ERROR: #6 VecScatterBegin_Internal() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:70
[4]PETSC ERROR: #7 VecScatterBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:1316
[4]PETSC ERROR: #8 DMGlobalToLocalBegin_DA() at /Users/migmolper/petsc/src/dm/impls/da/dagtol.c:15
[4]PETSC ERROR: #9 DMGlobalToLocalBegin() at /Users/migmolper/petsc/src/dm/interface/dm.c:2844
[4]PETSC ERROR: #10 DMGetCoordinatesLocalSetUp() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:565
[4]PETSC ERROR: #11 DMGetCoordinatesLocal() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:599
[4]PETSC ERROR: #12 _DMLocatePoints_DMDARegular_IS() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:531
[4]PETSC ERROR: #13 DMLocatePoints_DMDARegular() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:586
[4]PETSC ERROR: #14 DMLocatePoints() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:1194
[4]PETSC ERROR: #15 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:219
[4]PETSC ERROR: #16 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349
[4]PETSC ERROR: #17 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41

On Jan 15, 2025, at 4:56 PM, MIGUEL MOLINOS PEREZ > wrote:
Thank you Matt for the useful info. I'll try your idea.

Miguel

On 15 Jan 2025, at 16:48, Matthew Knepley > wrote:
On Wed, Jan 15, 2025 at 10:41 AM MIGUEL MOLINOS PEREZ > wrote:
Thank you Matt. Yes, I am getting the "CellDM" from the DMSwarm.

1. I have recently overhauled this functionality because it was not flexible enough for the plasma simulation we do. Thus main and release work differently.

Nice to hear that. Should I move to main?

The changes allow you to have several cell DMs. I want to bin particles in space, but also in velocity, and then in the tensor product of space and velocity. Moreover, sometimes I want to use different Swarm fields as the DM field for the solver. You can do all that with main now. If you just need a single DM with the same DM fields, release is fine.

2. I assume you are using release

You are correct.

3. In both main and release, if you change the coordinates of your CellDM mesh, you need to rebin the particles. The easiest way to do this is to call DMSwarmMigrate(sw, PETSC_FALSE).

What do you mean by rebin?

When you provide the cell DM, Swarm makes a "sort context" that bins the particles into DM cells. If you change the coordinates, this binning will change, so you need to "rebin", i.e., recreate the sort context.
Thanks, Matt

Miguel

Thanks, Matt

Best, Miguel

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cH3zcBaa7wpMv_H0Ns0rbef6fTMzZem8vGh5BoujDstkv8nuSDZ9I6q019NeB9LwStpL7zWf9v8sKDFYEjkgbg$

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ntilton at mines.edu Thu Jan 23 21:05:49 2025
From: ntilton at mines.edu (Nils Tilton)
Date: Fri, 24 Jan 2025 03:05:49 +0000
Subject: [petsc-users] Issue when installing PETSc
Message-ID:

Dear PETSc Team,

I hope this email finds you all well. I have a question regarding installation of PETSc. I recently installed PETSc on Ubuntu 20.04.2 LTS. I followed the online instructions, whereby I first downloaded the files using git, and then ran "./configure" and then "make all check." However, I noted two issues during this process. First, after running ./configure, I got the following text:

"Language used to compile PETSc: C"

This gave me the impression PETSc was compiled with C, but my code is written in C++. Is the above going to be an issue? I am attaching my configure.log and make.log files if that helps.

The second issue is that when I ran "make all check," I did get some errors in the final checking stage. These don't appear in the make.log file, so I am attaching a screenshot.
I seem to recall that screenshots are frowned upon when asking questions to the PETSc team. I apologize in advance. I just couldn't find any other way to save the text.

I will add that I was able to successfully compile my C++ code that uses PETSc, but I am getting some funny behavior that could be related to the issues above. I am holding off on including those issues here to avoid complicating my question too soon.

Thank you very much for all your help!
Best Wishes,
Nils

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1218783 bytes Desc: configure.log URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.log Type: image/png Size: 130770 bytes Desc: make.log URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot 2025-01-23 at 5.57.48 PM.png Type: image/png Size: 2614977 bytes Desc: Screenshot 2025-01-23 at 5.57.48 PM.png URL:
I am attaching my configure.log and make.log files if that helps.

No issue. You can use it fully from C++

> > The second issue is that when I ran "make all check," I did get some errors in the final checking stage. These don't appear in the make.log file, so I am attaching a screenshot. I seem to recall that screenshots are frowned upon when asking questions to the PETSc team. I apologize in advance. I just couldn't find any other way to save the text.
> > I will add that I was able to successfully compile my C++ code that uses PETSc, but I am getting some funny behavior that could be related to the issues above. I am holding off on including those issues here to avoid complicating my question too soon.

The "protocal" error message is from the MPI and can be ignored. Sadly MPI implementers still do not prefix their warning/error messages to indicate they are coming from MPI.

So you are set to go, good luck

Barry

> > Thank you very much for all your help! > Best Wishes, > Nils > >

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ntilton at mines.edu Fri Jan 24 11:20:44 2025
From: ntilton at mines.edu (Nils Tilton)
Date: Fri, 24 Jan 2025 17:20:44 +0000
Subject: [petsc-users] [EXTERNAL] Re: Issue when installing PETSc In-Reply-To: References: Message-ID:

Dear Barry,

Thanks so much for your help. I was eventually able to get my code to compile and run correctly with PETSc. The other strange issues I alluded to in my email turned out to be unrelated. Thanks again!

Best Wishes,
Nils

From: Barry Smith Date: Friday, January 24, 2025 at 10:14 AM To: Nils Tilton Cc: petsc-users at mcs.anl.gov Subject: [EXTERNAL] Re: [petsc-users] Issue when installing PETSc CAUTION: This email originated from outside of the Colorado School of Mines organization. Do not click on links or open attachments unless you recognize the sender and know the content is safe.
On Jan 23, 2025, at 10:05 PM, Nils Tilton wrote:

Dear PETSc Team,

I hope this email finds you all well. I have a question regarding installation of PETSc. I recently installed PETSc on Ubuntu 20.04.2 LTS. I followed the online instructions, whereby I first downloaded the files using git, and then ran "./configure" and then "make all check." However, I noted two issues during this process. First, after running ./configure, I got the following text:

"Language used to compile PETSc: C"

This gave me the impression PETSc was compiled with C, but my code is written in C++. Is the above going to be an issue? I am attaching my configure.log and make.log files if that helps.

No issue. You can use it fully from C++

The second issue is that when I ran "make all check," I did get some errors in the final checking stage. These don't appear in the make.log file, so I am attaching a screenshot. I seem to recall that screenshots are frowned upon when asking questions to the PETSc team. I apologize in advance. I just couldn't find any other way to save the text.

I will add that I was able to successfully compile my C++ code that uses PETSc, but I am getting some funny behavior that could be related to the issues above. I am holding off on including those issues here to avoid complicating my question too soon.

The "protocal" error message is from the MPI and can be ignored. Sadly MPI implementers still do not prefix their warning/error messages to indicate they are coming from MPI.

So you are set to go, good luck

Barry

Thank you very much for all your help!
Best Wishes,
Nils

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From bsmith at petsc.dev Sun Jan 26 18:11:53 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Sun, 26 Jan 2025 19:11:53 -0500
Subject: [petsc-users] fortran TYPE IS statement vs petsc index set IS In-Reply-To: References: Message-ID: <2DF566A0-3486-4870-BFB7-FCFB9A483CAE@petsc.dev>

Sorry for the delay in responding.
The easiest way is to simply skip the #include "petsc/finclude/petscksp.h" and use type(tIS) instead of IS.

Explanation: the #include "petsc/finclude/petscksp.h" defines a few macros to make the current PETSc Fortran API look like the old PETSc API. For example

#define IS type(tIS)

Barry

> On Jan 23, 2025, at 3:16 AM, Klaij, Christiaan via petsc-users wrote:
>
> In Fortran I'm using the following structure to check the type of
> an incoming variable:
>
> SELECT TYPE (myvar)
> TYPE IS (mytype)
> ...
> END SELECT
>
> Here IS is a Fortran intrinsic, so far so good. However, when I
> add a petsc index set as follows
>
> #include "petsc/finclude/petscksp.h"
>
> use petscksp, only: tIS
>
> IS :: myIS
>
> the compiler gets confused and thinks that the intrinsic Fortran
> IS is the petsc index set IS, and errors-out on the SELECT
> TYPE:
>
> error #8245: SELECT TYPE statement must be immediately followed by CLASS IS, TYPE IS, CLASS DEFAULT or END SELECT statement.
> SELECT TYPE (myvar)
> ----^
> error #6410: This name has not been declared as an array or a function. [TYPE]
> TYPE type(tIS) (mytype)
> ---------^
> compilation aborted
>
> What would be the right way to deal with this problem?
>
> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
> T +31 317 49 33 44 | C.Klaij at marin.nl | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!YHMEsXfJ-XNk5b6HgivxkchNuwt2FjCj_pmjkzHYlkXw1xWEKDDXrJx9_kU4X76OwINhpeSRGozWLg-nNI4YirE$
>
> -------------- next part -------------- An HTML attachment was scrubbed... URL:

From bsmith at petsc.dev Sun Jan 26 19:11:43 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Sun, 26 Jan 2025 20:11:43 -0500
Subject: [petsc-users] fortran TYPE IS statement vs petsc index set IS In-Reply-To: <2DF566A0-3486-4870-BFB7-FCFB9A483CAE@petsc.dev> References: <2DF566A0-3486-4870-BFB7-FCFB9A483CAE@petsc.dev> Message-ID:

Or, simpler, you can just use is() in lower case for the Fortran is.
Barry

> On Jan 26, 2025, at 7:11 PM, Barry Smith wrote:
>
> Sorry for the delay in responding.
>
> The easiest way is to simply skip the #include "petsc/finclude/petscksp.h" and use type(tIS) instead of IS.
>
> Explanation: the #include "petsc/finclude/petscksp.h" defines a few macros to make the current PETSc Fortran API look like the old PETSc API. For example
>
> #define IS type(tIS)
>
> Barry
>
>> On Jan 23, 2025, at 3:16 AM, Klaij, Christiaan via petsc-users wrote:
>>
>> In Fortran I'm using the following structure to check the type of
>> an incoming variable:
>>
>> SELECT TYPE (myvar)
>> TYPE IS (mytype)
>> ...
>> END SELECT
>>
>> Here IS is a Fortran intrinsic, so far so good. However, when I
>> add a petsc index set as follows
>>
>> #include "petsc/finclude/petscksp.h"
>>
>> use petscksp, only: tIS
>>
>> IS :: myIS
>>
>> the compiler gets confused and thinks that the intrinsic Fortran
>> IS is the petsc index set IS, and errors-out on the SELECT
>> TYPE:
>>
>> error #8245: SELECT TYPE statement must be immediately followed by CLASS IS, TYPE IS, CLASS DEFAULT or END SELECT statement.
>> SELECT TYPE (myvar)
>> ----^
>> error #6410: This name has not been declared as an array or a function. [TYPE]
>> TYPE type(tIS) (mytype)
>> ---------^
>> compilation aborted
>>
>> What would be the right way to deal with this problem?
>>
>> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
>> T +31 317 49 33 44 | C.Klaij at marin.nl | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!eqUBMOZ3B0EOG1EMwA4E2Rz6hft5fVy3Ivj2YBHHWuy-IgCZLB-Pj7a2oeJOVg2grofaECvwA9G1ob6vv6kPWU0$
>>
>> -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From medane.tchakorom at univ-fcomte.fr Mon Jan 27 07:23:24 2025
From: medane.tchakorom at univ-fcomte.fr (medane.tchakorom at univ-fcomte.fr)
Date: Mon, 27 Jan 2025 14:23:24 +0100
Subject: [petsc-users] Copy dense matrix into half part of another dense matrix
Message-ID: <22B53CCE-5155-4D79-B6FC-223489382DC7@univ-fcomte.fr>

Dear PETSc users,

I hope this message finds you well. I don't know if my question is relevant, but I am currently working with DENSE type matrices, and would like to copy one matrix R_part [n/2 x m] (the result of a MatMatMult operation) into another dense matrix R_full [n x m]. Both matrices being on the same communicator, I would like to efficiently copy R_part into the first half of R_full. I have been using MatSetValues, but for large matrices, the subsequent assembly operation is costly. Please could you suggest some strategies or functions to do this efficiently?

Thank you for your time and assistance.

Best regards,
Tchakorom Medane

From pierre at joliv.et Mon Jan 27 07:42:01 2025
From: pierre at joliv.et (Pierre Jolivet)
Date: Mon, 27 Jan 2025 14:42:01 +0100
Subject: [petsc-users] Copy dense matrix into half part of another dense matrix In-Reply-To: <22B53CCE-5155-4D79-B6FC-223489382DC7@univ-fcomte.fr> References: <22B53CCE-5155-4D79-B6FC-223489382DC7@univ-fcomte.fr> Message-ID:

> On 27 Jan 2025, at 2:23 PM, medane.tchakorom at univ-fcomte.fr wrote:
>
> Dear PETSc users,
>
> I hope this message finds you well.
I don't know if my question is relevant, but I am currently working with DENSE type matrices, and would like to copy one matrix R_part [n/2 x m] (the result of a MatMatMult operation) into another dense matrix R_full [n x m].
> Both matrices being on the same communicator, I would like to efficiently copy R_part into the first half of R_full.
> I have been using MatSetValues, but for large matrices, the subsequent assembly operation is costly.

Could you please share the output of -log_view as well as a single file that will be generated with -info dump (e.g., dump.0, the file associated with process #0)?
This shouldn't be that costly, so there may be an option missing, like MAT_NO_OFF_PROC_ENTRIES.
Anyway, if you want to optimize this, the fastest way would be to call MatDenseGetArray[Read,Write]() and then do the necessary PetscArraycpy().
> > Could you please share the output of -log_view as well as a single file > that will be generated with -info dump (e.g., dump.0, the file associated > with process #0)?
> > This shouldn't be that costly, so there may be an option missing, like > MAT_NO_OFF_PROC_ENTRIES.
> > Anyway, if you want to optimize this, the fastest way would be to call > MatDenseGetArray[Read,Write]() and then do the necessary PetscArraycpy().

The other alternative (which I think makes cleaner code) is to use

https://urldefense.us/v3/__https://petsc.org/main/manualpages/Mat/MatDenseGetSubMatrix/__;!!G_uCfscf7eWS!ZOoOupB2xSO9fOY1fdeyPQU6bNVEuctFDItgqZzhKerRBnX3177w7U4_2rlep0tcSVOrNgKt75z9xd8BvjI9$

to create your R_part matrix. Then you are directly acting on the memory you want when assembling the smaller matrix.

Thanks,

Matt

> Thanks, > Pierre
>
> > Please could you suggest me some strategies or functions to do this > efficiently.
> > Thank you for your time and assistance.
> > Best regards,
> > Tchakorom Medane

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZOoOupB2xSO9fOY1fdeyPQU6bNVEuctFDItgqZzhKerRBnX3177w7U4_2rlep0tcSVOrNgKt75z9xRlvvjmx$

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pierre at joliv.et Mon Jan 27 13:19:31 2025
From: pierre at joliv.et (Pierre Jolivet)
Date: Mon, 27 Jan 2025 20:19:31 +0100
Subject: [petsc-users] Copy dense matrix into half part of another dense matrix In-Reply-To: <5EBB3A0A-10A3-4DAC-A349-6AB2AF3A5CD8@univ-fcomte.fr> References: <22B53CCE-5155-4D79-B6FC-223489382DC7@univ-fcomte.fr> <8596A695-9318-41BD-BB08-1CF97161E8B6@univ-fcomte.fr> <4FBC3B13-03A1-4269-B9E2-39C6C3102705@joliv.et> <5EBB3A0A-10A3-4DAC-A349-6AB2AF3A5CD8@univ-fcomte.fr> Message-ID:

Please always keep the list in copy.
The way you create A is not correct, I've attached a fixed code.
If you want to keep your own distribution for A (and not the one associated with R_part), you'll need to first call https://urldefense.us/v3/__https://petsc.org/main/manualpages/Mat/MatCreateSubMatrix/__;!!G_uCfscf7eWS!YjRNPHiOB2cmuRYkj3oAk-pZq_o8h3NlpeO9PlDH0X9SBfFvdi3ClO4y8ytxjLkg8u16l6dmVO7PZsCIAdueNw$ to redistribute A and then do a MatCopy() of the resulting Mat into R_part.

Thanks,
Pierre

$ /Volumes/Data/repositories/petsc/arch-darwin-c-debug-real/bin/mpirun -n 4 ./ex1234
Mat Object: 4 MPI processes
type: mpidense
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
Mat Object: 4 MPI processes
type: mpidense
2.6219599187040323e+00 1.9661197867318445e+00 1.5218640363910978e+00
3.5202261875977947e+00 3.6311893358251384e+00 2.2279492868785069e+00
2.7505403755038014e+00 3.1546072728892756e+00 1.8416294994524489e+00
2.4676055638467314e+00 2.3185625557889602e+00 2.0401666986599833e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
> On 27 Jan 2025, at 6:53 PM, medane.tchakorom at univ-fcomte.fr wrote:
>
> Re:
>
> This is a small reproducible example using MatDenseGetSubMatrix
>
> Command: petscmpiexec -n 4 ./example
>
> ==========================================================
>
> PetscInt nlines = 8; // lines
> PetscInt ncolumns = 3; // columns
> PetscInt random_size = 12;
> PetscInt rank;
> PetscInt size;
>
> // Initialize PETSc
> PetscInitialize(&argc, &args, NULL, NULL);
>
> MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> MPI_Comm_size(MPI_COMM_WORLD, &size);
>
> // R_full with all values set to zero
> Mat R_full;
> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nlines, ncolumns, NULL, &R_full);
> MatZeroEntries(R_full);
> MatView(R_full, PETSC_VIEWER_STDOUT_WORLD);
>
> // Creating and setting A and S to random values
> Mat A, S;
> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nlines / 2, random_size, NULL, &A);
> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, random_size, ncolumns, NULL, &S);
> MatSetRandom(A, NULL);
> MatSetRandom(S, NULL);
>
> // Computing R_part
> Mat R_part;
> MatDenseGetSubMatrix(R_full, PETSC_DECIDE, nlines / 2, PETSC_DECIDE, PETSC_DECIDE, &R_part);
> MatMatMult(A, S, MAT_REUSE_MATRIX, PETSC_DECIDE, &R_part);
>
> // Visualizing R_full
> MatDenseRestoreSubMatrix(R_full, &R_part);
> MatView(R_full, PETSC_VIEWER_STDOUT_WORLD);
>
> // Destroying matrices
> MatDestroy(&R_part);
> MatDestroy(&R_full);
>
> PetscFinalize();
> return 0;
>
> ==========================================================
>
> Part of the error output contains:
>
> "Cannot change/reset row sizes to 1 local 4 global after previously setting them to 2 local 4 global ..."
> ==========================================================
>
> PetscInt nlines = 8; // lines
> PetscInt ncolumns = 3; // columns
> PetscInt random_size = 12;
> PetscInt rank;
> PetscInt size;
>
> // Initialize PETSc
> PetscInitialize(&argc, &args, NULL, NULL);
>
> MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> MPI_Comm_size(MPI_COMM_WORLD, &size);
>
> // R_full with all values set to zero
> Mat R_full;
> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nlines, ncolumns, NULL, &R_full);
> MatZeroEntries(R_full);
> MatView(R_full, PETSC_VIEWER_STDOUT_WORLD);
>
> // Creating and setting A and S to random values
> Mat A, S;
> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nlines / 2, random_size, NULL, &A);
> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, random_size, ncolumns, NULL, &S);
> MatSetRandom(A, NULL);
> MatSetRandom(S, NULL);
>
> // Computing R_part
> Mat R_part;
> MatMatMult(A, S, MAT_INITIAL_MATRIX, PETSC_DECIDE, &R_part);
> MatView(R_part, PETSC_VIEWER_STDOUT_WORLD);
>
> Mat R_sub;
> MatDenseGetSubMatrix(R_full, PETSC_DECIDE, nlines / 2, PETSC_DECIDE, PETSC_DECIDE, &R_sub);
>
> PetscScalar *storage = NULL;
> MatDenseGetArray(R_part, &storage);
> PetscScalar *storage_sub = NULL;
> MatDenseGetArray(R_sub, &storage_sub);
>
> PetscArraycpy(storage_sub, storage, (nlines / 2) * ncolumns);
>
> MatDenseRestoreArray(R_part, &storage);
> MatDenseRestoreArray(R_sub, &storage_sub);
>
> MatDenseRestoreSubMatrix(R_full, &R_sub);
>
> MatView(R_full, PETSC_VIEWER_STDOUT_WORLD);
>
> // Destroying matrices
> MatDestroy(&R_part);
> MatDestroy(&R_full);
>
> PetscFinalize();
> return 0;
> ==========================================================
>
> Now using MatDenseGetArray
>
> Please let me know if I need to clarify something.
> Thanks
> Medane

>> On 27 Jan 2025, at 16:26, Pierre Jolivet wrote:
>>
>>> On 27 Jan 2025, at 3:52 PM, Matthew Knepley wrote:
>>>
>>> On Mon, Jan 27, 2025 at 9:23 AM medane.tchakorom at univ-fcomte.fr > wrote:
>>>> Re:
>>>>
>>>> MatDenseGetSubMatrix in fact could be the best alternative (cleaner code), but as mentioned earlier, I would like to use the smaller matrix R_part to get the result of a MatMatMult operation.
>>>>
>>>> MatMatMult(A, S, MAT_INITIAL_MATRIX, PETSC_DECIDE, &R_part);
>>>>
>>>> When trying to use MAT_REUSE_MATRIX, this gave an error (an expected error, I think). On the other side, as mentioned on the MatDenseGetSubMatrix page, "The output matrix is not redistributed by PETSc". So will R_part be a valid output matrix for MatMatMult?
>>>
>>> I believe it is, but I am not the expert.
>>>
>>> Pierre, can the SubMatrix be used as the output of a MatMatMult()?
>>
>> It should be, but there may be some limitations due to leading dimensions and what not.
>> By looking at just the single line of code we got from you, I can see at least one issue: it should be MAT_REUSE_MATRIX, not MAT_INITIAL_MATRIX (assuming you got R_part from MatDenseGetSubMatrix).
>> Feel free to share a (minimal) reproducer.
>>
>> Thanks,
>> Pierre
>>
>>> Thanks,
>>>
>>> Matt
>>>
>>>> Thanks
>>>>
>>>>> On 27 Jan 2025, at 14:52, Matthew Knepley > wrote:
>>>>>
>>>>> On Mon, Jan 27, 2025 at 8:42 AM Pierre Jolivet > wrote:
>>>>>> > On 27 Jan 2025, at 2:23 PM, medane.tchakorom at univ-fcomte.fr wrote:
>>>>>> >
>>>>>> > Dear PETSc users,
>>>>>> >
>>>>>> > I hope this message finds you well. I don't know if my question is relevant, but I am currently working with DENSE type matrices, and would like to copy one matrix R_part [n/2 x m] (the result of a MatMatMult operation) into another dense matrix R_full [n x m].
>>>>>> > Both matrices being on the same communicator, I would like to efficiently copy R_part into the first half of R_full.
>>>>>> > I have been using MatSetValues, but for large matrices, the subsequent assembly operation is costly.
>>>>>>
>>>>>> Could you please share the output of -log_view as well as a single file that will be generated with -info dump (e.g., dump.0, the file associated with process #0)?
>>>>>> This shouldn't be that costly, so there may be an option missing, like MAT_NO_OFF_PROC_ENTRIES.
>>>>>> Anyway, if you want to optimize this, the fastest way would be to call MatDenseGetArray[Read,Write]() and then do the necessary PetscArraycpy().
>>>>>
>>>>> The other alternative (which I think makes cleaner code) is to use
>>>>>
>>>>> https://petsc.org/main/manualpages/Mat/MatDenseGetSubMatrix/
>>>>>
>>>>> to create your R_part matrix. Then you are directly acting on the memory you want when assembling the smaller matrix.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Matt
>>>>>
>>>>>> Thanks,
>>>>>> Pierre
>>>>>>
>>>>>> > Could you please suggest some strategies or functions to do this efficiently.
>>>>>> >
>>>>>> > Thank you for your time and assistance.
>>>>>> >
>>>>>> > Best regards,
>>>>>> > Tchakorom Medane
>>>>>> >
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>>> -- Norbert Wiener
>>>>>
>>>>> https://www.cse.buffalo.edu/~knepley/
>>>>
>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ex1234.c
Type: application/octet-stream
Size: 1849 bytes
Desc: not available
URL:
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From medane.tchakorom at univ-fcomte.fr Tue Jan 28 03:50:09 2025
From: medane.tchakorom at univ-fcomte.fr (medane.tchakorom at univ-fcomte.fr)
Date: Tue, 28 Jan 2025 10:50:09 +0100
Subject: [petsc-users] Copy dense matrix into half part of another dense matrix
In-Reply-To: 
References: <22B53CCE-5155-4D79-B6FC-223489382DC7@univ-fcomte.fr> <8596A695-9318-41BD-BB08-1CF97161E8B6@univ-fcomte.fr> <4FBC3B13-03A1-4269-B9E2-39C6C3102705@joliv.et> <5EBB3A0A-10A3-4DAC-A349-6AB2AF3A5CD8@univ-fcomte.fr>
Message-ID: <4E2D09B7-1A08-48D4-9896-71F8851E4F95@univ-fcomte.fr>

Re:

Thank you Pierre, I really appreciate it. I'm testing it right now to assess the improvements.

BR,
Medane

> On 27 Jan 2025, at 20:19, Pierre Jolivet wrote:
>
> Please always keep the list in copy.
> The way you create A is not correct, I've attached a fixed code.
> If you want to keep your own distribution for A (and not the one associated to R_part), you?ll need to first call https://urldefense.us/v3/__https://petsc.org/main/manualpages/Mat/MatCreateSubMatrix/__;!!G_uCfscf7eWS!Ye7A4fqD5xLobOPjgvYkh9cj1-JExIqX_EJIHFm-NHw5rEk2PU5kvs3GfKlJd2TZPorWhvb0Jh7eTcKii9t7Z7tgYSIoeSHTchrF1snH$ to redistribute A and then do a MatCopy() of the resulting Mat into R_part > > Thanks, > Pierre > > $ /Volumes/Data/repositories/petsc/arch-darwin-c-debug-real/bin/mpirun -n 4 ./ex1234 > Mat Object: 4 MPI processes > type: mpidense > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > Mat Object: 4 MPI processes > type: mpidense > 2.6219599187040323e+00 1.9661197867318445e+00 1.5218640363910978e+00 > 3.5202261875977947e+00 3.6311893358251384e+00 2.2279492868785069e+00 > 2.7505403755038014e+00 3.1546072728892756e+00 1.8416294994524489e+00 > 2.4676055638467314e+00 2.3185625557889602e+00 2.0401666986599833e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > > > > > >> On 27 Jan 2025, at 6:53?PM, medane.tchakorom at univ-fcomte.fr wrote: >> >> Re: >> >> This is a small reproductible example using MatDenseGetSubMatrix >> >> >> Command: petscmpiexec -n 4 ./example >> >> 
========================================================== >> >> PetscInt nlines = 8; // lines >> PetscInt ncolumns = 3; // columns >> PetscInt random_size = 12; >> PetscInt rank; >> PetscInt size; >> >> // Initialize PETSc >> PetscInitialize(&argc, &args, NULL, NULL); >> >> MPI_Comm_rank(MPI_COMM_WORLD, &rank); >> MPI_Comm_size(MPI_COMM_WORLD, &size); >> >> // R_full with all values to zero >> Mat R_full; >> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nlines, ncolumns, NULL, &R_full); >> MatZeroEntries(R_full); >> MatView(R_full, PETSC_VIEWER_STDOUT_WORLD); >> >> // Creating and setting A and S to rand values >> Mat A, S; >> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nlines / 2, random_size, NULL, &A); >> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, random_size, ncolumns, NULL, &S); >> MatSetRandom(A, NULL); >> MatSetRandom(S, NULL); >> >> // Computing R_part >> Mat R_part; >> MatDenseGetSubMatrix(R_full, PETSC_DECIDE, nlines / 2, PETSC_DECIDE, PETSC_DECIDE, &R_part); >> MatMatMult(A, S, MAT_REUSE_MATRIX, PETSC_DECIDE, &R_part); >> >> // Visualizing R_full >> MatDenseRestoreSubMatrix(R_full, &R_part); >> MatView(R_full, PETSC_VIEWER_STDOUT_WORLD); >> >> // Destroying matrices >> MatDestroy(&R_part); >> MatDestroy(&R_full); >> >> PetscFinalize(); >> return 0; >> >> ========================================================== >> >> >> Part of the error output contains?.: >> >> "Cannot change/reset row sizes to 1 local 4 global after previously setting them to 2 local 4 global ?.? 
>> >> >> >> >> ========================================================== >> >> PetscInt nlines = 8; // lines >> PetscInt ncolumns = 3; // columns >> PetscInt random_size = 12; >> PetscInt rank; >> PetscInt size; >> >> // Initialize PETSc >> PetscInitialize(&argc, &args, NULL, NULL); >> >> MPI_Comm_rank(MPI_COMM_WORLD, &rank); >> MPI_Comm_size(MPI_COMM_WORLD, &size); >> >> // R_full with all values to zero >> Mat R_full; >> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nlines, ncolumns, NULL, &R_full); >> MatZeroEntries(R_full); >> MatView(R_full, PETSC_VIEWER_STDOUT_WORLD); >> >> // Creating and setting A and S to rand values >> Mat A, S; >> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nlines / 2, random_size, NULL, &A); >> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, random_size, ncolumns, NULL, &S); >> MatSetRandom(A, NULL); >> MatSetRandom(S, NULL); >> >> // Computing R_part >> Mat R_part; >> MatMatMult(A, S, MAT_INITIAL_MATRIX, PETSC_DECIDE, &R_part); >> MatView(R_part, PETSC_VIEWER_STDOUT_WORLD); >> >> Mat R_sub; >> MatDenseGetSubMatrix(R_full, PETSC_DECIDE, nlines / 2, PETSC_DECIDE, PETSC_DECIDE, &R_sub); >> >> PetscScalar *storage = NULL; >> MatDenseGetArray(R_part, &storage); >> PetscScalar *storage_sub = NULL; >> MatDenseGetArray(R_sub, &storage_sub); >> >> PetscArraycpy(storage_sub, storage, (nlines / 2) * ncolumns); >> >> MatDenseRestoreArray(R_part, &storage); >> MatDenseRestoreArray(R_sub, &storage_sub); >> >> MatDenseRestoreSubMatrix(R_full, &R_sub); >> >> MatView(R_full, PETSC_VIEWER_STDOUT_WORLD); >> >> // Destroying matrices >> MatDestroy(&R_part); >> MatDestroy(&R_full); >> >> PetscFinalize(); >> return 0; >> ========================================================== >> >> >> Now Using MatDenseGetArray >> >> >> >> Please let me know if I need to clarify something. 
>> >> >> >> >> Thanks >> Medane >> >> >>> On 27 Jan 2025, at 16:26, Pierre Jolivet wrote: >>> >>> >>> >>>> On 27 Jan 2025, at 3:52?PM, Matthew Knepley wrote: >>>> >>>> On Mon, Jan 27, 2025 at 9:23?AM medane.tchakorom at univ-fcomte.fr > wrote: >>>>> Re: >>>>> >>>>> MatDenseGetSubMatrix in fact could be the best alternative (cleaner code), but as mentioned earlier, I would like to use the smaller matrix R_part to get the result of a MatMatMult operation. >>>>> >>>>> MatMatMult(A, S, MAT_INITIAL_MATRIX, PETSC_DECIDE, &R_part); >>>>> >>>>> When trying to use MAT_REUSE_MATRIX, this gave an error (expected error I think). On the other side, I mentionned on MatDenseGetSubMatrix page, "The output matrix is not redistributed by PETSc?. So will R_part be a valid output matrix for MatMatMult? >>>> >>>> I believe it is, but I am not the expert. >>>> >>>> Pierre, can the SubMatrix be used as the output of a MatMatMult()? >>> >>> It should be, but there may be some limitations due to leading dimensions and what not. >>> By looking at just the single line of code we got from you, I can see at least one issue: it should be MAT_REUSE_MATRIX, not MAT_INITIAL_MATRIX (assuming you got R_part from MatDenseGetSubMatrix). >>> Feel free to share a (minimal) reproducer. >>> >>> Thanks, >>> Pierre >>> >>>> Thanks, >>>> >>>> Matt >>>> >>>>> Thanks >>>>> >>>>> >>>>>> On 27 Jan 2025, at 14:52, Matthew Knepley > wrote: >>>>>> >>>>>> On Mon, Jan 27, 2025 at 8:42?AM Pierre Jolivet > wrote: >>>>>>> > On 27 Jan 2025, at 2:23?PM, medane.tchakorom at univ-fcomte.fr wrote: >>>>>>> > >>>>>>> > Dear PETSc users, >>>>>>> > >>>>>>> > I hope this message finds you well. I don?t know If my question is relevant, but I?am currently working with DENSE type matrix, and would like to copy one matrix R_part [ n/2 x m] (resulted from a MatMatMult operation) into another dense matrix R_full [n x m]. 
>>>>>>> > Both matrices being on the same communicator, I would like to efficiently copy R_part in the first half of R_full. >>>>>>> > I have being using MatSetValues, but for large matrices, the subsequent assembling operation is costly. >>>>>>> >>>>>>> Could you please share the output of -log_view as well as a single file that will be generated with -info dump (e.g., dump.0, the file associated to process #0)? >>>>>>> This shouldn?t be that costly, so there may be an option missing, like MAT_NO_OFF_PROC_ENTRIES. >>>>>>> Anyway, if you want to optimize this, the fastest way would be to call MatDenseGetArray[Read,Write]() and then do the necessary PetscArraycpy(). >>>>>> >>>>>> The other alternative (which I think makes cleaner code) is to use >>>>>> >>>>>> https://urldefense.us/v3/__https://petsc.org/main/manualpages/Mat/MatDenseGetSubMatrix/__;!!G_uCfscf7eWS!Ye7A4fqD5xLobOPjgvYkh9cj1-JExIqX_EJIHFm-NHw5rEk2PU5kvs3GfKlJd2TZPorWhvb0Jh7eTcKii9t7Z7tgYSIoeSHTchex8FLh$ >>>>>> >>>>>> to create your R_part matrix. Then you are directly acting on the memory you want when assemble the smaller matrix. >>>>>> >>>>>> THanks, >>>>>> >>>>>> Matt >>>>>> >>>>>>> Thanks, >>>>>>> Pierre >>>>>>> >>>>>>> > Please could you suggest me some strategies or functions to do this efficiently. >>>>>>> > >>>>>>> > Thank you for your time and assistance. >>>>>>> > >>>>>>> > Best regards, >>>>>>> > Tchakorom Medane >>>>>>> > >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
>>>>>> -- Norbert Wiener
>>>>>>
>>>>>> https://www.cse.buffalo.edu/~knepley/
>>>>>
>>>>
>>>>
>>>> --
>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>> -- Norbert Wiener
>>>>
>>>> https://www.cse.buffalo.edu/~knepley/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From anna.dalklint at solid.lth.se Tue Jan 28 05:50:13 2025
From: anna.dalklint at solid.lth.se (Anna Dalklint)
Date: Tue, 28 Jan 2025 11:50:13 +0000
Subject: [petsc-users] Visualizing higher order finite element output in ParaView
Message-ID: 

Hello,

We have created a finite element code in PETSc for unstructured meshes using DMPlex. The first-order meshes are created in Gmsh and loaded into PETSc. To introduce higher-order elements, e.g. 10-node tetrahedral elements, we start from scratch using PetscSection and loop over the relevant points in the DM to introduce additional degrees-of-freedom (for example, for 10-node tets we have 4 vertex "nodes" and 6 edge "nodes"). The coordinates of the new "nodes" are obtained by interpolation using the finite element basis functions.

The simulations seem to run well, but we face issues when trying to visualize the results in ParaView. We have tried both the CGNS and HDF5+XDMF file formats for, e.g., VecView. CGNS works, but the edge degrees-of-freedom appear not to be interpolated correctly (we observe oscillations in the fields; we don't know if this is a PETSc or ParaView issue).
Also, we would prefer to use a file format other than CGNS, since it does not appear to directly support time series (at least ParaView doesn't recognize them). We haven't got the HDF5+XDMF file format to work at all when running on more than one core (the mesh is highly distorted when saving using VecView and DMView and then running the "petsc_gen_xdmf.py" script on the .h5 output file). The VTU format works, but then only the vertices' degrees-of-freedom are visualized. As far as we have understood it, this is because VTU/VTK only supports degrees-of-freedom at the vertex/cell level.

Does anyone have any idea of how to visualize fields generated from higher-order elements in ParaView? Or understand what we might be doing wrong?

Best regards,
Anna
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mmolinos at us.es Tue Jan 28 09:45:44 2025
From: mmolinos at us.es (MIGUEL MOLINOS PEREZ)
Date: Tue, 28 Jan 2025 15:45:44 +0000
Subject: [petsc-users] Update DMDA attached to DMSWARM
In-Reply-To: <41D2201F-A290-4239-B57B-42DF1F8A32CB@us.es>
References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es> <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es> <30099029-1BA9-45C5-A000-47A5178F53A1@us.es> <596FE6D9-3200-4946-95CD-0C30BCD96238@us.es> <2D7B88F9-D98F-4FAE-82C6-D48EA02CCCA1@us.es> <725862F5-8689-42E9-B496-C5088856C5FB@us.es> <41D2201F-A290-4239-B57B-42DF1F8A32CB@us.es>
Message-ID: <45D3E45F-D293-4B16-B752-044D09915ACE@us.es>
Oh, if you change the periodic extent, which you are, you have to call DMSetPeriodicity() again, which is what DMGetBoundingBox() consults for the periodic extent (because the coordinates cannot be trusted).

Thanks, Matt

Thanks, Miguel

On 24 Jan 2025, at 16:50, Matthew Knepley > wrote:

On Fri, Jan 24, 2025 at 10:36 AM MIGUEL MOLINOS PEREZ > wrote:

Thanks Matt, I tried that too, and the problem remains. The box is updated only if I set no periodic BCs.

What do you mean by "The box is updated"? I am trying to understand how you test things. Clearly the coordinates are updated, even in the periodic case. Thus, I need to understand the test. Once we do that, we can work backwards to the first broken thing.

Thanks, Matt

Miguel

On 24 Jan 2025, at 14:20, Matthew Knepley > wrote:

On Fri, Jan 24, 2025 at 4:41 AM MIGUEL MOLINOS PEREZ > wrote:

Dear Matt, the error was in the implementation of the volume expansion function. I updated it, and it works fine on finite domains. However, if I include periodic boundary conditions, the volume of the cell does not accommodate the volume expansion of the particles. The deformation gradient is not the identity? I guess I am missing some fine detail of how periodic BCs are implemented on a DMDA mesh; am I right?

DMDA identifies vertices using a VecScatter to implement periodic BC. This should be insensitive to coordinates. However, I don't think the algorithm below is correct for local coordinates.
Thanks, Matt

Thanks, Miguel

static PetscErrorCode Volumetric_Expansion_DMDA(DM *da, const Eigen::Matrix3d& F) {
  PetscInt i, j, mstart, m, nstart, n, pstart, p, k;
  Vec local, global;
  DMDACoor3d ***coors, ***coorslocal;
  DM cda;

  PetscFunctionBeginUser;
  PetscCall(DMGetCoordinateDM(*da, &cda));
  PetscCall(DMGetCoordinates(*da, &global));
  PetscCall(DMGetCoordinatesLocal(*da, &local));
  PetscCall(DMDAVecGetArray(cda, global, &coors));
  PetscCall(DMDAVecGetArrayRead(cda, local, &coorslocal));
  PetscCall(DMDAGetCorners(cda, &mstart, &nstart, &pstart, &m, &n, &p));
  for (i = mstart; i < mstart + m; i++) {
    for (j = nstart; j < nstart + n; j++) {
      for (k = pstart; k < pstart + p; k++) {
        coors[k][j][i].x = coorslocal[k][j][i].x * F(0, 0);
        coors[k][j][i].y = coorslocal[k][j][i].y * F(1, 1);
        coors[k][j][i].z = coorslocal[k][j][i].z * F(2, 2);
      }
    }
  }
  PetscCall(DMDAVecRestoreArray(cda, global, &coors));
  PetscCall(DMDAVecRestoreArrayRead(cda, local, &coorslocal));

  PetscCall(DMGlobalToLocalBegin(cda, global, INSERT_VALUES, local));
  PetscCall(DMGlobalToLocalEnd(cda, global, INSERT_VALUES, local));

  PetscFunctionReturn(PETSC_SUCCESS);
}

On 17 Jan 2025, at 18:01, MIGUEL MOLINOS PEREZ > wrote:

You are right!! Thank you again!

Miguel

On Jan 17, 2025, at 5:18 PM, Matthew Knepley > wrote:

On Fri, Jan 17, 2025 at 10:49 AM MIGUEL MOLINOS PEREZ > wrote:

Now the error is in the call to DMSwarmMigrate

You have almost certainly overwritten memory somewhere. Can you use valgrind or Address Sanitizer?
Thanks, Matt Miguel [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!cHcYc8E8heB7GrC_nkwCyiyqQGCyKWk3TmgCNKUaObDuWYXcQBRBcpn6FIP89413lopJQu1866DnIcTmBxXFRg$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cHcYc8E8heB7GrC_nkwCyiyqQGCyKWk3TmgCNKUaObDuWYXcQBRBcpn6FIP89413lopJQu1866DnIcSJj_8xog$ [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: The line numbers in the error traceback are not always exact. [0]PETSC ERROR: #1 DMSwarmDataBucketGetSizes() at /Users/migmolper/petsc/src/dm/impls/swarm/data_bucket.c:297 [0]PETSC ERROR: #2 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:201 [0]PETSC ERROR: #3 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349 [0]PETSC ERROR: #4 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41 application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 On Jan 17, 2025, at 4:22?PM, Matthew Knepley > wrote: On Fri, Jan 17, 2025 at 10:08?AM MIGUEL MOLINOS PEREZ > wrote: Thank you Matt, this the piece of code I use to change the coordinates of the DM obtained using: You do not need the call to DMSetCoordinates(). What happens when you remove it? 
Thanks, Matt DMSwarmGetCellDM(Simulation.atomistic_data, &bounding_cell); DMGetApplicationContext(bounding_cell, &background_mesh); Thanks, Miguel /************************************************************************/ PetscErrorCode Volumetric_Expansion(DM dm, const Eigen::Matrix3d& F) { PetscErrorCode ierr; Vec coordinates; PetscScalar* coordArray; PetscInt xs, ys, zs, xm, ym, zm, i, j, k; PetscInt dim, M, N, P; PetscFunctionBegin; // Get DMDA information ierr = DMDAGetInfo(dm, &dim, &M, &N, &P, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL); CHKERRQ(ierr); ierr = DMDAGetCorners(dm, &xs, &ys, &zs, &xm, &ym, &zm); CHKERRQ(ierr); // Get the coordinates vector ierr = DMGetCoordinates(dm, &coordinates); CHKERRQ(ierr); ierr = VecGetArray(coordinates, &coordArray); CHKERRQ(ierr); // Update the coordinates based on the desired transformation for (k = zs; k < zs + zm; k++) { for (j = ys; j < ys + ym; j++) { for (i = xs; i < xs + xm; i++) { PetscInt idx = ((k * N + j) * M + i) * dim; // Index for the i, j, k point coordArray[idx] = coordArray[idx] * F(0,0); // Update x-coordinate coordArray[idx + 1] = coordArray[idx + 1] * F(1,1); // Update y-coordinate coordArray[idx + 2] = coordArray[idx + 2] * F(2,2); // Update z-coordinate } } } // Restore the coordinates vector ierr = VecRestoreArray(coordinates, &coordArray); CHKERRQ(ierr); // Set the updated coordinates back to the DMDA ierr = DMSetCoordinates(dm, coordinates); CHKERRQ(ierr); PetscFunctionReturn(0); } /************************************************************************/ On 17 Jan 2025, at 16:00, Matthew Knepley > wrote: On Fri, Jan 17, 2025 at 9:45?AM MIGUEL MOLINOS PEREZ > wrote: I tried what you suggested, but still I got this error message. Maybe I should use main release? No. I suspect something is wrong with the way you are setting coordinates. Can you share the code? 
Thanks, Matt Miguel [4]PETSC ERROR: ------------------------------------------------------------------------ [4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [4]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [4]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!cHcYc8E8heB7GrC_nkwCyiyqQGCyKWk3TmgCNKUaObDuWYXcQBRBcpn6FIP89413lopJQu1866DnIcTmBxXFRg$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cHcYc8E8heB7GrC_nkwCyiyqQGCyKWk3TmgCNKUaObDuWYXcQBRBcpn6FIP89413lopJQu1866DnIcSJj_8xog$ [4]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [4]PETSC ERROR: The line numbers in the error traceback are not always exact. [4]PETSC ERROR: #1 Pack_PetscReal_1_0() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:373 [4]PETSC ERROR: #2 PetscSFLinkPackRootData_Private() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:932 [4]PETSC ERROR: #3 PetscSFLinkPackRootData() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:966 [4]PETSC ERROR: #4 PetscSFBcastBegin_Basic() at /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfbasic.c:357 [4]PETSC ERROR: #5 PetscSFBcastWithMemTypeBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/sf.c:1513 [4]PETSC ERROR: #6 VecScatterBegin_Internal() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:70 [4]PETSC ERROR: #7 VecScatterBegin() at /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:1316 [4]PETSC ERROR: #8 DMGlobalToLocalBegin_DA() at /Users/migmolper/petsc/src/dm/impls/da/dagtol.c:15 [4]PETSC ERROR: #9 DMGlobalToLocalBegin() at /Users/migmolper/petsc/src/dm/interface/dm.c:2844 [4]PETSC ERROR: #10 DMGetCoordinatesLocalSetUp() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:565 [4]PETSC ERROR: #11 DMGetCoordinatesLocal() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:599 
[4]PETSC ERROR: #12 _DMLocatePoints_DMDARegular_IS() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:531
[4]PETSC ERROR: #13 DMLocatePoints_DMDARegular() at /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:586
[4]PETSC ERROR: #14 DMLocatePoints() at /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:1194
[4]PETSC ERROR: #15 DMSwarmMigrate_CellDMScatter() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:219
[4]PETSC ERROR: #16 DMSwarmMigrate() at /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349
[4]PETSC ERROR: #17 main() at /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41

On Jan 15, 2025, at 4:56 PM, MIGUEL MOLINOS PEREZ > wrote:

Thank you Matt for the useful info. I'll try your idea.

Miguel

On 15 Jan 2025, at 16:48, Matthew Knepley > wrote:

On Wed, Jan 15, 2025 at 10:41 AM MIGUEL MOLINOS PEREZ > wrote:

Thank you Matt. Yes, I am getting the "CellDM" from the DMSwarm.

1. I have recently overhauled this functionality because it was not flexible enough for the plasma simulation we do. Thus main and release work differently.

Nice to hear that. Should I move to main?

The changes allow you to have several cell DMs. I want to bin particles in space, but also in velocity, and then in the tensor product of space and velocity. Moreover, sometimes I want to use different Swarm fields as the DM field for the solver. You can do all that with main now. If you just need a single DM with the same DM fields, release is fine.

2. I assume you are using release

You are correct.

3. In both main and release, if you change the coordinates of your CellDM mesh, you need to rebin the particles. The easiest way to do this is to call DMSwarmMigrate(sw, PETSC_FALSE).

What do you mean by rebin?

When you provide the cell DM, Swarm makes a "sort context" that bins the particles into DM cells. If you change the coordinates, this binning will change, so you need it to "rebin" or recreate the sort context.
Thanks, Matt Miguel Thanks, Matt Best, Miguel -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cHcYc8E8heB7GrC_nkwCyiyqQGCyKWk3TmgCNKUaObDuWYXcQBRBcpn6FIP89413lopJQu1866DnIcSp4Ukp9A$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cHcYc8E8heB7GrC_nkwCyiyqQGCyKWk3TmgCNKUaObDuWYXcQBRBcpn6FIP89413lopJQu1866DnIcSp4Ukp9A$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cHcYc8E8heB7GrC_nkwCyiyqQGCyKWk3TmgCNKUaObDuWYXcQBRBcpn6FIP89413lopJQu1866DnIcSp4Ukp9A$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cHcYc8E8heB7GrC_nkwCyiyqQGCyKWk3TmgCNKUaObDuWYXcQBRBcpn6FIP89413lopJQu1866DnIcSp4Ukp9A$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cHcYc8E8heB7GrC_nkwCyiyqQGCyKWk3TmgCNKUaObDuWYXcQBRBcpn6FIP89413lopJQu1866DnIcSp4Ukp9A$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cHcYc8E8heB7GrC_nkwCyiyqQGCyKWk3TmgCNKUaObDuWYXcQBRBcpn6FIP89413lopJQu1866DnIcSp4Ukp9A$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cHcYc8E8heB7GrC_nkwCyiyqQGCyKWk3TmgCNKUaObDuWYXcQBRBcpn6FIP89413lopJQu1866DnIcSp4Ukp9A$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cHcYc8E8heB7GrC_nkwCyiyqQGCyKWk3TmgCNKUaObDuWYXcQBRBcpn6FIP89413lopJQu1866DnIcSp4Ukp9A$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jan 28 10:24:24 2025 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 28 Jan 2025 11:24:24 -0500 Subject: [petsc-users] Update DMDA attached to DMSWARM In-Reply-To: <45D3E45F-D293-4B16-B752-044D09915ACE@us.es> References: <6C80E750-CA55-4519-843E-A3E90600C9E7@us.es> <184A2F6D-C76A-4D4F-8D19-7EFF2308D759@us.es> <41714333-6FCC-496D-88D6-E90AFAE43E45@us.es> <30099029-1BA9-45C5-A000-47A5178F53A1@us.es> <596FE6D9-3200-4946-95CD-0C30BCD96238@us.es> <2D7B88F9-D98F-4FAE-82C6-D48EA02CCCA1@us.es> <725862F5-8689-42E9-B496-C5088856C5FB@us.es> <41D2201F-A290-4239-B57B-42DF1F8A32CB@us.es> <45D3E45F-D293-4B16-B752-044D09915ACE@us.es> Message-ID: No problem. Sorry I did not think of it sooner. Matt On Tue, Jan 28, 2025 at 10:45?AM MIGUEL MOLINOS PEREZ wrote: > It works! Thank you again Matt. > > Miguel > > > On 24 Jan 2025, at 17:07, MIGUEL MOLINOS PEREZ wrote: > > Ohh, I wasn't aware of this function. Thank you Matt, I?ll see if that > solves the problem. 
> > Miguel > > On 24 Jan 2025, at 17:00, Matthew Knepley wrote: > > On Fri, Jan 24, 2025 at 10:56?AM MIGUEL MOLINOS PEREZ > wrote: > >> Sorry I wasn?t clear enough. By ?the box is updated? I mean: I run >> DMGetBoundingBox and the resulting coordinates are updated according to the >> deformation gradient ?F". >> > > Oh, if you change the periodic extent, which you are, you have to recall > DMSetPeriodicity(), which is what DMGetBoundaingBox() consults for the > periodic extent (because the coordinates cannot be trusted). > > Thanks, > > Matt > > >> Thanks, >> Miguel >> >> On 24 Jan 2025, at 16:50, Matthew Knepley wrote: >> >> On Fri, Jan 24, 2025 at 10:36?AM MIGUEL MOLINOS PEREZ >> wrote: >> >>> Thanks Matt, I tried that too, and the problem remains. The box is >>> updated only if I set no periodic bcc. >>> >> >> What do you mean by "The box is updated"? I am trying to understand how >> you test things. Clearly the coordinates are updated, >> even in the periodic case. Thus, I need to understand the test. Once we >> do that, we can work backwards to the first broken thing. >> >> Thanks, >> >> Matt >> >> >>> Miguel >>> >>> On 24 Jan 2025, at 14:20, Matthew Knepley wrote: >>> >>> On Fri, Jan 24, 2025 at 4:41?AM MIGUEL MOLINOS PEREZ >>> wrote: >>> >>>> Dear Matt, the error was in the implementation of the volume expansion >>>> function. I updated it, and it works finte under finite domains. However, >>>> if I include periodic boundary conditions the volume of the cell does not >>>> accommodate the volume expansion of the particles. The deformation gradient >>>> is not the identity? I guess I am missing the fine detail on how periodic >>>> bcc are implemented in DMDA mesh, I?m right? >>>> >>> >>> DMDA identifies vertices using a VecScatter to implement periodic BC. >>> This should be insensitive to coordinates. However, I don't think the >>> algorithm below is correct for local coordinates. 
You use GlobalToLocal(), >>> which means that some global coordinate "wins" for each local cell, so >>> cells on the periodic boundary can be wrong. I would set the local >>> coordinates by hand as well. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thanks, >>>> Miguel >>>> >>>> static PetscErrorCode Volumetric_Expansion_DMDA(DM * da, >>>> const Eigen::Matrix3d& F) { >>>> >>>> PetscInt i, j, mstart, m, nstart, n, pstart, p, k; >>>> Vec local, global; >>>> DMDACoor3d ***coors, ***coorslocal; >>>> DM cda; >>>> >>>> PetscFunctionBeginUser; >>>> PetscCall(DMGetCoordinateDM(*da, &cda)); >>>> PetscCall(DMGetCoordinates(*da, &global)); >>>> PetscCall(DMGetCoordinatesLocal(*da, &local)); >>>> PetscCall(DMDAVecGetArray(cda, global, &coors)); >>>> PetscCall(DMDAVecGetArrayRead(cda, local, &coorslocal)); >>>> PetscCall(DMDAGetCorners(cda, &mstart, &nstart, &pstart, &m, &n, &p)); >>>> for (i = mstart; i < mstart + m; i++) { >>>> for (j = nstart; j < nstart + n; j++) { >>>> for (k = pstart; k < pstart + p; k++) { >>>> coors[k][j][i].x = coorslocal[k][j][i].x * F(0, 0); >>>> coors[k][j][i].y = coorslocal[k][j][i].y * F(1, 1); >>>> coors[k][j][i].z = coorslocal[k][j][i].z * F(2, 2); >>>> } >>>> } >>>> } >>>> PetscCall(DMDAVecRestoreArray(cda, global, &coors)); >>>> PetscCall(DMDAVecRestoreArrayRead(cda, local, &coorslocal)); >>>> >>>> PetscCall(DMGlobalToLocalBegin(cda, global, INSERT_VALUES, local)); >>>> PetscCall(DMGlobalToLocalEnd(cda, global, INSERT_VALUES, local)); >>>> >>>> PetscFunctionReturn(PETSC_SUCCESS); >>>> } >>>> >>>> On 17 Jan 2025, at 18:01, MIGUEL MOLINOS PEREZ wrote: >>>> >>>> You are right!! Thank you again! >>>> >>>> Miguel >>>> >>>> On Jan 17, 2025, at 5:18?PM, Matthew Knepley wrote: >>>> >>>> On Fri, Jan 17, 2025 at 10:49?AM MIGUEL MOLINOS PEREZ >>>> wrote: >>>> >>>>> Now the error is in the call to DMSwarmMigrate >>>>> >>>> >>>> You have almost certainly overwritten memory somewhere. Can you use >>>> vlagrind or Address Sanitizer? 
>>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Miguel >>>>> >>>>> [0]PETSC ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>>>> probably memory access out of range >>>>> [0]PETSC ERROR: Try option -start_in_debugger or >>>>> -on_error_attach_debugger >>>>> [0]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!ee2AOndSc3M8PXEGVjufaWIa3rs_TPHCCbPc7Y2s8ri6DAQ5tamfRac_Wqy3I_kP85xjohwRbXcUxxZ4fooO$ and >>>>> https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!ee2AOndSc3M8PXEGVjufaWIa3rs_TPHCCbPc7Y2s8ri6DAQ5tamfRac_Wqy3I_kP85xjohwRbXcUx15Kn1xQ$ >>>>> [0]PETSC ERROR: --------------------- Stack Frames >>>>> ------------------------------------ >>>>> [0]PETSC ERROR: The line numbers in the error traceback are not always >>>>> exact. >>>>> [0]PETSC ERROR: #1 DMSwarmDataBucketGetSizes() at >>>>> /Users/migmolper/petsc/src/dm/impls/swarm/data_bucket.c:297 >>>>> [0]PETSC ERROR: #2 DMSwarmMigrate_CellDMScatter() at >>>>> /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:201 >>>>> [0]PETSC ERROR: #3 DMSwarmMigrate() at >>>>> /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349 >>>>> [0]PETSC ERROR: #4 main() at >>>>> /Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41 >>>>> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 >>>>> >>>>> On Jan 17, 2025, at 4:22?PM, Matthew Knepley >>>>> wrote: >>>>> >>>>> On Fri, Jan 17, 2025 at 10:08?AM MIGUEL MOLINOS PEREZ >>>>> wrote: >>>>> >>>>>> Thank you Matt, this the piece of code I use to change the >>>>>> coordinates of the DM obtained using: >>>>>> >>>>> >>>>> You do not need the call to DMSetCoordinates(). What happens when you >>>>> remove it? 
>>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> >>>>>> DMSwarmGetCellDM(Simulation.atomistic_data, &bounding_cell); >>>>>> DMGetApplicationContext(bounding_cell, &background_mesh); >>>>>> >>>>>> Thanks, >>>>>> Miguel >>>>>> >>>>>> >>>>>> /************************************************************************/ >>>>>> >>>>>> PetscErrorCode Volumetric_Expansion(DM dm, const Eigen::Matrix3d& F) >>>>>> { >>>>>> PetscErrorCode ierr; >>>>>> Vec coordinates; >>>>>> PetscScalar* coordArray; >>>>>> PetscInt xs, ys, zs, xm, ym, zm, i, j, k; >>>>>> PetscInt dim, M, N, P; >>>>>> >>>>>> PetscFunctionBegin; >>>>>> // Get DMDA information >>>>>> ierr = DMDAGetInfo(dm, &dim, &M, &N, &P, NULL, NULL, NULL, NULL, NULL, >>>>>> NULL, >>>>>> NULL, NULL, NULL); >>>>>> CHKERRQ(ierr); >>>>>> ierr = DMDAGetCorners(dm, &xs, &ys, &zs, &xm, &ym, &zm); >>>>>> CHKERRQ(ierr); >>>>>> >>>>>> // Get the coordinates vector >>>>>> ierr = DMGetCoordinates(dm, &coordinates); >>>>>> CHKERRQ(ierr); >>>>>> ierr = VecGetArray(coordinates, &coordArray); >>>>>> CHKERRQ(ierr); >>>>>> >>>>>> // Update the coordinates based on the desired transformation >>>>>> for (k = zs; k < zs + zm; k++) { >>>>>> for (j = ys; j < ys + ym; j++) { >>>>>> for (i = xs; i < xs + xm; i++) { >>>>>> PetscInt idx = >>>>>> ((k * N + j) * M + i) * dim; // Index for the i, j, k point >>>>>> coordArray[idx] = coordArray[idx] * F(0,0); // Update x-coordinate >>>>>> coordArray[idx + 1] = coordArray[idx + 1] * F(1,1); // Update >>>>>> y-coordinate >>>>>> coordArray[idx + 2] = coordArray[idx + 2] * F(2,2); // Update >>>>>> z-coordinate >>>>>> } >>>>>> } >>>>>> } >>>>>> >>>>>> // Restore the coordinates vector >>>>>> ierr = VecRestoreArray(coordinates, &coordArray); >>>>>> CHKERRQ(ierr); >>>>>> >>>>>> // Set the updated coordinates back to the DMDA >>>>>> ierr = DMSetCoordinates(dm, coordinates); >>>>>> CHKERRQ(ierr); >>>>>> >>>>>> PetscFunctionReturn(0); >>>>>> } >>>>>> >>>>>> >>>>>> 
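The coordinate update in the routine quoted above is just a componentwise stretch of an interleaved (x, y, z) array. As a PETSc-free sketch of that index arithmetic (the function and variable names here are illustrative only, not part of the code under discussion):

```python
# Scale interleaved (x, y, z, x, y, z, ...) coordinates by the diagonal of a
# deformation gradient F, mirroring coordArray[idx + c] *= F(c, c) above.
def stretch_coords(coords, f_diag):
    dim = len(f_diag)
    out = list(coords)
    for p in range(len(coords) // dim):   # one point at a time
        for c in range(dim):              # x, y, z components
            out[p * dim + c] *= f_diag[c]
    return out

# Two points (1, 2, 3) and (4, 5, 6) under a volumetric expansion F = diag(2, 2, 2)
print(stretch_coords([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], [2.0, 2.0, 2.0]))
# -> [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
```

Note that the thread converges on accessing the DMDA coordinate vector through DMDAVecGetArray and a global-to-local scatter rather than hand-computed flat indices; the sketch only shows the per-component scaling itself.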
/************************************************************************/ >>>>>> >>>>>> On 17 Jan 2025, at 16:00, Matthew Knepley wrote: >>>>>> >>>>>> On Fri, Jan 17, 2025 at 9:45?AM MIGUEL MOLINOS PEREZ >>>>>> wrote: >>>>>> >>>>>>> I tried what you suggested, but still I got this error message. >>>>>>> Maybe I should use main release? >>>>>>> >>>>>> >>>>>> No. I suspect something is wrong with the way you are setting >>>>>> coordinates. Can you share the code? >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Miguel >>>>>>> >>>>>>> [4]PETSC ERROR: >>>>>>> ------------------------------------------------------------------------ >>>>>>> [4]PETSC ERROR: Caught signal number 11 SEGV: Segmentation >>>>>>> Violation, probably memory access out of range >>>>>>> [4]PETSC ERROR: Try option -start_in_debugger or >>>>>>> -on_error_attach_debugger >>>>>>> [4]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!ee2AOndSc3M8PXEGVjufaWIa3rs_TPHCCbPc7Y2s8ri6DAQ5tamfRac_Wqy3I_kP85xjohwRbXcUxxZ4fooO$ and >>>>>>> https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!ee2AOndSc3M8PXEGVjufaWIa3rs_TPHCCbPc7Y2s8ri6DAQ5tamfRac_Wqy3I_kP85xjohwRbXcUx15Kn1xQ$ >>>>>>> [4]PETSC ERROR: --------------------- Stack Frames >>>>>>> ------------------------------------ >>>>>>> [4]PETSC ERROR: The line numbers in the error traceback are not >>>>>>> always exact. 
>>>>>>> [4]PETSC ERROR: #1 Pack_PetscReal_1_0() at >>>>>>> /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:373 >>>>>>> [4]PETSC ERROR: #2 PetscSFLinkPackRootData_Private() at >>>>>>> /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:932 >>>>>>> [4]PETSC ERROR: #3 PetscSFLinkPackRootData() at >>>>>>> /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfpack.c:966 >>>>>>> [4]PETSC ERROR: #4 PetscSFBcastBegin_Basic() at >>>>>>> /Users/migmolper/petsc/src/vec/is/sf/impls/basic/sfbasic.c:357 >>>>>>> [4]PETSC ERROR: #5 PetscSFBcastWithMemTypeBegin() at >>>>>>> /Users/migmolper/petsc/src/vec/is/sf/interface/sf.c:1513 >>>>>>> [4]PETSC ERROR: #6 VecScatterBegin_Internal() at >>>>>>> /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:70 >>>>>>> [4]PETSC ERROR: #7 VecScatterBegin() at >>>>>>> /Users/migmolper/petsc/src/vec/is/sf/interface/vscat.c:1316 >>>>>>> [4]PETSC ERROR: #8 DMGlobalToLocalBegin_DA() at >>>>>>> /Users/migmolper/petsc/src/dm/impls/da/dagtol.c:15 >>>>>>> [4]PETSC ERROR: #9 DMGlobalToLocalBegin() at >>>>>>> /Users/migmolper/petsc/src/dm/interface/dm.c:2844 >>>>>>> [4]PETSC ERROR: #10 DMGetCoordinatesLocalSetUp() at >>>>>>> /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:565 >>>>>>> [4]PETSC ERROR: #11 DMGetCoordinatesLocal() at >>>>>>> /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:599 >>>>>>> [4]PETSC ERROR: #12 _DMLocatePoints_DMDARegular_IS() at >>>>>>> /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:531 >>>>>>> [4]PETSC ERROR: #13 DMLocatePoints_DMDARegular() at >>>>>>> /Users/migmolper/DMD/SOLERA/Atoms/Atom.cpp:586 >>>>>>> [4]PETSC ERROR: #14 DMLocatePoints() at >>>>>>> /Users/migmolper/petsc/src/dm/interface/dmcoordinates.c:1194 >>>>>>> [4]PETSC ERROR: #15 DMSwarmMigrate_CellDMScatter() at >>>>>>> /Users/migmolper/petsc/src/dm/impls/swarm/swarm_migrate.c:219 >>>>>>> [4]PETSC ERROR: #16 DMSwarmMigrate() at >>>>>>> /Users/migmolper/petsc/src/dm/impls/swarm/swarm.c:1349 >>>>>>> [4]PETSC ERROR: #17 main() at >>>>>>> 
/Users/migmolper/DMD/driver-tasting-SOLERA.cpp:41 >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Jan 15, 2025, at 4:56 PM, MIGUEL MOLINOS PEREZ >>>>>>> wrote: >>>>>>> >>>>>>> Thank you Matt for the useful info. I'll try your idea. >>>>>>> >>>>>>> Miguel >>>>>>> >>>>>>> On 15 Jan 2025, at 16:48, Matthew Knepley wrote: >>>>>>> >>>>>>> On Wed, Jan 15, 2025 at 10:41 AM MIGUEL MOLINOS PEREZ < >>>>>>> mmolinos at us.es> wrote: >>>>>>> >>>>>>>> Thank you Matt. >>>>>>>> >>>>>>>> Yes, I am getting the "CellDM" from the DMSwarm. >>>>>>>> >>>>>>>> 1. I have recently overhauled this functionality because it was not >>>>>>>> flexible enough for the plasma simulation we do. Thus main and release work >>>>>>>> differently. >>>>>>>> >>>>>>>> >>>>>>>> Nice to hear that. Should I move to main? >>>>>>>> >>>>>>> >>>>>>> The changes allow you to have several cell DMs. I want to bin >>>>>>> particles in space, but also in velocity, and then in the tensor product of >>>>>>> space and velocity. Moreover, sometimes I want to use different Swarm >>>>>>> fields as the DM field for the solver. You can do all that with main now. >>>>>>> If you just need a single DM with the same DM fields, release is fine. >>>>>>> >>>>>>> >>>>>>>> 2. I assume you are using release >>>>>>>> >>>>>>>> >>>>>>>> You are correct. >>>>>>>> >>>>>>>> 3. In both main and release, if you change the coordinates of your >>>>>>>> CellDM mesh, you need to rebin the particles. The easiest way to do this is >>>>>>>> to call DMSwarmMigrate(sw, PETSC_FALSE). >>>>>>>> >>>>>>>> >>>>>>>> What do you mean by rebin? >>>>>>>> >>>>>>> >>>>>>> When you provide the cell DM, Swarm makes a "sort context" that bins >>>>>>> the particles into DM cells. If you change the coordinates, this binning >>>>>>> will change, so you need it to "rebin" or recreate the sort context.
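The "sort context" rebinning described above can be seen in miniature without PETSc: bin particle positions by cell, and rebuild the binning whenever the cell geometry changes. A hedged Python sketch (uniform 1D cells; the names are illustrative, and this is not the DMSwarm implementation):

```python
# Bin particle indices into uniform cells of width h. Changing h (i.e.
# changing the mesh coordinates) invalidates the binning, so we must rebin.
def bin_particles(positions, h):
    bins = {}
    for i, x in enumerate(positions):
        bins.setdefault(int(x // h), []).append(i)
    return bins

parts = [0.1, 0.4, 1.2, 2.7]
print(bin_particles(parts, 1.0))  # -> {0: [0, 1], 1: [2], 2: [3]}
print(bin_particles(parts, 2.0))  # stretched cells: {0: [0, 1, 2], 1: [3]}
```

In PETSc the analogous step after changing the cell DM coordinates is the DMSwarmMigrate(sw, PETSC_FALSE) call Matt recommends above.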
>>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> >>>>>>>> Miguel >>>>>>>> >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> Matt >>>>>>>> >>>>>>>> >>>>>>>>> Best, >>>>>>>>> Miguel -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ee2AOndSc3M8PXEGVjufaWIa3rs_TPHCCbPc7Y2s8ri6DAQ5tamfRac_Wqy3I_kP85xjohwRbXcUx5LqFJcU$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Jan 29 15:47:25 2025 From: jed at jedbrown.org (Jed Brown) Date: Wed, 29 Jan 2025 14:47:25 -0700 Subject: [petsc-users] Visualizing higher order finite element output in ParaView In-Reply-To: References: Message-ID: <875xlxgumq.fsf@jedbrown.org> I like the CGNS workflow for this, at least with quadratic and cubic elements. You can use options like -snes_view_solution cgns:solution.cgns (configure with --download-cgns). It can also monitor transient solves with flexible batch sizes (geometry and connectivity are stored only once within a batch of output frames). Anna Dalklint via petsc-users writes: > Hello, > > We have created a finite element code in PETSc for unstructured meshes using DMPlex. The first order meshes are created in gmsh and loaded into PETSc. To introduce higher order elements, e.g. 10 node tetrahedral elements, we start from scratch using PetscSection and loop over the relevant points in the DM to introduce additional degrees-of-freedom (for example, for 10 node tets we have 4 vertex "nodes" and 6 edge "nodes"). The coordinates of the new "nodes" are obtained by interpolation using the finite element basis functions. > > The simulations seem to run well, but we face issues when trying to visualize the results in ParaView. We have tried to use both CGNS and HDF5+XDMF file formats for e.g. VecView. CGNS works, but the edge degrees-of-freedom appear to not be interpolated correctly (we observe oscillations in the fields; we don't know if this is a PETSc or ParaView issue). Also, we would prefer to use another file format than CGNS since it does not appear to directly allow timeseries (at least ParaView doesn't recognize it).
We haven?t got the HDF5+XDMF file format to work at all when running on more than one core (the mesh is highly distorted when saving using VecView and DMView + running the ?petsc_gen_xdmf.py? script on the .h5 output file). > > VTU format works but then only the vertices? degrees-of-freedom are visualized. As far as we have understood it, this is because VTU/VTK only supports degrees-of-freedom on vertices/cell level. > > Does anyone have any idea of how to visualize fields generated from higher order elements in ParaView? Or understand what we might be doing wrong? > > Best regards, > Anna From knepley at gmail.com Wed Jan 29 17:38:59 2025 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 29 Jan 2025 18:38:59 -0500 Subject: [petsc-users] Visualizing higher order finite element output in ParaView In-Reply-To: <875xlxgumq.fsf@jedbrown.org> References: <875xlxgumq.fsf@jedbrown.org> Message-ID: That is all true. If you want lower level pieces to make it yourself, I have -dm_plex_high_order_view, which activates DMPlexCreateHighOrderSurrogate_Internal(). This is a simple function that refines the mesh lg(p) times to try and resolve the high order behavior. Thanks, Matt On Wed, Jan 29, 2025 at 4:55?PM Jed Brown wrote: > I like the CGNS workflow for this, at least with quadratic and cubic > elements. You can use options like -snes_view_solution cgns:solution.cgns > (configure with --download-cgns). It can also monitor transient solves with > flexible batch sizes (geometry and connectivity are stored only once within > a batch of output frames). > > Anna Dalklint via petsc-users writes: > > > Hello, > > > > We have created a finite element code in PETSc for unstructured meshes > using DMPlex. The first order meshes are created in gmsh and loaded into > PETSc. To introduce higher order elements, e.g. 
10 node tetrahedral > elements, we start from scratch using PetscSection and loop over the > relevant points it the DM to introduce additional degrees-of-freedom > (example; for 10 node tets we have 4 vertices ?nodes? and 6 edge ?nodes?). > The coordinates of the new ?nodes? are obtained by interpolation using the > finite element basis functions. > > > > The simulations seem to run well, but we face issues when trying to > visualize the results in ParaView. We have tried to use both CGNS and > HDF5+XDMF file formats for e.g. VecView. CGNS works, but the edge > degrees-of-freedom appear to not be interpolated correctly (we observe > oscillations in the fields, don?t know if this is a PETSc och ParaView > issue). Also, we would prefer to use another file format than CGNS since it > does not appear to directly allow timeseries (at least ParaView doesn?t > recognize it). We haven?t got the HDF5+XDMF file format to work at all when > running on more than one core (the mesh is highly distorted when saving > using VecView and DMView + running the ?petsc_gen_xdmf.py? script on the > .h5 output file). > > > > VTU format works but then only the vertices? degrees-of-freedom are > visualized. As far as we have understood it, this is because VTU/VTK only > supports degrees-of-freedom on vertices/cell level. > > > > Does anyone have any idea of how to visualize fields generated from > higher order elements in ParaView? Or understand what we might be doing > wrong? > > > > Best regards, > > Anna > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bARGOByHrNW1kx6GeTEkq8OOmkrXte9cSpdwEz_FYq-Qc2FVXoBXoFOjJtKExsznxY8rJjfzPN_HpyXH_ubn$ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From martin.diehl at kuleuven.be Thu Jan 30 05:13:24 2025 From: martin.diehl at kuleuven.be (Martin Diehl) Date: Thu, 30 Jan 2025 11:13:24 +0000 Subject: [petsc-users] Fortran interfaces: Google Summer of Code 2025? Message-ID: <51e996a3b06a4f3f7146fed18b928c3f86762b77.camel@kuleuven.be> Dear PETSc team, dear Barry, applications for the Google Summer of Code will start again and I was wondering if help for the re-factoring of the Fortran interfaces is still needed.?Whether this makes sense depends on the progress of https://gitlab.com/petsc/petsc/-/merge_requests/7517 In contrast to the failed attempt last year, I have a student interested in working on this topic. Martin -- KU Leuven Department of Computer Science Department of Materials Engineering Celestijnenlaan 200a 3001 Leuven, Belgium -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 659 bytes Desc: This is a digitally signed message part URL: From anna.dalklint at solid.lth.se Thu Jan 30 08:43:07 2025 From: anna.dalklint at solid.lth.se (Anna Dalklint) Date: Thu, 30 Jan 2025 14:43:07 +0000 Subject: [petsc-users] Visualizing higher order finite element output in ParaView In-Reply-To: References: <875xlxgumq.fsf@jedbrown.org> Message-ID: I looked deeper into the petsc codebase regarding HDF5. From what I understood (which of course can be wrong), the current version of petsc does not save edge degrees-of-freedom to HDF5? Is this something you plan to allow? Otherwise I?m fine with using CGNS. But could you please explain how I could save timeseries that paraview recognizes using this format? Right now I?m saving files e.g. file0001.cgns, file0002.cgns, ? where each .cgns file is written using VecView (i.e. it stores a discretized field). But paraview cannot load this as a timeseries. Also, do you have any documentation regarding node (vertex, edge, face, cell) numbering? E.g. how would a 10 node tetrahedral be numbered? 
From the documentation on your webpage (https://urldefense.us/v3/__https://petsc.org/release/manual/dmplex/__;!!G_uCfscf7eWS!YN1YSNd9hXq_llC7ZDmL9mgPHk9MSj5qzY_48p_GdnmxA7t1x_WN35JB-m5nhb7JF8vE9pBC4VWkpIwr2fYbQdjJUVd3h3-Pag$ ) it looks like cell dofs -> vertex dofs-> face dofs-> edge dofs. Is this correct? Thanks, Anna From: Matthew Knepley Date: Thursday, 30 January 2025 at 00:39 To: Jed Brown Cc: Anna Dalklint , petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Visualizing higher order finite element output in ParaView That is all true. If you want lower level pieces to make it yourself, I have -dm_plex_high_order_view, which activates DMPlexCreateHighOrderSurrogate_Internal(). This is a simple function that refines the mesh lg(p) times to try and resolve the high order behavior. Thanks, Matt On Wed, Jan 29, 2025 at 4:55?PM Jed Brown > wrote: I like the CGNS workflow for this, at least with quadratic and cubic elements. You can use options like -snes_view_solution cgns:solution.cgns (configure with --download-cgns). It can also monitor transient solves with flexible batch sizes (geometry and connectivity are stored only once within a batch of output frames). Anna Dalklint via petsc-users > writes: > Hello, > > We have created a finite element code in PETSc for unstructured meshes using DMPlex. The first order meshes are created in gmsh and loaded into PETSc. To introduce higher order elements, e.g. 10 node tetrahedral elements, we start from scratch using PetscSection and loop over the relevant points it the DM to introduce additional degrees-of-freedom (example; for 10 node tets we have 4 vertices ?nodes? and 6 edge ?nodes?). The coordinates of the new ?nodes? are obtained by interpolation using the finite element basis functions. > > The simulations seem to run well, but we face issues when trying to visualize the results in ParaView. We have tried to use both CGNS and HDF5+XDMF file formats for e.g. VecView. 
CGNS works, but the edge degrees-of-freedom appear to not be interpolated correctly (we observe oscillations in the fields, don?t know if this is a PETSc och ParaView issue). Also, we would prefer to use another file format than CGNS since it does not appear to directly allow timeseries (at least ParaView doesn?t recognize it). We haven?t got the HDF5+XDMF file format to work at all when running on more than one core (the mesh is highly distorted when saving using VecView and DMView + running the ?petsc_gen_xdmf.py? script on the .h5 output file). > > VTU format works but then only the vertices? degrees-of-freedom are visualized. As far as we have understood it, this is because VTU/VTK only supports degrees-of-freedom on vertices/cell level. > > Does anyone have any idea of how to visualize fields generated from higher order elements in ParaView? Or understand what we might be doing wrong? > > Best regards, > Anna -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YN1YSNd9hXq_llC7ZDmL9mgPHk9MSj5qzY_48p_GdnmxA7t1x_WN35JB-m5nhb7JF8vE9pBC4VWkpIwr2fYbQdjJUVdLtH1U2A$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jan 30 08:54:33 2025 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 30 Jan 2025 09:54:33 -0500 Subject: [petsc-users] Fortran interfaces: Google Summer of Code 2025? In-Reply-To: <51e996a3b06a4f3f7146fed18b928c3f86762b77.camel@kuleuven.be> References: <51e996a3b06a4f3f7146fed18b928c3f86762b77.camel@kuleuven.be> Message-ID: <857D8A26-115E-4AB8-91A8-2F8FC71C17D1@petsc.dev> Martin, I have restarted in the last week on 7517 and plan for it to be in the March release. 
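A binding generator of the kind discussed in this thread typically starts by scraping function prototypes out of C headers into a machine-readable form. As a toy, PETSc-free illustration in Python (the regular expression and workflow are assumptions for illustration, not PETSc's actual generator; the VecScale prototype itself is real):

```python
import re

# Toy scraper: pull (return type, name, argument list) out of simple C
# prototypes, the kind of raw material a binding generator would consume.
SIG = re.compile(r"(\w+)\s+(\w+)\(([^)]*)\)\s*;")

def scrape(header_text):
    sigs = []
    for ret, name, args in SIG.findall(header_text):
        arglist = [a.strip() for a in args.split(",")] if args.strip() else []
        sigs.append((ret, name, arglist))
    return sigs

print(scrape("PetscErrorCode VecScale(Vec x, PetscScalar alpha);"))
# -> [('PetscErrorCode', 'VecScale', ['Vec x', 'PetscScalar alpha'])]
```

A real generator would of course need a proper C parser (macros, pointers, comments), but the same scraped tuples could feed Fortran, Rust, or Python emitters.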
As part of the work I have developed new Python code that scrapes the code for signatures for all the functions, enums, objects, etc. and from this constructs the Fortran binding. The same scraping could be used for other languages, so I am hoping automatic bindings can be done for other languages, for example Rust, even Python. So perhaps we should consider a Summer of Code project for other such languages? Barry > On Jan 30, 2025, at 6:13 AM, Martin Diehl wrote: > > Dear PETSc team, dear Barry, > > applications for the Google Summer of Code will start again and I was > wondering if help for the re-factoring of the Fortran interfaces is > still needed. Whether this makes sense depends on the progress of > https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/7517__;!!G_uCfscf7eWS!eR1hE_WVEnUq17f91xnvKs9cn0XCVtm_jKv9-8uY2rzhTPfIfyhCXWq3QPfQ6IJOJ4A_Qjuw10EvM1MDSJLLwMo$ > > In contrast to the failed attempt last year, I have a student > interested in working on this topic. > > Martin > -- > KU Leuven > Department of Computer Science > Department of Materials Engineering > Celestijnenlaan 200a > 3001 Leuven, Belgium > From knepley at gmail.com Thu Jan 30 09:19:13 2025 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 30 Jan 2025 10:19:13 -0500 Subject: [petsc-users] Visualizing higher order finite element output in ParaView In-Reply-To: References: <875xlxgumq.fsf@jedbrown.org> Message-ID: On Thu, Jan 30, 2025 at 9:43 AM Anna Dalklint wrote: > I looked deeper into the petsc codebase regarding HDF5. From what I > understood (which of course can be wrong), the current version of petsc > does not save edge degrees-of-freedom to HDF5? Is this something you plan > to allow? > We write two different outputs (by default). One has all the data, and one has only cell and vertex data because Paraview does not understand anything else. This can be customized with options. What do you want to save? > Otherwise I'm fine with using CGNS.
But could you please explain how I > could save timeseries that paraview recognizes using this format? Right now > I?m saving files e.g. file0001.cgns, file0002.cgns, ? where each .cgns file > is written using VecView (i.e. it stores a discretized field). But paraview > cannot load this as a timeseries. > Jed can explain how this works. > Also, do you have any documentation regarding node (vertex, edge, face, > cell) numbering? E.g. how would a 10 node tetrahedral be numbered? From the > documentation on your webpage (https://urldefense.us/v3/__https://petsc.org/release/manual/dmplex/__;!!G_uCfscf7eWS!ejLt_rGcvE6uln2mYtMTHg6zBUMpobt0KviK7ZOeyB9vp_BRV_c7m_xMnDdSnqjEerY4ApgAANbxBmrzLVYe$ ) > it looks like cell dofs -> vertex dofs-> face dofs-> edge dofs. Is this > correct? > When you call DMPlexVecGetClosure(), the closure follows the point numbering, in that for each point, we lookup the dofs in the local Section, and push them into the array in order. So then you need the point ordering. For the closure, it goes by dimension, so cell dofs, face dofs, edge dofs, vertex dofs. You can see the definition of faces (and edges) here: https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/plex/plexinterpolate.c?ref_type=heads*L196__;Iw!!G_uCfscf7eWS!ejLt_rGcvE6uln2mYtMTHg6zBUMpobt0KviK7ZOeyB9vp_BRV_c7m_xMnDdSnqjEerY4ApgAANbxBonw9GhL$ and triangles are ordered here https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/plex/plexinterpolate.c?ref_type=heads*L115__;Iw!!G_uCfscf7eWS!ejLt_rGcvE6uln2mYtMTHg6zBUMpobt0KviK7ZOeyB9vp_BRV_c7m_xMnDdSnqjEerY4ApgAANbxBmRfa-Ea$ The idea is that DMPlexVecGetClosure() delivers the dofs in a standard order on the element, so that you can write your residual function once. Also, for multiple fields, they are stacked contiguously, so the numbering is [field, point, dof on point]. Let me know if that does not make sense. 
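The closure layout described above can be mimicked in a few lines. In this PETSc-free Python sketch (point names and dof labels are invented for illustration), dofs are gathered per closure point with fields stacked outermost, reproducing the [field, point, dof on point] ordering:

```python
# DMPlexVecGetClosure-style gathering: for each field, walk the points in
# closure order (cell, then faces/edges, then vertices) and append that
# point's dofs for that field.
def closure_dofs(closure, section, nfields):
    out = []
    for f in range(nfields):          # fields are outermost ...
        for point in closure:         # ... then points in closure order
            out.extend(section[point][f])
    return out

# One triangle "c" with edges e0..e2 and vertices v0..v2; field 0 lives on
# every point, field 1 only on vertices.
section = {
    "c":  [["c0"], []],
    "e0": [["e0d"], []], "e1": [["e1d"], []], "e2": [["e2d"], []],
    "v0": [["v0d"], ["v0p"]], "v1": [["v1d"], ["v1p"]], "v2": [["v2d"], ["v2p"]],
}
closure = ["c", "e0", "e1", "e2", "v0", "v1", "v2"]  # cell, edges, vertices
print(closure_dofs(closure, section, 2))
```

All field-0 dofs come out first (cell, edges, vertices), then the field-1 vertex dofs, matching the contiguous stacking of fields described above.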
Thanks, Matt > Thanks, > > Anna > > > > *From: *Matthew Knepley > *Date: *Thursday, 30 January 2025 at 00:39 > *To: *Jed Brown > *Cc: *Anna Dalklint , petsc-users at mcs.anl.gov > > *Subject: *Re: [petsc-users] Visualizing higher order finite element > output in ParaView > > That is all true. If you want lower level pieces to make it yourself, I > have -dm_plex_high_order_view, which activates > > DMPlexCreateHighOrderSurrogate_Internal(). This is a simple function that > refines the mesh lg(p) times to try and > > resolve the high order behavior. > > > > Thanks, > > > > Matt > > > > On Wed, Jan 29, 2025 at 4:55?PM Jed Brown wrote: > > I like the CGNS workflow for this, at least with quadratic and cubic > elements. You can use options like -snes_view_solution cgns:solution.cgns > (configure with --download-cgns). It can also monitor transient solves with > flexible batch sizes (geometry and connectivity are stored only once within > a batch of output frames). > > Anna Dalklint via petsc-users writes: > > > Hello, > > > > We have created a finite element code in PETSc for unstructured meshes > using DMPlex. The first order meshes are created in gmsh and loaded into > PETSc. To introduce higher order elements, e.g. 10 node tetrahedral > elements, we start from scratch using PetscSection and loop over the > relevant points it the DM to introduce additional degrees-of-freedom > (example; for 10 node tets we have 4 vertices ?nodes? and 6 edge ?nodes?). > The coordinates of the new ?nodes? are obtained by interpolation using the > finite element basis functions. > > > > The simulations seem to run well, but we face issues when trying to > visualize the results in ParaView. We have tried to use both CGNS and > HDF5+XDMF file formats for e.g. VecView. CGNS works, but the edge > degrees-of-freedom appear to not be interpolated correctly (we observe > oscillations in the fields, don?t know if this is a PETSc och ParaView > issue). 
Also, we would prefer to use another file format than CGNS since it > does not appear to directly allow timeseries (at least ParaView doesn't > recognize it). We haven't got the HDF5+XDMF file format to work at all when > running on more than one core (the mesh is highly distorted when saving > using VecView and DMView + running the "petsc_gen_xdmf.py" script on the > .h5 output file). > > > > VTU format works but then only the vertices' degrees-of-freedom are > visualized. As far as we have understood it, this is because VTU/VTK only > supports degrees-of-freedom on vertices/cell level. > > > > Does anyone have any idea of how to visualize fields generated from > higher order elements in ParaView? Or understand what we might be doing > wrong? > > > > Best regards, > > Anna > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From matteo.semplice at uninsubria.it Thu Jan 30 11:41:53 2025 From: matteo.semplice at uninsubria.it (Matteo Semplice) Date: Thu, 30 Jan 2025 18:41:53 +0100 Subject: [petsc-users] local/global DMPlex Vec output In-Reply-To: References: <00e2b913-4a38-4d7c-9ae6-90c5dc86647b@uninsubria.it> Message-ID: <07c78ea5-74a7-4e6d-9a0a-dcfcb95a3c8f@uninsubria.it> Dear Matt On 25/10/24 01:02, Matthew Knepley wrote: > On Thu, Oct 24, 2024 at 6:04 PM Matteo Semplice > wrote: > > Hi. The HDF5 solution looks good to me, but I now get this error > > Okay, I can make a workaround for this. Here is what is happening. > > When you output solutions, you really want the essential boundary > conditions included in the > output, and the only way I have to do that is for you to tell me about > the discretization, so I > require the DS. > > What I can do is ignore this step if there is no DS. Let me do that > and mail you with the branch. Sorry for the long delay, but now I am taking this up again. In my code I compute all fields in all cells and I have no boundary conditions. I tried to just call DMCreateDS on the mesh, but the error does not change. I guess that I need to do a minimal setup on the DS to have it working. If you have some time, could you create the branch you mentioned in your message, or tell me what the minimal action is to set up a "fake" DS? Whichever is best for you is fine with me. If you need more info: in my Section I have 3 fields attached to cells. I have also another DM with a field attached to vertices, but the latter is less important and I don't really need it in output for production runs. Thanks Matteo > > Thanks! > >
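[Editor's note: one possible reading of "minimal setup" is to register a field on the DM before calling DMCreateDS(), so that DMGetDS() inside the HDF5 viewer succeeds. This is an untested sketch; whether a PetscFE field is the right match for Matteo's cell-wise Section layout is an assumption, and the field count is a placeholder.]

```c
#include <petscdmplex.h>
#include <petscfe.h>

/* Sketch: attach one placeholder FE field so the DM has a DS.
 * Nc is the number of components of the (hypothetical) single field. */
static PetscErrorCode SetupMinimalDS(DM dm, PetscInt Nc)
{
  PetscFE  fe;
  PetscInt dim;

  PetscFunctionBeginUser;
  PetscCall(DMGetDimension(dm, &dim));
  PetscCall(PetscFECreateDefault(PetscObjectComm((PetscObject)dm), dim, Nc, PETSC_TRUE /* simplex */, NULL, -1, &fe));
  PetscCall(DMSetField(dm, 0, NULL, (PetscObject)fe));
  PetscCall(PetscFEDestroy(&fe));
  PetscCall(DMCreateDS(dm)); /* must happen before DMGetDS() is called */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```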
Matt > > $ ../src/saveDemo > Creating mesh with (10,10) faces > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Object is in wrong state > [0]PETSC ERROR: Need to call DMCreateDS() before calling DMGetDS() > [0]PETSC ERROR: See https://petsc.org/release/faq/ for troubleshooting. > [0]PETSC ERROR: Petsc Release Version 3.21.5, unknown > [0]PETSC ERROR: ../src/saveDemo on a ?named dentdherens by matteo > Fri Oct 25 00:00:44 2024 > [0]PETSC ERROR: Configure options --COPTFLAGS="-O3 -march=native > -mtune=native -mavx2" --CXXOPTFLAGS="-O3 -march=native > -mtune=native -mavx2" --FOPTFLAGS="-O3 -march=native -mtune=native > -mavx2" --PETSC_ARCH=opt --with-strict-petscerrorcode --download-hdf5 > --prefix=/home/matteo/software/petsc/3.21-opt/ --with-debugging=0 > --with-gmsh --with-metis --with-parmetis --with-triangle > PETSC_DIR=/home/matteo/software/petsc --force > [0]PETSC ERROR: #1 DMGetDS() at > /home/matteo/software/petsc/src/dm/interface/dm.c:5525 > [0]PETSC ERROR: #2 DMPlexInsertBoundaryValues_Plex() at > /home/matteo/software/petsc/src/dm/impls/plex/plexfem.c:1136 > [0]PETSC ERROR: #3 DMPlexInsertBoundaryValues() at > /home/matteo/software/petsc/src/dm/impls/plex/plexfem.c:1274 > [0]PETSC ERROR: #4 VecView_Plex_HDF5_Internal() at > /home/matteo/software/petsc/src/dm/impls/plex/plexhdf5.c:477 > [0]PETSC ERROR: #5 VecView_Plex() at > /home/matteo/software/petsc/src/dm/impls/plex/plex.c:656 > [0]PETSC ERROR: #6 VecView() at > /home/matteo/software/petsc/src/vec/vec/interface/vector.c:806 > [0]PETSC ERROR: #7 main() at saveDemo.cpp:123 > [0]PETSC ERROR: No PETSc Option Table entries > [0]PETSC ERROR: ----------------End of Error Message -------send > entire error message to petsc-maint at mcs.anl.gov---------- >
-------------------------------------------------------------------------- > > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF > with errorcode 73. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > > I attach the modified sample code that produced the above error. > > Thanks > > Matteo > > On 24/10/24 22:20, Matthew Knepley wrote: >> I just looked at the code. The VTK code is very old, and does not >> check for cell overlap. >> >> We have been recommending that people use either HDF5 or CGNS, >> both of which work in this case >> I believe. I can fix VTK if that is what you want, but it might >> take me a little while as it is very busy at >> work right now. However, if you output HDF5, then you can run >> >> ./lib/petsc/bin/petsc_gen_xdmf.py mesh.h5 >> >> and it will generate an XDMF file so you can load it into >> ParaView. Or you can output CGNS, which I think >> ParaView understands. >> >> Thanks, >> >> Matt >> >> On Thu, Oct 24, 2024 at 4:02 PM Semplice Matteo >> wrote: >> >> Hi, >> I tried again today and have (re?)discovered this >> example >> https://petsc.org/release/src/dm/impls/plex/tutorials/ex14.c.html , >> but I cannot understand if in my case I should call >> PetscSFCreateSectionSF >> and, >> if so, how I should then activate the returned SF. >> Matteo >> ------------------------------------------------------------------------ >> *From:* Semplice Matteo >> *Sent:* Tuesday, 22 October 2024 00:24 >> *To:* Matthew Knepley >> *Cc:* PETSc >> *Subject:* Re: [petsc-users] local/global DMPlex Vec output >> >> Dear Matt, >>
I guess you're right: thresholding by rank==0 and rank==1 >> in ParaView reveals that it is indeed the overlap cells that >> appear twice in the output. >> >> The attached file is not exactly minimal but hopefully short >> enough. If I run it in serial, all is OK, but with >> >> mpirun -np 2 ./saveDemo >> >> it creates a 10x10 grid, but I get "output.vtu" with a total >> of 120 cells. However, the pointSF of the DMPlex seems correct. >> >> Thanks >> >> Matteo >> >> On 21/10/24 19:15, Matthew Knepley wrote: >>> On Mon, Oct 21, 2024 at 12:22 PM Matteo Semplice via >>> petsc-users wrote: >>> >>> Dear petsc-users, >>> >>> I am having issues with output of parallel data >>> attached to a DMPlex (or maybe more fundamental ones >>> about DMPlex...). >>> >>> So I currently >>> >>> 1. create a DMPlex (DMPlexCreateGmshFromFile or >>> DMPlexCreateBoxMesh) >>> 2. partition it >>> 3. and create a section for my data layout with >>> DMPlexCreateSection(ctx.dmMesh, NULL, numComp, >>> numDof, numBC, NULL, NULL, NULL, NULL, &sUavg) >>> 4. DMSetLocalSection(ctx.dmMesh, sUavg) >>> 5. create solLoc and solGlob vectors with >>> DMCreateGlobalVector and DMCreateLocalVector >>> 6. solve .... >>> 7. VecView(ctx.solGlob, vtkViewer) on a .vtu file >>> >>> but when I load data in ParaView I get more cells than >>> expected, and it is as if the cells in the halo are put >>> twice in the output. (I could create a MWE if the above is >>> not clear) >>> >>> I think we need an MWE here, because from the explanation >>> above, it should work. >>> >>> However, I can try to guess the problem. When you partition >>> the mesh, I am guessing that you have cells in the overlap. >>> These cells >>> must be in the point SF in order for the global section to >>> give them a unique owner. Perhaps something has gone wrong here. >>> >>> Thanks, >>>
Matt >>> >>> I guess that the culprit is point (4), but if I replace >>> it with DMSetGlobalSection then I cannot create the >>> local vector at point (5). >>> >>> How should I handle this properly? In my code I need to >>> create both local and global vectors, to perform at >>> least GlobalToLocal and to save the global data. >>> >>> (On a side note, I tried also HDF5 but then it complains >>> about the DM not having a DS...; really, any working >>> solution that allows data to be explored with ParaView >>> is fine) >>> >>> Thanks for any advice! >>> >>> Matteo Semplice >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin >>> their experiments is infinitely more interesting than any >>> results to which their experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >> >> -- >> --- >> Professore Associato in Analisi Numerica >> Dipartimento di Scienza e Alta Tecnologia >> Università degli Studi dell'Insubria >> Via Valleggio, 11 - Como >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> > > -- > --- > Professore Associato in Analisi Numerica > Dipartimento di Scienza e Alta Tecnologia > Università degli Studi dell'Insubria > Via Valleggio, 11 - Como > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead.
> -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -- --- Professore Associato in Analisi Numerica Dipartimento di Scienza e Alta Tecnologia Università degli Studi dell'Insubria Via Valleggio, 11 - Como -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri Jan 31 00:54:18 2025 From: jed at jedbrown.org (Jed Brown) Date: Thu, 30 Jan 2025 23:54:18 -0700 Subject: [petsc-users] Visualizing higher order finite element output in ParaView In-Reply-To: References: <875xlxgumq.fsf@jedbrown.org> Message-ID: <878qqrean9.fsf@jedbrown.org> Matthew Knepley writes: > On Thu, Jan 30, 2025 at 9:43 AM Anna Dalklint > wrote: > >> Otherwise I'm fine with using CGNS. But could you please explain how I >> could save timeseries that ParaView recognizes using this format? Right now >> I'm saving files e.g. file0001.cgns, file0002.cgns, ... where each .cgns file >> is written using VecView (i.e. it stores a discretized field). But ParaView >> cannot load this as a timeseries. >> > > Jed can explain how this works. You can use DMSetOutputSequenceNumber(dm, step, time) to make it collate correctly. ParaView can handle any combination of time steps within the same file (as -ts_monitor_solution cgns:sol.cgns) and sequenced files (each of which may contain one or more frames). You can open it with `paraview file..cgns` or in the interface. From anna.dalklint at solid.lth.se Fri Jan 31 01:59:24 2025 From: anna.dalklint at solid.lth.se (Anna Dalklint) Date: Fri, 31 Jan 2025 07:59:24 +0000 Subject: [petsc-users] Visualizing higher order finite element output in ParaView In-Reply-To: References: <875xlxgumq.fsf@jedbrown.org> Message-ID: I want to save e.g.
the discretized displacement field obtained from a quasi-static non-linear finite element simulation using 10-node tetrahedral elements (i.e. which have edge dofs). As mentioned, I use PetscSection to add the additional dofs on edges. I have also written my own Newton solver, i.e. I do not use SNES. In conclusion, what I want is to be able to save the discretized displacement field in each outer iteration of the Newton loop (where I increase the pseudo-time, i.e. scaling of the load). I would then preferably be able to load a stack of these files (call them u001, u002, u003, ... for each "load-step") and step in "time" in ParaView. Thanks, Anna From: Matthew Knepley Date: Thursday, 30 January 2025 at 16:19 To: Anna Dalklint Cc: Jed Brown , petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Visualizing higher order finite element output in ParaView On Thu, Jan 30, 2025 at 9:43 AM Anna Dalklint > wrote: I looked deeper into the petsc codebase regarding HDF5. From what I understood (which of course can be wrong), the current version of petsc does not save edge degrees-of-freedom to HDF5? Is this something you plan to allow? We write two different outputs (by default). One has all the data, and one has only cell and vertex data because ParaView does not understand anything else. This can be customized with options. What do you want to save? Otherwise I'm fine with using CGNS. But could you please explain how I could save timeseries that ParaView recognizes using this format? Right now I'm saving files e.g. file0001.cgns, file0002.cgns, ... where each .cgns file is written using VecView (i.e. it stores a discretized field). But ParaView cannot load this as a timeseries. Jed can explain how this works. Also, do you have any documentation regarding node (vertex, edge, face, cell) numbering? E.g. how would a 10-node tetrahedron be numbered?
From the documentation on your webpage (https://petsc.org/release/manual/dmplex/) it looks like cell dofs -> vertex dofs -> face dofs -> edge dofs. Is this correct? When you call DMPlexVecGetClosure(), the closure follows the point numbering, in that for each point, we look up the dofs in the local Section and push them into the array in order. So then you need the point ordering. For the closure, it goes by dimension, so cell dofs, face dofs, edge dofs, vertex dofs. You can see the definition of faces (and edges) here: https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/plex/plexinterpolate.c?ref_type=heads#L196 and triangles are ordered here: https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/plex/plexinterpolate.c?ref_type=heads#L115 The idea is that DMPlexVecGetClosure() delivers the dofs in a standard order on the element, so that you can write your residual function once. Also, for multiple fields, they are stacked contiguously, so the numbering is [field, point, dof on point]. Let me know if that does not make sense. Thanks, Matt Thanks, Anna From: Matthew Knepley > Date: Thursday, 30 January 2025 at 00:39 To: Jed Brown > Cc: Anna Dalklint >, petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] Visualizing higher order finite element output in ParaView That is all true. If you want lower level pieces to make it yourself, I have -dm_plex_high_order_view, which activates DMPlexCreateHighOrderSurrogate_Internal(). This is a simple function that refines the mesh lg(p) times to try and resolve the high order behavior.
Thanks, Matt On Wed, Jan 29, 2025 at 4:55 PM Jed Brown > wrote: I like the CGNS workflow for this, at least with quadratic and cubic elements. You can use options like -snes_view_solution cgns:solution.cgns (configure with --download-cgns). It can also monitor transient solves with flexible batch sizes (geometry and connectivity are stored only once within a batch of output frames). Anna Dalklint via petsc-users > writes: > Hello, > > We have created a finite element code in PETSc for unstructured meshes using DMPlex. The first-order meshes are created in Gmsh and loaded into PETSc. To introduce higher-order elements, e.g. 10-node tetrahedral elements, we start from scratch using PetscSection and loop over the relevant points in the DM to introduce additional degrees-of-freedom (example: for 10-node tets we have 4 vertex "nodes" and 6 edge "nodes"). The coordinates of the new "nodes" are obtained by interpolation using the finite element basis functions. > > The simulations seem to run well, but we face issues when trying to visualize the results in ParaView. We have tried to use both CGNS and HDF5+XDMF file formats for e.g. VecView. CGNS works, but the edge degrees-of-freedom appear to not be interpolated correctly (we observe oscillations in the fields; don't know if this is a PETSc or ParaView issue). Also, we would prefer to use another file format than CGNS since it does not appear to directly allow timeseries (at least ParaView doesn't recognize it). We haven't got the HDF5+XDMF file format to work at all when running on more than one core (the mesh is highly distorted when saving using VecView and DMView + running the "petsc_gen_xdmf.py" script on the .h5 output file). > > VTU format works but then only the vertices' degrees-of-freedom are visualized. As far as we have understood it, this is because VTU/VTK only supports degrees-of-freedom on vertices/cell level.
> > Does anyone have any idea of how to visualize fields generated from higher order elements in ParaView? Or understand what we might be doing wrong? > > Best regards, > Anna -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.diehl at kuleuven.be Fri Jan 31 06:16:26 2025 From: martin.diehl at kuleuven.be (Martin Diehl) Date: Fri, 31 Jan 2025 12:16:26 +0000 Subject: [petsc-users] Fortran interfaces: Google Summer of Code 2025? In-Reply-To: <857D8A26-115E-4AB8-91A8-2F8FC71C17D1@petsc.dev> References: <51e996a3b06a4f3f7146fed18b928c3f86762b77.camel@kuleuven.be> <857D8A26-115E-4AB8-91A8-2F8FC71C17D1@petsc.dev> Message-ID: Yes, that would be an option for Tapashree (in CC). But my original plan was to submit through Fortran-Lang; that will certainly not work for other languages. NumFOCUS should work; do you have any experience with that? Martin On Thu, 2025-01-30 at 09:54 -0500, Barry Smith wrote: > > Martin, > > I have restarted in the last week on 7517 and plan for it to be in > the March release. > > As part of the work I have developed new Python code that scrapes > the code for signatures for all the functions, enums, objects, etc. and > from this constructs the Fortran binding.
The same scraping could be > used for other languages, so I am hoping automatic bindings can be > done for other languages, for example Rust, even Python. So perhaps > we should consider a Summer of Code project for other such languages? > > Barry > > > > On Jan 30, 2025, at 6:13 AM, Martin Diehl > > wrote: > > > > Dear PETSc team, dear Barry, > > > > applications for the Google Summer of Code will start again and I > > was > > wondering if help for the re-factoring of the Fortran interfaces is > > still needed. Whether this makes sense depends on the progress of > > https://gitlab.com/petsc/petsc/-/merge_requests/7517 > > > > In contrast to the failed attempt last year, I have a student > > interested in working on this topic. > > > > Martin > > -- > > KU Leuven > > Department of Computer Science > > Department of Materials Engineering > > Celestijnenlaan 200a > > 3001 Leuven, Belgium > > > -- KU Leuven Department of Computer Science Department of Materials Engineering Celestijnenlaan 200a 3001 Leuven, Belgium -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 659 bytes Desc: This is a digitally signed message part URL: From s_g at berkeley.edu Fri Jan 31 11:53:00 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Fri, 31 Jan 2025 09:53:00 -0800 Subject: [petsc-users] Visualizing higher order finite element output in ParaView In-Reply-To: References: <875xlxgumq.fsf@jedbrown.org> Message-ID: <5dcee857-edf5-48b3-99b1-30b672f66d0f@berkeley.edu> We do exactly this by using the same prefix for each file and bumping the number with each load step; then ParaView does the stacking automagically for us. However, we write out VTU files for our FEA computations. Perhaps you could examine some of the other formats that ParaView can read and see if they do the trick.
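[Editor's note: the file-per-step scheme described here might look like the following sketch; the helper name is hypothetical, and it assumes `u` is a global vector attached to a DMPlex.]

```c
#include <petscdmplex.h>
#include <petscviewer.h>

/* Sketch: write one VTU file per load step with a common prefix
 * ("u0001.vtu", "u0002.vtu", ...) so ParaView groups them into a
 * time series automatically. */
static PetscErrorCode WriteStepVTU(Vec u, PetscInt step)
{
  char        fname[PETSC_MAX_PATH_LEN];
  PetscViewer viewer;

  PetscFunctionBeginUser;
  PetscCall(PetscSNPrintf(fname, sizeof(fname), "u%04" PetscInt_FMT ".vtu", step));
  PetscCall(PetscViewerVTKOpen(PetscObjectComm((PetscObject)u), fname, FILE_MODE_WRITE, &viewer));
  PetscCall(VecView(u, viewer));
  PetscCall(PetscViewerDestroy(&viewer));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

As noted in the thread, VTU will only carry vertex- and cell-level dofs, so edge dofs would be dropped.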
-sanjay ------------------------------------------------------------------- On 1/30/25 11:59 PM, Anna Dalklint via petsc-users wrote: > > I want to save e.g. the discretized displacement field obtained from a > quasi-static non-linear finite element simulation using 10-node > tetrahedral elements (i.e. which have edge dofs). As mentioned, I use > PetscSection to add the additional dofs on edges. I have also written > my own Newton solver, i.e. I do not use SNES. In conclusion, what I > want is to be able to save the discretized displacement field in each > outer iteration of the Newton loop (where I increase the pseudo-time, > i.e. scaling of the load). I would then preferably be able to load a > stack of these files (call them u001, u002, u003, ... for each > "load-step") and step in "time" in ParaView. > > Thanks, > > Anna > > *From: *Matthew Knepley > *Date: *Thursday, 30 January 2025 at 16:19 > *To: *Anna Dalklint > *Cc: *Jed Brown , petsc-users at mcs.anl.gov > > *Subject: *Re: [petsc-users] Visualizing higher order finite element > output in ParaView > > On Thu, Jan 30, 2025 at 9:43 AM Anna Dalklint > wrote: > > I looked deeper into the petsc codebase regarding HDF5. From what > I understood (which of course can be wrong), the current version > of petsc does not save edge degrees-of-freedom to HDF5? Is this > something you plan to allow? > > We write two different outputs (by default). One has all the data, > and one has only cell and vertex data because ParaView does not > understand anything else. This can be customized with options. What do > you want to save? > > Otherwise I'm fine with using CGNS. But could you please explain > how I could save timeseries that ParaView recognizes using this > format? Right now I'm saving files e.g. file0001.cgns, > file0002.cgns, ... where each .cgns file is written using VecView > (i.e. it stores a discretized field). But ParaView cannot load > this as a timeseries. > > Jed can explain how this works.
> > Also, do you have any documentation regarding node (vertex, edge, > face, cell) numbering? E.g. how would a 10-node tetrahedron be > numbered? From the documentation on your webpage > (https://petsc.org/release/manual/dmplex/) > it looks like cell dofs -> vertex dofs -> face dofs -> edge dofs. Is > this correct? > > When you call DMPlexVecGetClosure(), the closure follows the point > numbering, in that for each point, we look up the dofs in the local > Section and push them into the array in order. So then you need the > point ordering. For the closure, it goes by dimension, so cell dofs, > face dofs, edge dofs, vertex dofs. You can see the definition of faces > (and edges) here: > > https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/plex/plexinterpolate.c?ref_type=heads#L196 > > and triangles are ordered here > > https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/plex/plexinterpolate.c?ref_type=heads#L115 > > The idea is that DMPlexVecGetClosure() delivers the dofs in a standard > order on the element, so that you can write > > your residual function once. Also, for multiple fields, they are > stacked contiguously, so the numbering is [field, point, dof on point]. > > Let me know if that does not make sense. > > Thanks, > > Matt > > Thanks, > > Anna > > *From: *Matthew Knepley > *Date: *Thursday, 30 January 2025 at 00:39 > *To: *Jed Brown > *Cc: *Anna Dalklint , > petsc-users at mcs.anl.gov > *Subject: *Re: [petsc-users] Visualizing higher order finite > element output in ParaView > > That is all true.
If you want lower level pieces to make > it yourself, I have -dm_plex_high_order_view, which activates > > DMPlexCreateHighOrderSurrogate_Internal(). This is a simple > function that refines the mesh lg(p) times to try and > > resolve the high order behavior. > > Thanks, > > Matt > > On Wed, Jan 29, 2025 at 4:55 PM Jed Brown wrote: > > I like the CGNS workflow for this, at least with quadratic and > cubic elements. You can use options like -snes_view_solution > cgns:solution.cgns (configure with --download-cgns). It can > also monitor transient solves with flexible batch sizes > (geometry and connectivity are stored only once within a batch > of output frames). > > Anna Dalklint via petsc-users writes: > > > Hello, > > > > We have created a finite element code in PETSc for > unstructured meshes using DMPlex. The first-order meshes are > created in Gmsh and loaded into PETSc. To introduce higher-order > elements, e.g. 10-node tetrahedral elements, we start > from scratch using PetscSection and loop over the relevant > points in the DM to introduce additional degrees-of-freedom > (example: for 10-node tets we have 4 vertex "nodes" and 6 > edge "nodes"). The coordinates of the new "nodes" are obtained > by interpolation using the finite element basis functions. > > > > The simulations seem to run well, but we face issues when > trying to visualize the results in ParaView. We have tried to > use both CGNS and HDF5+XDMF file formats for e.g. VecView. > CGNS works, but the edge degrees-of-freedom appear to not be > interpolated correctly (we observe oscillations in the fields; > don't know if this is a PETSc or ParaView issue). Also, we > would prefer to use another file format than CGNS since it > does not appear to directly allow timeseries (at least > ParaView doesn't recognize it).
We haven't got the HDF5+XDMF > file format to work at all when running on more than one core > (the mesh is highly distorted when saving using VecView and > DMView + running the "petsc_gen_xdmf.py" script on the .h5 > output file). > > > > VTU format works but then only the vertices' > degrees-of-freedom are visualized. As far as we have > understood it, this is because VTU/VTK only supports > degrees-of-freedom on vertices/cell level. > > > > Does anyone have any idea of how to visualize fields > generated from higher order elements in ParaView? Or > understand what we might be doing wrong? > > > > Best regards, > > Anna > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to > which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri Jan 31 11:58:25 2025 From: jed at jedbrown.org (Jed Brown) Date: Fri, 31 Jan 2025 10:58:25 -0700 Subject: [petsc-users] Visualizing higher order finite element output in ParaView In-Reply-To: References: <875xlxgumq.fsf@jedbrown.org> Message-ID: <87a5b6lvb2.fsf@jedbrown.org> Anna Dalklint writes: > I want to save e.g. the discretized displacement field obtained from a quasi-static non-linear finite element simulation using 10-node tetrahedral elements (i.e. which have edge dofs).
As mentioned, I use PetscSection to add the additional dofs on edges. I have also written my own Newton solver, i.e. I do not use SNES. In conclusion, what I want is to be able to save the discretized displacement field in each outer iteration of the Newton loop (where I increase the pseudo-time, i.e. scaling of the load). I would then preferably be able to load a stack of these files (call them u001, u002, u003, ... for each "load-step") and step in "time" in ParaView. Please use DMSetOutputSequenceNumber to record the step number. You can either use one PetscViewer of type CGNS and call VecView in your loading loop, or you can write a sequence of files by creating a new PetscViewer each time.
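[Editor's note: Jed's first option, one CGNS viewer reused across load steps with DMSetOutputSequenceNumber() so ParaView collates the frames, might be sketched as follows. The loop body, `loadFactor`, and the function names `WriteLoadSteps`/`AdvanceLoadStep` are illustrative placeholders, not thread-provided code.]

```c
#include <petscdmplex.h>
#include <petscviewer.h>

/* Sketch: write every load step as one frame into a single CGNS file. */
PetscErrorCode WriteLoadSteps(DM dm, Vec u, PetscInt nsteps)
{
  PetscViewer viewer;
  PetscInt    step;

  PetscFunctionBeginUser;
  PetscCall(PetscViewerCGNSOpen(PETSC_COMM_WORLD, "solution.cgns", FILE_MODE_WRITE, &viewer));
  for (step = 0; step < nsteps; ++step) {
    PetscReal loadFactor = (PetscReal)(step + 1) / (PetscReal)nsteps; /* pseudo-time */
    /* PetscCall(AdvanceLoadStep(dm, u, loadFactor));   <- user's Newton loop */
    PetscCall(DMSetOutputSequenceNumber(dm, step, loadFactor));
    PetscCall(VecView(u, viewer)); /* appends one frame to the CGNS file */
  }
  PetscCall(PetscViewerDestroy(&viewer));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

The second option is the same loop, but opening a fresh viewer with a step-numbered filename (u0001.cgns, u0002.cgns, ...) inside each iteration and destroying it after the VecView.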