[petsc-users] High Memory Usage with PETSc BDF Solver Compared to scikits.odes CVODE
Art
mac3bar at gmail.com
Mon Jun 16 13:05:43 CDT 2025
Hi everyone,
I’m porting a code from scikits.odes.sundials.cvode to PETSc, since PETSc is
compatible with FEniCSx and can run in parallel with MPI. First I used PETSc
with the "rk" solver, and it worked well, both serially and in parallel, for a
system with 14000 nodes (42000 DOFs). However, when I use an implicit solver
such as BDF, the solver consumes all of the memory (16 GB), even on a small
system. This is the setup I use:
def ifunction(self, ts, t, y, ydot, f):
    # Update ghost values and copy the TS state into the FEniCSx Function
    y.ghostUpdate(PETSc.InsertMode.INSERT_VALUES, PETSc.ScatterMode.FORWARD)
    y.copy(result=self.yv.x.petsc_vec)
    self.yv.x.scatter_forward()
    # Evaluate dy/dt from the right-hand side
    dydt = self.rhs(self.yv)
    dydt.x.scatter_forward()
    # Residual of the implicit form: F(t, y, ydot) = ydot - rhs(y)
    ydot.copy(result=f)
    f.axpy(-1.0, dydt.x.petsc_vec)
    return 0
# Initial condition taken from the FEniCSx Function y0
y = y0.petsc_vec.copy()

# BDF time stepper with the implicit residual defined above
ts.setType(ts.Type.BDF)
ts.setIFunction(ifunction)
ts.setTime(0.0)
ts.setTimeStep(1e-14)
ts.setStepLimits(1e-17, 1e-12)
ts.setMaxTime(1.0e-12)
ts.setExactFinalTime(PETSc.TS.ExactFinalTime.STEPOVER)
ts.setTolerances(rtol=1e-6, atol=1e-6)

# Nonlinear solve inside each step: GMRES with no preconditioner
snes = ts.getSNES()
ksp = snes.getKSP()
ksp.setType("gmres")
ksp.getPC().setType("none")
ksp.setFromOptions()
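
In case it makes the question more concrete: below is a minimal, self-contained
sketch (not my FEniCSx code) of how I understand a BDF setup with a matrix-free
Jacobian would look in petsc4py. The toy problem y' = -y, the size n, and the
call snes.setUseMF(True) are my assumptions from reading the petsc4py sources;
I have not verified that this is the recommended approach.

import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc

n = 100  # placeholder problem size

def ifunction(ts, t, y, ydot, f):
    # Implicit residual F(t, y, ydot) = ydot + y, i.e. y' = -y
    ydot.copy(result=f)
    f.axpy(1.0, y)
    return 0

y = PETSc.Vec().createSeq(n)
y.set(1.0)

ts = PETSc.TS().create()
ts.setType(PETSc.TS.Type.BDF)
ts.setIFunction(ifunction)
ts.setTime(0.0)
ts.setTimeStep(1e-3)
ts.setMaxTime(1.0)
ts.setExactFinalTime(PETSc.TS.ExactFinalTime.STEPOVER)
ts.setTolerances(rtol=1e-6, atol=1e-6)

snes = ts.getSNES()
snes.setUseMF(True)   # assumption: matrix-free (MFFD) Jacobian, no matrix stored
ksp = snes.getKSP()
ksp.setType("gmres")
ksp.getPC().setType("none")

ts.setFromOptions()
ts.setSolution(y)
ts.solve(y)
print("steps:", ts.getStepNumber(), "final time:", ts.getTime())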
For the scikits.odes.sundials.cvode library, in serial mode, I have used:
solver = CVODE(rhs,
               old_api=False,
               linsolver='spgmr',
               rtol=1e-6,
               atol=1e-6,
               max_steps=5000,
               order=2)
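
For context, this is essentially how I drive that solver (again a
self-contained sketch: the real rhs comes from my FEniCSx assembly, here it is
just y' = -y so the snippet runs on its own, and the time points are
placeholders):

import numpy as np
from scikits.odes.sundials.cvode import CVODE

def rhs(t, y, ydot):
    # Placeholder right-hand side: y' = -y
    ydot[:] = -y

solver = CVODE(rhs,
               old_api=False,
               linsolver='spgmr',
               rtol=1e-6,
               atol=1e-6,
               max_steps=5000,
               order=2)

y0 = np.ones(100)
times = np.linspace(0.0, 1.0, 11)
result = solver.solve(times, y0)
print(result.flag, result.values.y[-1, :5])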
In this case the solver worked perfectly and gave results similar to the RK
solver in PETSc. I suspect the issue might be related to the way the Jacobian
is built in PETSc, since scikits.odes.sundials.cvode works perfectly without
my having to provide a Jacobian. I would greatly appreciate any suggestions or
examples on how to properly set up the BDF solver with PETSc.
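
If more detail would help, I can rerun with PETSc's standard reporting turned
on; I assume enabling the usual -ts_view / -snes_view / -log_view options at
init time, as below, is equivalent to passing them on the command line:

import sys
import petsc4py
# Assumption: these monitoring options report the solver configuration and a
# memory/performance summary at the end of the run.
petsc4py.init(sys.argv + ["-ts_view", "-snes_view", "-log_view"])
from petsc4py import PETSc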
Thanks
Art