[petsc-users] Suggestions for code with dof >> 1 ?
Christophe Ortiz
christophe.ortiz at ciemat.es
Tue Oct 15 09:15:30 CDT 2013
On Tue, Oct 15, 2013 at 2:53 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> Christophe Ortiz <christophe.ortiz at ciemat.es> writes:
>
> > I have a question...
> > Let's imagine I have a system of 2 diffusion equations (dof=2) with one
> > coupling, both with Dirichlet boundary conditions:
> >
> > u_t - D u_xx = -k*u*v u(x=0,t)=u(x=L,t)=0
> > v_t - D v_xx = -k*u*v v(x=0,t)=v(x=L,t)=0
> >
> > If I write everything under IFunction, then the IJacobian is:
> >
> >     1          0       |
> >     0          1       |
> >
> >  -D/dx^2      0     |  a+2D/dx^2+kv       +ku        |  -D/dx^2      0
> >     0      -D/dx^2  |      +kv       a+2D/dx^2+ku    |     0      -D/dx^2
> >
> >
> > The first block on top represents the Dirichlet boundary conditions. The
> > three other blocks are for interior nodes.
> >
> > My question is on how to set the dfill matrix. For boundary conditions,
> > the diagonal block does not have the same pattern as the one for the
> > interior nodes. We see that for node i=0, the components are only coupled
> > with themselves, while in the interior diagonal block they are coupled
> > with each other. How should dfill be set, then? Should it be
> >
> > { 1, 0,
> >   0, 1 }
> >
> > or
> >
> > { 1, 1,
> >   1, 1 } ?
>
> You have to preallocate for the max number of entries in each kind of
> block. The slight bit of extra memory for the boundary conditions will
> be squeezed out and not affect performance.
>
Thanks. Ok, so I guess that I should use {1, 1, 1, 1} to preallocate for
the max number of entries.
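For reference, a minimal sketch of how this could look with
DMDASetBlockFills (assuming a DMDA named da with dof=2; the ofill pattern
for the neighbor blocks follows the diffusion-only off-diagonal blocks
above):

    PetscInt dfill[4] = {1, 1,
                         1, 1};  /* diagonal block: u and v fully coupled at interior nodes */
    PetscInt ofill[4] = {1, 0,
                         0, 1};  /* neighbor blocks: diffusion only, no cross coupling */
    ierr = DMDASetBlockFills(da,dfill,ofill);CHKERRQ(ierr);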
>
> > BTW, does DMDASetBlockFills help during solving, or is it just for
> > managing memory?
>
> Allocated nonzeros that have never been "set" are squeezed out in
> assembly, so they only affect memory. Allocated entries that are
> explicitly set to 0.0 are not squeezed out, though there is a Mat option
> to do that.
>
I see.
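(Presumably the Mat option in question is MAT_IGNORE_ZERO_ENTRIES, which
makes MatSetValues drop explicit zeros; a one-line sketch, assuming a
matrix A:

    ierr = MatSetOption(A,MAT_IGNORE_ZERO_ENTRIES,PETSC_TRUE);CHKERRQ(ierr);
)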
Before I go on implementing all the coupling between all the dof, I am
running some tests with the system described above, and I see some strange
behavior.
As initial conditions I set two Gaussian distributions for u and v. When
the peak value of the Gaussians is on the order of unity there is no
problem: ARKIMEX as well as ROSW work fine. However, when I use peak values
on the order of 1e15-1e20 (which is of interest for me), trouble arises.
ARKIMEX takes much more time to do one timestep, even when I switch off the
coupling and there is only diffusion. It takes too long.
I tried KSPCG and KSPGMRES, with PCNONE and PCILU.
- Is it due to some problem with ARKIMEX and large values?
- How do you pass -ts_adapt_basic_clip on the command line to limit the
fastest increase of the timestep? I would like to set 1.5 for the fastest
increase instead of 10, which is the default. I tried, but PETSc complains:
[0]PETSC ERROR: Argument out of range!
[0]PETSC ERROR: Must give exactly two values to -ts_adapt_basic_clip!
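From the error message I would guess that the low and high clip factors
must be given together as a comma-separated pair, e.g. (assuming the
default lower factor of 0.1 is kept):

    -ts_adapt_basic_clip 0.1,1.5

Is that the intended usage?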
- ARKIMEX seems to work better if I decrease the number of nodes, i.e. if
I increase dx.
- I found out with -ts_view that the ARKIMEX type is always ARKIMEX3, even
when I use
ierr = TSARKIMEXSetType(ts,TSARKIMEX1BEE);CHKERRQ(ierr);
Is this normal?
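(In case it matters, a minimal sketch of the call ordering I would expect
to work; this is only a guess on my part, assuming a TS named ts:

    ierr = TSSetType(ts,TSARKIMEX);CHKERRQ(ierr);            /* select the ARKIMEX integrator first */
    ierr = TSARKIMEXSetType(ts,TSARKIMEX1BEE);CHKERRQ(ierr); /* then pick the 1BEE scheme */
    ierr = TSSetFromOptions(ts);CHKERRQ(ierr);               /* -ts_arkimex_type could still override this */

Could the scheme be reset if TSSetFromOptions is called afterwards?)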
- With TSROSW it works much better. Any idea why?
- Do you recommend a particular KSP/PC for this type of problem?
Christophe