[petsc-users] performance issue
Jed Brown
jedbrown at mcs.anl.gov
Sat Mar 10 10:10:51 CST 2012
On Sat, Mar 10, 2012 at 09:59, Xavier Garnaud <
xavier.garnaud at ladhyx.polytechnique.fr> wrote:
> I am solving the compressible Navier--Stokes equations in conservative form,
> so in order to apply the operator, I
>
> 1. apply BCs on the flow field
> 2. compute the flux
> 3. take the derivative using finite differences
> 4. apply BCs on the derivatives of the flux
>
>
> In order to apply the linearized operator, I wish to linearize steps 2 and
> 4 (the others are linear). For this I assemble sparse matrices (MPIAIJ). The
> matrices should be block diagonal -- with square or rectangular blocks --
> so I preallocate the whole diagonal blocks (but I only use MatSetValues for
> nonzero entries). When I do this, the linearized code runs approximately
> 50% slower (the computation of derivatives takes more than 70% of the time
> in the non-linear code), so steps 2 and 4 are much slower for the linear
> operator although the number of operations is very similar. Could this be due
> to the poor preallocation? Is there a way to improve the performance?
>
It's not clear to me from this description if you are even using an
implicit method. Is the linearization for use in a Newton iteration? How
often do you have to reassemble? Please always send -log_summary output
with performance questions.
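[Editor's note: the profiling output asked for here is collected by adding the option at the end of the command line; the executable name and rank count below are placeholders. In the PETSc releases of this era the option is `-log_summary`; later releases renamed it `-log_view`:]

```shell
# Run the (hypothetical) application on 4 ranks; PETSc prints its
# stage/event timing and flop-rate table to stdout at PetscFinalize().
mpiexec -n 4 ./mysolver -log_summary
```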