[petsc-dev] Fwd: GPU preconditioners
Andrea Lani
andrea.lani at gmail.com
Fri Jan 17 15:00:25 CST 2014
When I speak about totally inconsistent results between single- and
multi-GPU runs in my previous e-mail, I mean with PCASM. I don't see the same
behavior with PCJACOBI (although in that case it is of little use, since it
crashes after a few iterations), so I'm not sure what the source of the
problem could be.
When constructing the matrix for the multi-GPU case, I use this sequence of
calls:
MatSetType(m_mat, MATMPIAIJCUSP);
MatSetBlockSize(m_mat, blockSize);
MatMPIAIJSetPreallocation(m_mat, dnz, dnnz, onz, onnz);
and my sparsity pattern is computed accordingly, given that here I have an
AIJ matrix instead of the BAIJ matrix I use in the CPU version.
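For reference, here is a minimal, self-contained sketch of that setup sequence
(the wrapper name SetupGPUMatrix, the communicator argument, and nLocalRows are
hypothetical; only the three calls quoted above come from my code, and dnz,
dnnz, onz, onnz are the preallocation data the code already computes):

/* Minimal sketch of the matrix setup above. The wrapper name, the
   communicator argument and nLocalRows are hypothetical; dnz, dnnz,
   onz, onnz are the preallocation data computed by the application. */
#include <petscmat.h>

PetscErrorCode SetupGPUMatrix(MPI_Comm comm, PetscInt nLocalRows, PetscInt blockSize,
                              PetscInt dnz, const PetscInt dnnz[],
                              PetscInt onz, const PetscInt onnz[], Mat *m_mat)
{
  PetscErrorCode ierr;

  ierr = MatCreate(comm, m_mat);CHKERRQ(ierr);
  /* local sizes per process; the global sizes are summed by PETSc */
  ierr = MatSetSizes(*m_mat, nLocalRows, nLocalRows, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(*m_mat, MATMPIAIJCUSP);CHKERRQ(ierr);
  ierr = MatSetBlockSize(*m_mat, blockSize);CHKERRQ(ierr);
  /* per-row counts in dnnz/onnz take precedence over the scalar dnz/onz */
  ierr = MatMPIAIJSetPreallocation(*m_mat, dnz, dnnz, onz, onnz);CHKERRQ(ierr);
  return 0;
}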
---------- Forwarded message ----------
From: Andrea Lani <andrea.lani at gmail.com>
Date: Fri, Jan 17, 2014 at 9:47 PM
Subject: Re: [petsc-dev] GPU preconditioners
To: Matthew Knepley <knepley at gmail.com>
Cc: For users of the development version of PETSc <petsc-dev at mcs.anl.gov>
Ok, thanks.
In fact, I have another major problem: when running on multiple GPUs with
PETSc, my results are totally inconsistent compared to a single GPU.
In my code, for now, I'm assuming a one-to-one correspondence between CPUs and
GPUs: I run on 8 cores and 8 GPUs (4 K10). How can I enforce this in the PETSc
solver? Is it done automatically, or do I have to specify some options?
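As an illustrative sketch only (not an answer from the thread): one common way
to enforce such a one-to-one mapping by hand is to select the CUDA device from
the MPI rank before PETSc is initialized. The round-robin mapping below assumes
all 8 ranks and 8 devices live on the same node, as in the run described above.

/* Illustrative sketch: bind each MPI rank to one GPU (round-robin).
   Assumes all ranks and devices are on the same node. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <petscsys.h>

int main(int argc, char **argv)
{
  int rank = 0, ndev = 0;

  MPI_Init(&argc, &argv);                    /* PETSc detects MPI is already initialized */
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  cudaGetDeviceCount(&ndev);
  if (ndev > 0) cudaSetDevice(rank % ndev);  /* rank i uses device i % ndev */

  PetscInitialize(&argc, &argv, NULL, NULL);
  /* ... create matrices and KSP, solve ... */
  PetscFinalize();
  MPI_Finalize();                            /* we initialized MPI, so we finalize it */
  return 0;
}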
Thanks again
Andrea
On Jan 17, 2014, at 8:48 PM, Matthew Knepley <knepley at gmail.com> wrote:
On Fri, Jan 17, 2014 at 1:29 PM, Andrea Lani <andrea.lani at gmail.com> wrote:
> Dear Devs,
>
> Is the PCBJACOBI solver fully ported to GPU in the latest petsc-dev
> version? If not, is there any intention to do so in the near future?
>
> I have a convection-dominated (with strong discontinuities in the flow
> field) MHD problem where PCASM and PCBJACOBI both work fine with KSPGMRES.
>
These can be done on the GPU, but the key is the inner PC. Right now, we
have no ILU0 or equivalent, which I think is what
you want. You can try the AMG variants, but I am guessing they would not be
great for convection-dominated flow.
Maybe Karl has a better suggestion?
Thanks,
Matt
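As an illustrative sketch only (not from the thread): one way to try the AMG
suggestion above on an existing KSP is to switch its preconditioner to PCGAMG.
The routine name and the assumption that the KSP has already been created with
its operators set are assumptions made here.

/* Illustrative sketch: switch an existing KSP to GMRES + GAMG, one of
   the AMG variants suggested above. Assumes ksp has been created and
   its operators set elsewhere. */
#include <petscksp.h>

PetscErrorCode TryAMG(KSP ksp)
{
  PetscErrorCode ierr;
  PC             pc;

  ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);   /* equivalent to -pc_type gamg at run time */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* still allow command-line overrides */
  return 0;
}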
> Looking for a speedup in my code (which is using a petsc-dev version about
> two months old), the only GPU alternative I found was PCJACOBI. This is
> indeed considerably faster on the GPU, but it does not converge (nor,
> consistently, does its CPU counterpart): it runs for only a few iterations
> before blowing up. Is there any other GPU-based preconditioner or solver
> worth trying at the moment?
>
> Thanks in advance for your advice
>
> Andrea
>
>
>
> --
> Dr. Andrea Lani
> Senior Research Engineer, PhD
> Aeronautics & Aerospace dept., CFD group
> Von Karman Institute for Fluid Dynamics
> Chausse de Waterloo 72,
> B-1640, Rhode-Saint-Genese, Belgium
> fax : +32-2-3599600
> work : +32-2-3599769
> lani at vki.ac.be
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
--
Dr. Andrea Lani
Senior Research Engineer, PhD
Aeronautics & Aerospace dept., CFD group
Von Karman Institute for Fluid Dynamics
Chausse de Waterloo 72,
B-1640, Rhode-Saint-Genese, Belgium
fax : +32-2-3599600
work : +32-2-3599769
lani at vki.ac.be