[petsc-users] Some papers for additive schwarz and blocked jacobi?
Jed Brown
jed at 59A2.org
Mon Jun 6 09:42:46 CDT 2011
2011/6/6 Dürrwang, Jürgen <Juergen.Duerrwang at iosb.fraunhofer.de>
> Yes, I tried some PETSc examples and modified one for my purposes. It works
> very well on my Xeon quad-core, but my intention is to mix CPU and GPU code.
> I want a parallel domain decomposition using the block Jacobi method, running
> ILU(0) on each block (number of blocks = number of CPU cores). Then I want to
> take the result from each block solve and use it as a preconditioner for a CG
> solver on the GPU.
>
What is the GPU going to do while this is taking place on the CPU? I don't
see much point in doing CG on the GPU if you don't also move the matrix and
preconditioner there. (The performance may even be worse than doing
everything on the CPU.)
Have you read the docs on running PETSc on GPUs?
http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#gpus
http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/installation.html#CUDA
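As a rough sketch (assuming the petsc-dev Cusp/CUDA backend described at the
links above, and using a KSP tutorial like ex2 as a stand-in for your code;
the exact type names are worth double-checking against the installation page):

    # run a KSP tutorial with vectors and the matrix placed on the GPU
    ./ex2 -ksp_type cg -pc_type jacobi -vec_type cusp -mat_type aijcusp -ksp_monitor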
There is no ILU on the GPU because nobody has written one; it seems to be
ill-suited to the execution model.
>
>
> At the moment I can decompose my matrix into four Jacobi blocks. I
> compared my results with PETSc and they are the same. But now I don't know
> whether I have to run my CG solver on each block, or whether I can put the
> results of the per-block ILU together and then use that as a preconditioner
> for the non-blocked matrix (my large input matrix).
>
You can do either of these; -pc_type asm -sub_ksp_type cg -sub_pc_type icc,
for example. Be careful about symmetry, and remember to use FGMRES if you
make the preconditioner nonlinear (which an inner Krylov solve such as the CG
above does).
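As a concrete sketch of the two setups (assuming a standard KSP tutorial
binary such as ex2 stands in for your code; all options below are ordinary
PETSc runtime options):

    # block Jacobi with ILU(0) on each block, CG on the global system
    mpiexec -n 4 ./ex2 -ksp_type cg -pc_type bjacobi -sub_pc_type ilu

    # additive Schwarz with an inner CG/ICC solve on each subdomain; the inner
    # Krylov solve makes the preconditioner nonlinear, so use FGMRES outside
    mpiexec -n 4 ./ex2 -ksp_type fgmres -pc_type asm -sub_ksp_type cg -sub_pc_type icc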