[petsc-users] Combining petsc with libocca

Matthew Knepley knepley at gmail.com
Sat Dec 15 09:42:45 CST 2018


On Fri, Dec 14, 2018 at 2:47 PM Samuel Miller via petsc-users <
petsc-users at mcs.anl.gov> wrote:

> I’m in the process of planning a basic fluids code based on the lattice
> Boltzmann method (LBM) and would like some input from the petsc user base.
> All the computation in LBM is local, so parallelization is straightforward
> and can be done simply in a threaded application; a quick Google search
> yields numerous simple LBM codes on GitHub. I’d like to use
> petsc's dmda to facilitate splitting the grid across multiple compute nodes
> and use a framework like libocca (cuda/opencl/openmp backend) to handle the
> on-node computation through kernels. In my limited knowledge, it looks like
> libraries like libCEED do something similar to this. I’ve also looked
> through a LANL code called Taxila that uses fortran and petsc for LBM, but
> that doesn’t use kernels.
>
> Is it overkill (or recommended even) to have petsc split the grid across
> nodes, while the majority of the computation is handled by a
> cuda/opencl/openmp kernel?
>

The streaming operation is not completely local, because it connects
adjacent vertices; in parallel, that can mean moving data between
processes. DMDA manages this efficiently for regular grids. This is
exactly how PetClaw (https://github.com/clawpack/pyclaw) uses DMDA.
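The non-locality of streaming is easy to see in a single-process sketch: each distribution is shifted one cell along its lattice velocity, so cells at the edge of a subdomain need values owned by a neighboring process. The snippet below is only an illustrative sketch (the array layout and D2Q9 velocity ordering are assumptions, not taken from Taxila or any particular code); the periodic wrap of np.roll stands in for the ghost-region exchange that DMDA would perform (DMGlobalToLocalBegin/End) across process boundaries.

```python
import numpy as np

# D2Q9 lattice velocities: the rest particle plus its 8 neighbors.
# (Ordering here is an arbitrary choice for illustration.)
velocities = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)]

def stream(f):
    """Shift each distribution f[q] one cell along its lattice velocity.

    f has shape (9, nx, ny). Periodic wrap-around stands in for the
    halo exchange a DMDA would do at process boundaries.
    """
    return np.stack([np.roll(f[q], shift=(cx, cy), axis=(0, 1))
                     for q, (cx, cy) in enumerate(velocities)])
```

With a DMDA, the same shifts would be applied to the local (ghosted) array, and only the scatter that fills the ghost cells involves communication; the collision step remains purely local and is a natural fit for an OCCA kernel.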

  Thanks,

    Matt


> Thanks for your input,
>
> Sam
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


More information about the petsc-users mailing list