[petsc-users] Regarding the status of VecSetValues(Blocked) for GPU vectors
Matthew Knepley
knepley at gmail.com
Thu Mar 17 18:18:46 CDT 2022
On Thu, Mar 17, 2022 at 4:46 PM Sajid Ali Syed <sasyed at fnal.gov> wrote:
> Hi PETSc-developers,
>
> Is it possible to use VecSetValues with distributed-memory CUDA & Kokkos
> vectors from the device, i.e. can I call VecSetValues with GPU memory
> pointers and expect PETSc to figure out how to stash the values on the
> device until I call VecAssemblyBegin (at which point PETSc could use
> GPU-aware MPI to populate off-process values)?
>
> If this is not currently supported, is supporting this on the roadmap?
> Thanks in advance!
>
VecSetValues() will fall back to the CPU vector, so I do not think this
will work on device.
Usually, our assembly computes all values and puts them in a "local"
vector, which you can access explicitly as Mark said. Then
we call LocalToGlobal() to communicate the values, which does work
directly on the device using specialized code in VecScatter/PetscSF.
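To make the pattern concrete, here is a minimal sketch of the local-vector assembly approach described above, assuming you already have a DM (here `dm`) and a global vector (`g`); the function name `AssembleOnDevice` and the kernel placeholder are illustrative, but the PETSc calls themselves are real API:

```c
#include <petscdm.h>

/* Sketch: fill a local vector on the device, then scatter to the
   global vector with LocalToGlobal(), which can run on-device. */
PetscErrorCode AssembleOnDevice(DM dm, Vec g)
{
  Vec          l;
  PetscScalar *a;
  PetscMemType mtype;

  PetscFunctionBeginUser;
  PetscCall(DMGetLocalVector(dm, &l));
  PetscCall(VecZeroEntries(l));
  /* Raw pointer to the vector data; mtype reports whether it is
     host or device memory, so a kernel can be launched accordingly */
  PetscCall(VecGetArrayAndMemType(l, &a, &mtype));
  /* ... launch a CUDA/Kokkos kernel here that writes into a[] ... */
  PetscCall(VecRestoreArrayAndMemType(l, &a));
  /* Communicate ghost contributions; uses VecScatter/PetscSF and
     can use GPU-aware MPI when PETSc is configured for it */
  PetscCall(DMLocalToGlobalBegin(dm, l, ADD_VALUES, g));
  PetscCall(DMLocalToGlobalEnd(dm, l, ADD_VALUES, g));
  PetscCall(DMRestoreLocalVector(dm, &l));
  PetscFunctionReturn(0);
}
```

The key point is that ADD_VALUES during LocalToGlobal() plays the role that VecAssemblyBegin/End would for off-process VecSetValues, but without falling back to the CPU.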
What are you trying to do?
Thanks,
Matt
> Thank You,
> Sajid Ali (he/him) | Research Associate
> Scientific Computing Division
> Fermi National Accelerator Laboratory
> s-sajid-ali.github.io
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/