[petsc-users] MatCreateSeqAIJWithArrays for GPU / cusparse

Junchao Zhang junchao.zhang at gmail.com
Fri Jan 6 20:44:39 CST 2023


On Fri, Jan 6, 2023 at 7:35 PM Mark Lohry <mlohry at gmail.com> wrote:

> Well, I think it's a moderately crazy idea unless it's less painful to
> implement than I'm thinking. Is there a use case for a mixed-device system
> where one PETSc executable might be addressing both a HIP and a CUDA device,
> beyond some frankenstein test system somebody cooked up? In all my code I
> implicitly assume I either have one host with one device or one host
> with zero devices. I guess you can support these weird scenarios, but why?
> Life is hard enough supporting one device compiler with one host compiler.
>
> Many thanks Junchao -- with combinations of SetPreallocation I was able to
> grab the allocated pointers out of PETSc. Now I have all the Jacobian
> construction on device with no copies.
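
A minimal sketch of the pattern Mark describes, assuming a PETSc build with
CUDA support and a recent release that provides MatSeqAIJGetCSRAndMemType;
the size, sparsity pattern, and values below are purely illustrative, and
writing through the returned value pointer is subject to PETSc's host/device
copy tracking:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat             A;
    const PetscInt *rowptr, *colidx;
    PetscScalar    *val;
    PetscMemType    mtype;
    PetscInt        i, n = 100;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

    /* Preallocate a tridiagonal pattern up front so insertion does no mallocs. */
    PetscCall(MatCreate(PETSC_COMM_SELF, &A));
    PetscCall(MatSetSizes(A, n, n, n, n));
    PetscCall(MatSetType(A, MATSEQAIJCUSPARSE));
    PetscCall(MatSeqAIJSetPreallocation(A, 3, NULL));
    for (i = 0; i < n; i++) {
      if (i > 0) PetscCall(MatSetValue(A, i, i - 1, 0.0, INSERT_VALUES));
      PetscCall(MatSetValue(A, i, i, 0.0, INSERT_VALUES));
      if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, 0.0, INSERT_VALUES));
    }
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

    /* Borrow the CSR arrays; mtype reports where they currently live. */
    PetscCall(MatSeqAIJGetCSRAndMemType(A, &rowptr, &colidx, &val, &mtype));
    if (PetscMemTypeDevice(mtype)) {
      /* val is a device pointer: hand it to your own kernels (see below). */
    }

    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }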
>
Hi, Mark, could you say a few words about how you assemble matrices on
GPUs?  We ported MatSetValues-like routines to GPUs but did not continue
with this approach, since we would have to resolve data races between GPU
threads.
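
For concreteness, the races disappear under a row-ownership scheme: if each
GPU thread fills the nonzeros of exactly one row, no two threads ever touch
the same entry. This is only one race-free pattern, not necessarily what
Mark's code does; in this hedged CUDA sketch, rowptr/colidx/val are the
device CSR arrays borrowed above, and the constant stencil values stand in
for a real linearization:

  #include <petscsys.h>

  /* One thread per row: thread `row` writes only
     val[rowptr[row] .. rowptr[row+1]), so no atomics are needed. */
  __global__ void FillJacobian(PetscInt n, const PetscInt *rowptr,
                               const PetscInt *colidx, PetscScalar *val)
  {
    PetscInt row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n) return;
    for (PetscInt k = rowptr[row]; k < rowptr[row + 1]; k++) {
      /* A real code would evaluate its Jacobian entries here; constants
         keep the sketch self-contained. */
      val[k] = (colidx[k] == row) ? 2.0 : -1.0;
    }
  }

It would be launched as, e.g., FillJacobian<<<(n + 255) / 256, 256>>>(n,
rowptr, colidx, val);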


>
> On Fri, Jan 6, 2023 at 12:27 AM Barry Smith <bsmith at petsc.dev> wrote:
>
>>
>>   So Jed's "everyone" now consists of "no one" and Jed can stop
>> complaining that "everyone" thinks it is a bad idea.
>>
>>
>>
>> On Jan 5, 2023, at 11:50 PM, Junchao Zhang <junchao.zhang at gmail.com>
>> wrote:
>>
>> On Thu, Jan 5, 2023 at 10:32 PM Barry Smith <bsmith at petsc.dev> wrote:
>>
>>>
>>> > On Jan 5, 2023, at 3:42 PM, Jed Brown <jed at jedbrown.org> wrote:
>>> >
>>> > Mark Adams <mfadams at lbl.gov> writes:
>>> >
>>> >> Support of HIP and CUDA hardware together would be crazy,
>>> >
>>> > I don't think it's remotely crazy. libCEED supports both together, and
>>> > it's very convenient when testing on a development machine that has one
>>> > GPU of each brand; it also simplifies binary distribution for us and
>>> > every package that uses us. Every day I wish PETSc could build with both
>>> > simultaneously, but everyone tells me it's silly.
>>>
>>>   Not everyone at all; just a subset of everyone. Junchao is really the
>>> hold-out :-)
>>>
>> I am not; rather, I think we should try (I fully agree it can ease binary
>> distribution), but Satish needs to install such a machine first :)
>> There are also issues out of our control if we want to mix GPUs in
>> execution. For example, how do we do VecAXPY on a CUDA vector and a HIP
>> vector? Shall we do it on the host? Also, there are no GPU-aware MPI
>> implementations supporting messages between CUDA memory and HIP memory.
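>>
>> A sketch of that host fallback, using only the public Vec API: VecGetArray
>> and VecGetArrayRead migrate device data to the host for any backend, so
>> the loop itself is backend-agnostic. This illustrates the idea only; it is
>> not how PETSc implements VecAXPY:
>>
>>   #include <petscvec.h>
>>
>>   /* Hypothetical: y += alpha*x, where x may be CUDA-backed and y
>>      HIP-backed. */
>>   static PetscErrorCode VecAXPY_MixedOnHost(Vec y, PetscScalar alpha, Vec x)
>>   {
>>     const PetscScalar *xarr;
>>     PetscScalar       *yarr;
>>     PetscInt           i, n;
>>
>>     PetscFunctionBegin;
>>     PetscCall(VecGetLocalSize(y, &n));
>>     PetscCall(VecGetArrayRead(x, &xarr)); /* copies x to host if needed */
>>     PetscCall(VecGetArray(y, &yarr));     /* copies y to host if needed */
>>     for (i = 0; i < n; i++) yarr[i] += alpha * xarr[i];
>>     PetscCall(VecRestoreArray(y, &yarr)); /* host copy of y is now current */
>>     PetscCall(VecRestoreArrayRead(x, &xarr));
>>     PetscFunctionReturn(0);
>>   }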
>>
>>>
>>>   I just don't care about "binary packages" :-); I think they are an
>>> archaic and bad way of thinking about code distribution (yes, the
>>> alternatives need lots of work to make them flawless, but that is where
>>> the effort in the packaging world should go).
>>>
>>>    I go further and think one should be able to automatically use a CUDA
>>> vector on a HIP device as well; it is not hard in theory, but it requires
>>> thinking a little about how we handle classes and subclasses to make it
>>> straightforward; or perhaps Jacob has fixed that also?
>>

