[petsc-users] Requesting multi-GPUs

Manuel Valera mvalera-w at sdsu.edu
Thu Sep 13 13:33:15 CDT 2018


So, from what I gather here, the round-robin assignment to each GPU device is
done automatically by PETSc from mapping the system, or do I have to pass
a command-line argument to do that?
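
(For reference, a minimal sketch of one way to do that round-robin binding by
hand, assuming a CUDA-enabled PETSc build; the executable layout, the use of
the global rank, and the assumption that PETSc respects a device selected
before PetscInitialize() are all things to verify for the version in use. On a
multi-node run the node-local rank, not the global rank, should be used.)

    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      int rank, ndev;

      MPI_Init(&argc, &argv);               /* initialize MPI first so the rank is known */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      cudaGetDeviceCount(&ndev);            /* GPUs visible to this process */
      cudaSetDevice(rank % ndev);           /* round-robin: rank -> device (assumes PETSc keeps it) */

      PetscInitialize(&argc, &argv, NULL, NULL);
      /* ... create Vec/Mat objects, e.g. with -vec_type cuda and -mat_type aijcusparse ... */
      PetscFinalize();
      MPI_Finalize();
      return 0;
    }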

Thanks,


On Wed, Sep 12, 2018 at 2:38 PM, Matthew Knepley <knepley at gmail.com> wrote:

> On Wed, Sep 12, 2018 at 5:31 PM Manuel Valera <mvalera-w at sdsu.edu> wrote:
>
>> OK then, how can I try getting more than one GPU with the same number of
>> MPI processes?
>>
>
> I do not believe we handle more than one GPU per MPI process. Is that what
> you are asking?
>
>   Thanks,
>
>     Matt
>
>
>> Thanks,
>>
>> On Wed, Sep 12, 2018 at 2:20 PM, Matthew Knepley <knepley at gmail.com>
>> wrote:
>>
>>> On Wed, Sep 12, 2018 at 5:13 PM Manuel Valera <mvalera-w at sdsu.edu>
>>> wrote:
>>>
>>>> Hello guys,
>>>>
>>>> I am working on a multi-GPU cluster and I want to request 2 or more
>>>> GPUs; how can I do that from PETSc? Evidently mpirun -n # is for requesting
>>>> processors, but what if I want to use one MPI process but several GPUs
>>>> instead?
>>>>
>>>
>>> We do not do that. You would run the same number of MPI processes as
>>> GPUs. Note that
>>> you can have more than 1 MPI process on a processor.
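
(Concretely, and assuming a CUDA-enabled PETSc build, a launch on a node with
two GPUs would look something like "mpiexec -n 2 ./myapp -vec_type cuda
-mat_type aijcusparse", where ./myapp stands in for the user's executable:
one MPI rank per device, with the CUDA vector and cuSPARSE matrix types
picked up through the usual VecSetFromOptions()/MatSetFromOptions() calls.)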
>>>
>>>   Matt
>>>
>>>
>>>> Also, I understand the GPU handles the linear system solve, but what
>>>> about the data management? Can I use DMs for things other than the linear
>>>> solver on the GPUs?
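
(A minimal sketch of what that could look like, assuming a CUDA-enabled PETSc
build and a version whose DM supports the CUDA/cuSPARSE types via
DMSetVecType()/DMSetMatType(); the 1d grid and its size are illustrative only:

    #include <petscdmda.h>

    int main(int argc, char **argv)
    {
      DM             da;
      Vec            g;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      /* 1d distributed array: 128 points, 1 dof, stencil width 1 */
      ierr = DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, 128, 1, 1, NULL, &da);CHKERRQ(ierr);
      ierr = DMSetVecType(da, VECCUDA);CHKERRQ(ierr);        /* DM-created Vecs live on the GPU */
      ierr = DMSetMatType(da, MATAIJCUSPARSE);CHKERRQ(ierr); /* DM-created Mats use cuSPARSE */
      ierr = DMSetUp(da);CHKERRQ(ierr);
      ierr = DMCreateGlobalVector(da, &g);CHKERRQ(ierr);
      ierr = VecSet(g, 1.0);CHKERRQ(ierr);                   /* executes on the device */
      ierr = VecDestroy(&g);CHKERRQ(ierr);
      ierr = DMDestroy(&da);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }

The same types can also be requested from the command line with -dm_vec_type
cuda and -dm_mat_type aijcusparse when DMSetFromOptions() is called.)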
>>>>
>>>> Thanks once more,
>>>>
>>>>
>>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>