<div dir="ltr"><div><div><div>Hi Dominic,<br><br></div>We use external libraries such as MAGMA and cuSPARSE. It looks like they use the runtime API, as you mentioned above. At the moment, the conflict is between the two instances of PETSc that we run (one each for real and complex). We are planning to write some code in CUDA and will use the driver API if need be.<br>
<br></div>Does moving to the driver API look like something that can be included in PETSc 3.5?<br><br></div>Harshad<br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Jan 20, 2014 at 11:27 AM, Dominic Meiser <span dir="ltr"><<a href="mailto:dmeiser@txcorp.com" target="_blank">dmeiser@txcorp.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Jed, Harshad,<br>
<br>
A different solution to the problem of PETSc and a user code stepping on each other's toes with cudaSetDevice might be to use the CUDA driver API for device selection rather than the runtime API. If we explicitly managed a PETSc CUDA context using the driver API, we could control which devices are being used without interfering with the mechanisms used by other parts of a client code for CUDA device selection (e.g. cudaSetDevice). PETSc's device management would be completely decoupled from the rest of the application.<br>
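To make the idea concrete, here is a minimal sketch of that pattern using the driver API (this is illustrative only, not actual or proposed PETSc code; all function and variable names are made up, and it assumes the CUDA toolkit is available):<br><br>

```c
/* Sketch: a library keeps a private CUDA context created via the
 * driver API, so it never calls cudaSetDevice and never disturbs the
 * application's runtime-API device selection. Hypothetical names. */
#include <cuda.h>

static CUcontext petsc_ctx = NULL;

/* Create a private context on the chosen device. */
void petsc_cuda_ctx_create(int device_ordinal) {
  CUdevice dev;
  cuInit(0);
  cuDeviceGet(&dev, device_ordinal);
  cuCtxCreate(&petsc_ctx, 0, dev);
  /* Pop it immediately so the application's current context
   * (e.g. one implied by cudaSetDevice) is left untouched. */
  cuCtxPopCurrent(NULL);
}

/* Bracket all library device work with push/pop; the application's
 * context is restored as soon as the library returns. */
void petsc_cuda_ctx_push(void) { cuCtxPushCurrent(petsc_ctx); }
void petsc_cuda_ctx_pop(void)  { cuCtxPopCurrent(NULL); }
```

<br>With this arrangement, cudaSetDevice calls made elsewhere in the application (or by another library instance) only affect the runtime API's current context, not the library's private one.<br>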
<br>
Of course, this approach can be combined with lazy initialization as proposed by Karl: the PETSc CUDA context would be created only when the first device function is called. We would then also get the advantages of lazy initialization mentioned by Karl and Jed (e.g. the ability to run on machines without GPUs, provided no GPU functionality is used).<br>
<br>
Another advantage of a solution using the driver API is that device and context management would be very similar between CUDA and OpenCL backends.<br>
<br>
I realize that this proposal might be impractical as a near-term solution, since it involves a pretty major refactor of the CUDA context infrastructure. Furthermore, as far as I can tell, third-party libraries that we rely on (e.g. CUSP and cuSPARSE) assume the runtime API. Perhaps these difficulties can be overcome?<br>
<br>
A possible near-term solution would be to turn this around and have applications with advanced device selection requirements use the driver API. Harshad, I'm not familiar with your code, but would it be possible for you to use the driver API on your end to avoid conflicts with cudaSetDevice calls inside PETSc?<br>
<br>
Cheers,<br>
Dominic<div class="HOEnZb"><div class="h5"><br>
<br>
On 01/14/2014 09:27 AM, Harshad Sahasrabudhe wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Jed,<br>
<br>
Some time back, we talked about an interface that could handle other libraries calling cudaSetDevice simultaneously with PETSc. In our case, for example, two different instances of PETSc call cudaSetDevice.<br>
<br>
>Sure, but how will we actually share the device between libraries? What<br>
>if the other library was not PETSc, but something else, and they also<br>
>called cudaSetDevice, but with a different default mapping strategy?<br>
<br>
>We need an interface that handles this case.<br>
<br>
Do we already have any solution for this? If not, can we start looking at this case?<br>
<br>
Thanks,<br>
Harshad<br>
</blockquote>
<br>
<br></div></div><span class="HOEnZb"><font color="#888888">
-- <br>
Dominic Meiser<br>
Tech-X Corporation<br>
5621 Arapahoe Avenue<br>
Boulder, CO 80303<br>
USA<br>
Telephone: 303-996-2036<br>
Fax: 303-448-7756<br>
<a href="http://www.txcorp.com" target="_blank">www.txcorp.com</a><br>
<br>
</font></span></blockquote></div><br></div>