   800k is a pretty small problem for GPUs.

   We would need to see the runs, with output from -ksp_view -log_view, to judge whether the timing results are reasonable.

On Apr 12, 2024, at 1:48 PM, Ng, Cho-Kuen <cho@slac.stanford.edu> wrote:

I performed tests comparing KSP with and without the CUDA backend on NERSC's Perlmutter.
For a finite element solve with 800k degrees of freedom, the best times obtained using MPI and MPI+GPU were

o MPI - 128 MPI tasks, 27 s

o MPI+GPU - 4 MPI tasks, 4 GPUs, 32 s

Is that the performance one would expect using the hybrid mode of computation? The attached image shows the scaling on a single node.

Thanks,
Cho

________________________________
From: Ng, Cho-Kuen <cho@slac.stanford.edu>
Sent: Saturday, August 12, 2023 8:08 AM
To: Jacob Faibussowitsch <jacob.fai@gmail.com>
Cc: Barry Smith <bsmith@petsc.dev>; petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Using PETSc GPU backend

Thanks Jacob.

________________________________
From: Jacob Faibussowitsch <jacob.fai@gmail.com>
Sent: Saturday, August 12, 2023 5:02 AM
To: Ng, Cho-Kuen <cho@slac.stanford.edu>
Cc: Barry Smith <bsmith@petsc.dev>; petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Using PETSc GPU backend

> Can petsc show the number of GPUs used?

-device_view

Best regards,

Jacob Faibussowitsch
(Jacob Fai - booss - oh - vitch)

> On Aug 12, 2023, at 00:53, Ng, Cho-Kuen via petsc-users <petsc-users@mcs.anl.gov> wrote:
>
> Barry,
>
> I tried again today on Perlmutter, and running on multiple GPU nodes worked; likely I had messed something up the other day. I was also able to run multiple MPI tasks on a GPU using NVIDIA MPS.
> The petsc output shows the number of MPI tasks:
>
> KSP Object: 32 MPI processes
>
> Can petsc show the number of GPUs used?
>
> Thanks,
> Cho
>
> From: Barry Smith <bsmith@petsc.dev>
> Sent: Wednesday, August 9, 2023 4:09 PM
> To: Ng, Cho-Kuen <cho@slac.stanford.edu>
> Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
> Subject: Re: [petsc-users] Using PETSc GPU backend
>
>    We would need more information about "hanging". Do PETSc examples and tiny problems "hang" on multiple nodes? If you run with -info, what are the last messages printed? Can you run with a debugger to see where it is "hanging"?
>
>> On Aug 9, 2023, at 5:59 PM, Ng, Cho-Kuen <cho@slac.stanford.edu> wrote:
>>
>> Barry and Matt,
>>
>> Thanks for your help. Now I can use the petsc GPU backend on Perlmutter: 1 node, 4 MPI tasks, and 4 GPUs. However, I ran into problems with multiple nodes: 2 nodes, 8 MPI tasks, and 8 GPUs. The run hung on KSPSolve.
>> How can I fix this?
>>
>> Best,
>> Cho
>>
>> From: Barry Smith <bsmith@petsc.dev>
>> Sent: Monday, July 17, 2023 6:58 AM
>> To: Ng, Cho-Kuen <cho@slac.stanford.edu>
>> Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
>> Subject: Re: [petsc-users] Using PETSc GPU backend
>>
>>    The examples that use DM, in particular DMDA, all trivially support using the GPU with -dm_mat_type aijcusparse -dm_vec_type cuda.
>>
>>> On Jul 17, 2023, at 1:45 AM, Ng, Cho-Kuen <cho@slac.stanford.edu> wrote:
>>>
>>> Barry,
>>>
>>> Thank you so much for the clarification.
>>>
>>> I see that ex104.c and ex300.c use MatXAIJSetPreallocation().
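[A minimal sketch of what Barry means by DM-created objects; the grid sizes and stencil here are made-up placeholders, not taken from ex104.c or ex300.c. Run it with -dm_mat_type aijcusparse -dm_vec_type cuda.]

```c
/* Sketch: with a DMDA, the Mat/Vec types are chosen at runtime via
 * -dm_mat_type aijcusparse -dm_vec_type cuda. Sizes are placeholders. */
#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM  da;
  Mat A;
  Vec x;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                         DMDA_STENCIL_STAR, 128, 128, PETSC_DECIDE, PETSC_DECIDE,
                         1, 1, NULL, NULL, &da));
  PetscCall(DMSetFromOptions(da));         /* applies -dm_mat_type / -dm_vec_type */
  PetscCall(DMSetUp(da));
  PetscCall(DMCreateMatrix(da, &A));       /* aijcusparse if requested */
  PetscCall(DMCreateGlobalVector(da, &x)); /* cuda Vec if requested */
  /* ... assemble and solve here ... */
  PetscCall(MatDestroy(&A));
  PetscCall(VecDestroy(&x));
  PetscCall(DMDestroy(&da));
  PetscCall(PetscFinalize());
  return 0;
}
```
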
>>> Are there other tutorials available?
>>>
>>> Cho
>>>
>>> From: Barry Smith <bsmith@petsc.dev>
>>> Sent: Saturday, July 15, 2023 8:36 AM
>>> To: Ng, Cho-Kuen <cho@slac.stanford.edu>
>>> Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
>>> Subject: Re: [petsc-users] Using PETSc GPU backend
>>>
>>> Cho,
>>>
>>> We currently have a crappy API for turning on GPU support, and our documentation is misleading in places.
>>>
>>> People constantly say "to use GPUs with PETSc you only need to use -mat_type aijcusparse (for example)". This is incorrect.
>>>
>>> This does not work with code that uses the convenience Mat constructors such as MatCreateAIJ(), MatCreateAIJWithArrays(), etc. It only works if you use the constructor approach of MatCreate(), MatSetSizes(), MatSetFromOptions(), MatXXXSetPreallocation(), ...
>>> Similarly, you need to use VecCreate(), VecSetSizes(), VecSetFromOptions() and -vec_type cuda.
>>>
>>> If you use DM to create the matrices and vectors, then you can use -dm_mat_type aijcusparse -dm_vec_type cuda.
>>>
>>> Sorry for the confusion.
>>>
>>> Barry
>>>
>>>> On Jul 15, 2023, at 8:03 AM, Matthew Knepley <knepley@gmail.com> wrote:
>>>>
>>>> On Sat, Jul 15, 2023 at 1:44 AM Ng, Cho-Kuen <cho@slac.stanford.edu> wrote:
>>>> Matt,
>>>>
>>>> After inserting 2 lines in the code:
>>>>
>>>> ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
>>>> ierr = MatSetFromOptions(A);CHKERRQ(ierr);
>>>> ierr = MatCreateAIJ(PETSC_COMM_WORLD,mlocal,mlocal,m,n,d_nz,PETSC_NULL,o_nz,PETSC_NULL,&A);CHKERRQ(ierr);
>>>>
>>>> "There are no unused options." However, there is no improvement in the GPU performance.
>>>>
>>>> 1. MatCreateAIJ() sets the type, and in fact it overwrites the Mat you created in steps 1 and 2. This is detailed in the manual.
>>>>
>>>> 2. You should replace MatCreateAIJ() with MatSetSizes() before MatSetFromOptions().
>>>>
>>>> Thanks,
>>>>
>>>>    Matt
>>>>
>>>> Thanks,
>>>> Cho
>>>>
>>>> From: Matthew Knepley <knepley@gmail.com>
>>>> Sent: Friday, July 14, 2023 5:57 PM
>>>> To: Ng, Cho-Kuen <cho@slac.stanford.edu>
>>>> Cc: Barry Smith <bsmith@petsc.dev>; Mark Adams <mfadams@lbl.gov>; petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
>>>> Subject: Re: [petsc-users] Using PETSc GPU backend
>>>>
>>>> On Fri, Jul 14, 2023 at 7:57 PM Ng, Cho-Kuen <cho@slac.stanford.edu> wrote:
>>>> I managed to pass the following options to PETSc using a GPU node on Perlmutter.
>>>>
>>>> -mat_type aijcusparse -vec_type cuda -log_view -options_left
>>>>
>>>> Below is a summary of the test using 4 MPI tasks and 1 GPU per task.
>>>>
>>>> o #PETSc Option Table entries:
>>>> -log_view
>>>> -mat_type aijcusparse
>>>> -options_left
>>>> -vec_type cuda
>>>> #End of PETSc Option Table entries
>>>> WARNING! There are options you set that were not used!
>>>> WARNING! could be spelling mistake, etc!
>>>> There is one unused database option. It is:
>>>> Option left: name:-mat_type value: aijcusparse
>>>>
>>>> The -mat_type option has not been used.
>>>> In the application code, we use
>>>>
>>>> ierr = MatCreateAIJ(PETSC_COMM_WORLD,mlocal,mlocal,m,n,d_nz,PETSC_NULL,o_nz,PETSC_NULL,&A);CHKERRQ(ierr);
>>>>
>>>> If you create the Mat this way, then you need MatSetFromOptions() in order to set the type from the command line.
>>>>
>>>> Thanks,
>>>>
>>>>    Matt
>>>>
>>>> o The percent flops on the GPU for KSPSolve is 17%.
>>>>
>>>> In comparison with a CPU run using 16 MPI tasks, the GPU run is an order of magnitude slower. How can I improve the GPU performance?
>>>>
>>>> Thanks,
>>>> Cho
>>>>
>>>> From: Ng, Cho-Kuen <cho@slac.stanford.edu>
>>>> Sent: Friday, June 30, 2023 7:57 AM
>>>> To: Barry Smith <bsmith@petsc.dev>; Mark Adams <mfadams@lbl.gov>
>>>> Cc: Matthew Knepley <knepley@gmail.com>; petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
>>>> Subject: Re: [petsc-users] Using PETSc GPU backend
>>>>
>>>> Barry, Mark and Matt,
>>>>
>>>> Thank you all for the suggestions.
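[Pulling Matt's and Barry's advice together, a hedged sketch of the options-friendly construction; mlocal, m, n, d_nz, o_nz stand in for the application's own sizes, and error handling is abbreviated.]

```c
#include <petscmat.h>

/* Replace MatCreateAIJ() with the Create/SetSizes/SetFromOptions/
 * Preallocation sequence so that -mat_type aijcusparse takes effect. */
PetscErrorCode BuildSystem(PetscInt mlocal, PetscInt m, PetscInt n,
                           PetscInt d_nz, PetscInt o_nz, Mat *A, Vec *b)
{
  PetscFunctionBeginUser;
  PetscCall(MatCreate(PETSC_COMM_WORLD, A));
  PetscCall(MatSetSizes(*A, mlocal, mlocal, m, n));
  PetscCall(MatSetFromOptions(*A)); /* honors -mat_type from the command line */
  /* Both preallocation calls are safe; the one matching the type is used. */
  PetscCall(MatSeqAIJSetPreallocation(*A, d_nz, NULL));
  PetscCall(MatMPIAIJSetPreallocation(*A, d_nz, NULL, o_nz, NULL));

  /* Same pattern for vectors, so -vec_type cuda is honored. */
  PetscCall(VecCreate(PETSC_COMM_WORLD, b));
  PetscCall(VecSetSizes(*b, mlocal, m));
  PetscCall(VecSetFromOptions(*b));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```
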
>>>> I will modify the code so we can pass runtime options.
>>>>
>>>> Cho
>>>>
>>>> From: Barry Smith <bsmith@petsc.dev>
>>>> Sent: Friday, June 30, 2023 7:01 AM
>>>> To: Mark Adams <mfadams@lbl.gov>
>>>> Cc: Matthew Knepley <knepley@gmail.com>; Ng, Cho-Kuen <cho@slac.stanford.edu>; petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
>>>> Subject: Re: [petsc-users] Using PETSc GPU backend
>>>>
>>>>    Note that options like -mat_type aijcusparse -vec_type cuda only work if the program is set up to allow runtime swapping of matrix and vector types. If you have a call to MatCreateMPIAIJ() or another specific constructor, these options do nothing; but because Mark had you use -options_left, the program will tell you at the end that it did not use the option, so you will know.
>>>>
>>>>> On Jun 30, 2023, at 9:30 AM, Mark Adams <mfadams@lbl.gov> wrote:
>>>>>
>>>>> PetscCall(PetscInitialize(&argc, &argv, NULL, help)); gives us the args, and you run:
>>>>>
>>>>> a.out -mat_type aijcusparse -vec_type cuda -log_view -options_left
>>>>>
>>>>> Mark
>>>>>
>>>>> On Fri, Jun 30, 2023 at 6:16 AM Matthew Knepley <knepley@gmail.com> wrote:
>>>>> On Fri, Jun 30, 2023 at 1:13 AM Ng, Cho-Kuen via petsc-users <petsc-users@mcs.anl.gov> wrote:
>>>>> Mark,
>>>>>
>>>>> The application code reads in parameters from an input file, where we can put the PETSc runtime options. Then we pass the options to PetscInitialize(...). Does that sound right?
>>>>>
>>>>> PETSc will read command-line arguments automatically in PetscInitialize() unless you shut it off.
>>>>>
>>>>> Thanks,
>>>>>
>>>>>    Matt
>>>>>
>>>>> Cho
>>>>>
>>>>> From: Ng, Cho-Kuen <cho@slac.stanford.edu>
>>>>> Sent: Thursday, June 29, 2023 8:32 PM
>>>>> To: Mark Adams <mfadams@lbl.gov>
>>>>> Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend
>>>>>
>>>>> Mark,
>>>>>
>>>>> Thanks for the information. How do I put the runtime options for the executable, say, a.out, which does not have the provision to append arguments?
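[One possibility for an executable that cannot take extra command-line arguments, sketched below: the options-file argument of PetscInitialize(), or inserting an options string read from the application's own input file. The file name petsc.opts is hypothetical.]

```c
#include <petscsys.h>

int main(int argc, char **argv)
{
  /* Third argument: an options file read at startup, e.g. containing
   *   -mat_type aijcusparse
   *   -vec_type cuda                                                 */
  PetscCall(PetscInitialize(&argc, &argv, "petsc.opts", NULL));

  /* Alternatively, options read from the application's input file can
   * be inserted programmatically before any objects are created:     */
  PetscCall(PetscOptionsInsertString(NULL, "-log_view -options_left"));

  /* ... create Mat/Vec with *SetFromOptions() so the types take effect ... */
  PetscCall(PetscFinalize());
  return 0;
}
```
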
>>>>> Do I need to change the C++ main to read in the options?
>>>>>
>>>>> Cho
>>>>>
>>>>> From: Mark Adams <mfadams@lbl.gov>
>>>>> Sent: Thursday, June 29, 2023 5:55 PM
>>>>> To: Ng, Cho-Kuen <cho@slac.stanford.edu>
>>>>> Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend
>>>>>
>>>>> Run with options: -mat_type aijcusparse -vec_type cuda -log_view -options_left
>>>>>
>>>>> The last column of the performance data (from -log_view) will be the percent flops on the GPU. Check that it is > 0.
>>>>>
>>>>> The end of the output will list the options that were used and the options that were _not_ used (if any). Check that there are no options left.
>>>>>
>>>>> Mark
>>>>>
>>>>> On Thu, Jun 29, 2023 at 7:50 PM Ng, Cho-Kuen via petsc-users <petsc-users@mcs.anl.gov> wrote:
>>>>> I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and loaded it with "spack load petsc/fwge6pf". Then I compiled the application code (purely CPU code), linking to the petsc package, hoping to get a performance improvement from the petsc GPU backend. However, the timing was the same using the same number of MPI tasks with and without GPU accelerators.
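[Mark's check, written out as a sketch; the launcher layout and the executable name a.out are placeholders, and on Perlmutter such a run would typically go through srun.]

```shell
# Hypothetical 4-task, 1-GPU-per-task run with the GPU types requested.
srun -n 4 --gpus-per-task=1 ./a.out \
    -mat_type aijcusparse -vec_type cuda -log_view -options_left
# Then check: the last column of the -log_view table is the percent of
# flops on the GPU (should be > 0), and -options_left should report
# that no options were left unused.
```
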
Have I missed something in the process, for example, setting up PETSc options at runtime to use the GPU backend?<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> Thanks,<br>>>>>> Cho<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> --<span class="Apple-converted-space"> </span><br>>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>>>>>> -- Norbert Wiener<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>><span class="Apple-converted-space"> </span><a href="https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fMikK_wvRIVkv5jLV6EHt_rPhWLibqlxAAYjRVMbAEGOUp417LWCH59TvzCtcD3j4dOd4xR_tUy2MRnqU1N7kew$">https://www.cse.buffalo.edu/~knepley/</a><br>>>>><span class="Apple-converted-space"> </span><br>>>>><span class="Apple-converted-space"> </span><br>>>>><span class="Apple-converted-space"> </span><br>>>>> --<span class="Apple-converted-space"> </span><br>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>>>>> -- Norbert Wiener<br>>>>><span class="Apple-converted-space"> </span><br>>>>><span class="Apple-converted-space"> </span><a href="https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fMikK_wvRIVkv5jLV6EHt_rPhWLibqlxAAYjRVMbAEGOUp417LWCH59TvzCtcD3j4dOd4xR_tUy2MRnqU1N7kew$">https://www.cse.buffalo.edu/~knepley/</a><br>>>>><span class="Apple-converted-space"> </span><br>>>>><span class="Apple-converted-space"> </span><br>>>>> --<span class="Apple-converted-space"> </span><br>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>>>>> -- Norbert Wiener<br>>>>><span 
class="Apple-converted-space"> </span><br>>>>><span class="Apple-converted-space"> </span><a href="https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fMikK_wvRIVkv5jLV6EHt_rPhWLibqlxAAYjRVMbAEGOUp417LWCH59TvzCtcD3j4dOd4xR_tUy2MRnqU1N7kew$">https://www.cse.buffalo.edu/~knepley/</a><br>>><span class="Apple-converted-space"> </span><br>>><span class="Apple-converted-space"> </span><br>>><span class="Apple-converted-space"> </span><br>>> From: Barry Smith <<a href="mailto:bsmith@petsc.dev">bsmith@petsc.dev</a>><br>>> Sent: Monday, July 17, 2023 6:58 AM<br>>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu">cho@slac.stanford.edu</a>><br>>> Cc:<span class="Apple-converted-space"> </span><a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a><span class="Apple-converted-space"> </span><<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>> <span class="Apple-converted-space"> </span><br>>> The examples that use DM, in particular DMDA all trivially support using the GPU with -dm_mat_type aijcusparse -dm_vec_type cuda<br>>><span class="Apple-converted-space"> </span><br>>><span class="Apple-converted-space"> </span><br>>><span class="Apple-converted-space"> </span><br>>>> On Jul 17, 2023, at 1:45 AM, Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu">cho@slac.stanford.edu</a>> wrote:<br>>>><span class="Apple-converted-space"> </span><br>>>> Barry,<br>>>><span class="Apple-converted-space"> </span><br>>>> Thank you so much for the clarification.<span class="Apple-converted-space"> </span><br>>>><span class="Apple-converted-space"> </span><br>>>> I see that ex104.c and ex300.c use MatXAIJSetPreallocation(). 
Are there other tutorials available?<br>>>><span class="Apple-converted-space"> </span><br>>>> Cho<br>>>> From: Barry Smith <<a href="mailto:bsmith@petsc.dev">bsmith@petsc.dev</a>><br>>>> Sent: Saturday, July 15, 2023 8:36 AM<br>>>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu">cho@slac.stanford.edu</a>><br>>>> Cc:<span class="Apple-converted-space"> </span><a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a><span class="Apple-converted-space"> </span><<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>> <span class="Apple-converted-space"> </span><br>>>> Cho,<br>>>><span class="Apple-converted-space"> </span><br>>>> We currently have a crappy API for turning on GPU support, and our documentation is misleading in places.<br>>>><span class="Apple-converted-space"> </span><br>>>> People constantly say "to use GPU's with PETSc you only need to use -mat_type aijcusparse (for example)" This is incorrect.<br>>>><span class="Apple-converted-space"> </span><br>>>> This does not work with code that uses the convenience Mat constructors such as MatCreateAIJ(), MatCreateAIJWithArrays etc. It only works if you use the constructor approach of MatCreate(), MatSetSizes(), MatSetFromOptions(), MatXXXSetPreallocation(). ... 
Similarly, you need to use VecCreate(), VecSetSizes(), VecSetFromOptions() and -vec_type cuda<br>>>><span class="Apple-converted-space"> </span><br>>>> If you use DM to create the matrices and vectors, then you can use -dm_mat_type aijcusparse -dm_vec_type cuda<br>>>><span class="Apple-converted-space"> </span><br>>>> Sorry for the confusion.<br>>>><span class="Apple-converted-space"> </span><br>>>> Barry<br>>>><span class="Apple-converted-space"> </span><br>>>><span class="Apple-converted-space"> </span><br>>>><span class="Apple-converted-space"> </span><br>>>><span class="Apple-converted-space"> </span><br>>>>> On Jul 15, 2023, at 8:03 AM, Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br>>>>><span class="Apple-converted-space"> </span><br>>>>> On Sat, Jul 15, 2023 at 1:44 AM Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu">cho@slac.stanford.edu</a>> wrote:<br>>>>> Matt,<br>>>>><span class="Apple-converted-space"> </span><br>>>>> After inserting 2 lines in the code:<br>>>>><span class="Apple-converted-space"> </span><br>>>>> ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); <br>>>>> ierr = MatSetFromOptions(A);CHKERRQ(ierr);<br>>>>> ierr = MatCreateAIJ(PETSC_COMM_WORLD,mlocal,mlocal,m,n,<br>>>>> d_nz,PETSC_NULL,o_nz,PETSC_NULL,&A);CHKERRQ(ierr);<br>>>>><span class="Apple-converted-space"> </span><br>>>>> "There are no unused options." However, there is no improvement on the GPU performance.<br>>>>><span class="Apple-converted-space"> </span><br>>>>> 1. MatCreateAIJ() sets the type, and in fact it overwrites the Mat you created in steps 1 and 2. This is detailed in the manual.<br>>>>><span class="Apple-converted-space"> </span><br>>>>> 2. 
You should replace MatCreateAIJ() with MatSetSizes() before MatSetFromOptions().<br>>>>><span class="Apple-converted-space"> </span><br>>>>> Thanks,<br>>>>><span class="Apple-converted-space"> </span><br>>>>> Matt<br>>>>> Thanks,<br>>>>> Cho<br>>>>><span class="Apple-converted-space"> </span><br>>>>> From: Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>><br>>>>> Sent: Friday, July 14, 2023 5:57 PM<br>>>>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu">cho@slac.stanford.edu</a>><br>>>>> Cc: Barry Smith <<a href="mailto:bsmith@petsc.dev">bsmith@petsc.dev</a>>; Mark Adams <<a href="mailto:mfadams@lbl.gov">mfadams@lbl.gov</a>>;<span class="Apple-converted-space"> </span><a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a><span class="Apple-converted-space"> </span><<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>> On Fri, Jul 14, 2023 at 7:57 PM Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu">cho@slac.stanford.edu</a>> wrote:<br>>>>> I managed to pass the following options to PETSc using a GPU node on Perlmutter.<br>>>>><span class="Apple-converted-space"> </span><br>>>>> -mat_type aijcusparse -vec_type cuda -log_view -options_left<br>>>>><span class="Apple-converted-space"> </span><br>>>>> Below is a summary of the test using 4 MPI tasks and 1 GPU per task.<br>>>>><span class="Apple-converted-space"> </span><br>>>>> o #PETSc Option Table entries:<br>>>>> -log_view<br>>>>> -mat_type aijcusparse<br>>>>> -options_left<br>>>>> -vec_type cuda<br>>>>> #End of PETSc Option Table entries<br>>>>> WARNING! There are options you set that were not used!<br>>>>> WARNING! could be spelling mistake, etc!<br>>>>> There is one unused database option. It is:<br>>>>> Option left: name:-mat_type value: aijcusparse<br>>>>><span class="Apple-converted-space"> </span><br>>>>> The -mat_type option has not been used. 
In the application code, we use<br>>>>><span class="Apple-converted-space"> </span><br>>>>> ierr = MatCreateAIJ(PETSC_COMM_WORLD,mlocal,mlocal,m,n,<br>>>>> d_nz,PETSC_NULL,o_nz,PETSC_NULL,&A);CHKERRQ(ierr);<br>>>>><span class="Apple-converted-space"> </span><br>>>>><span class="Apple-converted-space"> </span><br>>>>> If you create the Mat this way, then you need MatSetFromOptions() in order to set the type from the command line.<br>>>>><span class="Apple-converted-space"> </span><br>>>>> Thanks,<br>>>>><span class="Apple-converted-space"> </span><br>>>>> Matt<br>>>>> o The percent flops on the GPU for KSPSolve is 17%.<br>>>>><span class="Apple-converted-space"> </span><br>>>>> In comparison with a CPU run using 16 MPI tasks, the GPU run is an order of magnitude slower. How can I improve the GPU performance?<br>>>>><span class="Apple-converted-space"> </span><br>>>>> Thanks,<br>>>>> Cho<br>>>>> From: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu">cho@slac.stanford.edu</a>><br>>>>> Sent: Friday, June 30, 2023 7:57 AM<br>>>>> To: Barry Smith <<a href="mailto:bsmith@petsc.dev">bsmith@petsc.dev</a>>; Mark Adams <<a href="mailto:mfadams@lbl.gov">mfadams@lbl.gov</a>><br>>>>> Cc: Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>>;<span class="Apple-converted-space"> </span><a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a><span class="Apple-converted-space"> </span><<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>> Barry, Mark and Matt,<br>>>>><span class="Apple-converted-space"> </span><br>>>>> Thank you all for the suggestions. 
I will modify the code so we can pass runtime options.<br>>>>><span class="Apple-converted-space"> </span><br>>>>> Cho<br>>>>> From: Barry Smith <<a href="mailto:bsmith@petsc.dev">bsmith@petsc.dev</a>><br>>>>> Sent: Friday, June 30, 2023 7:01 AM<br>>>>> To: Mark Adams <<a href="mailto:mfadams@lbl.gov">mfadams@lbl.gov</a>><br>>>>> Cc: Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>>; Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu">cho@slac.stanford.edu</a>>;<span class="Apple-converted-space"> </span><a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a><span class="Apple-converted-space"> </span><<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>> <span class="Apple-converted-space"> </span><br>>>>> Note that options like -mat_type aijcusparse -vec_type cuda only work if the program is set up to allow runtime swapping of matrix and vector types. If you have a call to MatCreateMPIAIJ() or other specific types, then these options do nothing; however, because Mark had you use -options_left, the program will tell you at the end that it did not use the option, so you will know.<br>>>>><span class="Apple-converted-space"> </span><br>>>>>> On Jun 30, 2023, at 9:30 AM, Mark Adams <<a href="mailto:mfadams@lbl.gov">mfadams@lbl.gov</a>> wrote:<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> PetscCall(PetscInitialize(&argc, &argv, NULL, help)); gives us the args and you run:<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> a.out -mat_type aijcusparse -vec_type cuda -log_view -options_left<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> Mark<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> On Fri, Jun 30, 2023 at 6:16 AM Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br>>>>>> On Fri, Jun 30, 2023 at 1:13 AM Ng, Cho-Kuen via petsc-users <<a 
href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>> wrote:<br>>>>>> Mark,<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> The application code reads in parameters from an input file, where we can put the PETSc runtime options. Then we pass the options to PetscInitialize(...). Does that sound right?<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> PETSc will read command line arguments automatically in PetscInitialize() unless you shut it off.<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> Thanks,<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> Matt<br>>>>>> Cho<br>>>>>> From: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu">cho@slac.stanford.edu</a>><br>>>>>> Sent: Thursday, June 29, 2023 8:32 PM<br>>>>>> To: Mark Adams <<a href="mailto:mfadams@lbl.gov">mfadams@lbl.gov</a>><br>>>>>> Cc:<span class="Apple-converted-space"> </span><a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a><span class="Apple-converted-space"> </span><<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>>> Mark,<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> Thanks for the information. How do I put the runtime options for the executable, say, a.out, which does not have the provision to append arguments? 
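[Editor's note] A minimal sketch of the options-file route discussed above: the third argument of PetscInitialize() names a file of runtime options that PETSc reads as if they had been given on the command line. The file name "petsc.opts" below is a hypothetical example, not from the original messages:

```c
/* Sketch: load PETSc runtime options from a file when arguments
   cannot be appended to the executable. A hypothetical file
   "petsc.opts" would contain lines such as:
     -mat_type aijcusparse
     -vec_type cuda
     -log_view
     -options_left                                                */
#include <petscsys.h>

static char help[] = "Example: runtime options read from a file.\n";

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  /* Third argument: name of an options file (may not exist; NULL
     means command line only). */
  ierr = PetscInitialize(&argc, &argv, "petsc.opts", help);
  if (ierr) return ierr;
  /* ... create Mat/Vec objects here, calling MatSetFromOptions()
     and VecSetFromOptions() so the file's options take effect ... */
  ierr = PetscFinalize();
  return ierr;
}
```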
Do I need to change the C++ main to read in the options?<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> Cho<br>>>>>> From: Mark Adams <<a href="mailto:mfadams@lbl.gov">mfadams@lbl.gov</a>><br>>>>>> Sent: Thursday, June 29, 2023 5:55 PM<br>>>>>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu">cho@slac.stanford.edu</a>><br>>>>>> Cc:<span class="Apple-converted-space"> </span><a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a><span class="Apple-converted-space"> </span><<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>>> Run with options: -mat_type aijcusparse -vec_type cuda -log_view -options_left<br>>>>>> The last column of the performance data (from -log_view) will be the percent flops on the GPU. Check that that is > 0.<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> The end of the output will list the options that were used and options that were _not_ used (if any). Check that there are no options left.<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> Mark<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> On Thu, Jun 29, 2023 at 7:50 PM Ng, Cho-Kuen via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>> wrote:<br>>>>>> I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and used it by "spack load petsc/fwge6pf". Then I compiled the application code (purely CPU code) linking to the petsc package, hoping that I can get performance improvement using the petsc GPU backend. However, the timing was the same using the same number of MPI tasks with and without GPU accelerators. 
Have I missed something in the process, for example, setting up PETSc options at runtime to use the GPU backend?<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> Thanks,<br>>>>>> Cho<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>><span class="Apple-converted-space"> </span><br>>>>>> --<span class="Apple-converted-space"> </span><br>>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>>>>>> -- Norbert Wiener<br>>>>>><span class="Apple-converted-space"> </span><br>>>>>><span class="Apple-converted-space"> </span><a href="https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fMikK_wvRIVkv5jLV6EHt_rPhWLibqlxAAYjRVMbAEGOUp417LWCH59TvzCtcD3j4dOd4xR_tUy2MRnqU1N7kew$">https://www.cse.buffalo.edu/~knepley/</a><br>>>>><span class="Apple-converted-space"> </span><br>>>>><span class="Apple-converted-space"> </span><br>>>>><span class="Apple-converted-space"> </span><br>>>>> --<span class="Apple-converted-space"> </span><br>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>>>>> -- Norbert Wiener<br>>>>><span class="Apple-converted-space"> </span><br>>>>><span class="Apple-converted-space"> </span><a href="https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fMikK_wvRIVkv5jLV6EHt_rPhWLibqlxAAYjRVMbAEGOUp417LWCH59TvzCtcD3j4dOd4xR_tUy2MRnqU1N7kew$">https://www.cse.buffalo.edu/~knepley/</a><br>>>>><span class="Apple-converted-space"> </span><br>>>>><span class="Apple-converted-space"> </span><br>>>>> --<span class="Apple-converted-space"> </span><br>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>>>>> -- Norbert Wiener<br>>>>><span 
class="Apple-converted-space"> </span><br>>>>><span class="Apple-converted-space"> </span><a href="https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fMikK_wvRIVkv5jLV6EHt_rPhWLibqlxAAYjRVMbAEGOUp417LWCH59TvzCtcD3j4dOd4xR_tUy2MRnqU1N7kew$">https://www.cse.buffalo.edu/~knepley/</a><br><br><br></div></span></font></div></div></div></blockquote></div><br></div></body></html>