<div dir="ltr">As Barry said, this is a bit small but the performance looks reasonable.<div>The solver does very badly, mathematically.</div><div>I would try hypre to get another data point.</div><div>You could also try 'cg' to check that the pipelined version is not a problem.</div><div>Mark</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Apr 12, 2024 at 3:54 PM Barry Smith <<a href="mailto:bsmith@petsc.dev">bsmith@petsc.dev</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="msg-8746536082970255201">
<div style="font-size:1px;color:rgb(255,255,255);line-height:1px;height:0px;max-height:0px;opacity:0;overflow:hidden;display:none">
800k is a pretty small problem for GPUs. We would need to see the runs with output from -ksp_view -log_view to see if the timing results are reasonable. On Apr 12, 2024, at 1: 48 PM, Ng, Cho-Kuen <cho@ slac. stanford. edu> wrote: I performed
</div>
<div style="font-size:1px;color:rgb(255,255,255);line-height:1px;height:0px;max-height:0px;opacity:0;overflow:hidden;display:none">ZjQcmQRYFpfptBannerStart</div>
<u></u>
<div dir="ltr" id="m_-8746536082970255201pfptBannervpqp5ou" style="display:block;text-align:left;margin:16px 0px;padding:8px 16px;border-radius:4px;min-width:200px;background-color:rgb(208,216,220);border-top:4px solid rgb(144,164,174)">
<div id="m_-8746536082970255201pfptBannervpqp5ou" style="float:left;display:block;margin:0px 0px 1px;max-width:600px">
<div id="m_-8746536082970255201pfptBannervpqp5ou" style="display:block;background-color:rgb(208,216,220);color:rgb(0,0,0);font-family:Arial,sans-serif;font-weight:bold;font-size:14px;line-height:18px">
This Message Is From an External Sender
</div>
<div id="m_-8746536082970255201pfptBannervpqp5ou" style="font-weight:normal;display:block;background-color:rgb(208,216,220);color:rgb(0,0,0);font-family:Arial,sans-serif;font-size:12px;line-height:18px;margin-top:2px">
This message came from outside your organization.
</div>
</div>
<div style="height:0px;clear:both;display:block;line-height:0;font-size:0.01px"> </div>
</div>
<u></u>
<div style="font-size:1px;color:rgb(255,255,255);line-height:1px;height:0px;max-height:0px;opacity:0;overflow:hidden;display:none">ZjQcmQRYFpfptBannerEnd</div>
<div><div><br></div> 800k is a pretty small problem for GPUs. <div><br></div><div> We would need to see the runs with output from -ksp_view -log_view to see if the timing results are reasonable.<br id="m_-8746536082970255201lineBreakAtBeginningOfMessage"><div><br><blockquote type="cite"><div>On Apr 12, 2024, at 1:48 PM, Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>> wrote:</div><br><div><div style="font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;font-family:Calibri,Helvetica,sans-serif;font-size:12pt">I performed tests on comparison using KSP with and without cuda backend on NERSC's Perlmutter. For a finite element solve with 800k degrees of freedom, the best times obtained using MPI and MPI+GPU were</div><div style="font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;font-family:Calibri,Helvetica,sans-serif;font-size:12pt"><br></div><div style="font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;font-family:Calibri,Helvetica,sans-serif;font-size:12pt">o MPI - 128 MPI tasks, 27 s</div><div style="font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;font-family:Calibri,Helvetica,sans-serif;font-size:12pt"><br></div><div 
style="font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;font-family:Calibri,Helvetica,sans-serif;font-size:12pt">o MPI+GPU - 4 MPI tasks, 4 GPU's, 32 s</div><div style="font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;font-family:Calibri,Helvetica,sans-serif;font-size:12pt"><br></div><div style="font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;font-family:Calibri,Helvetica,sans-serif;font-size:12pt">Is that the performance one would expect using the hybrid mode of computation. Attached image shows the scaling on a single node.</div><div style="font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;font-family:Calibri,Helvetica,sans-serif;font-size:12pt"><br></div><div style="font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;font-family:Calibri,Helvetica,sans-serif;font-size:12pt">Thanks,</div><div style="font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;font-family:Calibri,Helvetica,sans-serif;font-size:12pt">Cho</div><div id="m_-8746536082970255201appendonsend" 
style="font-family:Helvetica;font-size:18px;font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"></div><hr style="font-family:Helvetica;font-size:18px;font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;display:inline-block;width:1120.12px"><span style="font-family:Helvetica;font-size:18px;font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none;float:none;display:inline"></span><div id="m_-8746536082970255201divRplyFwdMsg" dir="ltr" style="font-family:Helvetica;font-size:18px;font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><font face="Calibri, sans-serif" style="font-size:11pt"><b>From:</b><span> </span>Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br><b>Sent:</b><span> </span>Saturday, August 12, 2023 8:08 AM<br><b>To:</b><span> </span>Jacob Faibussowitsch <<a href="mailto:jacob.fai@gmail.com" target="_blank">jacob.fai@gmail.com</a>><br><b>Cc:</b><span> </span>Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>>; petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br><b>Subject:</b><span> </span>Re: [petsc-users] Using PETSc GPU backend</font><div> </div></div><div dir="ltr" 
style="font-family:Helvetica;font-size:18px;font-style:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><div style="font-family:Calibri,Helvetica,sans-serif;font-size:12pt">Thanks Jacob.<br></div><div id="m_-8746536082970255201x_appendonsend"></div><hr style="display:inline-block;width:1120.12px"><div id="m_-8746536082970255201x_divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt"><b>From:</b><span> </span>Jacob Faibussowitsch <<a href="mailto:jacob.fai@gmail.com" target="_blank">jacob.fai@gmail.com</a>><br><b>Sent:</b><span> </span>Saturday, August 12, 2023 5:02 AM<br><b>To:</b><span> </span>Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br><b>Cc:</b><span> </span>Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>>; petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br><b>Subject:</b><span> </span>Re: [petsc-users] Using PETSc GPU backend</font><div> </div></div><div><font size="2"><span style="font-size:11pt"><div>> Can petsc show the number of GPUs used?<br><br>-device_view<br><br>Best regards,<br><br>Jacob Faibussowitsch<br>(Jacob Fai - booss - oh - vitch)<br><br>> On Aug 12, 2023, at 00:53, Ng, Cho-Kuen via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>><span> </span><br>> Barry,<br>><span> </span><br>> I tried again today on Perlmutter and running on multiple GPU nodes worked. Likely, I had messed up something the other day. Also, I was able to have multiple MPI tasks on a GPU using Nvidia MPS. 
The petsc output shows the number of MPI tasks:<br>><span> </span><br>> KSP Object: 32 MPI processes<br>><span> </span><br>> Can petsc show the number of GPUs used?<br>><span> </span><br>> Thanks,<br>> Cho<br>><span> </span><br>> From: Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>><br>> Sent: Wednesday, August 9, 2023 4:09 PM<br>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>> Cc:<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>> <span> </span><br>> We would need more information about "hanging". Do PETSc examples and tiny problems "hang" on multiple nodes? If you run with -info what are the last messages printed? Can you run with a debugger to see where it is "hanging"?<br>><span> </span><br>><span> </span><br>><span> </span><br>>> On Aug 9, 2023, at 5:59 PM, Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>> wrote:<br>>><span> </span><br>>> Barry and Matt,<br>>><span> </span><br>>> Thanks for your help. Now I can use petsc GPU backend on Perlmutter: 1 node, 4 MPI tasks and 4 GPUs. However, I ran into problems with multiple nodes: 2 nodes, 8 MPI tasks and 8 GPUs. The run hung on KSPSolve. 
How can I fix this?<br>>><span> </span><br>>> Best,<br>>> Cho<br>>><span> </span><br>>> From: Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>><br>>> Sent: Monday, July 17, 2023 6:58 AM<br>>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>>> Cc:<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>> <span> </span><br>>> The examples that use DM, in particular DMDA all trivially support using the GPU with -dm_mat_type aijcusparse -dm_vec_type cuda<br>>><span> </span><br>>><span> </span><br>>><span> </span><br>>>> On Jul 17, 2023, at 1:45 AM, Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>> wrote:<br>>>><span> </span><br>>>> Barry,<br>>>><span> </span><br>>>> Thank you so much for the clarification.<span> </span><br>>>><span> </span><br>>>> I see that ex104.c and ex300.c use MatXAIJSetPreallocation(). 
Are there other tutorials available?<br>>>><span> </span><br>>>> Cho<br>>>> From: Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>><br>>>> Sent: Saturday, July 15, 2023 8:36 AM<br>>>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>>>> Cc:<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>> <span> </span><br>>>> Cho,<br>>>><span> </span><br>>>> We currently have a crappy API for turning on GPU support, and our documentation is misleading in places.<br>>>><span> </span><br>>>> People constantly say "to use GPU's with PETSc you only need to use -mat_type aijcusparse (for example)" This is incorrect.<br>>>><span> </span><br>>>> This does not work with code that uses the convenience Mat constructors such as MatCreateAIJ(), MatCreateAIJWithArrays etc. It only works if you use the constructor approach of MatCreate(), MatSetSizes(), MatSetFromOptions(), MatXXXSetPreallocation(). ... 
Similarly you need to use VecCreate(), VecSetSizes(), VecSetFromOptions() and -vec_type cuda<br>>>><span> </span><br>>>> If you use DM to create the matrices and vectors then you can use -dm_mat_type aijcusparse -dm_vec_type cuda<br>>>><span> </span><br>>>> Sorry for the confusion.<br>>>><span> </span><br>>>> Barry<br>>>><span> </span><br>>>><span> </span><br>>>><span> </span><br>>>><span> </span><br>>>>> On Jul 15, 2023, at 8:03 AM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>>>>><span> </span><br>>>>> On Sat, Jul 15, 2023 at 1:44 AM Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>> wrote:<br>>>>> Matt,<br>>>>><span> </span><br>>>>> After inserting 2 lines in the code:<br>>>>><span> </span><br>>>>> ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); <br>>>>> ierr = MatSetFromOptions(A);CHKERRQ(ierr);<br>>>>> ierr = MatCreateAIJ(PETSC_COMM_WORLD,mlocal,mlocal,m,n,<br>>>>> d_nz,PETSC_NULL,o_nz,PETSC_NULL,&A);;CHKERRQ(ierr);<br>>>>><span> </span><br>>>>> "There are no unused options." However, there is no improvement on the GPU performance.<br>>>>><span> </span><br>>>>> 1. MatCreateAIJ() sets the type, and in fact it overwrites the Mat you created in steps 1 and 2. This is detailed in the manual.<br>>>>><span> </span><br>>>>> 2. 
You should replace MatCreateAIJ() with MatSetSizes() before MatSetFromOptions().<br>>>>><span> </span><br>>>>> Thanks,<br>>>>><span> </span><br>>>>> Matt<br>>>>> Thanks,<br>>>>> Cho<br>>>>><span> </span><br>>>>> From: Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>><br>>>>> Sent: Friday, July 14, 2023 5:57 PM<br>>>>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>>>>> Cc: Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>>; Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>>;<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>> On Fri, Jul 14, 2023 at 7:57 PM Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>> wrote:<br>>>>> I managed to pass the following options to PETSc using a GPU node on Perlmutter.<br>>>>><span> </span><br>>>>> -mat_type aijcusparse -vec_type cuda -log_view -options_left<br>>>>><span> </span><br>>>>> Below is a summary of the test using 4 MPI tasks and 1 GPU per task.<br>>>>><span> </span><br>>>>> o #PETSc Option Table entries:<br>>>>> -log_view<br>>>>> -mat_type aijcusparse<br>>>>> -options_left<br>>>>> -vec_type cuda<br>>>>> #End of PETSc Option Table entries<br>>>>> WARNING! There are options you set that were not used!<br>>>>> WARNING! could be spelling mistake, etc!<br>>>>> There is one unused database option. It is:<br>>>>> Option left: name:-mat_type value: aijcusparse<br>>>>><span> </span><br>>>>> The -mat_type option has not been used. 
In the application code, we use<br>>>>><span> </span><br>>>>> ierr = MatCreateAIJ(PETSC_COMM_WORLD,mlocal,mlocal,m,n,<br>>>>> d_nz,PETSC_NULL,o_nz,PETSC_NULL,&A);;CHKERRQ(ierr);<br>>>>><span> </span><br>>>>><span> </span><br>>>>> If you create the Mat this way, then you need MatSetFromOptions() in order to set the type from the command line.<br>>>>><span> </span><br>>>>> Thanks,<br>>>>><span> </span><br>>>>> Matt<br>>>>> o The percent flops on the GPU for KSPSolve is 17%.<br>>>>><span> </span><br>>>>> In comparison with a CPU run using 16 MPI tasks, the GPU run is an order of magnitude slower. How can I improve the GPU performance?<br>>>>><span> </span><br>>>>> Thanks,<br>>>>> Cho<br>>>>> From: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>>>>> Sent: Friday, June 30, 2023 7:57 AM<br>>>>> To: Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>>; Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>><br>>>>> Cc: Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>>;<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>> Barry, Mark and Matt,<br>>>>><span> </span><br>>>>> Thank you all for the suggestions. 
I will modify the code so we can pass runtime options.<br>>>>><span> </span><br>>>>> Cho<br>>>>> From: Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>><br>>>>> Sent: Friday, June 30, 2023 7:01 AM<br>>>>> To: Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>><br>>>>> Cc: Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>>; Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>>;<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>> <span> </span><br>>>>> Note that options like -mat_type aijcusparse -vec_type cuda only work if the program is set up to allow runtime swapping of matrix and vector types. If you have a call to MatCreateMPIAIJ() or other specific types then these options do nothing, but because Mark had you use -options_left the program will tell you at the end that it did not use the option so you will know.<br>>>>><span> </span><br>>>>>> On Jun 30, 2023, at 9:30 AM, Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> wrote:<br>>>>>><span> </span><br>>>>>> PetscCall(PetscInitialize(&argc, &argv, NULL, help)); gives us the args and you run:<br>>>>>><span> </span><br>>>>>> a.out -mat_type aijcusparse -vec_type cuda -log_view -options_left<br>>>>>><span> </span><br>>>>>> Mark<br>>>>>><span> </span><br>>>>>> On Fri, Jun 30, 2023 at 6:16 AM Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>>>>>> On Fri, Jun 30, 2023 at 1:13 AM Ng, Cho-Kuen via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>>>>>> Mark,<br>>>>>><span> </span><br>>>>>> The application code reads in 
parameters from an input file, where we can put the PETSc runtime options. Then we pass the options to PetscInitialize(...). Does that sound right?<br>>>>>><span> </span><br>>>>>> PETSc will read command line arguments automatically in PetscInitialize() unless you shut it off.<br>>>>>><span> </span><br>>>>>> Thanks,<br>>>>>><span> </span><br>>>>>> Matt<br>>>>>> Cho<br>>>>>> From: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>>>>>> Sent: Thursday, June 29, 2023 8:32 PM<br>>>>>> To: Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>><br>>>>>> Cc:<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>>> Mark,<br>>>>>><span> </span><br>>>>>> Thanks for the information. How do I put the runtime options for the executable, say, a.out, which does not have the provision to append arguments? Do I need to change the C++ main to read in the options?<br>>>>>><span> </span><br>>>>>> Cho<br>>>>>> From: Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>><br>>>>>> Sent: Thursday, June 29, 2023 5:55 PM<br>>>>>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>>>>>> Cc:<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>>> Run with options: -mat_type aijcusparse -vec_type cuda -log_view -options_left<br>>>>>> The last column of the performance data (from -log_view) will be the percent flops on the GPU. 
Check that that is > 0.<br>>>>>><span> </span><br>>>>>> The end of the output will list the options that were used and options that were _not_ used (if any). Check that there are no options left.<br>>>>>><span> </span><br>>>>>> Mark<br>>>>>><span> </span><br>>>>>> On Thu, Jun 29, 2023 at 7:50 PM Ng, Cho-Kuen via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>>>>>> I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and used it by "spack load petsc/fwge6pf". Then I compiled the application code (purely CPU code) linking to the petsc package, hoping that I can get performance improvement using the petsc GPU backend. However, the timing was the same using the same number of MPI tasks with and without GPU accelerators. Have I missed something in the process, for example, setting up PETSc options at runtime to use the GPU backend?<br>>>>>><span> </span><br>>>>>> Thanks,<br>>>>>> Cho<br>>>>>><span> </span><br>>>>>><span> </span><br>>>>>> --<span> </span><br>>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>>>>>> -- Norbert Wiener<br>>>>>><span> </span><br>>>>>><span> </span><a href="https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fMikK_wvRIVkv5jLV6EHt_rPhWLibqlxAAYjRVMbAEGOUp417LWCH59TvzCtcD3j4dOd4xR_tUy2MRnqU1N7kew$" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>>>>><span> </span><br>>>>><span> </span><br>>>>><span> </span><br>>>>> --<span> </span><br>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>>>>> -- Norbert Wiener<br>>>>><span> </span><br>>>>><span> </span><a 
href="https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fMikK_wvRIVkv5jLV6EHt_rPhWLibqlxAAYjRVMbAEGOUp417LWCH59TvzCtcD3j4dOd4xR_tUy2MRnqU1N7kew$" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>>>>><span> </span><br>>>>><span> </span><br>>>>> --<span> </span><br>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>>>>> -- Norbert Wiener<br>>>>><span> </span><br>>>>><span> </span><a href="https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fMikK_wvRIVkv5jLV6EHt_rPhWLibqlxAAYjRVMbAEGOUp417LWCH59TvzCtcD3j4dOd4xR_tUy2MRnqU1N7kew$" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>>><span> </span><br>>><span> </span><br>>><span> </span><br>>> From: Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>><br>>> Sent: Monday, July 17, 2023 6:58 AM<br>>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>>> Cc:<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>> <span> </span><br>>> The examples that use DM, in particular DMDA all trivially support using the GPU with -dm_mat_type aijcusparse -dm_vec_type cuda<br>>><span> </span><br>>><span> </span><br>>><span> </span><br>>>> On Jul 17, 2023, at 1:45 AM, Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>> wrote:<br>>>><span> </span><br>>>> Barry,<br>>>><span> </span><br>>>> Thank you so much for the clarification.<span> </span><br>>>><span> </span><br>>>> I see that ex104.c and ex300.c use MatXAIJSetPreallocation(). 
Are there other tutorials available?<br>>>><span> </span><br>>>> Cho<br>>>> From: Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>><br>>>> Sent: Saturday, July 15, 2023 8:36 AM<br>>>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>>>> Cc:<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>> <span> </span><br>>>> Cho,<br>>>><span> </span><br>>>> We currently have a crappy API for turning on GPU support, and our documentation is misleading in places.<br>>>><span> </span><br>>>> People constantly say "to use GPU's with PETSc you only need to use -mat_type aijcusparse (for example)" This is incorrect.<br>>>><span> </span><br>>>> This does not work with code that uses the convenience Mat constructors such as MatCreateAIJ(), MatCreateAIJWithArrays etc. It only works if you use the constructor approach of MatCreate(), MatSetSizes(), MatSetFromOptions(), MatXXXSetPreallocation(). ... 
Similarly you need to use VecCreate(), VecSetSizes(), VecSetFromOptions() and -vec_type cuda<br>>>><span> </span><br>>>> If you use DM to create the matrices and vectors then you can use -dm_mat_type aijcusparse -dm_vec_type cuda<br>>>><span> </span><br>>>> Sorry for the confusion.<br>>>><span> </span><br>>>> Barry<br>>>><span> </span><br>>>><span> </span><br>>>><span> </span><br>>>><span> </span><br>>>>> On Jul 15, 2023, at 8:03 AM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>>>>><span> </span><br>>>>> On Sat, Jul 15, 2023 at 1:44 AM Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>> wrote:<br>>>>> Matt,<br>>>>><span> </span><br>>>>> After inserting 2 lines in the code:<br>>>>><span> </span><br>>>>> ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); <br>>>>> ierr = MatSetFromOptions(A);CHKERRQ(ierr);<br>>>>> ierr = MatCreateAIJ(PETSC_COMM_WORLD,mlocal,mlocal,m,n,<br>>>>> d_nz,PETSC_NULL,o_nz,PETSC_NULL,&A);;CHKERRQ(ierr);<br>>>>><span> </span><br>>>>> "There are no unused options." However, there is no improvement on the GPU performance.<br>>>>><span> </span><br>>>>> 1. MatCreateAIJ() sets the type, and in fact it overwrites the Mat you created in steps 1 and 2. This is detailed in the manual.<br>>>>><span> </span><br>>>>> 2. 
You should replace MatCreateAIJ() with MatSetSizes() before MatSetFromOptions().<br>>>>><span> </span><br>>>>> Thanks,<br>>>>><span> </span><br>>>>> Matt<br>>>>> Thanks,<br>>>>> Cho<br>>>>><span> </span><br>>>>> From: Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>><br>>>>> Sent: Friday, July 14, 2023 5:57 PM<br>>>>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>>>>> Cc: Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>>; Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>>;<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>> On Fri, Jul 14, 2023 at 7:57 PM Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>> wrote:<br>>>>> I managed to pass the following options to PETSc using a GPU node on Perlmutter.<br>>>>><span> </span><br>>>>> -mat_type aijcusparse -vec_type cuda -log_view -options_left<br>>>>><span> </span><br>>>>> Below is a summary of the test using 4 MPI tasks and 1 GPU per task.<br>>>>><span> </span><br>>>>> o #PETSc Option Table entries:<br>>>>> -log_view<br>>>>> -mat_type aijcusparse<br>>>>> -options_left<br>>>>> -vec_type cuda<br>>>>> #End of PETSc Option Table entries<br>>>>> WARNING! There are options you set that were not used!<br>>>>> WARNING! could be spelling mistake, etc!<br>>>>> There is one unused database option. It is:<br>>>>> Option left: name:-mat_type value: aijcusparse<br>>>>><span> </span><br>>>>> The -mat_type option has not been used. 
In the application code, we use<br>>>>><span> </span><br>>>>> ierr = MatCreateAIJ(PETSC_COMM_WORLD,mlocal,mlocal,m,n,<br>>>>> d_nz,PETSC_NULL,o_nz,PETSC_NULL,&A);CHKERRQ(ierr);<br>>>>><span> </span><br>>>>><span> </span><br>>>>> If you create the Mat this way, then you need MatSetFromOptions() in order to set the type from the command line.<br>>>>><span> </span><br>>>>> Thanks,<br>>>>><span> </span><br>>>>> Matt<br>>>>> o The percent flops on the GPU for KSPSolve is 17%.<br>>>>><span> </span><br>>>>> In comparison with a CPU run using 16 MPI tasks, the GPU run is an order of magnitude slower. How can I improve the GPU performance?<br>>>>><span> </span><br>>>>> Thanks,<br>>>>> Cho<br>>>>> From: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>>>>> Sent: Friday, June 30, 2023 7:57 AM<br>>>>> To: Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>>; Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>><br>>>>> Cc: Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>>;<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>> Barry, Mark and Matt,<br>>>>><span> </span><br>>>>> Thank you all for the suggestions. 
I will modify the code so we can pass runtime options.<br>>>>><span> </span><br>>>>> Cho<br>>>>> From: Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>><br>>>>> Sent: Friday, June 30, 2023 7:01 AM<br>>>>> To: Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>><br>>>>> Cc: Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>>; Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>>;<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>> <span> </span><br>>>>> Note that options like -mat_type aijcusparse -vec_type cuda only work if the program is set up to allow runtime swapping of matrix and vector types. If you have a call to MatCreateMPIAIJ() or another specific type, then these options do nothing; but because Mark had you use -options_left, the program will tell you at the end that it did not use the option, so you will know.<br>>>>><span> </span><br>>>>>> On Jun 30, 2023, at 9:30 AM, Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> wrote:<br>>>>>><span> </span><br>>>>>> PetscCall(PetscInitialize(&argc, &argv, NULL, help)); gives us the args and you run:<br>>>>>><span> </span><br>>>>>> a.out -mat_type aijcusparse -vec_type cuda -log_view -options_left<br>>>>>><span> </span><br>>>>>> Mark<br>>>>>><span> </span><br>>>>>> On Fri, Jun 30, 2023 at 6:16 AM Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br>>>>>> On Fri, Jun 30, 2023 at 1:13 AM Ng, Cho-Kuen via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>>>>>> Mark,<br>>>>>><span> </span><br>>>>>> The application code reads in 
parameters from an input file, where we can put the PETSc runtime options. Then we pass the options to PetscInitialize(...). Does that sound right?<br>>>>>><span> </span><br>>>>>> PETSc will read command line arguments automatically in PetscInitialize() unless you shut it off.<br>>>>>><span> </span><br>>>>>> Thanks,<br>>>>>><span> </span><br>>>>>> Matt<br>>>>>> Cho<br>>>>>> From: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>>>>>> Sent: Thursday, June 29, 2023 8:32 PM<br>>>>>> To: Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>><br>>>>>> Cc:<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>>> Mark,<br>>>>>><span> </span><br>>>>>> Thanks for the information. How do I put the runtime options for the executable, say, a.out, which does not have the provision to append arguments? Do I need to change the C++ main to read in the options?<br>>>>>><span> </span><br>>>>>> Cho<br>>>>>> From: Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>><br>>>>>> Sent: Thursday, June 29, 2023 5:55 PM<br>>>>>> To: Ng, Cho-Kuen <<a href="mailto:cho@slac.stanford.edu" target="_blank">cho@slac.stanford.edu</a>><br>>>>>> Cc:<span> </span><a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a><span> </span><<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>>>>>> Subject: Re: [petsc-users] Using PETSc GPU backend<br>>>>>> Run with options: -mat_type aijcusparse -vec_type cuda -log_view -options_left<br>>>>>> The last column of the performance data (from -log_view) will be the percent flops on the GPU. 
Check that that is > 0.<br>>>>>><span> </span><br>>>>>> The end of the output will list the options that were used and options that were _not_ used (if any). Check that there are no options left.<br>>>>>><span> </span><br>>>>>> Mark<br>>>>>><span> </span><br>>>>>> On Thu, Jun 29, 2023 at 7:50 PM Ng, Cho-Kuen via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>>>>>> I installed PETSc on Perlmutter using "spack install petsc+cuda+zoltan" and used it by "spack load petsc/fwge6pf". Then I compiled the application code (purely CPU code) linking to the petsc package, hoping that I can get performance improvement using the petsc GPU backend. However, the timing was the same using the same number of MPI tasks with and without GPU accelerators. Have I missed something in the process, for example, setting up PETSc options at runtime to use the GPU backend?<br>>>>>><span> </span><br>>>>>> Thanks,<br>>>>>> Cho<br>>>>>><span> </span><br>>>>>><span> </span><br>>>>>> --<span> </span><br>>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>>>>>> -- Norbert Wiener<br>>>>>><span> </span><br>>>>>><span> </span><a href="https://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>>>>><span> </span><br>>>>><span> </span><br>>>>><span> </span><br>>>>> --<span> </span><br>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>>>>> -- Norbert Wiener<br>>>>><span> </span><br>>>>><span> </span><a href="https://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>>>>><span> </span><br>>>>><span> </span><br>>>>> --<span> </span><br>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>>>>> -- Norbert Wiener<br>>>>><span> </span><br>>>>><span> </span><a href="https://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br><br><br></div></span></font></div></div></div></blockquote></div><br></div></div></div></blockquote></div>
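Putting Barry's and Matt's advice together, a minimal sketch of the options-friendly construction might look like the following. This is not the application's actual code: the sizes mlocal, d_nz, and o_nz are placeholders for the application's values, and error handling follows the ierr/CHKERRQ style of the snippets quoted above.

```c
#include <petsc.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x;
  PetscErrorCode ierr;
  PetscInt       mlocal = 10;          /* placeholder local size */
  PetscInt       d_nz = 7, o_nz = 3;   /* placeholder preallocation */

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  /* Create the Mat without hard-wiring a type, so -mat_type aijcusparse
     (or any other type) can be selected at runtime. Per Barry's note,
     the order is MatCreate(), MatSetSizes(), MatSetFromOptions(),
     then preallocation -- not MatCreateAIJ(). */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, mlocal, mlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  /* These replace the d_nz/o_nz arguments of MatCreateAIJ(); each call
     is a no-op for matrix types it does not apply to. */
  ierr = MatSeqAIJSetPreallocation(A, d_nz, NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A, d_nz, NULL, o_nz, NULL);CHKERRQ(ierr);

  /* Same pattern for vectors, so -vec_type cuda takes effect. */
  ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
  ierr = VecSetSizes(x, mlocal, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = VecSetFromOptions(x);CHKERRQ(ierr);

  /* ... assemble A, fill x, solve ... */

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```

Run with, e.g., `./a.out -mat_type aijcusparse -vec_type cuda -log_view -options_left`; with this construction, -mat_type should no longer be reported as an unused option.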