<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Oct 12, 2021 at 1:45 PM Chang Liu <<a href="mailto:cliu@pppl.gov">cliu@pppl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Mark,<br>
>
> The options I use are:
>
> -pc_type bjacobi -pc_bjacobi_blocks 16 -ksp_type fgmres -mat_type
> aijcusparse *-sub_pc_factor_mat_solver_type cusparse* -sub_ksp_type
> preonly *-sub_pc_type lu* -ksp_max_it 2000 -ksp_rtol 1.e-300 -ksp_atol 1.e-300
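
To unpack those flags (my reading; the PETSc manual pages are the authority here): this runs flexible GMRES outside a 16-block block-Jacobi preconditioner, with each block solved directly by LU through the cusparse solver package:

  -pc_type bjacobi                          # block-Jacobi preconditioner
  -pc_bjacobi_blocks 16                     # split into 16 blocks in total
  -ksp_type fgmres                          # flexible GMRES as the outer Krylov solver
  -mat_type aijcusparse                     # AIJ matrix held in cuSparse format on the GPU
  -sub_ksp_type preonly                     # per block: apply the preconditioner once, no inner iteration
  -sub_pc_type lu                           # per block: direct LU solve
  -sub_pc_factor_mat_solver_type cusparse   # per block: use the cusparse solver package for the factorization
  -ksp_max_it 2000 -ksp_rtol 1.e-300 -ksp_atol 1.e-300   # outer iteration cap; tolerances effectively disabled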

Note: if you use -log_view, the last column (the rows are methods, like
MatFactorNumeric) gives the percentage of the work done on the GPU.

Junchao: This implies that we have a cuSparse LU factorization. Is that
correct? (I don't think we do.)

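As a concrete example of such a run (a sketch only: the executable name is a placeholder, and the solver flags are Chang's from above):

  mpiexec -n 16 ./your_app -mat_type aijcusparse -pc_type bjacobi \
      -pc_bjacobi_blocks 16 -ksp_type fgmres -sub_ksp_type preonly \
      -sub_pc_type lu -sub_pc_factor_mat_solver_type cusparse -log_view
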
> I think this one does both the factorization and the solve on the GPU.
>
> You can check the runex72_aijcusparse.sh file in the PETSc install
> directory and try it yourself (this one does only the LU factorization,
> without an iterative solve).
>
> Chang
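
(The generated test scripts live under the configured arch tree, though the exact layout varies across PETSc versions; something like this should locate the script Chang mentions:

  find $PETSC_DIR/$PETSC_ARCH -name runex72_aijcusparse.sh
)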
>
> On 10/12/21 1:17 PM, Mark Adams wrote:
> >
> >
> > On Tue, Oct 12, 2021 at 11:19 AM Chang Liu <cliu@pppl.gov> wrote:
> >
> >     Hi Junchao,
> >
> >     No, I only need it to be transferred within a node. I use the
> >     block-Jacobi method and GMRES to solve the sparse matrix, so each
> >     direct solver will take care of a sub-block of the whole matrix.
> >     In this way, I can use one GPU to solve one sub-block, which is
> >     stored within one node.
> >
> >     It was stated in the documentation that the cusparse solver is
> >     slow. However, in my test using ex72.c, the cusparse solver is
> >     faster than mumps or superlu_dist on CPUs.
> >
> >
> > Are we talking about the factorization, the solve, or both?
> >
> > We do not have an interface to cuSparse's LU factorization (I just
> > learned that it exists a few weeks ago).
> > Perhaps your fast "cusparse solver" is '-pc_type lu -mat_type
> > aijcusparse'? This would be the CPU factorization, which is the
> > dominant cost.
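
To spell out the two configurations being contrasted (both option sets are legal PETSc flags; whether the cusparse package really factors on the GPU is exactly the open question above):

  # factorization on the CPU; only the triangular solves use the GPU:
  -pc_type lu -mat_type aijcusparse
  # explicitly requesting the cusparse solver package instead:
  -pc_type lu -mat_type aijcusparse -pc_factor_mat_solver_type cusparse
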
> >
> >
> >     Chang
> >
> >     On 10/12/21 10:24 AM, Junchao Zhang wrote:
> >      > Hi, Chang,
> >      >    For the mumps solver, we usually transfer matrix and vector
> >      > data within a compute node. For the idea you propose, it looks
> >      > like we need to gather data within MPI_COMM_WORLD, right?
> >      >
> >      >    Mark, I remember you said the cusparse solve is slow and you
> >      > would rather do it on the CPU. Is that right?
> >      >
> >      > --Junchao Zhang
> >      >
> >      >
> >      > On Mon, Oct 11, 2021 at 10:25 PM Chang Liu via petsc-users
> >      > <petsc-users@mcs.anl.gov> wrote:
> >      >     Hi,
> >      >
> >      >     Currently, it is possible to use the mumps solver in PETSc
> >      >     with the -mat_mumps_use_omp_threads option, so that multiple
> >      >     MPI processes will transfer the matrix and rhs data to the
> >      >     master rank, and then the master rank will call mumps with
> >      >     OpenMP to solve the matrix.
> >      >
> >      >     I wonder if someone can develop a similar option for the
> >      >     cusparse solver. Right now, this solver does not work with
> >      >     mpiaijcusparse. I think a possible workaround is to transfer
> >      >     all the matrix data to one MPI process, and then upload the
> >      >     data to the GPU to solve. In this way, one can use the
> >      >     cusparse solver for an MPI program.
> >      >
> >      >     Chang
> >      >     --
> >      >     Chang Liu
> >      >     Staff Research Physicist
> >      >     +1 609 243 3438
> >      >     cliu@pppl.gov
> >      >     Princeton Plasma Physics Laboratory
> >      >     100 Stellarator Rd, Princeton NJ 08540, USA
> >      >
> >
> >     --
> >     Chang Liu
> >     Staff Research Physicist
> >     +1 609 243 3438
> >     cliu@pppl.gov
> >     Princeton Plasma Physics Laboratory
> >     100 Stellarator Rd, Princeton NJ 08540, USA
> >
>
> --
> Chang Liu
> Staff Research Physicist
> +1 609 243 3438
> cliu@pppl.gov
> Princeton Plasma Physics Laboratory
> 100 Stellarator Rd, Princeton NJ 08540, USA
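
A usage sketch for the -mat_mumps_use_omp_threads option mentioned at the bottom of the thread (the process and thread counts are illustrative only, and as I understand it this needs a PETSc build configured with OpenMP):

  mpiexec -n 16 ./your_app -pc_type lu -pc_factor_mat_solver_type mumps \
      -mat_mumps_use_omp_threads 4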