<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">
<div class=""><br class=""></div><div class=""> Yes, but the branch can be used to do telescoping inside the bjacobi as needed.<br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On Oct 20, 2021, at 2:59 PM, Junchao Zhang <<a href="mailto:junchao.zhang@gmail.com" class="">junchao.zhang@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class="">The MR <a href="https://gitlab.com/petsc/petsc/-/merge_requests/4471" class="">https://gitlab.com/petsc/petsc/-/merge_requests/4471</a> has not been merged yet.<div class=""><br clear="all" class=""><div class=""><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr" class="">--Junchao Zhang</div></div></div><br class=""></div></div><br class=""><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Oct 20, 2021 at 1:47 PM Chang Liu via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" class="">petsc-users@mcs.anl.gov</a>> wrote:<br class=""></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Barry,<br class="">
<br class="">
Are the fixes merged into master? I was using bjacobi as a <br class="">
preconditioner. Using the latest version of petsc, I found that when calling<br class="">
<br class="">
mpiexec -n 32 --oversubscribe ./ex7 -m 1000 -ksp_view <br class="">
-ksp_monitor_true_residual -ksp_type fgmres -pc_type bjacobi <br class="">
-pc_bjacobi_blocks 4 -sub_ksp_type preonly -sub_pc_type telescope <br class="">
-sub_pc_telescope_reduction_factor 8 -sub_pc_telescope_subcomm_type <br class="">
contiguous -sub_telescope_pc_type lu -sub_telescope_ksp_type preonly <br class="">
-sub_telescope_pc_factor_mat_solver_type mumps -ksp_max_it 2000 <br class="">
-ksp_rtol 1.e-30 -ksp_atol 1.e-30<br class="">
<br class="">
the code calls PCApply_BJacobi_Multiproc. If I instead use<br class="">
<br class="">
mpiexec -n 32 --oversubscribe ./ex7 -m 1000 -ksp_view <br class="">
-ksp_monitor_true_residual -telescope_ksp_monitor_true_residual <br class="">
-ksp_type preonly -pc_type telescope -pc_telescope_reduction_factor 8 <br class="">
-pc_telescope_subcomm_type contiguous -telescope_pc_type bjacobi <br class="">
-telescope_ksp_type fgmres -telescope_pc_bjacobi_blocks 4 <br class="">
-telescope_sub_ksp_type preonly -telescope_sub_pc_type lu <br class="">
-telescope_sub_pc_factor_mat_solver_type mumps -telescope_ksp_max_it <br class="">
2000 -telescope_ksp_rtol 1.e-30 -telescope_ksp_atol 1.e-30<br class="">
<br class="">
the code calls PCApply_BJacobi_Singleblock. You can test it yourself.<br class="">
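For reference, here is a minimal driver sketch of the kind of program these options act on (a rough stand-in I put together for this message, not the actual ex7 source; it assembles a 1-D Laplacian instead of the real operator and assumes a PETSc recent enough to have PetscCall()). All of the bjacobi/telescope nesting comes purely from the runtime options above:<br class="">
<br class="">
#include <petscksp.h>

/* Rough stand-in driver (not the actual ex7): build a matrix, then let all
   solver/preconditioner choices come from the command-line options. */
int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp;
  PetscInt i, n = 100, Istart, Iend;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(PetscOptionsGetInt(NULL, NULL, "-m", &n, NULL));
  /* 1-D Laplacian as a stand-in operator; -mat_type aijcusparse switches
     the storage format at runtime */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));
  PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
  for (i = Istart; i < Iend; i++) {
    if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
    if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));
  /* The nesting (-pc_type bjacobi -sub_pc_type telescope ... or
     -pc_type telescope -telescope_pc_type bjacobi ...) is picked up here */
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPDestroy(&ksp));
  PetscCall(MatDestroy(&A));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(PetscFinalize());
  return 0;
}
<br class="">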
<br class="">
Regards,<br class="">
<br class="">
Chang<br class="">
<br class="">
On 10/20/21 1:14 PM, Barry Smith wrote:<br class="">
> <br class="">
> <br class="">
>> On Oct 20, 2021, at 12:48 PM, Chang Liu <<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>> wrote:<br class="">
>><br class="">
>> Hi Pierre,<br class="">
>><br class="">
>> I have another suggestion for telescope. I have achieved my goal by putting telescope outside bjacobi, but the code still does not work if I use telescope as a pc for the subblocks. I think the reason is that I want to use cusparse as the solver, which can only handle seqaij matrices and not mpiaij.<br class="">
> <br class="">
> <br class="">
> This is supposed to work with the recent fixes. The telescope should produce a seq matrix and, for each solve, automatically map the parallel vector (over the subdomain) down to the one rank with the GPU and solve it there. It is not clear to me where the process is going wrong.<br class="">
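> A rough way to check where this goes wrong, a debugging sketch of my own rather than code from the thread or the MR (it assumes KSPSetUp() has been called so the bjacobi sub-KSPs exist, and that PCTelescopeGetKSP() hands back NULL on the ranks dropped by the reduction):<br class="">
> 
> Mat      Ainner;
> MatType  mtype;
> KSP     *subksp, inner;
> PC       pc, subpc;
> PetscInt nlocal, first;
> 
> PetscCall(KSPSetUp(ksp));                             /* after KSPSetFromOptions()  */
> PetscCall(KSPGetPC(ksp, &pc));                        /* outer PC: bjacobi          */
> PetscCall(PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp));
> PetscCall(KSPGetPC(subksp[0], &subpc));               /* sub PC: telescope          */
> PetscCall(PCTelescopeGetKSP(subpc, &inner));          /* NULL on non-reduced ranks  */
> if (inner) {
>   PetscCall(KSPGetOperators(inner, &Ainner, NULL));
>   PetscCall(MatGetType(Ainner, &mtype));
>   PetscCall(PetscPrintf(PETSC_COMM_SELF, "inner mat type: %s\n", mtype)); /* expect seqaijcusparse */
> }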
> <br class="">
> Barry<br class="">
> <br class="">
> <br class="">
> <br class="">
>> However, the telescope pc can put the matrix onto one mpi rank, making it a seqaij for the factorization stage, but after factorization it gives the data back to the original communicator. This turns the matrix back into mpiaij, and then cusparse cannot solve it.<br class="">
>><br class="">
>> I think a better option is to do the factorization on the CPU with mpiaij, then transform the preconditioner matrix to seqaij and do the matsolve on the GPU. But I am not sure if this can be achieved using telescope.<br class="">
>><br class="">
>> Regards,<br class="">
>><br class="">
>> Chang<br class="">
>><br class="">
>> On 10/15/21 5:29 AM, Pierre Jolivet wrote:<br class="">
>>> Hi Chang,<br class="">
>>> The output you sent with MUMPS looks alright to me, you can see that the MatType is properly set to seqaijcusparse (and not mpiaijcusparse).<br class="">
>>> I don’t know what is wrong with -sub_telescope_pc_factor_mat_solver_type cusparse, I don’t have a PETSc installation for testing this, hopefully Barry or Junchao can confirm this wrong behavior and get this fixed.<br class="">
>>> As for permuting PCTELESCOPE and PCBJACOBI, in your case, the outer PC will be equivalent, yes.<br class="">
>>> However, it would be more efficient to do PCBJACOBI and then PCTELESCOPE.<br class="">
>>> PCBJACOBI prunes the operator by basically removing all coefficients outside of the diagonal blocks.<br class="">
>>> Then, PCTELESCOPE “groups everything together”.<br class="">
>>> If you do it the other way around, PCTELESCOPE will “group everything together” and then PCBJACOBI will prune the operator.<br class="">
>>> So the PCTELESCOPE SetUp will be costly for nothing since some coefficients will be thrown out afterwards in the PCBJACOBI SetUp.<br class="">
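>>> In option form, the two orderings being compared look roughly like this (prefixes as used elsewhere in this thread; the indentation is only there to indicate the nesting):<br class="">
>>> 
>>> # bjacobi outside, telescope inside: prune into blocks first, then gather each block
>>> -pc_type bjacobi -pc_bjacobi_blocks 4
>>>   -sub_pc_type telescope -sub_pc_telescope_reduction_factor 8
>>>     -sub_telescope_pc_type lu
>>> 
>>> # telescope outside, bjacobi inside: gather everything first, then prune
>>> -pc_type telescope -pc_telescope_reduction_factor 8
>>>   -telescope_pc_type bjacobi -telescope_pc_bjacobi_blocks 4
>>>     -telescope_sub_pc_type lu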
>>> I hope I’m clear enough, otherwise I can try to draw some pictures.<br class="">
>>> Thanks,<br class="">
>>> Pierre<br class="">
>>>> On 15 Oct 2021, at 4:39 AM, Chang Liu <<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>> wrote:<br class="">
>>>><br class="">
>>>> Hi Pierre and Barry,<br class="">
>>>><br class="">
>>>> I think maybe I should use telescope outside bjacobi, like this:<br class="">
>>>><br class="">
>>>> mpiexec -n 16 --hostfile hostfile --oversubscribe ./ex7 -m 400 -ksp_view -ksp_monitor_true_residual -pc_type telescope -pc_telescope_reduction_factor 4 -telescope_pc_type bjacobi -telescope_ksp_type fgmres<br class="">
>>>> -telescope_pc_bjacobi_blocks 4 -mat_type aijcusparse -telescope_sub_ksp_type preonly -telescope_sub_pc_type lu -telescope_sub_pc_factor_mat_solver_type cusparse<br class="">
>>>> -ksp_max_it 2000 -ksp_rtol 1.e-20 -ksp_atol 1.e-9<br class="">
>>>><br class="">
>>>> But then I got an error that<br class="">
>>>><br class="">
>>>> [0]PETSC ERROR: MatSolverType cusparse does not support matrix type seqaij<br class="">
>>>><br class="">
>>>> But the mat type should be aijcusparse. I think telescope changes the mat type.<br class="">
>>>><br class="">
>>>> Chang<br class="">
>>>><br class="">
>>>> On 10/14/21 10:11 PM, Chang Liu wrote:<br class="">
>>>>> For comparison, here is the output using mumps instead of cusparse<br class="">
>>>>> $ mpiexec -n 16 --hostfile hostfile --oversubscribe ./ex7 -m 400 -ksp_view -ksp_monitor_true_residual -pc_type bjacobi -pc_bjacobi_blocks 4 -ksp_type fgmres -mat_type aijcusparse -sub_pc_type telescope -sub_ksp_type preonly -sub_telescope_ksp_type preonly -sub_telescope_pc_type lu -sub_telescope_pc_factor_mat_solver_type mumps -sub_pc_telescope_reduction_factor 4 -sub_pc_telescope_subcomm_type contiguous -ksp_max_it 2000 -ksp_rtol 1.e-20 -ksp_atol 1.e-9<br class="">
>>>>> 0 KSP unpreconditioned resid norm 4.014971979977e+01 true resid norm 4.014971979977e+01 ||r(i)||/||b|| 1.000000000000e+00<br class="">
>>>>> 1 KSP unpreconditioned resid norm 2.439995191694e+00 true resid norm 2.439995191694e+00 ||r(i)||/||b|| 6.077240896978e-02<br class="">
>>>>> 2 KSP unpreconditioned resid norm 1.280694102588e+00 true resid norm 1.280694102588e+00 ||r(i)||/||b|| 3.189795866509e-02<br class="">
>>>>> 3 KSP unpreconditioned resid norm 1.041100266810e+00 true resid norm 1.041100266810e+00 ||r(i)||/||b|| 2.593044912896e-02<br class="">
>>>>> 4 KSP unpreconditioned resid norm 7.274347137268e-01 true resid norm 7.274347137268e-01 ||r(i)||/||b|| 1.811805206499e-02<br class="">
>>>>> 5 KSP unpreconditioned resid norm 5.429229329787e-01 true resid norm 5.429229329787e-01 ||r(i)||/||b|| 1.352245882876e-02<br class="">
>>>>> 6 KSP unpreconditioned resid norm 4.332970410353e-01 true resid norm 4.332970410353e-01 ||r(i)||/||b|| 1.079203150598e-02<br class="">
>>>>> 7 KSP unpreconditioned resid norm 3.948206050950e-01 true resid norm 3.948206050950e-01 ||r(i)||/||b|| 9.833707609019e-03<br class="">
>>>>> 8 KSP unpreconditioned resid norm 3.379580577269e-01 true resid norm 3.379580577269e-01 ||r(i)||/||b|| 8.417444988714e-03<br class="">
>>>>> 9 KSP unpreconditioned resid norm 2.875593971410e-01 true resid norm 2.875593971410e-01 ||r(i)||/||b|| 7.162176936105e-03<br class="">
>>>>> 10 KSP unpreconditioned resid norm 2.533983363244e-01 true resid norm 2.533983363244e-01 ||r(i)||/||b|| 6.311335112378e-03<br class="">
>>>>> 11 KSP unpreconditioned resid norm 2.389169921094e-01 true resid norm 2.389169921094e-01 ||r(i)||/||b|| 5.950651543793e-03<br class="">
>>>>> 12 KSP unpreconditioned resid norm 2.118961639089e-01 true resid norm 2.118961639089e-01 ||r(i)||/||b|| 5.277649880637e-03<br class="">
>>>>> 13 KSP unpreconditioned resid norm 1.885892030223e-01 true resid norm 1.885892030223e-01 ||r(i)||/||b|| 4.697148671593e-03<br class="">
>>>>> 14 KSP unpreconditioned resid norm 1.763510666948e-01 true resid norm 1.763510666948e-01 ||r(i)||/||b|| 4.392336175055e-03<br class="">
>>>>> 15 KSP unpreconditioned resid norm 1.638219366731e-01 true resid norm 1.638219366731e-01 ||r(i)||/||b|| 4.080275964317e-03<br class="">
>>>>> 16 KSP unpreconditioned resid norm 1.476792766432e-01 true resid norm 1.476792766432e-01 ||r(i)||/||b|| 3.678214378076e-03<br class="">
>>>>> 17 KSP unpreconditioned resid norm 1.349906937321e-01 true resid norm 1.349906937321e-01 ||r(i)||/||b|| 3.362182710248e-03<br class="">
>>>>> 18 KSP unpreconditioned resid norm 1.289673236836e-01 true resid norm 1.289673236836e-01 ||r(i)||/||b|| 3.212159993314e-03<br class="">
>>>>> 19 KSP unpreconditioned resid norm 1.167505658153e-01 true resid norm 1.167505658153e-01 ||r(i)||/||b|| 2.907879965230e-03<br class="">
>>>>> 20 KSP unpreconditioned resid norm 1.046037988999e-01 true resid norm 1.046037988999e-01 ||r(i)||/||b|| 2.605343185995e-03<br class="">
>>>>> 21 KSP unpreconditioned resid norm 9.832660514331e-02 true resid norm 9.832660514331e-02 ||r(i)||/||b|| 2.448998539309e-03<br class="">
>>>>> 22 KSP unpreconditioned resid norm 8.835618950141e-02 true resid norm 8.835618950142e-02 ||r(i)||/||b|| 2.200667649539e-03<br class="">
>>>>> 23 KSP unpreconditioned resid norm 7.563496650115e-02 true resid norm 7.563496650116e-02 ||r(i)||/||b|| 1.883823022386e-03<br class="">
>>>>> 24 KSP unpreconditioned resid norm 6.651291376834e-02 true resid norm 6.651291376834e-02 ||r(i)||/||b|| 1.656622115921e-03<br class="">
>>>>> 25 KSP unpreconditioned resid norm 5.890393227906e-02 true resid norm 5.890393227906e-02 ||r(i)||/||b|| 1.467106933070e-03<br class="">
>>>>> 26 KSP unpreconditioned resid norm 4.661992782780e-02 true resid norm 4.661992782780e-02 ||r(i)||/||b|| 1.161152009536e-03<br class="">
>>>>> 27 KSP unpreconditioned resid norm 3.690705358716e-02 true resid norm 3.690705358716e-02 ||r(i)||/||b|| 9.192356452602e-04<br class="">
>>>>> 28 KSP unpreconditioned resid norm 3.209680460188e-02 true resid norm 3.209680460188e-02 ||r(i)||/||b|| 7.994278605666e-04<br class="">
>>>>> 29 KSP unpreconditioned resid norm 2.354337626000e-02 true resid norm 2.354337626001e-02 ||r(i)||/||b|| 5.863895533373e-04<br class="">
>>>>> 30 KSP unpreconditioned resid norm 1.701296561785e-02 true resid norm 1.701296561785e-02 ||r(i)||/||b|| 4.237380908932e-04<br class="">
>>>>> 31 KSP unpreconditioned resid norm 1.509942937258e-02 true resid norm 1.509942937258e-02 ||r(i)||/||b|| 3.760780759588e-04<br class="">
>>>>> 32 KSP unpreconditioned resid norm 1.258274688515e-02 true resid norm 1.258274688515e-02 ||r(i)||/||b|| 3.133956338402e-04<br class="">
>>>>> 33 KSP unpreconditioned resid norm 9.805748771638e-03 true resid norm 9.805748771638e-03 ||r(i)||/||b|| 2.442295692359e-04<br class="">
>>>>> 34 KSP unpreconditioned resid norm 8.596552678160e-03 true resid norm 8.596552678160e-03 ||r(i)||/||b|| 2.141123953301e-04<br class="">
>>>>> 35 KSP unpreconditioned resid norm 6.936406707500e-03 true resid norm 6.936406707500e-03 ||r(i)||/||b|| 1.727635147167e-04<br class="">
>>>>> 36 KSP unpreconditioned resid norm 5.533741607932e-03 true resid norm 5.533741607932e-03 ||r(i)||/||b|| 1.378276519869e-04<br class="">
>>>>> 37 KSP unpreconditioned resid norm 4.982347757923e-03 true resid norm 4.982347757923e-03 ||r(i)||/||b|| 1.240942099414e-04<br class="">
>>>>> 38 KSP unpreconditioned resid norm 4.309608348059e-03 true resid norm 4.309608348059e-03 ||r(i)||/||b|| 1.073384414524e-04<br class="">
>>>>> 39 KSP unpreconditioned resid norm 3.729408303186e-03 true resid norm 3.729408303185e-03 ||r(i)||/||b|| 9.288753001974e-05<br class="">
>>>>> 40 KSP unpreconditioned resid norm 3.490003351128e-03 true resid norm 3.490003351128e-03 ||r(i)||/||b|| 8.692472496776e-05<br class="">
>>>>> 41 KSP unpreconditioned resid norm 3.069012426454e-03 true resid norm 3.069012426453e-03 ||r(i)||/||b|| 7.643919912166e-05<br class="">
>>>>> 42 KSP unpreconditioned resid norm 2.772928845284e-03 true resid norm 2.772928845284e-03 ||r(i)||/||b|| 6.906471225983e-05<br class="">
>>>>> 43 KSP unpreconditioned resid norm 2.561454192399e-03 true resid norm 2.561454192398e-03 ||r(i)||/||b|| 6.379756085902e-05<br class="">
>>>>> 44 KSP unpreconditioned resid norm 2.253662762802e-03 true resid norm 2.253662762802e-03 ||r(i)||/||b|| 5.613146926159e-05<br class="">
>>>>> 45 KSP unpreconditioned resid norm 2.086800523919e-03 true resid norm 2.086800523919e-03 ||r(i)||/||b|| 5.197546917701e-05<br class="">
>>>>> 46 KSP unpreconditioned resid norm 1.926028182896e-03 true resid norm 1.926028182896e-03 ||r(i)||/||b|| 4.797114880257e-05<br class="">
>>>>> 47 KSP unpreconditioned resid norm 1.769243808622e-03 true resid norm 1.769243808622e-03 ||r(i)||/||b|| 4.406615581492e-05<br class="">
>>>>> 48 KSP unpreconditioned resid norm 1.656654905964e-03 true resid norm 1.656654905964e-03 ||r(i)||/||b|| 4.126192945371e-05<br class="">
>>>>> 49 KSP unpreconditioned resid norm 1.572052627273e-03 true resid norm 1.572052627273e-03 ||r(i)||/||b|| 3.915475961260e-05<br class="">
>>>>> 50 KSP unpreconditioned resid norm 1.454960682355e-03 true resid norm 1.454960682355e-03 ||r(i)||/||b|| 3.623837699518e-05<br class="">
>>>>> 51 KSP unpreconditioned resid norm 1.375985053014e-03 true resid norm 1.375985053014e-03 ||r(i)||/||b|| 3.427134883820e-05<br class="">
>>>>> 52 KSP unpreconditioned resid norm 1.269325501087e-03 true resid norm 1.269325501087e-03 ||r(i)||/||b|| 3.161480347603e-05<br class="">
>>>>> 53 KSP unpreconditioned resid norm 1.184791772965e-03 true resid norm 1.184791772965e-03 ||r(i)||/||b|| 2.950934100844e-05<br class="">
>>>>> 54 KSP unpreconditioned resid norm 1.064535156080e-03 true resid norm 1.064535156080e-03 ||r(i)||/||b|| 2.651413662135e-05<br class="">
>>>>> 55 KSP unpreconditioned resid norm 9.639036688120e-04 true resid norm 9.639036688117e-04 ||r(i)||/||b|| 2.400773090370e-05<br class="">
>>>>> 56 KSP unpreconditioned resid norm 8.632359780260e-04 true resid norm 8.632359780260e-04 ||r(i)||/||b|| 2.150042347322e-05<br class="">
>>>>> 57 KSP unpreconditioned resid norm 7.613605783850e-04 true resid norm 7.613605783850e-04 ||r(i)||/||b|| 1.896303591113e-05<br class="">
>>>>> 58 KSP unpreconditioned resid norm 6.681073248348e-04 true resid norm 6.681073248349e-04 ||r(i)||/||b|| 1.664039819373e-05<br class="">
>>>>> 59 KSP unpreconditioned resid norm 5.656127908544e-04 true resid norm 5.656127908545e-04 ||r(i)||/||b|| 1.408758999254e-05<br class="">
>>>>> 60 KSP unpreconditioned resid norm 4.850863370767e-04 true resid norm 4.850863370767e-04 ||r(i)||/||b|| 1.208193580169e-05<br class="">
>>>>> 61 KSP unpreconditioned resid norm 4.374055762320e-04 true resid norm 4.374055762316e-04 ||r(i)||/||b|| 1.089436186387e-05<br class="">
>>>>> 62 KSP unpreconditioned resid norm 3.874398257079e-04 true resid norm 3.874398257077e-04 ||r(i)||/||b|| 9.649876204364e-06<br class="">
>>>>> 63 KSP unpreconditioned resid norm 3.364908694427e-04 true resid norm 3.364908694429e-04 ||r(i)||/||b|| 8.380902061609e-06<br class="">
>>>>> 64 KSP unpreconditioned resid norm 2.961034697265e-04 true resid norm 2.961034697268e-04 ||r(i)||/||b|| 7.374982221632e-06<br class="">
>>>>> 65 KSP unpreconditioned resid norm 2.640593092764e-04 true resid norm 2.640593092767e-04 ||r(i)||/||b|| 6.576865557059e-06<br class="">
>>>>> 66 KSP unpreconditioned resid norm 2.423231125743e-04 true resid norm 2.423231125745e-04 ||r(i)||/||b|| 6.035487016671e-06<br class="">
>>>>> 67 KSP unpreconditioned resid norm 2.182349471179e-04 true resid norm 2.182349471179e-04 ||r(i)||/||b|| 5.435528521898e-06<br class="">
>>>>> 68 KSP unpreconditioned resid norm 2.008438265031e-04 true resid norm 2.008438265028e-04 ||r(i)||/||b|| 5.002371809927e-06<br class="">
>>>>> 69 KSP unpreconditioned resid norm 1.838732863386e-04 true resid norm 1.838732863388e-04 ||r(i)||/||b|| 4.579690400226e-06<br class="">
>>>>> 70 KSP unpreconditioned resid norm 1.723786027645e-04 true resid norm 1.723786027645e-04 ||r(i)||/||b|| 4.293394913444e-06<br class="">
>>>>> 71 KSP unpreconditioned resid norm 1.580945192204e-04 true resid norm 1.580945192205e-04 ||r(i)||/||b|| 3.937624471826e-06<br class="">
>>>>> 72 KSP unpreconditioned resid norm 1.476687469671e-04 true resid norm 1.476687469671e-04 ||r(i)||/||b|| 3.677952117812e-06<br class="">
>>>>> 73 KSP unpreconditioned resid norm 1.385018526182e-04 true resid norm 1.385018526184e-04 ||r(i)||/||b|| 3.449634351350e-06<br class="">
>>>>> 74 KSP unpreconditioned resid norm 1.279712893541e-04 true resid norm 1.279712893541e-04 ||r(i)||/||b|| 3.187351991305e-06<br class="">
>>>>> 75 KSP unpreconditioned resid norm 1.202010411772e-04 true resid norm 1.202010411774e-04 ||r(i)||/||b|| 2.993820175504e-06<br class="">
>>>>> 76 KSP unpreconditioned resid norm 1.113459414198e-04 true resid norm 1.113459414200e-04 ||r(i)||/||b|| 2.773268206485e-06<br class="">
>>>>> 77 KSP unpreconditioned resid norm 1.042523036036e-04 true resid norm 1.042523036037e-04 ||r(i)||/||b|| 2.596588572066e-06<br class="">
>>>>> 78 KSP unpreconditioned resid norm 9.565176453232e-05 true resid norm 9.565176453227e-05 ||r(i)||/||b|| 2.382376888539e-06<br class="">
>>>>> 79 KSP unpreconditioned resid norm 8.896901670359e-05 true resid norm 8.896901670365e-05 ||r(i)||/||b|| 2.215931198209e-06<br class="">
>>>>> 80 KSP unpreconditioned resid norm 8.119298425803e-05 true resid norm 8.119298425824e-05 ||r(i)||/||b|| 2.022255314935e-06<br class="">
>>>>> 81 KSP unpreconditioned resid norm 7.544528309154e-05 true resid norm 7.544528309154e-05 ||r(i)||/||b|| 1.879098620558e-06<br class="">
>>>>> 82 KSP unpreconditioned resid norm 6.755385041138e-05 true resid norm 6.755385041176e-05 ||r(i)||/||b|| 1.682548489719e-06<br class="">
>>>>> 83 KSP unpreconditioned resid norm 6.158629300870e-05 true resid norm 6.158629300835e-05 ||r(i)||/||b|| 1.533915885727e-06<br class="">
>>>>> 84 KSP unpreconditioned resid norm 5.358756885754e-05 true resid norm 5.358756885765e-05 ||r(i)||/||b|| 1.334693470462e-06<br class="">
>>>>> 85 KSP unpreconditioned resid norm 4.774852370380e-05 true resid norm 4.774852370387e-05 ||r(i)||/||b|| 1.189261692037e-06<br class="">
>>>>> 86 KSP unpreconditioned resid norm 3.919358737908e-05 true resid norm 3.919358737930e-05 ||r(i)||/||b|| 9.761858258229e-07<br class="">
>>>>> 87 KSP unpreconditioned resid norm 3.434042319950e-05 true resid norm 3.434042319947e-05 ||r(i)||/||b|| 8.553091620745e-07<br class="">
>>>>> 88 KSP unpreconditioned resid norm 2.813699436281e-05 true resid norm 2.813699436302e-05 ||r(i)||/||b|| 7.008017615898e-07<br class="">
>>>>> 89 KSP unpreconditioned resid norm 2.462248069068e-05 true resid norm 2.462248069051e-05 ||r(i)||/||b|| 6.132665635851e-07<br class="">
>>>>> 90 KSP unpreconditioned resid norm 2.040558789626e-05 true resid norm 2.040558789626e-05 ||r(i)||/||b|| 5.082373674841e-07<br class="">
>>>>> 91 KSP unpreconditioned resid norm 1.888523204468e-05 true resid norm 1.888523204470e-05 ||r(i)||/||b|| 4.703702077842e-07<br class="">
>>>>> 92 KSP unpreconditioned resid norm 1.707071292484e-05 true resid norm 1.707071292474e-05 ||r(i)||/||b|| 4.251763900191e-07<br class="">
>>>>> 93 KSP unpreconditioned resid norm 1.498636454665e-05 true resid norm 1.498636454672e-05 ||r(i)||/||b|| 3.732619958859e-07<br class="">
>>>>> 94 KSP unpreconditioned resid norm 1.219393542993e-05 true resid norm 1.219393543006e-05 ||r(i)||/||b|| 3.037115947725e-07<br class="">
>>>>> 95 KSP unpreconditioned resid norm 1.059996963300e-05 true resid norm 1.059996963303e-05 ||r(i)||/||b|| 2.640110487917e-07<br class="">
>>>>> 96 KSP unpreconditioned resid norm 9.099659872548e-06 true resid norm 9.099659873214e-06 ||r(i)||/||b|| 2.266431725699e-07<br class="">
>>>>> 97 KSP unpreconditioned resid norm 8.147347587295e-06 true resid norm 8.147347587584e-06 ||r(i)||/||b|| 2.029241456283e-07<br class="">
>>>>> 98 KSP unpreconditioned resid norm 7.167226146744e-06 true resid norm 7.167226146783e-06 ||r(i)||/||b|| 1.785124823418e-07<br class="">
>>>>> 99 KSP unpreconditioned resid norm 6.552540209538e-06 true resid norm 6.552540209577e-06 ||r(i)||/||b|| 1.632026385802e-07<br class="">
>>>>> 100 KSP unpreconditioned resid norm 5.767783600111e-06 true resid norm 5.767783600320e-06 ||r(i)||/||b|| 1.436568830140e-07<br class="">
>>>>> 101 KSP unpreconditioned resid norm 5.261057430584e-06 true resid norm 5.261057431144e-06 ||r(i)||/||b|| 1.310359688033e-07<br class="">
>>>>> 102 KSP unpreconditioned resid norm 4.715498525786e-06 true resid norm 4.715498525947e-06 ||r(i)||/||b|| 1.174478564100e-07<br class="">
>>>>> 103 KSP unpreconditioned resid norm 4.380052669622e-06 true resid norm 4.380052669825e-06 ||r(i)||/||b|| 1.090929822591e-07<br class="">
>>>>> 104 KSP unpreconditioned resid norm 3.911664470060e-06 true resid norm 3.911664470226e-06 ||r(i)||/||b|| 9.742694319496e-08<br class="">
>>>>> 105 KSP unpreconditioned resid norm 3.652211458315e-06 true resid norm 3.652211458259e-06 ||r(i)||/||b|| 9.096480564430e-08<br class="">
>>>>> 106 KSP unpreconditioned resid norm 3.387532128049e-06 true resid norm 3.387532128358e-06 ||r(i)||/||b|| 8.437249737363e-08<br class="">
>>>>> 107 KSP unpreconditioned resid norm 3.234218880987e-06 true resid norm 3.234218880798e-06 ||r(i)||/||b|| 8.055395895481e-08<br class="">
>>>>> 108 KSP unpreconditioned resid norm 3.016905196388e-06 true resid norm 3.016905196492e-06 ||r(i)||/||b|| 7.514137611763e-08<br class="">
>>>>> 109 KSP unpreconditioned resid norm 2.858246441921e-06 true resid norm 2.858246441975e-06 ||r(i)||/||b|| 7.118969836476e-08<br class="">
>>>>> 110 KSP unpreconditioned resid norm 2.637118810847e-06 true resid norm 2.637118810750e-06 ||r(i)||/||b|| 6.568212241336e-08<br class="">
>>>>> 111 KSP unpreconditioned resid norm 2.494976088717e-06 true resid norm 2.494976088700e-06 ||r(i)||/||b|| 6.214180574966e-08<br class="">
>>>>> 112 KSP unpreconditioned resid norm 2.270639574272e-06 true resid norm 2.270639574200e-06 ||r(i)||/||b|| 5.655430686750e-08<br class="">
>>>>> 113 KSP unpreconditioned resid norm 2.104988663813e-06 true resid norm 2.104988664169e-06 ||r(i)||/||b|| 5.242847707696e-08<br class="">
>>>>> 114 KSP unpreconditioned resid norm 1.889361127301e-06 true resid norm 1.889361127526e-06 ||r(i)||/||b|| 4.705789073868e-08<br class="">
>>>>> 115 KSP unpreconditioned resid norm 1.732367008052e-06 true resid norm 1.732367007971e-06 ||r(i)||/||b|| 4.314767367271e-08<br class="">
>>>>> 116 KSP unpreconditioned resid norm 1.509288268391e-06 true resid norm 1.509288268645e-06 ||r(i)||/||b|| 3.759150191264e-08<br class="">
>>>>> 117 KSP unpreconditioned resid norm 1.359169217644e-06 true resid norm 1.359169217445e-06 ||r(i)||/||b|| 3.385252062089e-08<br class="">
>>>>> 118 KSP unpreconditioned resid norm 1.180146337735e-06 true resid norm 1.180146337908e-06 ||r(i)||/||b|| 2.939363820703e-08<br class="">
>>>>> 119 KSP unpreconditioned resid norm 1.067757039683e-06 true resid norm 1.067757039924e-06 ||r(i)||/||b|| 2.659438335433e-08<br class="">
>>>>> 120 KSP unpreconditioned resid norm 9.435833073736e-07 true resid norm 9.435833073736e-07 ||r(i)||/||b|| 2.350161625235e-08<br class="">
>>>>> 121 KSP unpreconditioned resid norm 8.749457237613e-07 true resid norm 8.749457236791e-07 ||r(i)||/||b|| 2.179207546261e-08<br class="">
>>>>> 122 KSP unpreconditioned resid norm 7.945760150897e-07 true resid norm 7.945760150444e-07 ||r(i)||/||b|| 1.979032528762e-08<br class="">
>>>>> 123 KSP unpreconditioned resid norm 7.141240839013e-07 true resid norm 7.141240838682e-07 ||r(i)||/||b|| 1.778652721438e-08<br class="">
>>>>> 124 KSP unpreconditioned resid norm 6.300566936733e-07 true resid norm 6.300566936607e-07 ||r(i)||/||b|| 1.569267971988e-08<br class="">
>>>>> 125 KSP unpreconditioned resid norm 5.628986997544e-07 true resid norm 5.628986995849e-07 ||r(i)||/||b|| 1.401999073448e-08<br class="">
>>>>> 126 KSP unpreconditioned resid norm 5.119018951602e-07 true resid norm 5.119018951837e-07 ||r(i)||/||b|| 1.274982484900e-08<br class="">
>>>>> 127 KSP unpreconditioned resid norm 4.664670343748e-07 true resid norm 4.664670344042e-07 ||r(i)||/||b|| 1.161818903670e-08<br class="">
>>>>> 128 KSP unpreconditioned resid norm 4.253264691112e-07 true resid norm 4.253264691948e-07 ||r(i)||/||b|| 1.059351027394e-08<br class="">
>>>>> 129 KSP unpreconditioned resid norm 3.868921150516e-07 true resid norm 3.868921150517e-07 ||r(i)||/||b|| 9.636234498800e-09<br class="">
>>>>> 130 KSP unpreconditioned resid norm 3.558445658540e-07 true resid norm 3.558445660061e-07 ||r(i)||/||b|| 8.862940209315e-09<br class="">
>>>>> 131 KSP unpreconditioned resid norm 3.268710273840e-07 true resid norm 3.268710272455e-07 ||r(i)||/||b|| 8.141302825416e-09<br class="">
>>>>> 132 KSP unpreconditioned resid norm 3.041273897592e-07 true resid norm 3.041273896694e-07 ||r(i)||/||b|| 7.574832182794e-09<br class="">
>>>>> 133 KSP unpreconditioned resid norm 2.851926677922e-07 true resid norm 2.851926674248e-07 ||r(i)||/||b|| 7.103229333782e-09<br class="">
>>>>> 134 KSP unpreconditioned resid norm 2.694708315072e-07 true resid norm 2.694708309500e-07 ||r(i)||/||b|| 6.711649104748e-09<br class="">
>>>>> 135 KSP unpreconditioned resid norm 2.534825559099e-07 true resid norm 2.534825557469e-07 ||r(i)||/||b|| 6.313432746507e-09<br class="">
>>>>> 136 KSP unpreconditioned resid norm 2.387342352458e-07 true resid norm 2.387342351804e-07 ||r(i)||/||b|| 5.946099658254e-09<br class="">
>>>>> 137 KSP unpreconditioned resid norm 2.200861667617e-07 true resid norm 2.200861665255e-07 ||r(i)||/||b|| 5.481636425438e-09<br class="">
>>>>> 138 KSP unpreconditioned resid norm 2.051415370616e-07 true resid norm 2.051415370614e-07 ||r(i)||/||b|| 5.109413915824e-09<br class="">
>>>>> 139 KSP unpreconditioned resid norm 1.887376429396e-07 true resid norm 1.887376426682e-07 ||r(i)||/||b|| 4.700845824315e-09<br class="">
>>>>> 140 KSP unpreconditioned resid norm 1.729743133005e-07 true resid norm 1.729743128342e-07 ||r(i)||/||b|| 4.308232129561e-09<br class="">
>>>>> 141 KSP unpreconditioned resid norm 1.541021130781e-07 true resid norm 1.541021128364e-07 ||r(i)||/||b|| 3.838186508023e-09<br class="">
>>>>> 142 KSP unpreconditioned resid norm 1.384631628565e-07 true resid norm 1.384631627735e-07 ||r(i)||/||b|| 3.448670712125e-09<br class="">
>>>>> 143 KSP unpreconditioned resid norm 1.223114405626e-07 true resid norm 1.223114403883e-07 ||r(i)||/||b|| 3.046383411846e-09<br class="">
>>>>> 144 KSP unpreconditioned resid norm 1.087313066223e-07 true resid norm 1.087313065117e-07 ||r(i)||/||b|| 2.708146085550e-09<br class="">
>>>>> 145 KSP unpreconditioned resid norm 9.181901998734e-08 true resid norm 9.181901984268e-08 ||r(i)||/||b|| 2.286915582489e-09<br class="">
>>>>> 146 KSP unpreconditioned resid norm 7.885850510808e-08 true resid norm 7.885850531446e-08 ||r(i)||/||b|| 1.964110975313e-09<br class="">
>>>>> 147 KSP unpreconditioned resid norm 6.483393946950e-08 true resid norm 6.483393931383e-08 ||r(i)||/||b|| 1.614804278515e-09<br class="">
>>>>> 148 KSP unpreconditioned resid norm 5.690132597004e-08 true resid norm 5.690132577518e-08 ||r(i)||/||b|| 1.417228465328e-09<br class="">
>>>>> 149 KSP unpreconditioned resid norm 5.023671521579e-08 true resid norm 5.023671502186e-08 ||r(i)||/||b|| 1.251234511035e-09<br class="">
>>>>> 150 KSP unpreconditioned resid norm 4.625371062660e-08 true resid norm 4.625371062660e-08 ||r(i)||/||b|| 1.152030720445e-09<br class="">
>>>>> 151 KSP unpreconditioned resid norm 4.349049084805e-08 true resid norm 4.349049089337e-08 ||r(i)||/||b|| 1.083207830846e-09<br class="">
>>>>> 152 KSP unpreconditioned resid norm 3.932593324498e-08 true resid norm 3.932593376918e-08 ||r(i)||/||b|| 9.794821474546e-10<br class="">
>>>>> 153 KSP unpreconditioned resid norm 3.504167649202e-08 true resid norm 3.504167638113e-08 ||r(i)||/||b|| 8.727751166356e-10<br class="">
>>>>> 154 KSP unpreconditioned resid norm 2.892726347747e-08 true resid norm 2.892726348583e-08 ||r(i)||/||b|| 7.204848160858e-10<br class="">
>>>>> 155 KSP unpreconditioned resid norm 2.477647033202e-08 true resid norm 2.477647041570e-08 ||r(i)||/||b|| 6.171019508795e-10<br class="">
>>>>> 156 KSP unpreconditioned resid norm 2.128504065757e-08 true resid norm 2.128504067423e-08 ||r(i)||/||b|| 5.301416991298e-10<br class="">
>>>>> 157 KSP unpreconditioned resid norm 1.879248809429e-08 true resid norm 1.879248818928e-08 ||r(i)||/||b|| 4.680602575310e-10<br class="">
>>>>> 158 KSP unpreconditioned resid norm 1.673649140073e-08 true resid norm 1.673649134005e-08 ||r(i)||/||b|| 4.168520085200e-10<br class="">
>>>>> 159 KSP unpreconditioned resid norm 1.497123388109e-08 true resid norm 1.497123365569e-08 ||r(i)||/||b|| 3.728851342016e-10<br class="">
>>>>> 160 KSP unpreconditioned resid norm 1.315982130162e-08 true resid norm 1.315982149329e-08 ||r(i)||/||b|| 3.277687007261e-10<br class="">
>>>>> 161 KSP unpreconditioned resid norm 1.182395864938e-08 true resid norm 1.182395868430e-08 ||r(i)||/||b|| 2.944966675550e-10<br class="">
>>>>> 162 KSP unpreconditioned resid norm 1.070204481679e-08 true resid norm 1.070204466432e-08 ||r(i)||/||b|| 2.665534085342e-10<br class="">
>>>>> 163 KSP unpreconditioned resid norm 9.969290307649e-09 true resid norm 9.969290432333e-09 ||r(i)||/||b|| 2.483028644297e-10<br class="">
>>>>> 164 KSP unpreconditioned resid norm 9.134440883306e-09 true resid norm 9.134440980976e-09 ||r(i)||/||b|| 2.275094577628e-10<br class="">
>>>>> 165 KSP unpreconditioned resid norm 8.593316427292e-09 true resid norm 8.593316413360e-09 ||r(i)||/||b|| 2.140317904139e-10<br class="">
>>>>> 166 KSP unpreconditioned resid norm 8.042173048464e-09 true resid norm 8.042173332848e-09 ||r(i)||/||b|| 2.003045942277e-10<br class="">
>>>>> 167 KSP unpreconditioned resid norm 7.655518522782e-09 true resid norm 7.655518879144e-09 ||r(i)||/||b|| 1.906742791064e-10<br class="">
>>>>> 168 KSP unpreconditioned resid norm 7.210283391815e-09 true resid norm 7.210283220312e-09 ||r(i)||/||b|| 1.795848951442e-10<br class="">
>>>>> 169 KSP unpreconditioned resid norm 6.793967416271e-09 true resid norm 6.793967448832e-09 ||r(i)||/||b|| 1.692158122825e-10<br class="">
>>>>> 170 KSP unpreconditioned resid norm 6.249160304588e-09 true resid norm 6.249160382647e-09 ||r(i)||/||b|| 1.556464257736e-10<br class="">
>>>>> 171 KSP unpreconditioned resid norm 5.794936438798e-09 true resid norm 5.794936332552e-09 ||r(i)||/||b|| 1.443331699811e-10<br class="">
>>>>> 172 KSP unpreconditioned resid norm 5.222337397128e-09 true resid norm 5.222337443277e-09 ||r(i)||/||b|| 1.300715788135e-10<br class="">
>>>>> 173 KSP unpreconditioned resid norm 4.755359110447e-09 true resid norm 4.755358888996e-09 ||r(i)||/||b|| 1.184406494668e-10<br class="">
>>>>> 174 KSP unpreconditioned resid norm 4.317537007873e-09 true resid norm 4.317537267718e-09 ||r(i)||/||b|| 1.075359252630e-10<br class="">
>>>>> 175 KSP unpreconditioned resid norm 3.924177535665e-09 true resid norm 3.924177629720e-09 ||r(i)||/||b|| 9.773860563138e-11<br class="">
>>>>> 176 KSP unpreconditioned resid norm 3.502843065115e-09 true resid norm 3.502843126359e-09 ||r(i)||/||b|| 8.724452234855e-11<br class="">
>>>>> 177 KSP unpreconditioned resid norm 3.083873232869e-09 true resid norm 3.083873352938e-09 ||r(i)||/||b|| 7.680933686007e-11<br class="">
>>>>> 178 KSP unpreconditioned resid norm 2.758980676473e-09 true resid norm 2.758980618096e-09 ||r(i)||/||b|| 6.871730691658e-11<br class="">
>>>>> 179 KSP unpreconditioned resid norm 2.510978240429e-09 true resid norm 2.510978327392e-09 ||r(i)||/||b|| 6.254036989334e-11<br class="">
>>>>> 180 KSP unpreconditioned resid norm 2.323000193205e-09 true resid norm 2.323000193205e-09 ||r(i)||/||b|| 5.785844097519e-11<br class="">
>>>>> 181 KSP unpreconditioned resid norm 2.167480159274e-09 true resid norm 2.167480113693e-09 ||r(i)||/||b|| 5.398493749153e-11<br class="">
>>>>> 182 KSP unpreconditioned resid norm 1.983545827983e-09 true resid norm 1.983546404840e-09 ||r(i)||/||b|| 4.940374216139e-11<br class="">
>>>>> 183 KSP unpreconditioned resid norm 1.794576286774e-09 true resid norm 1.794576224361e-09 ||r(i)||/||b|| 4.469710457036e-11<br class="">
>>>>> 184 KSP unpreconditioned resid norm 1.583490590644e-09 true resid norm 1.583490380603e-09 ||r(i)||/||b|| 3.943963715064e-11<br class="">
>>>>> 185 KSP unpreconditioned resid norm 1.412659866247e-09 true resid norm 1.412659832191e-09 ||r(i)||/||b|| 3.518479927722e-11<br class="">
>>>>> 186 KSP unpreconditioned resid norm 1.285613344939e-09 true resid norm 1.285612984761e-09 ||r(i)||/||b|| 3.202047215205e-11<br class="">
>>>>> 187 KSP unpreconditioned resid norm 1.168115133929e-09 true resid norm 1.168114766904e-09 ||r(i)||/||b|| 2.909397058634e-11<br class="">
>>>>> 188 KSP unpreconditioned resid norm 1.063377926053e-09 true resid norm 1.063377647554e-09 ||r(i)||/||b|| 2.648530681802e-11<br class="">
>>>>> 189 KSP unpreconditioned resid norm 9.548967728122e-10 true resid norm 9.548964523410e-10 ||r(i)||/||b|| 2.378339019807e-11<br class="">
>>>>> KSP Object: 16 MPI processes<br class="">
>>>>> type: fgmres<br class="">
>>>>> restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br class="">
>>>>> happy breakdown tolerance 1e-30<br class="">
>>>>> maximum iterations=2000, initial guess is zero<br class="">
>>>>> tolerances: relative=1e-20, absolute=1e-09, divergence=10000.<br class="">
>>>>> right preconditioning<br class="">
>>>>> using UNPRECONDITIONED norm type for convergence test<br class="">
>>>>> PC Object: 16 MPI processes<br class="">
>>>>> type: bjacobi<br class="">
>>>>> number of blocks = 4<br class="">
>>>>> Local solver information for first block is in the following KSP and PC objects on rank 0:<br class="">
>>>>> Use -ksp_view ::ascii_info_detail to display information for all blocks<br class="">
>>>>> KSP Object: (sub_) 4 MPI processes<br class="">
>>>>> type: preonly<br class="">
>>>>> maximum iterations=10000, initial guess is zero<br class="">
>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br class="">
>>>>> left preconditioning<br class="">
>>>>> using NONE norm type for convergence test<br class="">
>>>>> PC Object: (sub_) 4 MPI processes<br class="">
>>>>> type: telescope<br class="">
>>>>> petsc subcomm: parent comm size reduction factor = 4<br class="">
>>>>> petsc subcomm: parent_size = 4 , subcomm_size = 1<br class="">
>>>>> petsc subcomm type = contiguous<br class="">
>>>>> linear system matrix = precond matrix:<br class="">
>>>>> Mat Object: (sub_) 4 MPI processes<br class="">
>>>>> type: mpiaij<br class="">
>>>>> rows=40200, cols=40200<br class="">
>>>>> total: nonzeros=199996, allocated nonzeros=203412<br class="">
>>>>> total number of mallocs used during MatSetValues calls=0<br class="">
>>>>> not using I-node (on process 0) routines<br class="">
>>>>> setup type: default<br class="">
>>>>> Parent DM object: NULL<br class="">
>>>>> Sub DM object: NULL<br class="">
>>>>> KSP Object: (sub_telescope_) 1 MPI processes<br class="">
>>>>> type: preonly<br class="">
>>>>> maximum iterations=10000, initial guess is zero<br class="">
>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br class="">
>>>>> left preconditioning<br class="">
>>>>> using NONE norm type for convergence test<br class="">
>>>>> PC Object: (sub_telescope_) 1 MPI processes<br class="">
>>>>> type: lu<br class="">
>>>>> out-of-place factorization<br class="">
>>>>> tolerance for zero pivot 2.22045e-14<br class="">
>>>>> matrix ordering: external<br class="">
>>>>> factor fill ratio given 0., needed 0.<br class="">
>>>>> Factored matrix follows:<br class="">
>>>>> Mat Object: 1 MPI processes<br class="">
>>>>> type: mumps<br class="">
>>>>> rows=40200, cols=40200<br class="">
>>>>> package used to perform factorization: mumps<br class="">
>>>>> total: nonzeros=1849788, allocated nonzeros=1849788<br class="">
>>>>> MUMPS run parameters:<br class="">
>>>>> SYM (matrix type): 0<br class="">
>>>>> PAR (host participation): 1<br class="">
>>>>> ICNTL(1) (output for error): 6<br class="">
>>>>> ICNTL(2) (output of diagnostic msg): 0<br class="">
>>>>> ICNTL(3) (output for global info): 0<br class="">
>>>>> ICNTL(4) (level of printing): 0<br class="">
>>>>> ICNTL(5) (input mat struct): 0<br class="">
>>>>> ICNTL(6) (matrix prescaling): 7<br class="">
>>>>> ICNTL(7) (sequential matrix ordering):7<br class="">
>>>>> ICNTL(8) (scaling strategy): 77<br class="">
>>>>> ICNTL(10) (max num of refinements): 0<br class="">
>>>>> ICNTL(11) (error analysis): 0<br class="">
>>>>> ICNTL(12) (efficiency control): 1<br class="">
>>>>> ICNTL(13) (sequential factorization of the root node): 0<br class="">
>>>>> ICNTL(14) (percentage of estimated workspace increase): 20<br class="">
>>>>> ICNTL(18) (input mat struct): 0<br class="">
>>>>> ICNTL(19) (Schur complement info): 0<br class="">
>>>>> ICNTL(20) (RHS sparse pattern): 0<br class="">
>>>>> ICNTL(21) (solution struct): 0<br class="">
>>>>> ICNTL(22) (in-core/out-of-core facility): 0<br class="">
>>>>> ICNTL(23) (max size of memory can be allocated locally):0<br class="">
>>>>> ICNTL(24) (detection of null pivot rows): 0<br class="">
>>>>> ICNTL(25) (computation of a null space basis): 0<br class="">
>>>>> ICNTL(26) (Schur options for RHS or solution): 0<br class="">
>>>>> ICNTL(27) (blocking size for multiple RHS): -32<br class="">
>>>>> ICNTL(28) (use parallel or sequential ordering): 1<br class="">
>>>>> ICNTL(29) (parallel ordering): 0<br class="">
>>>>> ICNTL(30) (user-specified set of entries in inv(A)): 0<br class="">
>>>>> ICNTL(31) (factors is discarded in the solve phase): 0<br class="">
>>>>> ICNTL(33) (compute determinant): 0<br class="">
>>>>> ICNTL(35) (activate BLR based factorization): 0<br class="">
>>>>> ICNTL(36) (choice of BLR factorization variant): 0<br class="">
>>>>> ICNTL(38) (estimated compression rate of LU factors): 333<br class="">
>>>>> CNTL(1) (relative pivoting threshold): 0.01<br class="">
>>>>> CNTL(2) (stopping criterion of refinement): 1.49012e-08<br class="">
>>>>> CNTL(3) (absolute pivoting threshold): 0.<br class="">
>>>>> CNTL(4) (value of static pivoting): -1.<br class="">
>>>>> CNTL(5) (fixation for null pivots): 0.<br class="">
>>>>> CNTL(7) (dropping parameter for BLR): 0.<br class="">
>>>>> RINFO(1) (local estimated flops for the elimination after analysis):<br class="">
>>>>> [0] 1.45525e+08<br class="">
>>>>> RINFO(2) (local estimated flops for the assembly after factorization):<br class="">
>>>>> [0] 2.89397e+06<br class="">
>>>>> RINFO(3) (local estimated flops for the elimination after factorization):<br class="">
>>>>> [0] 1.45525e+08<br class="">
>>>>> INFO(15) (estimated size of (in MB) MUMPS internal data for running numerical factorization):<br class="">
>>>>> [0] 29<br class="">
>>>>> INFO(16) (size of (in MB) MUMPS internal data used during numerical factorization):<br class="">
>>>>> [0] 29<br class="">
>>>>> INFO(23) (num of pivots eliminated on this processor after factorization):<br class="">
>>>>> [0] 40200<br class="">
>>>>> RINFOG(1) (global estimated flops for the elimination after analysis): 1.45525e+08<br class="">
>>>>> RINFOG(2) (global estimated flops for the assembly after factorization): 2.89397e+06<br class="">
>>>>> RINFOG(3) (global estimated flops for the elimination after factorization): 1.45525e+08<br class="">
>>>>> (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): (0.,0.)*(2^0)<br class="">
>>>>> INFOG(3) (estimated real workspace for factors on all processors after analysis): 1849788<br class="">
>>>>> INFOG(4) (estimated integer workspace for factors on all processors after analysis): 879986<br class="">
>>>>> INFOG(5) (estimated maximum front size in the complete tree): 282<br class="">
>>>>> INFOG(6) (number of nodes in the complete tree): 23709<br class="">
>>>>> INFOG(7) (ordering option effectively used after analysis): 5<br class="">
>>>>> INFOG(8) (structural symmetry in percent of the permuted matrix after analysis): 100<br class="">
>>>>> INFOG(9) (total real/complex workspace to store the matrix factors after factorization): 1849788<br class="">
>>>>> INFOG(10) (total integer space store the matrix factors after factorization): 879986<br class="">
>>>>> INFOG(11) (order of largest frontal matrix after factorization): 282<br class="">
>>>>> INFOG(12) (number of off-diagonal pivots): 0<br class="">
>>>>> INFOG(13) (number of delayed pivots after factorization): 0<br class="">
>>>>> INFOG(14) (number of memory compress after factorization): 0<br class="">
>>>>> INFOG(15) (number of steps of iterative refinement after solution): 0<br class="">
>>>>> INFOG(16) (estimated size (in MB) of all MUMPS internal data for factorization after analysis: value on the most memory consuming processor): 29<br class="">
>>>>> INFOG(17) (estimated size of all MUMPS internal data for factorization after analysis: sum over all processors): 29<br class="">
>>>>> INFOG(18) (size of all MUMPS internal data allocated during factorization: value on the most memory consuming processor): 29<br class="">
>>>>> INFOG(19) (size of all MUMPS internal data allocated during factorization: sum over all processors): 29<br class="">
>>>>> INFOG(20) (estimated number of entries in the factors): 1849788<br class="">
>>>>> INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 26<br class="">
>>>>> INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 26<br class="">
>>>>> INFOG(23) (after analysis: value of ICNTL(6) effectively used): 0<br class="">
>>>>> INFOG(24) (after analysis: value of ICNTL(12) effectively used): 1<br class="">
>>>>> INFOG(25) (after factorization: number of pivots modified by static pivoting): 0<br class="">
>>>>> INFOG(28) (after factorization: number of null pivots encountered): 0<br class="">
>>>>> INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 1849788<br class="">
>>>>> INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 29, 29<br class="">
>>>>> INFOG(32) (after analysis: type of analysis done): 1<br class="">
>>>>> INFOG(33) (value used for ICNTL(8)): 7<br class="">
>>>>> INFOG(34) (exponent of the determinant if determinant is requested): 0<br class="">
>>>>> INFOG(35) (after factorization: number of entries taking into account BLR factor compression - sum over all processors): 1849788<br class="">
>>>>> INFOG(36) (after analysis: estimated size of all MUMPS internal data for running BLR in-core - value on the most memory consuming processor): 0<br class="">
>>>>> INFOG(37) (after analysis: estimated size of all MUMPS internal data for running BLR in-core - sum over all processors): 0<br class="">
>>>>> INFOG(38) (after analysis: estimated size of all MUMPS internal data for running BLR out-of-core - value on the most memory consuming processor): 0<br class="">
>>>>> INFOG(39) (after analysis: estimated size of all MUMPS internal data for running BLR out-of-core - sum over all processors): 0<br class="">
>>>>> linear system matrix = precond matrix:<br class="">
>>>>> Mat Object: 1 MPI processes<br class="">
>>>>> type: seqaijcusparse<br class="">
>>>>> rows=40200, cols=40200<br class="">
>>>>> total: nonzeros=199996, allocated nonzeros=199996<br class="">
>>>>> total number of mallocs used during MatSetValues calls=0<br class="">
>>>>> not using I-node routines<br class="">
>>>>> linear system matrix = precond matrix:<br class="">
>>>>> Mat Object: 16 MPI processes<br class="">
>>>>> type: mpiaijcusparse<br class="">
>>>>> rows=160800, cols=160800<br class="">
>>>>> total: nonzeros=802396, allocated nonzeros=1608000<br class="">
>>>>> total number of mallocs used during MatSetValues calls=0<br class="">
>>>>> not using I-node (on process 0) routines<br class="">
>>>>> Norm of error 9.11684e-07 iterations 189<br class="">
>>>>> Chang<br class="">
>>>>> On 10/14/21 10:10 PM, Chang Liu wrote:<br class="">
>>>>>> Hi Barry,<br class="">
>>>>>><br class="">
>>>>>> No problem. Here is the output. It seems that the resid norm calculation is incorrect.<br class="">
>>>>>><br class="">
>>>>>> $ mpiexec -n 16 --hostfile hostfile --oversubscribe ./ex7 -m 400 -ksp_view -ksp_monitor_true_residual -pc_type bjacobi -pc_bjacobi_blocks 4 -ksp_type fgmres -mat_type aijcusparse -sub_pc_type telescope -sub_ksp_type preonly -sub_telescope_ksp_type preonly -sub_telescope_pc_type lu -sub_telescope_pc_factor_mat_solver_type cusparse -sub_pc_telescope_reduction_factor 4 -sub_pc_telescope_subcomm_type contiguous -ksp_max_it 2000 -ksp_rtol 1.e-20 -ksp_atol 1.e-9<br class="">
>>>>>> 0 KSP unpreconditioned resid norm 4.014971979977e+01 true resid norm 4.014971979977e+01 ||r(i)||/||b|| 1.000000000000e+00<br class="">
>>>>>> 1 KSP unpreconditioned resid norm 0.000000000000e+00 true resid norm 4.014971979977e+01 ||r(i)||/||b|| 1.000000000000e+00<br class="">
>>>>>> KSP Object: 16 MPI processes<br class="">
>>>>>> type: fgmres<br class="">
>>>>>> restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br class="">
>>>>>> happy breakdown tolerance 1e-30<br class="">
>>>>>> maximum iterations=2000, initial guess is zero<br class="">
>>>>>> tolerances: relative=1e-20, absolute=1e-09, divergence=10000.<br class="">
>>>>>> right preconditioning<br class="">
>>>>>> using UNPRECONDITIONED norm type for convergence test<br class="">
>>>>>> PC Object: 16 MPI processes<br class="">
>>>>>> type: bjacobi<br class="">
>>>>>> number of blocks = 4<br class="">
>>>>>> Local solver information for first block is in the following KSP and PC objects on rank 0:<br class="">
>>>>>> Use -ksp_view ::ascii_info_detail to display information for all blocks<br class="">
>>>>>> KSP Object: (sub_) 4 MPI processes<br class="">
>>>>>> type: preonly<br class="">
>>>>>> maximum iterations=10000, initial guess is zero<br class="">
>>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br class="">
>>>>>> left preconditioning<br class="">
>>>>>> using NONE norm type for convergence test<br class="">
>>>>>> PC Object: (sub_) 4 MPI processes<br class="">
>>>>>> type: telescope<br class="">
>>>>>> petsc subcomm: parent comm size reduction factor = 4<br class="">
>>>>>> petsc subcomm: parent_size = 4 , subcomm_size = 1<br class="">
>>>>>> petsc subcomm type = contiguous<br class="">
>>>>>> linear system matrix = precond matrix:<br class="">
>>>>>> Mat Object: (sub_) 4 MPI processes<br class="">
>>>>>> type: mpiaij<br class="">
>>>>>> rows=40200, cols=40200<br class="">
>>>>>> total: nonzeros=199996, allocated nonzeros=203412<br class="">
>>>>>> total number of mallocs used during MatSetValues calls=0<br class="">
>>>>>> not using I-node (on process 0) routines<br class="">
>>>>>> setup type: default<br class="">
>>>>>> Parent DM object: NULL<br class="">
>>>>>> Sub DM object: NULL<br class="">
>>>>>> KSP Object: (sub_telescope_) 1 MPI processes<br class="">
>>>>>> type: preonly<br class="">
>>>>>> maximum iterations=10000, initial guess is zero<br class="">
>>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br class="">
>>>>>> left preconditioning<br class="">
>>>>>> using NONE norm type for convergence test<br class="">
>>>>>> PC Object: (sub_telescope_) 1 MPI processes<br class="">
>>>>>> type: lu<br class="">
>>>>>> out-of-place factorization<br class="">
>>>>>> tolerance for zero pivot 2.22045e-14<br class="">
>>>>>> matrix ordering: nd<br class="">
>>>>>> factor fill ratio given 5., needed 8.62558<br class="">
>>>>>> Factored matrix follows:<br class="">
>>>>>> Mat Object: 1 MPI processes<br class="">
>>>>>> type: seqaijcusparse<br class="">
>>>>>> rows=40200, cols=40200<br class="">
>>>>>> package used to perform factorization: cusparse<br class="">
>>>>>> total: nonzeros=1725082, allocated nonzeros=1725082<br class="">
>>>>>> not using I-node routines<br class="">
>>>>>> linear system matrix = precond matrix:<br class="">
>>>>>> Mat Object: 1 MPI processes<br class="">
>>>>>> type: seqaijcusparse<br class="">
>>>>>> rows=40200, cols=40200<br class="">
>>>>>> total: nonzeros=199996, allocated nonzeros=199996<br class="">
>>>>>> total number of mallocs used during MatSetValues calls=0<br class="">
>>>>>> not using I-node routines<br class="">
>>>>>> linear system matrix = precond matrix:<br class="">
>>>>>> Mat Object: 16 MPI processes<br class="">
>>>>>> type: mpiaijcusparse<br class="">
>>>>>> rows=160800, cols=160800<br class="">
>>>>>> total: nonzeros=802396, allocated nonzeros=1608000<br class="">
>>>>>> total number of mallocs used during MatSetValues calls=0<br class="">
>>>>>> not using I-node (on process 0) routines<br class="">
>>>>>> Norm of error 400.999 iterations 1<br class="">
>>>>>><br class="">
>>>>>> Chang<br class="">
>>>>>><br class="">
>>>>>><br class="">
>>>>>> On 10/14/21 9:47 PM, Barry Smith wrote:<br class="">
>>>>>>><br class="">
>>>>>>> Chang,<br class="">
>>>>>>><br class="">
>>>>>>> Sorry I did not notice that one. Please run that with -ksp_view -ksp_monitor_true_residual so we can see exactly how the options are interpreted and which solver is used. At a glance it looks ok, but something must be wrong to get the wrong answer.<br class="">
>>>>>>><br class="">
>>>>>>> Barry<br class="">
>>>>>>><br class="">
>>>>>>>> On Oct 14, 2021, at 6:02 PM, Chang Liu <<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>> wrote:<br class="">
>>>>>>>><br class="">
>>>>>>>> Hi Barry,<br class="">
>>>>>>>><br class="">
>>>>>>>> That is exactly what I was doing in the second example, in which the preconditioner works but the GMRES does not.<br class="">
>>>>>>>><br class="">
>>>>>>>> Chang<br class="">
>>>>>>>><br class="">
>>>>>>>> On 10/14/21 5:15 PM, Barry Smith wrote:<br class="">
>>>>>>>>> You need to use the PCTELESCOPE inside the block Jacobi, not outside it. So something like -pc_type bjacobi -sub_pc_type telescope -sub_telescope_pc_type lu<br class="">
>>>>>>>>>> On Oct 14, 2021, at 4:14 PM, Chang Liu <<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>> wrote:<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> Hi Pierre,<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> I wonder if the trick of PCTELESCOPE only works for the preconditioner and not for the solver. I have done some tests, and found that for solving a small matrix using -telescope_ksp_type preonly, it does work on GPU with multiple MPI processes. However, for bjacobi and gmres, it does not work.<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> The command line options I used for small matrix is like<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> mpiexec -n 4 --oversubscribe ./ex7 -m 100 -ksp_monitor_short -pc_type telescope -mat_type aijcusparse -telescope_pc_type lu -telescope_pc_factor_mat_solver_type cusparse -telescope_ksp_type preonly -pc_telescope_reduction_factor 4<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> which gives the correct output. For iterative solver, I tried<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> mpiexec -n 16 --oversubscribe ./ex7 -m 400 -ksp_monitor_short -pc_type bjacobi -pc_bjacobi_blocks 4 -ksp_type fgmres -mat_type aijcusparse -sub_pc_type telescope -sub_ksp_type preonly -sub_telescope_ksp_type preonly -sub_telescope_pc_type lu -sub_telescope_pc_factor_mat_solver_type cusparse -sub_pc_telescope_reduction_factor 4 -ksp_max_it 2000 -ksp_rtol 1.e-9 -ksp_atol 1.e-20<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> for large matrix. The output is like<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> 0 KSP Residual norm 40.1497<br class="">
>>>>>>>>>> 1 KSP Residual norm < 1.e-11<br class="">
>>>>>>>>>> Norm of error 400.999 iterations 1<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> So it seems to call a direct solver instead of an iterative one.<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> Can you please help check these options?<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> Chang<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> On 10/14/21 10:04 AM, Pierre Jolivet wrote:<br class="">
>>>>>>>>>>>> On 14 Oct 2021, at 3:50 PM, Chang Liu <<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>> wrote:<br class="">
>>>>>>>>>>>><br class="">
>>>>>>>>>>>> Thank you Pierre. I was not aware of PCTELESCOPE before. This sounds like exactly what I need. I wonder if PCTELESCOPE can transform a mpiaijcusparse matrix to seqaijcusparse? Or do I have to do it manually?<br class="">
>>>>>>>>>>> PCTELESCOPE uses MatCreateMPIMatConcatenateSeqMat().<br class="">
>>>>>>>>>>> 1) I’m not sure this is implemented for cuSparse matrices, but it should be;<br class="">
>>>>>>>>>>> 2) at least for the implementations MatCreateMPIMatConcatenateSeqMat_MPIBAIJ() and MatCreateMPIMatConcatenateSeqMat_MPIAIJ(), the resulting MatType is MATBAIJ (resp. MATAIJ). Constructors are usually “smart” enough to detect if the MPI communicator on which the Mat lives is of size 1 (your case), and then the resulting Mat is of type MatSeqX instead of MatMPIX, so you would not need to worry about the transformation you are mentioning.<br class="">
>>>>>>>>>>> If you try this out and this does not work, please provide the backtrace (probably something like “Operation XYZ not implemented for MatType ABC”), and hopefully someone can add the missing plumbing.<br class="">
>>>>>>>>>>> I do not claim that this will be efficient, but I think this goes in the direction of what you want to achieve.<br class="">
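>>>>>>>>>>> For illustration only (a sketch of the call written for this message, not taken from the PETSc sources), the concatenation routine is used along these lines, where seqA is the local sequential piece each rank contributes:<br class="">
>>>>>>>>>>> 
>>>>>>>>>>> Mat seqA, mpiA;
>>>>>>>>>>> /* ... seqA assembled on PETSC_COMM_SELF as seqaij / seqaijcusparse ... */
>>>>>>>>>>> PetscCall(MatCreateMPIMatConcatenateSeqMat(PETSC_COMM_WORLD, seqA, PETSC_DECIDE,
>>>>>>>>>>>                                            MAT_INITIAL_MATRIX, &mpiA));
>>>>>>>>>>> /* If the target communicator has size 1 (the PCTELESCOPE case discussed here),
>>>>>>>>>>>    the "smart" constructors should hand back a MatSeqX rather than a MatMPIX. */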
>>>>>>>>>>> Thanks,<br class="">
>>>>>>>>>>> Pierre<br class="">
>>>>>>>>>>>> Chang<br class="">
>>>>>>>>>>>><br class="">
>>>>>>>>>>>> On 10/14/21 1:35 AM, Pierre Jolivet wrote:<br class="">
>>>>>>>>>>>>> Maybe I’m missing something, but can’t you use PCTELESCOPE as a subdomain solver, with a reduction factor equal to the number of MPI processes you have per block?<br class="">
>>>>>>>>>>>>> -sub_pc_type telescope -sub_pc_telescope_reduction_factor X -sub_telescope_pc_type lu<br class="">
>>>>>>>>>>>>> This does not work with MUMPS -mat_mumps_use_omp_threads because not only does the Mat need to be redistributed, the secondary processes also need to be “converted” to OpenMP threads.<br class="">
>>>>>>>>>>>>> Thus the need for specific code in mumps.c.<br class="">
>>>>>>>>>>>>> Thanks,<br class="">
>>>>>>>>>>>>> Pierre<br class="">
>>>>>>>>>>>>>> On 14 Oct 2021, at 6:00 AM, Chang Liu via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank" class="">petsc-users@mcs.anl.gov</a>> wrote:<br class="">
>>>>>>>>>>>>>><br class="">
>>>>>>>>>>>>>> Hi Junchao,<br class="">
>>>>>>>>>>>>>><br class="">
>>>>>>>>>>>>>> Yes that is what I want.<br class="">
>>>>>>>>>>>>>><br class="">
>>>>>>>>>>>>>> Chang<br class="">
>>>>>>>>>>>>>><br class="">
>>>>>>>>>>>>>> On 10/13/21 11:42 PM, Junchao Zhang wrote:<br class="">
>>>>>>>>>>>>>>> On Wed, Oct 13, 2021 at 8:58 PM Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank" class="">bsmith@petsc.dev</a> <mailto:<a href="mailto:bsmith@petsc.dev" target="_blank" class="">bsmith@petsc.dev</a>>> wrote:<br class="">
>>>>>>>>>>>>>>> Junchao,<br class="">
>>>>>>>>>>>>>>> If I understand correctly Chang is using the block Jacobi<br class="">
>>>>>>>>>>>>>>> method with a single block for a number of MPI ranks and a direct<br class="">
>>>>>>>>>>>>>>> solver for each block so it uses PCSetUp_BJacobi_Multiproc() which<br class="">
>>>>>>>>>>>>>>> is code Hong Zhang wrote a number of years ago for CPUs. For their<br class="">
>>>>>>>>>>>>>>> particular problems this preconditioner works well, but using an<br class="">
>>>>>>>>>>>>>>> iterative solver on the blocks does not work well.<br class="">
>>>>>>>>>>>>>>> If we had complete MPI-GPU direct solvers he could just use<br class="">
>>>>>>>>>>>>>>> the current code with MPIAIJCUSPARSE on each block but since we do<br class="">
>>>>>>>>>>>>>>> not he would like to use a single GPU for each block, this means<br class="">
>>>>>>>>>>>>>>> that diagonal blocks of the global parallel MPI matrix need to be<br class="">
>>>>>>>>>>>>>>> sent to a subset of the GPUs (one GPU per block, which has multiple<br class="">
>>>>>>>>>>>>>>> MPI ranks associated with the blocks). Similarly for the triangular<br class="">
>>>>>>>>>>>>>>> solves, the blocks of the right hand side need to be shipped to the<br class="">
>>>>>>>>>>>>>>> appropriate GPU and the resulting solution shipped back to the<br class="">
>>>>>>>>>>>>>>> multiple GPUs. So Chang is absolutely correct, this is somewhat like<br class="">
>>>>>>>>>>>>>>> your code for MUMPS with OpenMP. OK, I now understand the background.<br class="">
>>>>>>>>>>>>>>> One could use PCSetUp_BJacobi_Multiproc() and get the blocks on the<br class="">
>>>>>>>>>>>>>>> MPI ranks and then shrink each block down to a single GPU but this<br class="">
>>>>>>>>>>>>>>> would be pretty inefficient, ideally one would go directly from the<br class="">
>>>>>>>>>>>>>>> big MPI matrix on all the GPUs to the sub matrices on the subset of<br class="">
>>>>>>>>>>>>>>> GPUs. But this may be a large coding project.<br class="">
>>>>>>>>>>>>>>> I don't understand these sentences. Why do you say "shrink"? In my mind, we just need to move each block (submatrix) living over multiple MPI ranks to one of them and solve directly there. In other words, we keep blocks' size, no shrinking or expanding.<br class="">
>>>>>>>>>>>>>>> As mentioned before, cusparse does not provide LU factorization. So the LU factorization would be done on CPU, and the solve be done on GPU. I assume Chang wants to gain from the (potential) faster solve (instead of factorization) on GPU.<br class="">
>>>>>>>>>>>>>>> Barry<br class="">
>>>>>>>>>>>>>>> Since the matrices being factored and solved directly are relatively<br class="">
>>>>>>>>>>>>>>> large it is possible that the cusparse code could be reasonably<br class="">
>>>>>>>>>>>>>>> efficient (they are not the tiny problems one gets at the coarse<br class="">
>>>>>>>>>>>>>>> level of multigrid). Of course, this is speculation, I don't<br class="">
>>>>>>>>>>>>>>> actually know how much better the cusparse code would be on the<br class="">
>>>>>>>>>>>>>>> direct solver than a good CPU direct sparse solver.<br class="">
>>>>>>>>>>>>>>> > On Oct 13, 2021, at 9:32 PM, Chang Liu <<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>> wrote:<br class="">
>>>>>>>>>>>>>>> ><br class="">
>>>>>>>>>>>>>>> > Sorry I am not familiar with the details either. Can you please<br class="">
>>>>>>>>>>>>>>> check the code in MatMumpsGatherNonzerosOnMaster in mumps.c?<br class="">
>>>>>>>>>>>>>>> ><br class="">
>>>>>>>>>>>>>>> > Chang<br class="">
>>>>>>>>>>>>>>> ><br class="">
>>>>>>>>>>>>>>> > On 10/13/21 9:24 PM, Junchao Zhang wrote:<br class="">
>>>>>>>>>>>>>>> >> Hi Chang,<br class="">
>>>>>>>>>>>>>>> >> I did the work in mumps. It is easy for me to understand<br class="">
>>>>>>>>>>>>>>> gathering matrix rows to one process.<br class="">
>>>>>>>>>>>>>>> >> But how to gather blocks (submatrices) to form a large block? Can you draw a picture of that?<br class="">
>>>>>>>>>>>>>>> >> Thanks<br class="">
>>>>>>>>>>>>>>> >> --Junchao Zhang<br class="">
>>>>>>>>>>>>>>> >> On Wed, Oct 13, 2021 at 7:47 PM Chang Liu via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank" class="">petsc-users@mcs.anl.gov</a>> wrote:<br class="">
>>>>>>>>>>>>>>> >> Hi Barry,<br class="">
>>>>>>>>>>>>>>> >> I think mumps solver in petsc does support that. You can<br class="">
>>>>>>>>>>>>>>> check the<br class="">
>>>>>>>>>>>>>>> >> documentation on "-mat_mumps_use_omp_threads" at<br class="">
>>>>>>>>>>>>>>> >><br class="">
>>>>>>>>>>>>>>> >> <a href="https://petsc.org/release/docs/manualpages/Mat/MATSOLVERMUMPS.html" rel="noreferrer" target="_blank" class="">https://petsc.org/release/docs/manualpages/Mat/MATSOLVERMUMPS.html</a><br class="">
>>>>>>>>>>>>>>> >> and the code enclosed by #if<br class="">
>>>>>>>>>>>>>>> defined(PETSC_HAVE_OPENMP_SUPPORT) in<br class="">
>>>>>>>>>>>>>>> >> functions MatMumpsSetUpDistRHSInfo and<br class="">
>>>>>>>>>>>>>>> >> MatMumpsGatherNonzerosOnMaster in<br class="">
>>>>>>>>>>>>>>> >> mumps.c<br class="">
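>>>>>>>>>>>>>>> >> For example (a sketch, assuming PETSc was configured with OpenMP support and MUMPS), a 16-rank run can have every group of 4 ranks hand its data to one rank, which then calls MUMPS with 4 OpenMP threads:<br class="">
>>>>>>>>>>>>>>> >><br class="">
>>>>>>>>>>>>>>> >> -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps -mat_mumps_use_omp_threads 4<br class="">
>>>>>>>>>>>>>>> >><br class="">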
>>>>>>>>>>>>>>> >> 1. I understand it is ideal to do one MPI rank per GPU.<br class="">
>>>>>>>>>>>>>>> However, I am<br class="">
>>>>>>>>>>>>>>> >> working on an existing code that was developed based on MPI and the<br class="">
>>>>>>>>>>>>>>> >> # of mpi ranks is typically equal to # of cpu cores. We don't want to<br class="">
>>>>>>>>>>>>>>> >> change the whole structure of the code.<br class="">
>>>>>>>>>>>>>>> >> 2. What you have suggested has been coded in mumps.c. See<br class="">
>>>>>>>>>>>>>>> function<br class="">
>>>>>>>>>>>>>>> >> MatMumpsSetUpDistRHSInfo.<br class="">
>>>>>>>>>>>>>>> >> Regards,<br class="">
>>>>>>>>>>>>>>> >> Chang<br class="">
>>>>>>>>>>>>>>> >> On 10/13/21 7:53 PM, Barry Smith wrote:<br class="">
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> >> On Oct 13, 2021, at 3:50 PM, Chang Liu <<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>> wrote:<br class="">
>>>>>>>>>>>>>>> >> >><br class="">
>>>>>>>>>>>>>>> >> >> Hi Barry,<br class="">
>>>>>>>>>>>>>>> >> >><br class="">
>>>>>>>>>>>>>>> >> >> That is exactly what I want.<br class="">
>>>>>>>>>>>>>>> >> >><br class="">
>>>>>>>>>>>>>>> >> >> Back to my original question, I am looking for an approach to<br class="">
>>>>>>>>>>>>>>> >> transfer<br class="">
>>>>>>>>>>>>>>> >> >> matrix<br class="">
>>>>>>>>>>>>>>> >> >> data from many MPI processes to "master" MPI<br class="">
>>>>>>>>>>>>>>> >> >> processes, each of which taking care of one GPU, and then<br class="">
>>>>>>>>>>>>>>> upload<br class="">
>>>>>>>>>>>>>>> >> the data to GPU to<br class="">
>>>>>>>>>>>>>>> >> >> solve.<br class="">
>>>>>>>>>>>>>>> >> >> One can just grab some codes from mumps.c to<br class="">
>>>>>>>>>>>>>>> >> aijcusparse.cu.<br class="">
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> > mumps.c doesn't actually do that. It never needs to<br class="">
>>>>>>>>>>>>>>> copy the<br class="">
>>>>>>>>>>>>>>> >> entire matrix to a single MPI rank.<br class="">
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> > It would be possible to write such a code that you<br class="">
>>>>>>>>>>>>>>> suggest but<br class="">
>>>>>>>>>>>>>>> >> it is not clear that it makes sense<br class="">
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> > 1) For normal PETSc GPU usage there is one GPU per MPI<br class="">
>>>>>>>>>>>>>>> rank, so<br class="">
>>>>>>>>>>>>>>> >> while your one GPU per big domain is solving its systems the<br class="">
>>>>>>>>>>>>>>> other<br class="">
>>>>>>>>>>>>>>> >> GPUs (with the other MPI ranks that share that domain) are doing<br class="">
>>>>>>>>>>>>>>> >> nothing.<br class="">
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> > 2) For each triangular solve you would have to gather the<br class="">
>>>>>>>>>>>>>>> right<br class="">
>>>>>>>>>>>>>>> >> hand side from the multiple ranks to the single GPU to pass it to<br class="">
>>>>>>>>>>>>>>> >> the GPU solver and then scatter the resulting solution back<br class="">
>>>>>>>>>>>>>>> to all<br class="">
>>>>>>>>>>>>>>> >> of its subdomain ranks.<br class="">
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> > What I was suggesting was assign an entire subdomain to a<br class="">
>>>>>>>>>>>>>>> >> single MPI rank, thus it does everything on one GPU and can<br class="">
>>>>>>>>>>>>>>> use the<br class="">
>>>>>>>>>>>>>>> >> GPU solver directly. If all the major computations of a subdomain<br class="">
>>>>>>>>>>>>>>> >> can fit and be done on a single GPU then you would be<br class="">
>>>>>>>>>>>>>>> utilizing all<br class="">
>>>>>>>>>>>>>>> >> the GPUs you are using effectively.<br class="">
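>>>>>>>>>>>>>>> >> > Concretely, that suggestion amounts to launching one MPI rank per GPU and giving each rank a whole subdomain. A sketch with 4 GPUs (./your_app is a stand-in for the application binary; the options are the ones used elsewhere in this thread):<br class="">
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> > mpiexec -n 4 ./your_app -ksp_type fgmres -pc_type bjacobi -pc_bjacobi_blocks 4<br class="">
>>>>>>>>>>>>>>> >> > -mat_type aijcusparse -sub_ksp_type preonly -sub_pc_type lu -sub_pc_factor_mat_solver_type cusparse<br class="">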
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> > Barry<br class="">
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> >><br class="">
>>>>>>>>>>>>>>> >> >> Chang<br class="">
>>>>>>>>>>>>>>> >> >><br class="">
>>>>>>>>>>>>>>> >> >> On 10/13/21 1:53 PM, Barry Smith wrote:<br class="">
>>>>>>>>>>>>>>> >> >>> Chang,<br class="">
>>>>>>>>>>>>>>> >> >>> You are correct there is no MPI + GPU direct<br class="">
>>>>>>>>>>>>>>> solvers that<br class="">
>>>>>>>>>>>>>>> >> currently do the triangular solves with MPI + GPU parallelism<br class="">
>>>>>>>>>>>>>>> that I<br class="">
>>>>>>>>>>>>>>> >> am aware of. You are limited that individual triangular solves be<br class="">
>>>>>>>>>>>>>>> >> done on a single GPU. I can only suggest making each subdomain as<br class="">
>>>>>>>>>>>>>>> >> big as possible to utilize each GPU as much as possible for the<br class="">
>>>>>>>>>>>>>>> >> direct triangular solves.<br class="">
>>>>>>>>>>>>>>> >> >>> Barry<br class="">
>>>>>>>>>>>>>>> >> >>>> On Oct 13, 2021, at 12:16 PM, Chang Liu via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank" class="">petsc-users@mcs.anl.gov</a>> wrote:<br class="">
>>>>>>>>>>>>>>> >> >>>><br class="">
>>>>>>>>>>>>>>> >> >>>> Hi Mark,<br class="">
>>>>>>>>>>>>>>> >> >>>><br class="">
>>>>>>>>>>>>>>> >> >>>> '-mat_type aijcusparse' works with mpiaijcusparse with<br class="">
>>>>>>>>>>>>>>> other<br class="">
>>>>>>>>>>>>>>> >> solvers, but with -pc_factor_mat_solver_type cusparse, it<br class="">
>>>>>>>>>>>>>>> will give<br class="">
>>>>>>>>>>>>>>> >> an error.<br class="">
>>>>>>>>>>>>>>> >> >>>><br class="">
>>>>>>>>>>>>>>> >> >>>> Yes what I want is to have mumps or superlu to do the<br class="">
>>>>>>>>>>>>>>> >> factorization, and then do the rest, including GMRES solver,<br class="">
>>>>>>>>>>>>>>> on gpu.<br class="">
>>>>>>>>>>>>>>> >> Is that possible?<br class="">
>>>>>>>>>>>>>>> >> >>>><br class="">
>>>>>>>>>>>>>>> >> >>>> I have tried to use aijcusparse with superlu_dist, it<br class="">
>>>>>>>>>>>>>>> runs but<br class="">
>>>>>>>>>>>>>>> >> the iterative solver is still running on CPUs. I have<br class="">
>>>>>>>>>>>>>>> contacted the<br class="">
>>>>>>>>>>>>>>> >> superlu group and they confirmed that is the case right now.<br class="">
>>>>>>>>>>>>>>> But if<br class="">
>>>>>>>>>>>>>>> >> I set -pc_factor_mat_solver_type cusparse, it seems that the<br class="">
>>>>>>>>>>>>>>> >> iterative solver is running on GPU.<br class="">
>>>>>>>>>>>>>>> >> >>>><br class="">
>>>>>>>>>>>>>>> >> >>>> Chang<br class="">
>>>>>>>>>>>>>>> >> >>>><br class="">
>>>>>>>>>>>>>>> >> >>>> On 10/13/21 12:03 PM, Mark Adams wrote:<br class="">
>>>>>>>>>>>>>>> >> >>>>> On Wed, Oct 13, 2021 at 11:10 AM Chang Liu <<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>> wrote:<br class="">
>>>>>>>>>>>>>>> >> >>>>> Thank you Junchao for explaining this. I guess in<br class="">
>>>>>>>>>>>>>>> my case<br class="">
>>>>>>>>>>>>>>> >> the code is<br class="">
>>>>>>>>>>>>>>> >> >>>>> just calling a seq solver like superlu to do<br class="">
>>>>>>>>>>>>>>> >> factorization on GPUs.<br class="">
>>>>>>>>>>>>>>> >> >>>>> My idea is that I want to have a traditional MPI<br class="">
>>>>>>>>>>>>>>> code to<br class="">
>>>>>>>>>>>>>>> >> utilize GPUs<br class="">
>>>>>>>>>>>>>>> >> >>>>> with cusparse. Right now cusparse does not support<br class="">
>>>>>>>>>>>>>>> mpiaij<br class="">
>>>>>>>>>>>>>>> >> matrix,<br class="">
>>>>>>>>>>>>>>> >> Sure it does: '-mat_type aijcusparse' will give you an<br class="">
>>>>>>>>>>>>>>> >> mpiaijcusparse matrix with > 1 processes.<br class="">
>>>>>>>>>>>>>>> >> >>>>> (-mat_type mpiaijcusparse might also work with >1 proc).<br class="">
>>>>>>>>>>>>>>> >> >>>>> However, I see in grepping the repo that all the mumps and<br class="">
>>>>>>>>>>>>>>> >> superlu tests use aij or sell matrix type.<br class="">
>>>>>>>>>>>>>>> >> >>>>> MUMPS and SuperLU provide their own solves, I assume<br class="">
>>>>>>>>>>>>>>> .... but<br class="">
>>>>>>>>>>>>>>> >> you might want to do other matrix operations on the GPU. Is<br class="">
>>>>>>>>>>>>>>> that the<br class="">
>>>>>>>>>>>>>>> >> issue?<br class="">
>>>>>>>>>>>>>>> >> >>>>> Did you try -mat_type aijcusparse with MUMPS and/or<br class="">
>>>>>>>>>>>>>>> SuperLU<br class="">
>>>>>>>>>>>>>>> >> have a problem? (no test with it so it probably does not work)<br class="">
>>>>>>>>>>>>>>> >> >>>>> Thanks,<br class="">
>>>>>>>>>>>>>>> >> >>>>> Mark<br class="">
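>>>>>>>>>>>>>>> >> >>>>> (The combination being asked about would be something like this untested sketch: a GPU matrix and GPU vectors for the Krylov work, with MUMPS doing the factorization and triangular solves on the CPU; -vec_type cuda is an extra assumption here.)<br class="">
>>>>>>>>>>>>>>> >> >>>>><br class="">
>>>>>>>>>>>>>>> >> >>>>> -ksp_type fgmres -mat_type aijcusparse -vec_type cuda -pc_type lu -pc_factor_mat_solver_type mumps<br class="">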
>>>>>>>>>>>>>>> >> >>>>> so I<br class="">
>>>>>>>>>>>>>>> >> >>>>> want the code to have a mpiaij matrix when adding<br class="">
>>>>>>>>>>>>>>> all the<br class="">
>>>>>>>>>>>>>>> >> matrix terms,<br class="">
>>>>>>>>>>>>>>> >> >>>>> and then transform the matrix to seqaij when doing the<br class="">
>>>>>>>>>>>>>>> >> factorization<br class="">
>>>>>>>>>>>>>>> >> >>>>> and<br class="">
>>>>>>>>>>>>>>> >> >>>>> solve. This involves sending the data to the master<br class="">
>>>>>>>>>>>>>>> >> process, and I<br class="">
>>>>>>>>>>>>>>> >> >>>>> think<br class="">
>>>>>>>>>>>>>>> >> >>>>> the petsc mumps solver have something similar already.<br class="">
>>>>>>>>>>>>>>> >> >>>>> Chang<br class="">
>>>>>>>>>>>>>>> >> >>>>> On 10/13/21 10:18 AM, Junchao Zhang wrote:<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > On Tue, Oct 12, 2021 at 1:07 PM Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank" class="">mfadams@lbl.gov</a>> wrote:<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > On Tue, Oct 12, 2021 at 1:45 PM Chang Liu <<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>> wrote:<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > Hi Mark,<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > The option I use is like<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > -pc_type bjacobi -pc_bjacobi_blocks 16 -ksp_type fgmres -mat_type aijcusparse<br class="">
>>>>>>>>>>>>>>> >> >>>>> > *-sub_pc_factor_mat_solver_type cusparse *-sub_ksp_type preonly *-sub_pc_type lu*<br class="">
>>>>>>>>>>>>>>> >> >>>>> > -ksp_max_it 2000 -ksp_rtol 1.e-300 -ksp_atol 1.e-300<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > Note, If you use -log_view the last column (rows are the method like<br class="">
>>>>>>>>>>>>>>> >> >>>>> > MatFactorNumeric) has the percent of work in the GPU.<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
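>>>>>>>>>>>>>>> >> >>>>> > For instance, rerunning the options above with -log_view appended and checking the GPU column of the MatLUFactorNum and MatSolve rows (a sketch, not a tested run; event names may vary slightly between PETSc versions) shows where the factorization and the triangular solves actually ran:<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > -pc_type bjacobi -pc_bjacobi_blocks 16 -ksp_type fgmres -mat_type aijcusparse -sub_pc_factor_mat_solver_type cusparse -sub_ksp_type preonly -sub_pc_type lu -log_view<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">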
>>>>>>>>>>>>>>> >> >>>>> > Junchao: *This* implies that we have a<br class="">
>>>>>>>>>>>>>>> cuSparse LU<br class="">
>>>>>>>>>>>>>>> >> >>>>> factorization. Is<br class="">
>>>>>>>>>>>>>>> >> >>>>> > that correct? (I don't think we do)<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > No, we don't have cuSparse LU factorization. If you check<br class="">
>>>>>>>>>>>>>>> >> >>>>> > MatLUFactorSymbolic_SeqAIJCUSPARSE(), you will<br class="">
>>>>>>>>>>>>>>> find it<br class="">
>>>>>>>>>>>>>>> >> calls<br class="">
>>>>>>>>>>>>>>> >> >>>>> > MatLUFactorSymbolic_SeqAIJ() instead.<br class="">
>>>>>>>>>>>>>>> >> >>>>> > So I don't understand Chang's idea. Do you want to<br class="">
>>>>>>>>>>>>>>> >> make bigger<br class="">
>>>>>>>>>>>>>>> >> >>>>> blocks?<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > I think this one do both factorization and<br class="">
>>>>>>>>>>>>>>> >> solve on gpu.<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > You can check the<br class="">
>>>>>>>>>>>>>>> runex72_aijcusparse.sh file<br class="">
>>>>>>>>>>>>>>> >> in petsc<br class="">
>>>>>>>>>>>>>>> >> >>>>> install<br class="">
>>>>>>>>>>>>>>> >> >>>>> > directory, and try it your self (this<br class="">
>>>>>>>>>>>>>>> is only lu<br class="">
>>>>>>>>>>>>>>> >> >>>>> factorization<br class="">
>>>>>>>>>>>>>>> >> >>>>> > without<br class="">
>>>>>>>>>>>>>>> >> >>>>> > iterative solve).<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > Chang<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > On 10/12/21 1:17 PM, Mark Adams wrote:<br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > On Tue, Oct 12, 2021 at 11:19 AM Chang Liu <<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>> wrote:<br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > Hi Junchao,<br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > No I only needs it to be transferred<br class="">
>>>>>>>>>>>>>>> >> within a<br class="">
>>>>>>>>>>>>>>> >> >>>>> node. I use<br class="">
>>>>>>>>>>>>>>> >> >>>>> > block-Jacobi<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > method and GMRES to solve the sparse<br class="">
>>>>>>>>>>>>>>> >> matrix, so each<br class="">
>>>>>>>>>>>>>>> >> >>>>> > direct solver will<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > take care of a sub-block of the<br class="">
>>>>>>>>>>>>>>> whole<br class="">
>>>>>>>>>>>>>>> >> matrix. In this<br class="">
>>>>>>>>>>>>>>> >> >>>>> > way, I can use<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > one<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > GPU to solve one sub-block, which is<br class="">
>>>>>>>>>>>>>>> >> stored within<br class="">
>>>>>>>>>>>>>>> >> >>>>> one node.<br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > It was stated in the<br class="">
>>>>>>>>>>>>>>> documentation that<br class="">
>>>>>>>>>>>>>>> >> cusparse<br class="">
>>>>>>>>>>>>>>> >> >>>>> solver<br class="">
>>>>>>>>>>>>>>> >> >>>>> > is slow.<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > However, in my test using<br class="">
>>>>>>>>>>>>>>> ex72.c, the<br class="">
>>>>>>>>>>>>>>> >> cusparse<br class="">
>>>>>>>>>>>>>>> >> >>>>> solver is<br class="">
>>>>>>>>>>>>>>> >> >>>>> > faster than<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > mumps or superlu_dist on CPUs.<br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > Are we talking about the<br class="">
>>>>>>>>>>>>>>> factorization, the<br class="">
>>>>>>>>>>>>>>> >> solve, or<br class="">
>>>>>>>>>>>>>>> >> >>>>> both?<br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > We do not have an interface to<br class="">
>>>>>>>>>>>>>>> cuSparse's LU<br class="">
>>>>>>>>>>>>>>> >> >>>>> factorization (I<br class="">
>>>>>>>>>>>>>>> >> >>>>> > just<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > learned that it exists a few weeks ago).<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > Perhaps your fast "cusparse solver" is<br class="">
>>>>>>>>>>>>>>> >> '-pc_type lu<br class="">
>>>>>>>>>>>>>>> >> >>>>> -mat_type<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > aijcusparse' ? This would be the CPU<br class="">
>>>>>>>>>>>>>>> >> factorization,<br class="">
>>>>>>>>>>>>>>> >> >>>>> which is the<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > dominant cost.<br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > Chang<br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > On 10/12/21 10:24 AM, Junchao<br class="">
>>>>>>>>>>>>>>> Zhang wrote:<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > Hi, Chang,<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > For the mumps solver, we<br class="">
>>>>>>>>>>>>>>> usually<br class="">
>>>>>>>>>>>>>>> >> transfers<br class="">
>>>>>>>>>>>>>>> >> >>>>> matrix<br class="">
>>>>>>>>>>>>>>> >> >>>>> > and vector<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > data<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > within a compute node. For<br class="">
>>>>>>>>>>>>>>> the idea you<br class="">
>>>>>>>>>>>>>>> >> >>>>> propose, it<br class="">
>>>>>>>>>>>>>>> >> >>>>> > looks like<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > we need<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > to gather data within<br class="">
>>>>>>>>>>>>>>> >> MPI_COMM_WORLD, right?<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > Mark, I remember you said<br class="">
>>>>>>>>>>>>>>> >> cusparse solve is<br class="">
>>>>>>>>>>>>>>> >> >>>>> slow<br class="">
>>>>>>>>>>>>>>> >> >>>>> > and you would<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > rather do it on CPU. Is it right?<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > --Junchao Zhang<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > On Mon, Oct 11, 2021 at 10:25 PM Chang Liu via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank" class="">petsc-users@mcs.anl.gov</a>> wrote:<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > Hi,<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > Currently, it is possible<br class="">
>>>>>>>>>>>>>>> to use<br class="">
>>>>>>>>>>>>>>> >> mumps<br class="">
>>>>>>>>>>>>>>> >> >>>>> solver in<br class="">
>>>>>>>>>>>>>>> >> >>>>> > PETSC with<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > -mat_mumps_use_omp_threads<br class="">
>>>>>>>>>>>>>>> >> option, so that<br class="">
>>>>>>>>>>>>>>> >> >>>>> > multiple MPI<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > processes will<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > transfer the matrix and<br class="">
>>>>>>>>>>>>>>> rhs data<br class="">
>>>>>>>>>>>>>>> >> to the master<br class="">
>>>>>>>>>>>>>>> >> >>>>> > rank, and then<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > master<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > rank will call mumps with<br class="">
>>>>>>>>>>>>>>> OpenMP<br class="">
>>>>>>>>>>>>>>> >> to solve<br class="">
>>>>>>>>>>>>>>> >> >>>>> the matrix.<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > I wonder if someone can<br class="">
>>>>>>>>>>>>>>> develop<br class="">
>>>>>>>>>>>>>>> >> similar<br class="">
>>>>>>>>>>>>>>> >> >>>>> option for<br class="">
>>>>>>>>>>>>>>> >> >>>>> > cusparse<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > solver.<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > Right now, this solver<br class="">
>>>>>>>>>>>>>>> does not<br class="">
>>>>>>>>>>>>>>> >> work with<br class="">
>>>>>>>>>>>>>>> >> >>>>> > mpiaijcusparse. I<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > think a<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > possible workaround is to<br class="">
>>>>>>>>>>>>>>> >> transfer all the<br class="">
>>>>>>>>>>>>>>> >> >>>>> matrix<br class="">
>>>>>>>>>>>>>>> >> >>>>> > data to one MPI<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > process, and then upload the<br class="">
>>>>>>>>>>>>>>> >> data to GPU to<br class="">
>>>>>>>>>>>>>>> >> >>>>> solve.<br class="">
>>>>>>>>>>>>>>> >> >>>>> > In this<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > way, one can<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > use cusparse solver for a MPI<br class="">
>>>>>>>>>>>>>>> >> program.<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > Chang<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > --<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > Chang Liu<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > Staff Research Physicist<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > +1 609 243 3438<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > Princeton Plasma Physics Laboratory<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > > 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > --<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > Chang Liu<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > Staff Research Physicist<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > +1 609 243 3438<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
>>>>>>>>>>>>>>> >> >>>>> > > Princeton Plasma Physics Laboratory<br class="">
>>>>>>>>>>>>>>> >> >>>>> > > 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>>>>>>>>>>>>>>> >> >>>>> > ><br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> > --<br class="">
>>>>>>>>>>>>>>> >> >>>>> > Chang Liu<br class="">
>>>>>>>>>>>>>>> >> >>>>> > Staff Research Physicist<br class="">
>>>>>>>>>>>>>>> >> >>>>> > +1 609 243 3438<br class="">
>>>>>>>>>>>>>>> >> >>>>> > <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
>>>>>>>>>>>>>>> >> >>>>> > Princeton Plasma Physics Laboratory<br class="">
>>>>>>>>>>>>>>> >> >>>>> > 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>>>>>>>>>>>>>>> >> >>>>> ><br class="">
>>>>>>>>>>>>>>> >> >>>>> --<br class="">
>>>>>>>>>>>>>>> >> >>>>> Chang Liu<br class="">
>>>>>>>>>>>>>>> >> >>>>> Staff Research Physicist<br class="">
>>>>>>>>>>>>>>> >> >>>>> +1 609 243 3438<br class="">
>>>>>>>>>>>>>>> >> >>>>> <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
>>>>>>>>>>>>>>> >> >>>>> Princeton Plasma Physics Laboratory<br class="">
>>>>>>>>>>>>>>> >> >>>>> 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>>>>>>>>>>>>>>> >> >>>><br class="">
>>>>>>>>>>>>>>> >> >>>> --<br class="">
>>>>>>>>>>>>>>> >> >>>> Chang Liu<br class="">
>>>>>>>>>>>>>>> >> >>>> Staff Research Physicist<br class="">
>>>>>>>>>>>>>>> >> >>>> +1 609 243 3438<br class="">
>>>>>>>>>>>>>>> >> >>>> <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
>>>>>>>>>>>>>>> >> >>>> Princeton Plasma Physics Laboratory<br class="">
>>>>>>>>>>>>>>> >> >>>> 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>>>>>>>>>>>>>>> >> >><br class="">
>>>>>>>>>>>>>>> >> >> --<br class="">
>>>>>>>>>>>>>>> >> >> Chang Liu<br class="">
>>>>>>>>>>>>>>> >> >> Staff Research Physicist<br class="">
>>>>>>>>>>>>>>> >> >> +1 609 243 3438<br class="">
>>>>>>>>>>>>>>> >> >> <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a> <mailto:<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>><br class="">
>>>>>>>>>>>>>>> <mailto:<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a> <mailto:<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>>><br class="">
>>>>>>>>>>>>>>> >> >> Princeton Plasma Physics Laboratory<br class="">
>>>>>>>>>>>>>>> >> >> 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>>>>>>>>>>>>>>> >> ><br class="">
>>>>>>>>>>>>>>> >> -- Chang Liu<br class="">
>>>>>>>>>>>>>>> >> Staff Research Physicist<br class="">
>>>>>>>>>>>>>>> >> +1 609 243 3438<br class="">
>>>>>>>>>>>>>>> >> <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a> <mailto:<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>> <mailto:<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
>>>>>>>>>>>>>>> <mailto:<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>>><br class="">
>>>>>>>>>>>>>>> >> Princeton Plasma Physics Laboratory<br class="">
>>>>>>>>>>>>>>> >> 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>>>>>>>>>>>>>>> ><br class="">
>>>>>>>>>>>>>>> > --<br class="">
>>>>>>>>>>>>>>> > Chang Liu<br class="">
>>>>>>>>>>>>>>> > Staff Research Physicist<br class="">
>>>>>>>>>>>>>>> > +1 609 243 3438<br class="">
>>>>>>>>>>>>>>> > <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a> <mailto:<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a>><br class="">
>>>>>>>>>>>>>>> > Princeton Plasma Physics Laboratory<br class="">
>>>>>>>>>>>>>>> > 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>>>>>>>>>>>>>><br class="">
>>>>>>>>>>>>>> -- <br class="">
>>>>>>>>>>>>>> Chang Liu<br class="">
>>>>>>>>>>>>>> Staff Research Physicist<br class="">
>>>>>>>>>>>>>> +1 609 243 3438<br class="">
>>>>>>>>>>>>>> <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
>>>>>>>>>>>>>> Princeton Plasma Physics Laboratory<br class="">
>>>>>>>>>>>>>> 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>>>>>>>>>>>><br class="">
>>>>>>>>>>>> -- <br class="">
>>>>>>>>>>>> Chang Liu<br class="">
>>>>>>>>>>>> Staff Research Physicist<br class="">
>>>>>>>>>>>> +1 609 243 3438<br class="">
>>>>>>>>>>>> <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
>>>>>>>>>>>> Princeton Plasma Physics Laboratory<br class="">
>>>>>>>>>>>> 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>>>>>>>>>><br class="">
>>>>>>>>>> -- <br class="">
>>>>>>>>>> Chang Liu<br class="">
>>>>>>>>>> Staff Research Physicist<br class="">
>>>>>>>>>> +1 609 243 3438<br class="">
>>>>>>>>>> <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
>>>>>>>>>> Princeton Plasma Physics Laboratory<br class="">
>>>>>>>>>> 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>>>>>>>><br class="">
>>>>>>>> -- <br class="">
>>>>>>>> Chang Liu<br class="">
>>>>>>>> Staff Research Physicist<br class="">
>>>>>>>> +1 609 243 3438<br class="">
>>>>>>>> <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
>>>>>>>> Princeton Plasma Physics Laboratory<br class="">
>>>>>>>> 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>>>>>>><br class="">
>>>>>><br class="">
>>>><br class="">
>>>> -- <br class="">
>>>> Chang Liu<br class="">
>>>> Staff Research Physicist<br class="">
>>>> +1 609 243 3438<br class="">
>>>> <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
>>>> Princeton Plasma Physics Laboratory<br class="">
>>>> 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
>><br class="">
>> -- <br class="">
>> Chang Liu<br class="">
>> Staff Research Physicist<br class="">
>> +1 609 243 3438<br class="">
>> <a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
>> Princeton Plasma Physics Laboratory<br class="">
>> 100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
> <br class="">
<br class="">
-- <br class="">
Chang Liu<br class="">
Staff Research Physicist<br class="">
+1 609 243 3438<br class="">
<a href="mailto:cliu@pppl.gov" target="_blank" class="">cliu@pppl.gov</a><br class="">
Princeton Plasma Physics Laboratory<br class="">
100 Stellarator Rd, Princeton NJ 08540, USA<br class="">
</blockquote></div>
</div></blockquote></div><br class=""></div></body></html>