Faraz:
Direct sparse solvers are generally not scalable -- they are used for ill-conditioned problems which cannot be solved by iterative methods.

Can you try sequential symbolic factorization instead of parallel, i.e., use the MUMPS default '-mat_mumps_icntl_28 1'?
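Since ICNTL(28) is an ordinary runtime option, it can simply be appended to the run line; a minimal sketch (the executable name and process count are placeholders, not taken from your setup):

    mpiexec -n 24 ./your_app -ksp_type preonly -pc_type cholesky \
        -pc_factor_mat_solver_package mumps \
        -mat_mumps_icntl_28 1 -log_summary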
Hong

--------------------------------------------
Faraz Hussain <faraz_hussain@yahoo.com> wrote:

Thanks for the quick response. Here are the log_summary for 24, 48 and 72 cpus:
24 cpus
======
MatSolve 1 1.0 1.8100e+00 1.0 0.00e+00 0.0 7.0e+02 7.4e+04 3.0e+00 0 0 68 3 9 0 0 68 3 9 0
MatCholFctrSym 1 1.0 4.6683e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 6 0 0 0 15 6 0 0 0 15 0
MatCholFctrNum 1 1.0 5.8129e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 78 0 0 0 0 78 0 0 0 0 0

48 cpus
======
MatSolve 1 1.0 1.4915e+00 1.0 0.00e+00 0.0 1.6e+03 3.3e+04 3.0e+00 0 0 68 3 9 0 0 68 3 9 0
MatCholFctrSym 1 1.0 5.3486e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 9 0 0 0 15 9 0 0 0 15 0
MatCholFctrNum 1 1.0 4.0803e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 71 0 0 0 0 71 0 0 0 0 0

72 cpus
======
MatSolve 1 1.0 7.7200e+00 1.1 0.00e+00 0.0 2.6e+03 2.0e+04 3.0e+00 1 0 68 2 9 1 0 68 2 9 0
MatCholFctrSym 1 1.0 1.8439e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 29 0 0 0 15 29 0 0 0 15 0
MatCholFctrNum 1 1.0 3.3969e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 53 0 0 0 0 53 0 0 0 0 0

Does this look normal, or is something off here? Regarding the reordering algorithm of Pardiso: at this time I do not know much about it. I will do some research and see what I can learn. However, I believe MUMPS only has two options:

-mat_mumps_icntl_29 - ICNTL(29): parallel ordering 1 = ptscotch, 2 = parmetis

I have tried both and do not see any speed difference. Or are you referring to some other kind of reordering?
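
From a first look at the MUMPS manual, it appears ICNTL(29) is only consulted when the analysis itself runs in parallel (ICNTL(28)=2); with sequential analysis the ordering is instead selected by ICNTL(7). So, if I understand the manual correctly, the sequential orderings could be compared with option lines like these (the ICNTL(7) values are as listed in the MUMPS documentation):

    -mat_mumps_icntl_28 1 -mat_mumps_icntl_7 5    (METIS)
    -mat_mumps_icntl_28 1 -mat_mumps_icntl_7 4    (PORD)
    -mat_mumps_icntl_28 1 -mat_mumps_icntl_7 3    (SCOTCH)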

--------------------------------------------
On Mon, 6/27/16, Barry Smith <bsmith@mcs.anl.gov> wrote:

Subject: Re: [petsc-users] Performance of mumps vs. Intel Pardiso
To: "Faraz Hussain" <faraz_hussain@yahoo.com>
Cc: "petsc-users@mcs.anl.gov" <petsc-users@mcs.anl.gov>
Date: Monday, June 27, 2016, 5:50 PM

These are the only lines that matter

MatSolve 1 1.0 7.7200e+00 1.1 0.00e+00 0.0 2.6e+03 2.0e+04 3.0e+00 1 0 68 2 9 1 0 68 2 9 0
MatCholFctrSym 1 1.0 1.8439e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 29 0 0 0 15 29 0 0 0 15 0
MatCholFctrNum 1 1.0 3.3969e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 53 0 0 0 0 53 0 0 0 0 0

Look at the log summary for 24 and 48 processes. How are the symbolic and numeric parts scaling with the number of processes?

Things that could affect the performance a lot: Is the symbolic factorization done in parallel? What reordering is used? If Pardiso is using a reordering that is better for this matrix and has (much) lower fill, that could explain why it is so much faster.
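
One way to check (a sketch using only the standard PETSc/MUMPS options): run with

    -mat_mumps_icntl_4 2 -ksp_view

ICNTL(4)=2 raises the MUMPS print level so it reports its analysis statistics, and -ksp_view makes PETSc echo the MUMPS ICNTL settings and INFOG statistics for the factored matrix, including which ordering was actually used and the predicted factor size; those numbers can then be compared against what Pardiso reports.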

Perhaps correspond with the MUMPS developers on what MUMPS options might make it faster.

Barry

> On Jun 27, 2016, at 5:39 PM, Faraz Hussain <faraz_hussain@yahoo.com> wrote:
>
> I am struggling to understand why MUMPS is so much slower than the Intel Pardiso solver for my simple test matrix (a 3 million x 3 million sparse symmetric matrix with ~1000 non-zero entries per row).
>
> My compute nodes have 24 cpus each. Intel Pardiso solves it in 120 seconds using all 24 cpus of one node. With MUMPS I get:
>
> 24 cpus - 765 seconds
> 48 cpus - 401 seconds
> 72 cpus - 344 seconds
> beyond 72 cpus no speed improvement.
>
> I am attaching the -log_summary to see if there is something wrong in how I am solving the problem. I am really hoping MUMPS will be faster when using more cpus. Otherwise I will have to abort my exploration of MUMPS! <log_summary.o265103>
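
For anyone who wants to reproduce this kind of comparison, a minimal self-contained sketch of a direct MUMPS Cholesky solve in PETSc follows (3.7-era names; the tiny tridiagonal SPD matrix is only a stand-in for the real 3M x 3M system, and the build is assumed to have been configured with --download-mumps --download-scalapack):

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp;
  PC       pc;
  PetscInt i, n = 100, Istart, Iend;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Assemble a small SPD tridiagonal matrix in symmetric (SBAIJ) storage;
     only the upper triangle is inserted, as SBAIJ requires. */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetType(A, MATSBAIJ);
  MatSetUp(A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);
    if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatCreateVecs(A, &x, &b);
  VecSet(b, 1.0);

  /* Pure direct solve: no Krylov iterations, Cholesky through MUMPS. */
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPPREONLY);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCCHOLESKY);
  PCFactorSetMatSolverPackage(pc, MATSOLVERMUMPS); /* renamed ...SolverType in later releases */
  KSPSetFromOptions(ksp); /* picks up -mat_mumps_icntl_* and -log_summary from the command line */
  KSPSolve(ksp, b, x);

  VecDestroy(&x); VecDestroy(&b); MatDestroy(&A); KSPDestroy(&ksp);
  PetscFinalize();
  return 0;
}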