<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">Faraz:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><span class="">> Mumps uses parmetis or scotch for parallel symbolic factorization. For sequential symbolic factorization, it has several matrix orderings, which you can experiment with via the option '-mat_mumps_icntl_7 <>'.<br>
> I doubt any of these orderings would match the performance of Pardiso.<br><br>
</span>Thanks, I have been experimenting with the different ordering options in Mumps. So far I have not seen any speed difference among them. Do you feel the Pardiso ordering is so superior that it results in a 2X speed increase? I find this hard to believe, since my problem is fairly classic, and I can't imagine why a 10+ year old algorithm would be so much better than what's available today.<br>
<span class=""><br></span></blockquote><div>I do not think Pardiso has much algorithmic superiority over Mumps.</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><span class="">
<br>
> Again, how large is your matrix? How do you run Pardiso in parallel? Can you use Pardiso on matrices as large as Mumps can handle?<br><br>
</span>My matrix is 3 million x 3 million, with at most 1000 nonzeros per row. Pardiso supports multithreading, so I just do 'export OMP_NUM_THREADS=24' to use all available CPUs. When running PETSc/Mumps I have to do 'export OMP_NUM_THREADS=1', otherwise I get very strange CPU utilization. Perhaps that has something to do with the speed difference?<br></blockquote><div><br></div><div>Taking better advantage of the computer architecture (e.g., multithreading) is likely what makes Pardiso faster.</div><div><br></div><div>Hong</div></div><br></div></div>
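For reference, a minimal sketch of the two setups being compared, run from the shell. The executable name ./app and the process count are placeholders, and this assumes Pardiso is driven through PETSc's mkl_pardiso interface; if Pardiso is a standalone code, only the OMP_NUM_THREADS line applies to it. The options shown (-pc_factor_mat_solver_package, -mat_mumps_icntl_7) are the standard PETSc factored-solver options:

```shell
# Pardiso run: one process, OpenMP threads spread across the 24 cores.
export OMP_NUM_THREADS=24
./app -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mkl_pardiso

# Mumps run: MPI parallelism instead, so pin each rank to a single thread
# to avoid the oversubscription (24 ranks x 24 threads) behind the odd
# CPU utilization mentioned above.
export OMP_NUM_THREADS=1
mpiexec -n 24 ./app -ksp_type preonly -pc_type lu \
    -pc_factor_mat_solver_package mumps \
    -mat_mumps_icntl_7 5   # ICNTL(7)=5 selects the METIS ordering
```

Other ICNTL(7) values select the other sequential orderings (e.g. 3 for SCOTCH, 7 for automatic choice); with -mat_mumps_icntl_28 2, Mumps instead performs the analysis in parallel using ParMETIS or PT-SCOTCH.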