[petsc-users] Using direct solvers in parallel

Hong Zhang hzhang at mcs.anl.gov
Tue May 15 09:40:45 CDT 2012


Thomas:

>
>  I attached my data to this mail. For the largest matrix, umfpack failed
> after allocating 4 GB of memory. I have not tried to figure out what's the
> problem there. As you can see, for these matrices the distributed solvers
> are


umfpack is a sequential package. 4 GB+ likely exceeds the memory available to
a single core on your machine.
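
For reference, here is a minimal sketch (not from the original thread) of how
one of these packages can be selected through PETSc's KSP/PC interface. It
assumes the PETSc 3.x API of that period, where the call is
PCFactorSetMatSolverPackage and the run-time option is
-pc_factor_mat_solver_package; later releases renamed these to
PCFactorSetMatSolverType and -pc_factor_mat_solver_type. Error checking is
omitted for brevity.

    #include <petscksp.h>

    /* Sketch: factor and solve A x = b with a chosen direct solver package.
       umfpack handles only sequential AIJ matrices; superlu_dist and mumps
       also handle distributed (MPIAIJ) matrices. */
    PetscErrorCode solve_direct(Mat A, Vec b, Vec x)
    {
      KSP ksp;
      PC  pc;

      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);
      KSPSetType(ksp, KSPPREONLY);   /* no Krylov iterations, just the direct solve */
      KSPGetPC(ksp, &pc);
      PCSetType(pc, PCLU);
      PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST); /* or MATSOLVERMUMPS,
                                                                  MATSOLVERUMFPACK */
      KSPSetFromOptions(ksp);        /* allow overriding the package at run time */
      KSPSolve(ksp, b, x);
      KSPDestroy(&ksp);
      return 0;
    }

With KSPSetFromOptions in place the package can also be switched without
recompiling, e.g. -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package
mumps.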

> slower by a factor of 2 or 3 compared to umfpack. For all solvers, I have
> used the standard parameters, so I have not played around with the
> permutation strategies and such things. This may also be the reason why
> superlu is much slower than superlu_dist even with just one core, as they
> use different column and row permutation strategies.


The data for superlu_dist and mumps look reasonable to me. The poor parallel
performance is likely due to the multicore machine being used, where the cores
compete for memory bandwidth. Try these runs on a machine that is better suited
to distributed computing; see
http://www.mcs.anl.gov/petsc/documentation/faq.html#computers.

Hong

>
>  However, umfpack will not work on a distributed memory machine.
>> My personal preference is to use superlu_dist in parallel. In my
>> experience using it as a coarse grid solver for multigrid, I find it
>> much more reliable than mumps. However, when mumps works, it is
>> typically slightly faster than superlu_dist. Again, not by a large
>> amount - never more than a factor of 2 faster.
>>
> In my codes I also make use of the distributed direct solvers for the
> coarse grid problems. I just wanted to run some tests to see how far these
> solvers are from their sequential counterparts.
>
> Thomas
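
For completeness, here is a similar minimal sketch (again not from the original
exchange) of attaching a parallel direct solver to the coarse level of a PCMG
preconditioner; it assumes an existing PC of type PCMG and uses the same
3.x-era PCFactorSetMatSolverPackage call as above.

    /* Sketch: use a parallel LU factorization as the coarse-level solver of an
       already configured PCMG preconditioner (error checking omitted). */
    #include <petscksp.h>

    PetscErrorCode set_parallel_coarse_solver(PC mg_pc)  /* mg_pc of type PCMG */
    {
      KSP coarse_ksp;
      PC  coarse_pc;

      PCMGGetCoarseSolve(mg_pc, &coarse_ksp);  /* KSP used on the coarsest level */
      KSPSetType(coarse_ksp, KSPPREONLY);
      KSPGetPC(coarse_ksp, &coarse_pc);
      PCSetType(coarse_pc, PCLU);
      PCFactorSetMatSolverPackage(coarse_pc, MATSOLVERSUPERLU_DIST); /* or MATSOLVERMUMPS */
      return 0;
    }

The equivalent run-time options would be -mg_coarse_ksp_type preonly
-mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_package superlu_dist.
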
>
>
>> The failure rate using mumps is definitely higher (in my experience)
>> when running on large numbers of cores compared to superlu_dist. I've
>> never got to the bottom as to why it fails.
>>
>> Cheers,
>>   Dave
>>
>>
>> On 15 May 2012 09:25, Thomas Witkowski <thomas.witkowski at tu-dresden.de> wrote:
>>
>>> I made some comparisons of using umfpack, superlu, superlu_dist and mumps
>>> to solve systems with sparse matrices arising from the finite element
>>> method. The sizes of the matrices range from around 50000 to more than
>>> 3 million unknowns. I used 1, 2, 4, 8 and 16 nodes for the benchmark.
>>> Now I am surprised that in all cases the sequential umfpack was the
>>> fastest one, so even with 16 cores superlu_dist and mumps are slower.
>>> Can any of you confirm this observation? Are there any other parallel
>>> direct solvers around that are more efficient?
>>>
>>> Thomas
>>>
>>
>