[petsc-users] mumps freezes for bigger problems
Dominik Szczerba
dominik at itis.ethz.ch
Fri Dec 23 16:36:31 CST 2011
Hi Hong,
I tried all of your suggestions; unfortunately, MUMPS is still not
moving an inch. Running in a debugger and interrupting reveals:
0x000000000139d233 in dgemm (transa=..., transb=..., m=3621, n=3621, k=251,
alpha=-1, a=..., lda=3872, b=..., ldb=3872, beta=1, c=..., ldc=3872,
_transa=1, _transb=1) at dgemm.f:242
242 C(I,J) = C(I,J) + TEMP*A(I,L)
So it seems to be doing something, but a small problem takes only a
few minutes, so I would not expect a problem 2-3 times that size to
take longer than a day...
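(To check whether the factorization is actually advancing rather than
hung, one can resume and re-interrupt; a minimal gdb sketch, assuming
the suspect rank is still attached:)

  (gdb) bt          # note the current frame and loop arguments
  (gdb) continue    # let it run for a while, then interrupt again (Ctrl-C)
  (gdb) bt          # changing frames/arguments mean progress, not a hang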
Regards,
Dominik
On Wed, Dec 21, 2011 at 4:15 PM, Hong Zhang <hzhang at mcs.anl.gov> wrote:
> Direct solvers often require a large amount of memory for storing matrix factors.
> As Jed suggests, you may try superlu_dist.
>
> With mumps, I notice you use parallel analysis, which is relatively new in mumps.
> What happens if you use the default sequential analysis with
> different matrix orderings?
> I usually use the matrix ordering '-mat_mumps_icntl_7 2'.
>
> Also, you can increase the fill ratio:
>   -mat_mumps_icntl_14 <20>: ICNTL(14): percentage of estimated workspace
>   increase (None)
> i.e., the default ratio is 20; you may try 50? (I notice that you already use 30.)
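> For example, combining the above (a sketch only; ICNTL(28)=1 forces
> sequential analysis, and these values are just starting points to try):
>   -mat_mumps_icntl_28 1 -mat_mumps_icntl_7 2 -mat_mumps_icntl_14 50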
>
> It seems you use 16 CPUs for problems with "a mere couple thousands
> elements", and mumps "silently freezes". I have not had this type
> of experience with mumps. I can usually solve sparse matrices of size
> 10k with 1 CPU using mumps.
> When mumps runs out of memory or hits other problems, it terminates
> execution and dumps out an error message;
> it does not freeze.
> Something is wrong here. Use a debugger and figure out where it freezes.
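> (One convenient way, a sketch assuming a PETSc application: PETSc can
> launch a debugger on every rank via the -start_in_debugger option; the
> executable name and process count here are placeholders.)
>   mpiexec -n 16 ./yourapp -start_in_debugger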
>
> Hong
>
> On Wed, Dec 21, 2011 at 7:01 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>> -pc_type lu -pc_factor_mat_solver_package superlu_dist
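>>
>> A fuller sketch of the invocation (the executable name is a placeholder,
>> and -ksp_type preonly just makes the direct solve the entire solver):
>>   ./yourapp -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist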
>>
>> On Dec 21, 2011 6:19 AM, "Dominik Szczerba" <dominik at itis.ethz.ch> wrote:
>>>
>>> I am successfully solving my indefinite systems with MUMPS, but only
>>> for very small problems. To give a feeling: a mere couple of thousand
>>> elements. If I merely double the problem size, it silently freezes,
>>> even with maximum verbosity via the control parameters. Has anyone
>>> succeeded here with big problems? Any recommendations for a drop-in
>>> replacement for MUMPS?
>>>
>>> Thanks for any hints,
>>> Dominik
>>>
>>>
>>>
>>> Options used:
>>> -mat_mumps_icntl_4 3 -mat_mumps_icntl_28 2 -mat_mumps_icntl_29
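>>>
>>> (For reference: ICNTL(4)=3 raises the verbosity, ICNTL(28)=2 selects
>>> parallel analysis, and ICNTL(29) chooses the parallel ordering tool.)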
>>>
>>> Output:
>>>
>>> ****** FACTORIZATION STEP ********
>>>
>>>
>>> GLOBAL STATISTICS PRIOR NUMERICAL FACTORIZATION ...
>>> NUMBER OF WORKING PROCESSES = 16
>>> OUT-OF-CORE OPTION (ICNTL(22)) = 0
>>> REAL SPACE FOR FACTORS = 1438970073
>>> INTEGER SPACE FOR FACTORS = 11376442
>>> MAXIMUM FRONTAL SIZE (ESTIMATED) = 16868
>>> NUMBER OF NODES IN THE TREE = 43676
>>> Convergence error after scaling for ONE-NORM (option 7/8) = 0.21D+01
>>> Maximum effective relaxed size of S = 231932340
>>> Average effective relaxed size of S = 182366303
>>>
>>> REDISTRIB: TOTAL DATA LOCAL/SENT = 1509215 22859750
>>> GLOBAL TIME FOR MATRIX DISTRIBUTION = 0.8270
>>> ** Memory relaxation parameter ( ICNTL(14) ) : 35
>>> ** Rank of processor needing largest memory in facto : 0
>>> ** Space in MBYTES used by this processor for facto : 2017
>>> ** Avg. Space in MBYTES per working proc during facto : 1618
>