Question on using MUMPS in PETSc

Randall Mackie rlmackie862 at gmail.com
Fri Aug 1 10:21:34 CDT 2008


Hi Hong,

Thanks for the email - this appears to be happening in the factorization stage,
and one process continues to just eat up memory. I tried running your
ex2.c with m=n=5000, and I saw the same behavior. I was wondering
if there was some setting I was supposed to toggle, but it sounds like this
behavior is not correct.

I have another program from years ago that called the MUMPS routines
directly. I might try that and see what happens.
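
For reference, the direct calling sequence is short. Here is a minimal
sketch, modeled on the c_example.c distributed with MUMPS (the 2x2 toy
matrix is a placeholder, and some field names, e.g. nz, differ between
MUMPS versions):

  #include <stdio.h>
  #include <mpi.h>
  #include "dmumps_c.h"

  /* Fortran-style 1-based access into the control array */
  #define ICNTL(I) icntl[(I)-1]

  int main(int argc, char **argv)
  {
    DMUMPS_STRUC_C id;
    /* Toy 2x2 system [2 0; 0 3] x = [4; 9], solution x = (2, 3) */
    int n = 2, nz = 2;
    int irn[] = {1, 2}, jcn[] = {1, 2};   /* 1-based coordinates */
    double a[] = {2.0, 3.0}, rhs[] = {4.0, 9.0};
    int myid;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    id.job = -1;                /* JOB = -1: initialize an instance */
    id.par = 1;                 /* host process participates in factorization */
    id.sym = 0;                 /* unsymmetric matrix */
    id.comm_fortran = -987654;  /* magic value: use MPI_COMM_WORLD */
    dmumps_c(&id);

    /* Centralized input: matrix and rhs live on the host only */
    if (myid == 0) {
      id.n = n;     id.nz = nz;
      id.irn = irn; id.jcn = jcn;
      id.a = a;     id.rhs = rhs;
    }
    id.ICNTL(4) = 3;            /* verbose output, as in my options file */

    id.job = 6;                 /* JOB = 6: analyze + factorize + solve */
    dmumps_c(&id);

    if (myid == 0)
      printf("solution = (%g, %g)\n", rhs[0], rhs[1]);

    id.job = -2;                /* JOB = -2: release internal storage */
    dmumps_c(&id);
    MPI_Finalize();
    return 0;
  }

Running that with the verbosity turned up should make it clear which
phase is allocating on which process.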

Randy


Hong Zhang wrote:
> 
> Randy,
> The PETSc interface does not create much extra
> memory.
> The analysis phase of the MUMPS solver is sequential, which might cause
> one process to blow up in memory.
> I'm forwarding this email to the mumps developer
> for their input.
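> For what it's worth, if you drive MUMPS directly you can read back its
> own memory estimates right after the analysis phase; a fragment along
> these lines (field names per the MUMPS C interface; treat it as a
> sketch):
>
>   #define INFOG(I) infog[(I)-1]  /* 1-based, as in the Fortran docs */
>
>   id.job = 1;                    /* JOB = 1: analysis only */
>   dmumps_c(&id);
>   if (myid == 0)
>     printf("estimated memory: max %d MB on one process, %d MB total\n",
>            id.INFOG(16), id.INFOG(17));
>
> If the per-process max is close to the total, one process would be
> carrying nearly all of the memory.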
> 
> Jean-Yves,
> What do you think about the reported problem
> (see attached below)?
> 
> Thanks,
> 
> Hong
> 
> On Thu, 31 Jul 2008, Randall Mackie wrote:
> 
>> Barry,
>>
>> I don't think it's the matrix - I saw the same behavior when I ran your
>> ex2.c program and set m=n=5000.
>>
>> Randy
>>
>>
>> Barry Smith wrote:
>>>
>>>    If m and n are the number of rows and columns of the sparse matrix
>>> (i.e. it is a tiny problem) then please send us the matrix so we can
>>> experiment with it, at petsc-maint at mcs.anl.gov.
>>>
>>>   You can send us the matrix by simply running with -ksp_view_binary and
>>> sending us the file binaryoutput.
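>>>
>>>   For example (assuming your executable is ./ex2 and 64 processes, as
>>> in your runs):
>>>
>>>      mpiexec -n 64 ./ex2 -m 5000 -n 5000 -ksp_view_binary
>>>
>>>   The assembled matrix and right hand side end up in the file
>>> binaryoutput in the working directory.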
>>>
>>>    Barry
>>>
>>> On Jul 31, 2008, at 5:56 PM, Randall Mackie wrote:
>>>
>>>> When m = n is small (like 50), it works fine. When I set m=n=5000, I see
>>>> the same thing, where one process on the localhost is taking >4 GB of RAM,
>>>> while all other processes are taking 137 MB.
>>>>
>>>> Is this the standard behavior for MUMPS? It seems strange to me.
>>>>
>>>> Randy
>>>>
>>>>
>>>> Matthew Knepley wrote:
>>>>> Does it work on KSP ex2?
>>>>>  Matt
>>>>> On Thu, Jul 31, 2008 at 4:35 PM, Randall Mackie 
>>>>> <rlmackie862 at gmail.com> wrote:
>>>>>> I've compiled PETSc with MUMPS support, and I'm trying to run a small
>>>>>> test problem, but I'm having some problems. It seems to begin just
>>>>>> fine, but what I notice is that on one process (out of 64), the memory
>>>>>> just keeps going up and up and up until it crashes, while on the other
>>>>>> processes, the memory usage is reasonable. I'm wondering if anyone
>>>>>> might have any idea why? By the way, my command file is like this:
>>>>>>
>>>>>> -ksp_type preonly
>>>>>> -pc_type lu
>>>>>> -mat_type aijmumps
>>>>>> -mat_mumps_cntl_4 3
>>>>>> -mat_mumps_cntl_9 1
>>>>>>
>>>>>>
>>>>>> Randy
>>>>>>
>>>>>> P.S. This happens after the analysis stage, during the factorization
>>>>>> stage.
>>>>>>
>>>>>>
>>>>
>>>
>>
>>
> 



