[mumps-dev] Re: Question on using MUMPS in PETSC

Randall Mackie rlmackie862 at gmail.com
Mon Aug 4 12:56:40 CDT 2008


Hi Hong,

I am using PETSc 2.3.3-p11.

I was running on 64 processors (eight 8-core Intel CPUs).

I was using options:

-ksp_type preonly
-pc_type lu
-mat_type aijmumps
-mat_mumps_sym 0
-mat_mumps_icntl_4 3
-mat_mumps_icntl_9 1
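
For reference, the same options can also be given directly on the command line; a sketch of the full invocation, where the MPI launcher, executable name, and process count are only illustrative:

```shell
# Illustrative launch of KSP ex2 at the problem size discussed below;
# the launcher and executable path are placeholders for your own setup.
mpiexec -n 64 ./ex2 -m 5000 -n 5000 \
  -ksp_type preonly \
  -pc_type lu \
  -mat_type aijmumps \
  -mat_mumps_sym 0 \
  -mat_mumps_icntl_4 3 \
  -mat_mumps_icntl_9 1
```

(If I read the MUMPS documentation correctly, -mat_mumps_sym 0 selects an unsymmetric factorization, ICNTL(4) controls the amount of diagnostic output, and ICNTL(9)=1 solves Ax=b rather than the transposed system.)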

With MUMPS, both my code and ex2.c (with m=n=5000) would just keep
allocating memory to one process until it ran out of memory.

SuperLU worked fine on my problem (I didn't try it with ex2.c), taking
only 1 GB per process, and the results were exactly right.
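
For comparison, the options file for the SuperLU_DIST run differs only in the matrix type, with the MUMPS-specific entries dropped (a sketch, assuming the 2.3.3-era superlu_dist matrix type name):

```
-ksp_type preonly
-pc_type lu
-mat_type superlu_dist
```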

Randy


Hong Zhang wrote:
> 
> Randy,
> 
> I'll check it.
> Did you use
> /src/ksp/ksp/examples/tutorials/ex2.c ?
> 
> Can you give me the PETSc version, number of processors, and the runtime 
> options used?
> 
> Thanks,
> 
> Hong
> 
> On Fri, 1 Aug 2008, Randall Mackie wrote:
> 
>> Barry,
>>
>> No, this is the same program I've used quite successfully using iterative
>> methods within PETSc for years. Each processor's portion of the matrix
>> is constructed on the individual processors.
>>
>> In fact, I downloaded and recompiled PETSc to use SuperLU; the exact
>> same program, with only the matrix type changed from aijmumps to
>> superlu_dist, worked just fine.
>>
>> So, I'm not sure why MUMPS is not working.
>>
>> Randy
>>
>>
>> Barry Smith wrote:
>>>
>>>   Are you sure you are not constructing the original matrix with all 
>>> its rows and columns
>>> on the first process?
>>>
>>>   Barry
>>> On Aug 1, 2008, at 5:49 PM, Randall Mackie wrote:
>>>
>>>> In fact, during the analysis step, a maximum of 300 MB of memory is
>>>> used by one process. However, during the factorization stage, that
>>>> same process then starts to grow in memory, while all the other
>>>> processes stay the same.
>>>>
>>>> I've re-run this several times using different numbers of 
>>>> processors, and I
>>>> keep getting the same behavior.
>>>>
>>>>
>>>>
>>>> Randy
>>>>
>>>>
>>>> Jean-Yves L Excellent wrote:
>>>>> Hi,
>>>>> Clearly in MUMPS processor 0 uses more memory during
>>>>> the analysis step, because the analysis is sequential.
>>>>> Until we provide a parallel analysis, processor 0
>>>>> gathers the graph of the matrix from all the other
>>>>> processors to perform the analysis. But that memory
>>>>> is freed at the end of the analysis, so it should
>>>>> not affect the factorization.
>>>>> Please let us know if you have more information.
>>>>> Regards,
>>>>> Jean-Yves
>>>>> Hong Zhang wrote:
>>>>>>
>>>>>> Randy,
>>>>>> The PETSc interface does not create much extra
>>>>>> memory.
>>>>>> The analysis phase of the MUMPS solver is sequential, which might 
>>>>>> cause one process to blow up in memory.
>>>>>> I'm forwarding this email to the mumps developer
>>>>>> for their input.
>>>>>>
>>>>>> Jean-Yves,
>>>>>> What do you think about the reported problem
>>>>>> (see attached below)?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Hong
>>>>>>
>>>>>> On Thu, 31 Jul 2008, Randall Mackie wrote:
>>>>>>
>>>>>>> Barry,
>>>>>>>
>>>>>>> I don't think it's the matrix - I saw the same behavior when I 
>>>>>>> ran your
>>>>>>> ex2.c program and set m=n=5000.
>>>>>>>
>>>>>>> Randy
>>>>>>>
>>>>>>>
>>>>>>> Barry Smith wrote:
>>>>>>>>
>>>>>>>>   If m and n are the number of rows and columns of the sparse 
>>>>>>>> matrix (i.e. it is a tiny problem), then please
>>>>>>>> send us the matrix, so we can experiment with it, at 
>>>>>>>> petsc-maint at mcs.anl.gov
>>>>>>>>
>>>>>>>>  You can send us the matrix by simply running with 
>>>>>>>> -ksp_view_binary and
>>>>>>>> sending us the file binaryoutput.
>>>>>>>>
>>>>>>>>   Barry
>>>>>>>>
>>>>>>>> On Jul 31, 2008, at 5:56 PM, Randall Mackie wrote:
>>>>>>>>
>>>>>>>>> When m = n = small (like 50), it works fine. When I set 
>>>>>>>>> m=n=5000, I see
>>>>>>>>> the same thing, where one process on the localhost is taking >4 
>>>>>>>>> GB of RAM,
>>>>>>>>> while all other processes are taking 137 MB.
>>>>>>>>>
>>>>>>>>> Is this the standard behavior for MUMPS? It seems strange to me.
>>>>>>>>>
>>>>>>>>> Randy
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Matthew Knepley wrote:
>>>>>>>>>> Does it work on KSP ex2?
>>>>>>>>>> Matt
>>>>>>>>>> On Thu, Jul 31, 2008 at 4:35 PM, Randall Mackie 
>>>>>>>>>> <rlmackie862 at gmail.com> wrote:
>>>>>>>>>>> I've compiled PETSc with MUMPS support, and I'm trying to run 
>>>>>>>>>>> a small test
>>>>>>>>>>> problem, but I'm having some problems. It seems to begin just 
>>>>>>>>>>> fine, but
>>>>>>>>>>> what I notice is that on one process (out of 64), the memory 
>>>>>>>>>>> just keeps
>>>>>>>>>>> going up and up and up until it crashes, while on the other 
>>>>>>>>>>> processes,
>>>>>>>>>>> the memory usage is reasonable. I'm wondering if anyone might 
>>>>>>>>>>> have any idea
>>>>>>>>>>> why? By the way, my command file is like this:
>>>>>>>>>>>
>>>>>>>>>>> -ksp_type preonly
>>>>>>>>>>> -pc_type lu
>>>>>>>>>>> -mat_type aijmumps
>>>>>>>>>>> -mat_mumps_cntl_4 3
>>>>>>>>>>> -mat_mumps_cntl_9 1
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Randy
>>>>>>>>>>>
>>>>>>>>>>> P.S. This happens after the analysis stage, during the 
>>>>>>>>>>> factorization stage.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>
>>>
>>
>>
> 



