[mumps-dev] Re: Question on using MUMPS in PETSc

Barry Smith bsmith at mcs.anl.gov
Fri Aug 1 19:44:15 CDT 2008


   Are you sure you are not constructing the original matrix with all
its rows and columns on the first process?
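
   A quick way to check is to print each rank's row-ownership range
after assembly; if rank 0 owns (nearly) all the rows, the matrix was
built on one process. A minimal sketch (the helper name and the use
of plain printf are just for illustration):

    #include <stdio.h>
    #include <petscmat.h>

    /* Print the contiguous block of rows owned by this rank.
       A balanced layout gives every rank roughly nrows/nprocs rows;
       rank 0 owning everything means the matrix is not distributed. */
    static PetscErrorCode CheckDistribution(Mat A)
    {
      PetscInt    rstart, rend;
      PetscMPIInt rank;

      MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
      MatGetOwnershipRange(A, &rstart, &rend);
      printf("[%d] owns rows %d to %d\n",
             (int)rank, (int)rstart, (int)rend - 1);
      return 0;
    }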

   Barry
On Aug 1, 2008, at 5:49 PM, Randall Mackie wrote:

> In fact, during the Analysis step, the maximum memory used by any one
> process is 300 MB. However, during the Factorization stage, that same
> process's memory starts to grow, while all the other processes stay
> the same.
>
> I've re-run this several times using different numbers of processors,
> and I keep getting the same behavior.
>
>
>
> Randy
>
>
> Jean-Yves L'Excellent wrote:
>> Hi,
>> Clearly in MUMPS processor 0 uses more memory during
>> the analysis step because the analysis is sequential.
>> So until we provide a parallel analysis, processor 0
>> is gathering the graph of the matrix from all other
>> processors to perform the analysis. But that memory
>> is freed at the end of the analysis so it should
>> not affect the factorization.
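>> If it helps to narrow this down, the memory estimates MUMPS
>> computes at the end of the analysis can be read from its C
>> interface. A rough fragment, following the pattern of the MUMPS
>> c_example and assuming a DMUMPS_STRUC_C id on which dmumps_c()
>> has already run the analysis job (id.job = 1):
>>
>>   #include <stdio.h>
>>   #include <dmumps_c.h>
>>
>>   /* MUMPS arrays are 1-based in the documentation but the C
>>      struct members are 0-based, hence the -1 shift. */
>>   #define INFOG(i) infog[(i)-1]
>>
>>   /* Per the MUMPS user guide, after the analysis:
>>      INFOG(16) = estimated memory (MB) on the most loaded process,
>>      INFOG(17) = estimated total memory (MB) over all processes. */
>>   printf("max over processes: %d MB, total: %d MB\n",
>>          id.INFOG(16), id.INFOG(17));
>>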
>> Please let us know if you have more information.
>> Regards,
>> Jean-Yves
>> Hong Zhang wrote:
>>>
>>> Randy,
>>> The PETSc interface does not allocate much extra memory.
>>> The analysis phase of the MUMPS solver is sequential, which might
>>> cause one process to blow up in memory.
>>> I'm forwarding this email to the MUMPS developers
>>> for their input.
>>>
>>> Jean-Yves,
>>> What do you think about the reported problem
>>> (see attached below)?
>>>
>>> Thanks,
>>>
>>> Hong
>>>
>>> On Thu, 31 Jul 2008, Randall Mackie wrote:
>>>
>>>> Barry,
>>>>
>>>> I don't think it's the matrix - I saw the same behavior when I ran
>>>> your ex2.c program and set m=n=5000.
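>>>>
>>>> For reference, the run was along these lines (a sketch; the
>>>> launcher and its flags depend on the MPI installation):
>>>>
>>>>   mpiexec -n 64 ./ex2 -m 5000 -n 5000 \
>>>>     -ksp_type preonly -pc_type lu -mat_type aijmumps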
>>>>
>>>> Randy
>>>>
>>>>
>>>> Barry Smith wrote:
>>>>>
>>>>>   If m and n are the numbers of rows and columns of the sparse
>>>>> matrix (i.e., it is a tiny problem), then please send us the
>>>>> matrix at petsc-maint at mcs.anl.gov so we can experiment with it.
>>>>>
>>>>>  You can send us the matrix by simply running with
>>>>> -ksp_view_binary and sending us the file binaryoutput.
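>>>>>
>>>>>  On our end, loading the file back in is roughly the following
>>>>> (sketched with the current MatLoad interface; older PETSc
>>>>> releases pass the viewer and matrix type to MatLoad directly):
>>>>>
>>>>>   Mat         A;
>>>>>   PetscViewer fd;
>>>>>   /* Open the file written by -ksp_view_binary for reading */
>>>>>   PetscViewerBinaryOpen(PETSC_COMM_WORLD, "binaryoutput",
>>>>>                         FILE_MODE_READ, &fd);
>>>>>   MatCreate(PETSC_COMM_WORLD, &A);
>>>>>   MatLoad(A, fd);
>>>>>   PetscViewerDestroy(&fd);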
>>>>>
>>>>>   Barry
>>>>>
>>>>> On Jul 31, 2008, at 5:56 PM, Randall Mackie wrote:
>>>>>
>>>>>> When m = n = small (like 50), it works fine. When I set m=n=5000,
>>>>>> I see the same thing, where one process on the localhost is taking
>>>>>> >4 GB of RAM, while all the other processes are taking 137 MB.
>>>>>>
>>>>>> Is this the standard behavior for MUMPS? It seems strange to me.
>>>>>>
>>>>>> Randy
>>>>>>
>>>>>>
>>>>>> Matthew Knepley wrote:
>>>>>>> Does it work on KSP ex2?
>>>>>>> Matt
>>>>>>> On Thu, Jul 31, 2008 at 4:35 PM, Randall Mackie
>>>>>>> <rlmackie862 at gmail.com> wrote:
>>>>>>>> I've compiled PETSc with MUMPS support, and I'm trying to run a
>>>>>>>> small test problem, but I'm having some problems. It seems to
>>>>>>>> begin just fine, but what I notice is that on one process (out
>>>>>>>> of 64), the memory just keeps going up and up and up until it
>>>>>>>> crashes, while on the other processes the memory usage is
>>>>>>>> reasonable. I'm wondering if anyone might have any idea why? By
>>>>>>>> the way, my command file is like this:
>>>>>>>>
>>>>>>>> -ksp_type preonly
>>>>>>>> -pc_type lu
>>>>>>>> -mat_type aijmumps
>>>>>>>> -mat_mumps_cntl_4 3
>>>>>>>> -mat_mumps_cntl_9 1
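>>>>>>>>
>>>>>>>> (For what it's worth, the MUMPS integer controls are exposed
>>>>>>>> through the PETSc interface as -mat_mumps_icntl_<n>, so the
>>>>>>>> last two lines were presumably meant as
>>>>>>>>
>>>>>>>>   -mat_mumps_icntl_4 3
>>>>>>>>   -mat_mumps_icntl_9 1
>>>>>>>>
>>>>>>>> where, per the MUMPS user guide, ICNTL(4)=3 turns on verbose
>>>>>>>> diagnostic output and ICNTL(9)=1 solves A x = b rather than
>>>>>>>> A^T x = b.)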
>>>>>>>>
>>>>>>>>
>>>>>>>> Randy
>>>>>>>>
>>>>>>>> P.S. This happens after the analysis stage, during the
>>>>>>>> factorization stage.
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>
>