Question on using MUMPS in PETSc

Randall Mackie rlmackie862 at gmail.com
Thu Jul 31 23:56:12 CDT 2008


Barry,

I don't think it's the matrix - I saw the same behavior when I ran your
ex2.c program and set m=n=5000.

Randy
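
[For reference, ex2 here is the KSP tutorial src/ksp/ksp/examples/tutorials/ex2.c,
which assembles a 5-point Laplacian on an m x n grid, so m = n = 5000 means a
linear system with 25,000,000 unknowns, not a 5000 x 5000 matrix. A run matching
the options discussed in this thread would look roughly like the following; the
launcher and process count are illustrative:

    mpiexec -n 64 ./ex2 -m 5000 -n 5000 -ksp_type preonly -pc_type lu -mat_type aijmumps
]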


Barry Smith wrote:
> 
>    If m and n are the number of rows and columns of the sparse matrix
> (i.e., it is a tiny problem), then please send the matrix to
> petsc-maint at mcs.anl.gov so we can experiment with it.
> 
>   You can send us the matrix by simply running with -ksp_view_binary and
> sending us the file binaryoutput.
> 
>    Barry
> 
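
[As an aside, the binaryoutput file written by -ksp_view_binary can be read back
into a standalone program with MatLoad for exactly this kind of experiment. A
minimal sketch using the MatLoad calling sequence of PETSc releases from that
era; in current PETSc the sequence is instead MatCreate/MatSetType/MatLoad(A, viewer):

    #include "petscmat.h"

    int main(int argc, char **argv)
    {
      Mat            A;
      PetscViewer    viewer;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);CHKERRQ(ierr);
      /* Open the file produced by running the solver with -ksp_view_binary */
      ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "binaryoutput",
                                   FILE_MODE_READ, &viewer);CHKERRQ(ierr);
      /* Read the matrix back in parallel AIJ format */
      ierr = MatLoad(viewer, MATMPIAIJ, &A);CHKERRQ(ierr);
      ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr);

      /* ... experiment with A here, e.g. factor it with MUMPS ... */

      ierr = MatDestroy(A);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return 0;
    }
]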
> On Jul 31, 2008, at 5:56 PM, Randall Mackie wrote:
> 
>> When m = n = small (like 50), it works fine. When I set m = n = 5000, I see
>> the same thing: one process on the localhost takes more than 4 GB of RAM,
>> while every other process takes about 137 MB.
>>
>> Is this the standard behavior for MUMPS? It seems strange to me.
>>
>> Randy
>>
>>
>> Matthew Knepley wrote:
>>> Does it work on KSP ex2?
>>>  Matt
>>> On Thu, Jul 31, 2008 at 4:35 PM, Randall Mackie 
>>> <rlmackie862 at gmail.com> wrote:
>>>> I've compiled PETSc with MUMPS support and I'm trying to run a small
>>>> test problem, but I'm running into trouble. The run begins just fine,
>>>> but on one process (out of 64) the memory usage just keeps climbing
>>>> until the job crashes, while on the other processes the memory usage
>>>> stays reasonable. Does anyone have any idea why? My command file looks
>>>> like this:
>>>>
>>>> -ksp_type preonly
>>>> -pc_type lu
>>>> -mat_type aijmumps
>>>> -mat_mumps_cntl_4 3
>>>> -mat_mumps_cntl_9 1
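
[Assuming these two options were intended as the MUMPS ICNTL parameters
(ICNTL(4) sets the verbosity of MUMPS diagnostic output, and ICNTL(9) = 1
selects solving Ax = b rather than A^T x = b), the equivalent file in a current
PETSc release, where the solver package is chosen at runtime rather than via
the matrix type, would look roughly like:

    -ksp_type preonly
    -pc_type lu
    -pc_factor_mat_solver_type mumps
    -mat_mumps_icntl_4 3
    -mat_mumps_icntl_9 1
]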
>>>>
>>>>
>>>> Randy
>>>>
>>>> P.S. The memory growth happens after the analysis stage, during the
>>>> factorization stage.
>>>>
>>>>
>>
> 



