[petsc-users] mumps freezes for bigger problems

Jack Poulson jack.poulson at gmail.com
Sun Dec 25 11:06:36 CST 2011


Dominik,

I apologize for the confusion, but, if you read the quoted text, you will
see that I was replying to Hong about a branch from this thread concerning
Hailong.

Barry's response was also related to said branch.

Jack

On Saturday, December 24, 2011, Dominik Szczerba <dominik at itis.ethz.ch>
wrote:
> Jack: I do not even have these packages installed anywhere on my system.
> Barry: That's what I did, I downloaded everything via configure.
>
> Anywhere else to look?
>
> Dominik
>
> On Sat, Dec 24, 2011 at 1:35 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>>   If you have PETSc's ./configure do all the installs, this decreases the
chance of problems like this.  Use --download-blacs --download-scalapack
--download-mumps --download-parmetis --download-ptscotch
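
As a rough sketch of what that configure line can look like (the MPI path is a
placeholder; adjust it to your installation):

  ./configure --with-mpi-dir=/path/to/your/mpi \
    --download-blacs --download-scalapack --download-mumps \
    --download-parmetis --download-ptscotch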
>>
>>
>>   Barry
>>
>> On Dec 23, 2011, at 4:56 PM, Jack Poulson wrote:
>>
>>> It looks like it's due to mixing different MPI implementations together
(i.e., including the wrong 'mpif.h'):
>>> http://lists.mcs.anl.gov/pipermail/mpich-discuss/2010-July/007559.html
>>>
>>> If I recall correctly, MUMPS only uses ScaLAPACK to factor the root
separator when it is sufficiently large, and that would explain why it
works for him for smaller problems. I would double check that ScaLAPACK,
PETSc, and MUMPS are all compiled with the same MPI implementation.
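
One quick way to check (a sketch; the library paths and names are only
examples, and ldd works only on shared libraries) is to look at which MPI
each library was linked against:

  ldd $PETSC_DIR/$PETSC_ARCH/lib/libpetsc.so | grep -i mpi
  ldd /path/to/libscalapack.so               | grep -i mpi
  ldd /path/to/libdmumps.so                  | grep -i mpi

All of them should point to the same MPI installation.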
>>>
>>> Jack
>>>
>>> On Wed, Dec 21, 2011 at 4:55 PM, Hong Zhang <hzhang at mcs.anl.gov> wrote:
>>> Hailong:
>>> I've never seen this type of error from MUMPS.
>>> It seems like a programming bug. Are you sure the smaller problem runs correctly?
>>> Use valgrind to check it.
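
For a parallel PETSc run that typically looks something like this (the
executable name, process count, and options are placeholders):

  mpiexec -n 2 valgrind -q --tool=memcheck ./your_app <your usual options>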
>>>
>>> Hong
>>>
>>> > I got the error from MUMPS.
>>> >
>>> > When I run MUMPS (which requires scalapack) with matrix size (n) =
30620,
>>> > nonzeros (nz) = 785860,
>>> > I could run it and get a result.
>>> > But when I run it with
>>> > nz=3112820
>>> > n =61240
>>> >
>>> >
>>> > I am getting the following error
>>> >
>>> >
>>> > 17 - <NO ERROR MESSAGE> : Could not convert index 1140850688 into a
pointer
>>> > The index may be an incorrect argument.
>>> > Possible sources of this problem are a missing "include 'mpif.h'",
>>> > a misspelled MPI object (e.g., MPI_COM_WORLD instead of
MPI_COMM_WORLD)
>>> > or a misspelled user variable for an MPI object (e.g.,
>>> > com instead of comm).
>>> > [17] [] Aborting Program!
>>> >
>>> >
>>> >
>>> > Do you know what happened?
>>> > Is it possible that it is running out of memory?
>>> >
>>> > On Wed, Dec 21, 2011 at 7:15 AM, Hong Zhang <hzhang at mcs.anl.gov>
wrote:
>>> >>
>>> >> Direct solvers often require large memory for storing matrix factors.
>>> >> As Jed suggests, you may try superlu_dist.
>>> >>
>>> >> With mumps, I notice you use parallel analysis, which is relatively
>>> >> new in mumps.
>>> >> What happens if you use default sequential analysis with
>>> >> different matrix orderings?
>>> >> I usually use matrix ordering '-mat_mumps_icntl_7 2'.
>>> >>
>>> >> Also, you can increase the fill ratio:
>>> >> -mat_mumps_icntl_14 <20>: ICNTL(14): percentage of estimated workspace
>>> >> increase (None)
>>> >> i.e., the default ratio is 20; you may try 50? (I notice that you
>>> >> already use 30).
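
Put together, a run along these lines might look like the following (the
executable name is a placeholder; the MUMPS options are the ones discussed
above):

  ./your_app -ksp_type preonly -pc_type lu \
    -pc_factor_mat_solver_package mumps \
    -mat_mumps_icntl_7 2 -mat_mumps_icntl_14 50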
>>> >>
>>> >> It seems you use 16 CPUs for "a mere couple thousands
>>> >> elements" problems, and mumps "silently freezes". I do not have this
type
>>> >> of experience with mumps. I can usually solve a sparse matrix of size
>>> >> 10k with 1 cpu using mumps.
>>> >> When mumps runs out of memory or hits some other problem, it terminates
>>> >> execution and dumps out an error message;
>>> >> it does not freeze.
>>> >> Something is wrong here. Use a debugger and figure out where it
freezes.
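
For a hang, one common approach (a sketch; the PID is whatever ps reports for
a stuck rank) is to attach gdb to one of the frozen processes and print a
stack trace:

  gdb -p <pid-of-a-hung-rank>
  (gdb) bt

PETSc's -start_in_debugger option is another way to get each rank running
under a debugger from the start.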
>>> >>
>>> >> Hong
>>> >>
>>> >> On Wed, Dec 21, 2011 at 7:01 AM, Jed Brown <jedbrown at mcs.anl.gov>
wrote:
>>> >> > -pc_type lu -pc_factor_mat_solver_package superlu_dist
>>> >> >
>>> >> > On Dec 21, 2011 6:19 AM, "Dominik Szczerba" <dominik at i