Dominik,

I apologize for the confusion, but if you read the quoted text, you will see that I was replying to Hong about a branch of this thread concerning Hailong.

Barry's response was also related to that branch.
Jack

On Saturday, December 24, 2011, Dominik Szczerba <dominik@itis.ethz.ch> wrote:
> Jack: I do not even have these packages installed anywhere on my system.
> Barry: That's what I did, I downloaded everything via configure.
>
> Anywhere else to look?
>
> Dominik
>
> On Sat, Dec 24, 2011 at 1:35 AM, Barry Smith <bsmith@mcs.anl.gov> wrote:
>>
>>    If you have PETSc ./configure do all the installs, this decreases the chance of problems like this. Use --download-blacs --download-scalapack --download-mumps --download-parmetis --download-ptscotch
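For illustration, a full invocation along those lines might look roughly like the following; the compiler wrapper settings are only an assumption and should be adjusted for your system:

    ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 \
        --download-blacs --download-scalapack --download-mumps \
        --download-parmetis --download-ptscotch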
>>
>>
>> Barry
>>
>> On Dec 23, 2011, at 4:56 PM, Jack Poulson wrote:
>>
>>> It looks like it's due to mixing different MPI implementations together (i.e., including the wrong 'mpif.h'):
>>> http://lists.mcs.anl.gov/pipermail/mpich-discuss/2010-July/007559.html
>>>
>>> If I recall correctly, MUMPS only uses ScaLAPACK to factor the root separator when it is sufficiently large, and that would explain why it works for him for smaller problems. I would double check that ScaLAPACK, PETSc, and MUMPS are all compiled with the same MPI implementation.
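One way to make that check concrete is to look at what each piece actually links or compiles against; the paths below are placeholders for a shared-library build and will differ on other installs:

    # Which MPI does the PETSc library link against?
    ldd $PETSC_DIR/$PETSC_ARCH/lib/libpetsc.so | grep -i mpi
    # Which MPI does the Fortran wrapper used to build MUMPS/ScaLAPACK point to?
    mpif90 -show        # MPICH-style wrappers
    mpif90 --showme     # Open MPI wrappers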
>>>
>>> Jack
>>>
>>> On Wed, Dec 21, 2011 at 4:55 PM, Hong Zhang <hzhang@mcs.anl.gov> wrote:
>>> Hailong:
>>> I've never seen this type of error from MUMPS.
>>> It seems like a programming bug. Are you sure the smaller problem runs correctly?
>>> Use valgrind to check it.
>>>
>>> Hong
>>>
>>> > I got the error from MUMPS.
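Hong's valgrind suggestion above might be run along these lines; the process count, executable name, and options are placeholders:

    mpiexec -n 4 valgrind --track-origins=yes --log-file=valgrind.%p.log ./myapp <options>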
>>> >
>>> > When I run MUMPS (which requires ScaLAPACK) with matrix size (n) = 30620,
>>> > nonzeros (nz) = 785860,
>>> > I could run it and get a result.
>>> > But when I run it with
>>> > nz = 3112820
>>> > n = 61240
>>> >
>>> >
>>> > I am getting the following error:
>>> >
>>> >
>>> > 17 - <NO ERROR MESSAGE> : Could not convert index 1140850688 into a pointer
>>> > The index may be an incorrect argument.
>>> > Possible sources of this problem are a missing "include 'mpif.h'",
>>> > a misspelled MPI object (e.g., MPI_COM_WORLD instead of MPI_COMM_WORLD)
>>> > or a misspelled user variable for an MPI object (e.g.,
>>> > com instead of comm).
>>> > [17] [] Aborting Program!
>>> >
>>> >
>>> >
>>> > Do you know what happened?
>>> > Is it possible that it is running out of memory?
>>> >
>>> > On Wed, Dec 21, 2011 at 7:15 AM, Hong Zhang <hzhang@mcs.anl.gov> wrote:
>>> >>
>>> >> Direct solvers often require large memory for storing matrix factors.
>>> >> As Jed suggests, you may try superlu_dist.
>>> >>
>>> >> With mumps, I notice you use parallel analysis, which is relatively new in
>>> >> mumps.
>>> >> What happens if you use the default sequential analysis with
>>> >> different matrix orderings?
>>> >> I usually use matrix ordering '-mat_mumps_icntl_7 2'.
>>> >>
>>> >> Also, you can increase the fill ratio:
>>> >> -mat_mumps_icntl_14 <20>: ICNTL(14): percentage of estimated workspace
>>> >> increase (None)
>>> >> i.e., the default ratio is 20; you may try 50? (I notice that you already use
>>> >> 30).
>>> >>
>>> >> It seems you use 16 CPUs for "a mere couple thousands
>>> >> elements" problems, and mumps "silently freezes". I do not have this type<br>>>> >> of experience with mumps. I usually can solve sparse matrix of size<br>>>> >> 10k with 1 cpu using mumps.<br>
>>> >> When mumps runs out of memory or gets other problems, it terminates<br>>>> >> execution and dumps out error message,<br>>>> >> not freezes.<br>>>> >> Something is wrong here. Use a debugger and figuring out where it freezes.<br>
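As a sketch of how those MUMPS options combine on the command line (the executable name, process count, and other solver options are placeholders; only the two ICNTL settings come from the advice above):

    mpiexec -n 16 ./myapp -ksp_type preonly -pc_type lu \
        -pc_factor_mat_solver_package mumps \
        -mat_mumps_icntl_7 2 -mat_mumps_icntl_14 50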
>>> >>
>>> >> Hong
>>> >>
>>> >> On Wed, Dec 21, 2011 at 7:01 AM, Jed Brown <jedbrown@mcs.anl.gov> wrote:
>>> >> > -pc_type lu -pc_factor_mat_solver_package superlu_dist
>>> >> >
>>> >> > On Dec 21, 2011 6:19 AM, "Dominik Szczerba" <dominik@i
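For completeness, Jed's superlu_dist alternative would look roughly like this, assuming PETSc was configured with --download-superlu_dist (again, the executable name and process count are placeholders):

    mpiexec -n 16 ./myapp -ksp_type preonly -pc_type lu \
        -pc_factor_mat_solver_package superlu_dist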