Hi Hong,

I got the error below from MUMPS.
When I run MUMPS (which requires ScaLAPACK) with matrix size n = 30620 and
nonzeros nz = 785860, it runs and gives a result.
But when I run it with

nz = 3112820
n  = 61240

I get the following error:

17 - <NO ERROR MESSAGE> : Could not convert index 1140850688 into a pointer
The index may be an incorrect argument.
Possible sources of this problem are a missing "include 'mpif.h'",
a misspelled MPI object (e.g., MPI_COM_WORLD instead of MPI_COMM_WORLD)
or a misspelled user variable for an MPI object (e.g., com instead of comm).
[17] [] Aborting Program!

Do you know what happened?
Is it possible that it is running out of memory?
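In case it helps, this is how I plan to check the memory side (a sketch
based on my reading of the MUMPS manual, so please correct me if I misread
it: raising the verbosity with ICNTL(4) makes MUMPS print its estimated
workspace after the analysis phase, e.g. INFOG(16)/INFOG(17) in MBytes):

  -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_4 2

I would then compare those printed estimates against the RAM available to
each process.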
<div class="gmail_quote">On Wed, Dec 21, 2011 at 7:15 AM, Hong Zhang <span dir="ltr"><<a href="mailto:hzhang@mcs.anl.gov">hzhang@mcs.anl.gov</a>></span> wrote:<br>
Direct solvers often require large memory for storing matrix factors.
As Jed suggests, you may try superlu_dist.

With MUMPS, I notice you use parallel analysis, which is relatively new in
MUMPS. What happens if you use the default sequential analysis with
different matrix orderings? I usually use the matrix ordering
'-mat_mumps_icntl_7 2'.
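For example (a sketch with the standard PETSc option names; per the MUMPS
manual, ICNTL(28)=1 selects sequential analysis and ICNTL(7)=2 selects the
AMF ordering):

  -pc_type lu -pc_factor_mat_solver_package mumps \
    -mat_mumps_icntl_28 1 -mat_mumps_icntl_7 2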
Also, you can increase the fill ratio:
  -mat_mumps_icntl_14 <20>: ICNTL(14): percentage of estimated workspace
  increase (None)
i.e., the default ratio is 20; you may try 50? (I notice that you already
use 30.)
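For example (a sketch; this asks MUMPS to allow a 50% increase over its
estimated workspace instead of the default 20%):

  -mat_mumps_icntl_14 50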
It seems you use 16 CPUs for "a mere couple thousand elements" problems,
and MUMPS "silently freezes". I do not have this type of experience with
MUMPS; I can usually solve a sparse matrix of size 10k with 1 CPU.
When MUMPS runs out of memory or hits other problems, it terminates
execution and dumps out an error message; it does not freeze.
Something is wrong here. Use a debugger and figure out where it freezes.
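For example, PETSc can launch the debugger for you (a sketch; see the
PETSc manual pages for these options):

  -start_in_debugger

or, to attach only when an error is detected:

  -on_error_attach_debugger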
<span class="HOEnZb"><font color="#888888"><br>Hong<br></font></span>
<div class="HOEnZb">
<div class="h5"><br>On Wed, Dec 21, 2011 at 7:01 AM, Jed Brown <<a href="mailto:jedbrown@mcs.anl.gov">jedbrown@mcs.anl.gov</a>> wrote:<br>> -pc_type lu -pc_factor_mat_solver_package superlu_dist<br>><br>> On Dec 21, 2011 6:19 AM, "Dominik Szczerba" <<a href="mailto:dominik@itis.ethz.ch">dominik@itis.ethz.ch</a>> wrote:<br>
>><br>>> I am successfully solving my indefinite systems with MUMPS but only<br>>> for very small problems. To give a feeling, a mere couple thousands<br>>> elements. If I only double the problem size, it silently freezes, even<br>
>> with max verbosity via the control parameters. Did anyone succeed here<br>>> with big problems? Any recommendations for a drop-in replacement for<br>>> MUMPS?<br>>><br>>> Thanks for any hints,<br>
>>
>> Options used:
>> -mat_mumps_icntl_4 3 -mat_mumps_icntl_28 2 -mat_mumps_icntl_29
>>
>> Output:
>>
>> ****** FACTORIZATION STEP ********
>>
>> GLOBAL STATISTICS PRIOR NUMERICAL FACTORIZATION ...
>> NUMBER OF WORKING PROCESSES                = 16
>> OUT-OF-CORE OPTION (ICNTL(22))             = 0
>> REAL SPACE FOR FACTORS                     = 1438970073
>> INTEGER SPACE FOR FACTORS                  = 11376442
>> MAXIMUM FRONTAL SIZE (ESTIMATED)           = 16868
>> NUMBER OF NODES IN THE TREE                = 43676
>> Convergence error after scaling for ONE-NORM (option 7/8) = 0.21D+01
>> Maximum effective relaxed size of S        = 231932340
>> Average effective relaxed size of S        = 182366303
>>
>> REDISTRIB: TOTAL DATA LOCAL/SENT           = 1509215 22859750
>> GLOBAL TIME FOR MATRIX DISTRIBUTION        = 0.8270
>> ** Memory relaxation parameter ( ICNTL(14) )          : 35
>> ** Rank of processor needing largest memory in facto  : 0
>> ** Space in MBYTES used by this processor for facto   : 2017
>> ** Avg. Space in MBYTES per working proc during facto : 1618
--
Hailong