<div dir="ltr"><div><div><div><div><div><div><div>Hi <br></div>As you suggested, I have restructured my matrix eigenvalue problem by recasting the governing equations in a different form to see why B is singular. <br><br></div>Now my matrix B is no longer singular: both A and B are invertible in Ax=lambda Bx. <br><br></div>However, I still receive an error from MUMPS because it uses too much memory (the error log is attached).<br><br></div>I gave the command: aprun -n 240 -N 24 ./ex7 -f1 A100t -f2 B100t -st_type sinvert -eps_target 0.01 -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_package mumps -mat_mumps_cntl_1 1e-5 -mat_mumps_icntl_4 2 -evecs v100t<br><br></div>The matrix A is about 60% zeros.<br><br></div>Kindly help me.<br><br></div>Venkatesh <br></div><div class="gmail_extra"><br><div class="gmail_quote">On Sun, May 31, 2015 at 8:04 PM, Hong <span dir="ltr"><<a href="mailto:hzhang@mcs.anl.gov" target="_blank">hzhang@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><span style="color:rgb(80,0,80)">venkatesh,</span><div><font color="#500050"><br></font><div><font color="#500050">As we discussed previously, even on smaller problems, </font></div><div><font color="#500050">both MUMPS and SuperLU_DIST failed; MUMPS gave an "OOM" error in numerical factorization.</font></div><div><font color="#500050"><br></font></div><div><font color="#500050">You acknowledged that B is singular, which may require reformulating your eigenvalue problem. The option '</font><span style="font-size:12.8000001907349px">-st_type sinvert' likely uses B^{-1} (have you read the SLEPc manual?), which could be the source of the trouble. 
</span></div><div><span style="font-size:12.8000001907349px"><br></span></div><div><span style="font-size:12.8000001907349px">Please investigate your model and understand why B is singular, and whether there is a way to remove its null space, before submitting a large-scale simulation.</span></div><span class="HOEnZb"><font color="#888888"><div><span style="font-size:12.8000001907349px"><br></span></div><div><span style="font-size:12.8000001907349px">Hong</span></div></font></span><div><div class="h5"><div><font color="#500050"><br></font><div class="gmail_extra"><br><div class="gmail_quote">On Sun, May 31, 2015 at 8:36 AM, Dave May <span dir="ltr"><<a href="mailto:dave.mayhem23@gmail.com" target="_blank">dave.mayhem23@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">It failed due to a lack of memory. "OOM" stands for "out of memory"; "OOM killer terminated your job" means you ran out of memory.<div><div><div><br></div><div><br><div><br><br>On Sunday, 31 May 2015, venkatesh g <<a>venkateshgk.j@gmail.com</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div>Hi all,<br><br></div>I tried to run my generalized eigenproblem on 120 x 24 = 2880 cores. <br></div><div>The matrix A is 20 GB and B is 5 GB. <br><br></div><div>It was killed after 7 hours of run time. Please see the MUMPS error log. Why does it fail? 
<br></div><div>I gave the command: <br><br>aprun -n 240 -N 24 ./ex7 -f1 a110t -f2 b110t -st_type sinvert -eps_nev 1 -log_summary -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_package mumps -mat_mumps_cntl_1 1e-2<br><br></div><div>Kindly let me know.<br><br></div><div>cheers,<br></div><div>Venkatesh<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, May 29, 2015 at 10:46 PM, venkatesh g <span dir="ltr"><<a>venkateshgk.j@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div>Hi Matt, users,<br><br></div>Thanks for the info. Do you also use PETSc and SLEPc with MUMPS? I get a segmentation error when I increase my matrix size. <br><br></div>Can you suggest other software with a parallel direct QR solver, since LU may not be suitable for a singular B matrix in Ax=lambda Bx? I am attaching the MUMPS log from the working run.<br><br></div>My matrix size here is around 47000x47000. If I am not wrong, the memory usage per core is 272 MB.<br><br></div>Can you tell me if I am wrong, or whether that is really a light memory footprint for this matrix?<br><br></div>Thanks<br></div>cheers,<br></div>Venkatesh<br></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, May 29, 2015 at 4:00 PM, Matt Landreman <span dir="ltr"><<a>matt.landreman@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><p dir="ltr">Dear Venkatesh,</p>
<p dir="ltr">As you can see in the error log, you are now getting a segmentation fault, which is almost certainly a separate issue from the INFO(1)=-9 memory problem you had previously. Here is one idea which may or may not help. I've used MUMPS on the NERSC Edison system, and I found that I sometimes got segmentation faults when using the default Intel compiler. When I switched to the Cray compiler, the problem disappeared. So you could perhaps try a different compiler if one is available on your system.</p><span><font color="#888888">
<p dir="ltr">Matt</p></font></span><div><div>
<div class="gmail_quote">On May 29, 2015 4:04 AM, "venkatesh g" <<a>venkateshgk.j@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div>Hi Matt,<br><br></div>I did what you suggested and read the manual on the CNTL parameters. I solved it with CNTL(1)=1e-4, and it is working. <br><br></div>But that was a test matrix of size 46000x46000. The actual matrix size is 108900x108900 and will increase in the future. <br><br></div>I now get a memory-allocation failure. The binary matrix file for A is 20 GB and for B is 5 GB.<br><br>I submitted this on 240 processors with 4 GB RAM each, and also on 128 processors with 512 GB RAM in total.<br><br>In both cases it fails with an out-of-memory error, yet a 90000x90000 case had run serially in MATLAB with <256 GB RAM.<br><br>Kindly let me know.<br><br>Venkatesh<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, May 26, 2015 at 8:02 PM, Matt Landreman <span dir="ltr"><<a>matt.landreman@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">Hi Venkatesh,<div><br></div><div>I've struggled a bit with MUMPS memory allocation too. I think the behavior of MUMPS is roughly the following. First, in the "analysis step", MUMPS computes the minimum memory required based on the structure of nonzeros in the matrix. Then, when it actually goes to factorize the matrix, if it encounters an element smaller than CNTL(1) (default 0.01) on the diagonal of a sub-matrix it is trying to factorize, it modifies the ordering to avoid the small pivot, which increases the fill-in (and hence the memory needed). ICNTL(14) sets the margin allowed for this unanticipated fill-in. 
Setting ICNTL(14)=200000 as in your email is not the solution, since it makes MUMPS ask for a huge amount of memory at the start. Better would be to lower CNTL(1) or (I think) use static pivoting (CNTL(4)). Read the section in the MUMPS manual about these CNTL parameters. I typically set CNTL(1)=1e-6, which eliminated all the INFO(1)=-9 errors for my problem without my having to modify ICNTL(14).</div><div><br></div><div>Also, I recommend running with ICNTL(4)=3 to display diagnostics. Look for the line in standard output that says "TOTAL space in MBYTES for IC factorization". This is the amount of memory that MUMPS is trying to allocate, and for the default ICNTL(14) it should be similar to MATLAB's requirement.</div><div><br></div><div>Hope this helps,</div><div>-Matt Landreman</div><div>University of Maryland</div></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, May 26, 2015 at 10:03 AM, venkatesh g <span dir="ltr"><<a>venkateshgk.j@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div>I posted in the MUMPS forums a while ago, but no one seems to reply.<br><br></div>I am solving a large generalized eigenvalue problem. <br><br></div>I am getting the following error (attached) after giving the command:<br><br>/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/bin/mpiexec -np 64 -hosts compute-0-4,compute-0-6,compute-0-7,compute-0-8 ./ex7 -f1 a72t -f2 b72t -st_type sinvert -eps_nev 3 -eps_target 0.5 -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_package mumps -mat_mumps_icntl_14 200000<br><br></div>It is impossible to allocate so much memory per processor; it is asking for around 70 GB per processor. <br><br></div>A serial job in MATLAB for the same matrices takes < 60GB. 
<br><br></div><div>I also tried SuperLU_DIST and have attached its error as well (a segmentation fault).<br></div><div><br></div>Kindly help me. <br><span><font color="#888888"><br></font></span></div><span><font color="#888888">Venkatesh<br><div><div><div><div><div><div><div><div><br><br></div></div></div></div></div></div></div></div></font></span></div>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</blockquote></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</blockquote></div>
</div>
</div></div></blockquote></div><br></div></div></div></div></div></div>
</blockquote></div><br></div>
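The suggestions in this thread can be collected into a single invocation: lower CNTL(1) instead of inflating ICNTL(14), and turn on MUMPS diagnostics with ICNTL(4)=3. The sketch below assembles such a command line; the executable ex7, the matrix files A100t/B100t, and the core counts come from the messages above, and whether CNTL(1)=1e-6 actually suits this particular problem is an assumption (it is Matt Landreman's value for his own problem). The command is echoed rather than executed, since aprun and the matrix files only exist on the cluster in question.

```shell
# Sketch: collect the thread's recommended PETSc/SLEPc options.
# -mat_mumps_cntl_1 1e-6 : smaller pivot threshold (Matt's suggestion),
#                          instead of a huge -mat_mumps_icntl_14 margin.
# -mat_mumps_icntl_4 3   : verbose MUMPS diagnostics; look in stdout for
#                          "TOTAL space in MBYTES for IC factorization".
OPTS="-st_type sinvert -eps_target 0.01"
OPTS="$OPTS -st_ksp_type preonly -st_pc_type lu"
OPTS="$OPTS -st_pc_factor_mat_solver_package mumps"
OPTS="$OPTS -mat_mumps_cntl_1 1e-6 -mat_mumps_icntl_4 3"

# Print the full command for submission on the cluster.
echo aprun -n 240 -N 24 ./ex7 -f1 A100t -f2 B100t $OPTS
```

If the "TOTAL space" line reported during analysis is already far beyond the 4 GB per core available, the job will fail regardless of pivoting settings, and more nodes (or a different ordering) would be needed.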