<div dir="ltr"><div><div>Thanks. So I what other direct solver I can use for this singular B ? QR is a good option but what to do ? <br><br></div>I will read the manual on dumping out null space. <br><br></div>Venkatesh<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, May 29, 2015 at 7:42 PM, Hong <span dir="ltr"><<a href="mailto:hzhang@mcs.anl.gov" target="_blank">hzhang@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">venkatesh:<span class=""><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span>On Tue, May 26, 2015 at 9:02 PM, Hong <span dir="ltr"><<a href="mailto:hzhang@mcs.anl.gov" target="_blank">hzhang@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><span style="font-size:12.8000001907349px">'A serial job in MATLAB for the same matrices takes < 60GB. '</span><br></div><div><span style="font-size:12.8000001907349px">Can you run this case in serial? If so, try petsc, superlu or mumps to make sure the matrix is non-singular.</span></div></div></blockquote></span><div>B matrix is singular but I get my result in Petsc and Mumps for small matrices. <br></div></div></div></div></blockquote><div> </div></span><div>You are lucky to get something out of a singular matrix using LU factorization, likely due to arithmetic roundoff. Is the obtained solution useful?</div><div><br></div><div>Suggest reading petsc or slepc manual on how to dump out null space when a matrix is singular. </div><span class="HOEnZb"><font color="#888888"><div><br></div><div>Hong</div></font></span><div><div class="h5"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div></div><span><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div>Both mumps and superlu_dist show crash in MatFactorNumeric(). Mumps gives error<div>[16]PETSC ERROR: Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory 65710 megabytes.</div><div><br></div><div>Does your code work for smaller problems?</div></div></blockquote></span><div>Yes code works for small problems </div><span><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Try using more processors?</div><div><br></div><div>Why use such huge '<span style="font-size:12.8000001907349px">-mat_mumps_icntl_14 200000' (percentage of estimated workspace increase)? The default is 20. Try 40?</span></div><div><span style="font-size:12.8000001907349px"><br></span></div><div>Superlu_dist usually uses less memory than mumps, but it also crashes. I guess something wrong with your matrix. Is it singular? </div></div></blockquote></span><div>The B matrix is singular. 
On Fri, May 29, 2015 at 7:42 PM, Hong <hzhang@mcs.anl.gov> wrote:

> venkatesh:
>
>> On Tue, May 26, 2015 at 9:02 PM, Hong <hzhang@mcs.anl.gov> wrote:
>>
>>> 'A serial job in MATLAB for the same matrices takes < 60GB.'
>>> Can you run this case in serial? If so, try petsc, superlu or mumps
>>> to make sure the matrix is non-singular.
>>
>> The B matrix is singular, but I get my result in PETSc and MUMPS for
>> small matrices.
>
> You are lucky to get something out of a singular matrix using LU
> factorization, likely due to arithmetic roundoff. Is the obtained
> solution useful?
>
> Suggest reading the PETSc or SLEPc manual on how to dump out the null
> space when a matrix is singular.
>
> Hong
>
>>> Both mumps and superlu_dist show a crash in MatFactorNumeric().
>>> MUMPS gives the error:
>>>
>>>   [16]PETSC ERROR: Error reported by MUMPS in numerical factorization
>>>   phase: Cannot allocate required memory 65710 megabytes.
>>>
>>> Does your code work for smaller problems?
>>
>> Yes, the code works for small problems.
>>
>>> Try using more processors?
>>>
>>> Why use such a huge '-mat_mumps_icntl_14 200000' (percentage of
>>> estimated workspace increase)? The default is 20. Try 40?
>>>
>>> Superlu_dist usually uses less memory than mumps, but it also
>>> crashes. I guess something is wrong with your matrix. Is it singular?
>>
>> The B matrix is singular. So SuperLU cannot be used, is that right?
>>
>>> Run superlu_dist on a slightly smaller matrix with the option
>>> '-mat_superlu_dist_statprint', which displays memory usage info, e.g.:
>>
>> OK, I will do that and check.
>>
>>>   petsc/src/ksp/ksp/examples/tutorials (maint)
>>>   $ mpiexec -n 2 ./ex2 -pc_type lu -pc_factor_mat_solver_package superlu_dist -mat_superlu_dist_statprint
>>>         Nonzeros in L       560
>>>         Nonzeros in U       560
>>>         nonzeros in L+U     1064
>>>         nonzeros in LSUB    248
>>>         NUMfact space (MB) sum(procs): L\U    0.01    all    0.05
>>>         Total highmark (MB):  All  0.05   Avg  0.02   Max  0.02
>>>         EQUIL time          0.00
>>>         ROWPERM time        0.00
>>>         COLPERM time        0.00
>>>         SYMBFACT time       0.00
>>>         DISTRIBUTE time     0.00
>>>         FACTOR time         0.00
>>>         Factor flops    1.181000e+04    Mflops    4.80
>>>         SOLVE time          0.00
>>>         SOLVE time          0.00
>>>         Solve flops     2.194000e+03    Mflops    4.43
>>>         SOLVE time          0.00
>>>         Solve flops     2.194000e+03    Mflops    5.14
>>>   Norm of error 1.18018e-15 iterations 1
>>>
>>> Hong
>>>
>>> On Tue, May 26, 2015 at 9:03 AM, venkatesh g <venkateshgk.j@gmail.com> wrote:
>>>
>>>> I posted a while ago in the MUMPS forums, but no one seems to reply.
>>>>
>>>> I am solving a large generalized eigenvalue problem.
>>>> I am getting the error which is attached, after giving the command:
>>>>
>>>>   /cluster/share/venkatesh/petsc-3.5.3/linux-gnu/bin/mpiexec -np 64 -hosts compute-0-4,compute-0-6,compute-0-7,compute-0-8 ./ex7 -f1 a72t -f2 b72t -st_type sinvert -eps_nev 3 -eps_target 0.5 -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_package mumps -mat_mumps_icntl_14 200000
>>>>
>>>> It is impossible to allocate so much memory per processor; it is asking for around 70 GB per processor.
>>>>
>>>> A serial job in MATLAB for the same matrices takes < 60GB.
>>>>
>>>> After trying out superlu_dist, I have attached the error there also (segmentation error).
>>>>
>>>> Kindly help me.
>>>>
>>>> Venkatesh
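Per Hong's suggestion above, a rerun would use the same command with the workspace increase cut back toward the default. ICNTL(14) is a percentage, so 200000 asks MUMPS to reserve roughly 2000 times its estimated workspace, which can by itself exhaust memory. The adjusted command would look like:

    /cluster/share/venkatesh/petsc-3.5.3/linux-gnu/bin/mpiexec -np 64 \
      -hosts compute-0-4,compute-0-6,compute-0-7,compute-0-8 ./ex7 \
      -f1 a72t -f2 b72t -st_type sinvert -eps_nev 3 -eps_target 0.5 \
      -st_ksp_type preonly -st_pc_type lu \
      -st_pc_factor_mat_solver_package mumps -mat_mumps_icntl_14 40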