<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Jun 27, 2018 at 3:12 PM, Smith, Barry F. <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
David,<br>
<br>
This is ugly but should work: BEFORE reading in the matrix and right-hand side, set the LOCAL sizes for the matrix and vector. That way you can control exactly which rows go on which process. Note that you will need your own mechanism to know what the local sizes should be (for example, have the original program print them out, then cut and paste them into your copy of ex10.c); PETSc doesn't provide an automatic way to do this (nor should it).<br>
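In ex10.c terms, the modification looks roughly like this (a minimal untested sketch; the file name "binaryoutput" and the per-rank local size my_m are placeholders you must replace with your own values, and error checking is omitted for brevity):<br>

```c
/* Sketch: fix the LOCAL row distribution BEFORE MatLoad()/VecLoad() so
   each process owns the same rows as in the run that hung. */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat         A;
  Vec         b;
  PetscViewer viewer;
  PetscMPIInt rank;
  PetscInt    my_m;

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  /* Local size copied from the original run (e.g. printed there with
     MatGetLocalSize()); the values below are placeholders. */
  my_m = (rank == 0) ? 1300 : 1278;

  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "binaryoutput", FILE_MODE_READ, &viewer);

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, my_m, my_m, PETSC_DETERMINE, PETSC_DETERMINE); /* local sizes first */
  MatSetFromOptions(A);
  MatLoad(A, viewer);  /* MatLoad now respects the layout set above */

  VecCreate(PETSC_COMM_WORLD, &b);
  VecSetSizes(b, my_m, PETSC_DETERMINE);
  VecSetFromOptions(b);
  VecLoad(b, viewer);

  PetscViewerDestroy(&viewer);
  /* ... set up KSP and solve as in ex10.c ... */
  MatDestroy(&A);
  VecDestroy(&b);
  PetscFinalize();
  return 0;
}
```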
<span class="m_-7014872054230522750m_4592477465790287280m_6798277638623563476HOEnZb"><font color="#888888"><br>
Barry<br></font></span></blockquote><div><br></div><div><br></div><div>Thanks Barry and Stefano, the approach you both suggested (calling MatSetSizes and VecSetSizes before reading in) was what I was looking for.</div><div><br></div><div>Best,</div><div>David</div><div><br></div><div><br></div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="m_-7014872054230522750m_4592477465790287280m_6798277638623563476HOEnZb"><div class="m_-7014872054230522750m_4592477465790287280m_6798277638623563476h5">> On Jun 27, 2018, at 1:36 PM, David Knezevic <<a href="mailto:david.knezevic@akselos.com" target="_blank">david.knezevic@akselos.com</a>> wrote:<br>
> <br>
> I ran into a case where using MUMPS (called via "-ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps") for a particular solve hangs indefinitely with 24 MPI processes (but it works fine with other numbers of processes). The stack trace when killing the job is below, in case that gives any clue as to what is wrong.<br>
> <br>
> I'm trying to replicate this with a simple test case. I wrote out the matrix and right-hand side to disk using MatView and VecView, and then I modified ksp ex10 to read in these files and solve with 24 cores. However, that did not replicate the error, so I think I also need to make sure that I use the same number of rows per process in the test case as in the case that hung. As a result I'm wondering if there is a way to modify the parallel layout of the matrix and vector after I read them in?<br>
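(For reference, the dump side described above can be sketched as follows; this is a hypothetical helper, not code from the thread — the file name passed in and the printed local-size report are there so the layout can be reproduced later in the modified ex10.c:)<br>

```c
#include <petscksp.h>

/* Hypothetical helper: write A and b into one PETSc binary file so a
   modified ex10.c can MatLoad()/VecLoad() them back, and print each
   rank's local row count, which is needed to reproduce the layout. */
static PetscErrorCode DumpSystem(Mat A, Vec b, const char *fname)
{
  PetscViewer    viewer;
  PetscInt       m, n;
  PetscErrorCode ierr;

  ierr = MatGetLocalSize(A, &m, &n); CHKERRQ(ierr);
  ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD, "local rows: %D\n", m); CHKERRQ(ierr);
  ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT); CHKERRQ(ierr);

  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, fname, FILE_MODE_WRITE, &viewer); CHKERRQ(ierr);
  ierr = MatView(A, viewer); CHKERRQ(ierr);  /* matrix first ... */
  ierr = VecView(b, viewer); CHKERRQ(ierr);  /* ... then the right-hand side */
  ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);
  return 0;
}
```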
> <br>
> Also, if there are any other suggestions about reproducing or debugging this issue, please let me know!<br>
> <br>
> Best,<br>
> David<br>
> <br>
> --------------------------------<br>
> <br>
> #0 0x00007fb12bf0e74d in poll () at ../sysdeps/unix/syscall-template.S:84<br>
> #1 0x00007fb126262e58 in ?? () from /usr/lib/libopen-pal.so.13<br>
> #2 0x00007fb1262596fb in opal_libevent2021_event_base_loop () from /usr/lib/libopen-pal.so.13<br>
> #3 0x00007fb126223238 in opal_progress () from /usr/lib/libopen-pal.so.13<br>
> #4 0x00007fb12cef53db in ompi_request_default_test () from /usr/lib/libmpi.so.12<br>
> #5 0x00007fb12cf21d61 in PMPI_Test () from /usr/lib/libmpi.so.12<br>
> #6 0x00007fb127a5b939 in pmpi_test__ () from /usr/lib/libmpi_mpifh.so.12<br>
> #7 0x00007fb132888d87 in dmumps_try_recvtreat (comm_load=8, ass_irecv=40, blocking=.FALSE., set_irecv=.TRUE., message_received=.FALSE., msgsou=-1, msgtag=-1, status=..., bufr=..., lbufr=401408, lbufr_bytes=1605629, procnode_steps=..., posfac=410095, iwpos=3151, iwposcb=30557, <br>
> iptrlu=1536548, lrlu=1126454, lrlus=2864100, n=30675, iw=..., liw=39935, a=..., la=3367108, ptrist=..., ptlust=..., ptrfac=..., ptrast=..., step=..., pimaster=..., pamaster=..., nstk_s=..., comp=0, iflag=0, ierror=0, comm=7, nbprocfils=..., ipool=..., lpool=48, leaf=2, <br>
> nbfin=90, myid=33, slavef=90, root=..., opassw=353031, opeliw=700399235, itloc=..., rhs_mumps=..., fils=..., ptrarw=..., ptraiw=..., intarr=..., dblarr=..., icntl=..., keep=..., keep8=..., dkeep=..., nd=..., frere=..., lptrar=30675, nelt=1, frtptr=..., frtelt=..., <br>
> istep_to_iniv2=..., tab_pos_in_pere=..., stack_right_authorized=.TRUE., lrgroups=...) at dfac_process_message.F:646<br>
> #8 0x00007fb1328cfcd1 in dmumps_fac_par_m::dmumps_fac_par (n=30675, iw=..., liw=39935, a=..., la=3367108, nstk_steps=..., nbprocfils=..., nd=..., fils=..., step=..., frere=..., dad=..., cand=..., istep_to_iniv2=..., tab_pos_in_pere=..., maxfrt=0, ntotpv=0, nmaxnpiv=150, <br>
> ptrist=..., ptrast=..., pimaster=..., pamaster=..., ptrarw=..., ptraiw=..., itloc=..., rhs_mumps=..., ipool=..., lpool=48, rinfo=..., posfac=410095, iwpos=3151, lrlu=1126454, iptrlu=1536548, lrlus=2864100, leaf=2, nbroot=1, nbrtot=90, uu=0.01, icntl=..., ptlust=..., ptrfac=..., <br>
> nsteps=1, info=..., keep=..., keep8=..., procnode_steps=..., slavef=90, myid=33, comm_nodes=7, myid_nodes=33, bufr=..., lbufr=401408, lbufr_bytes=1605629, intarr=..., dblarr=..., root=..., perm=..., nelt=1, frtptr=..., frtelt=..., lptrar=30675, comm_load=8, ass_irecv=40, <br>
> seuil=0, seuil_ldlt_niv2=0, mem_distrib=..., ne=..., dkeep=..., pivnul_list=..., lpn_list=1, lrgroups=...) at dfac_par_m.F:207<br>
> #9 0x00007fb13287f875 in dmumps_fac_b (n=30675, nsteps=1, a=..., la=3367108, iw=..., liw=39935, sym_perm=..., na=..., lna=47, ne_steps=..., nfsiz=..., fils=..., step=..., frere=..., dad=..., cand=..., istep_to_iniv2=..., tab_pos_in_pere=..., ptrar=..., ldptrar=30675, ptrist=..., <br>
> ptlust_s=..., ptrfac=..., iw1=..., iw2=..., itloc=..., rhs_mumps=..., pool=..., lpool=48, cntl1=0.01, icntl=..., info=..., rinfo=..., keep=..., keep8=..., procnode_steps=..., slavef=90, comm_nodes=7, myid=33, myid_nodes=33, bufr=..., lbufr=401408, lbufr_bytes=1605629, <br>
> intarr=..., dblarr=..., root=..., nelt=1, frtptr=..., frtelt=..., comm_load=8, ass_irecv=40, seuil=0, seuil_ldlt_niv2=0, mem_distrib=..., dkeep=..., pivnul_list=..., lpn_list=1, lrgroups=...) at dfac_b.F:167<br>
> #10 0x00007fb1328419ed in dmumps_fac_driver (id=<error reading variable: value requires 600640 bytes, which is more than max-value-size>) at dfac_driver.F:2291<br>
> #11 0x00007fb1327ff6dc in dmumps (id=<error reading variable: value requires 600640 bytes, which is more than max-value-size>) at dmumps_driver.F:1686<br>
> #12 0x00007fb1327faf0a in dmumps_f77 (job=2, sym=0, par=1, comm_f77=5, n=30675, icntl=..., cntl=..., keep=..., dkeep=..., keep8=..., nz=0, nnz=0, irn=..., irnhere=0, jcn=..., jcnhere=0, a=..., ahere=0, nz_loc=622296, nnz_loc=0, irn_loc=..., irn_lochere=1, jcn_loc=..., <br>
> jcn_lochere=1, a_loc=..., a_lochere=1, nelt=0, eltptr=..., eltptrhere=0, eltvar=..., eltvarhere=0, a_elt=..., a_elthere=0, perm_in=..., perm_inhere=0, rhs=..., rhshere=0, redrhs=..., redrhshere=0, info=..., rinfo=..., infog=..., rinfog=..., deficiency=0, lwk_user=0, <br>
> size_schur=0, listvar_schur=..., listvar_schurhere=0, schur=..., schurhere=0, wk_user=..., wk_userhere=0, colsca=..., colscahere=0, rowsca=..., rowscahere=0, instance_number=1, nrhs=1, lrhs=0, lredrhs=0, rhs_sparse=..., rhs_sparsehere=0, sol_loc=..., sol_lochere=0, <br>
> irhs_sparse=..., irhs_sparsehere=0, irhs_ptr=..., irhs_ptrhere=0, isol_loc=..., isol_lochere=0, nz_rhs=0, lsol_loc=0, schur_mloc=0, schur_nloc=0, schur_lld=0, mblock=0, nblock=0, nprow=0, npcol=0, ooc_tmpdir=..., ooc_prefix=..., write_problem=..., tmpdirlen=20, prefixlen=20, <br>
> write_problemlen=20) at dmumps_f77.F:267<br>
> #13 0x00007fb1327f9cfa in dmumps_c (mumps_par=mumps_par@entry=0x12bd9660) at mumps_c.c:417<br>
> #14 0x00007fb1321a23fc in MatFactorNumeric_MUMPS (F=0x12bd8b60, A=0x26bd890, info=<optimized out>) at /home/buildslave/software/petsc-src/src/mat/impls/aij/mpi/mumps/mumps.c:1073<br>
> #15 0x00007fb131ec6ea7 in MatLUFactorNumeric (fact=0x12bd8b60, mat=0x26bd890, info=info@entry=0xc2a66f8) at /home/buildslave/software/petsc-src/src/mat/interface/matrix.c:3025<br>
> #16 0x00007fb1325040d6 in PCSetUp_LU (pc=0xc2a6380) at /home/buildslave/software/petsc-src/src/ksp/pc/impls/factor/lu/lu.c:131<br>
> #17 0x00007fb13259903e in PCSetUp (pc=0xc2a6380) at /home/buildslave/software/petsc-src/src/ksp/pc/interface/precon.c:923<br>
> #18 0x00007fb13263e53f in KSPSetUp (ksp=ksp@entry=0x12b28c70) at /home/buildslave/software/petsc-src/src/ksp/ksp/interface/itfunc.c:381<br>
> #19 0x00007fb13263ed36 in KSPSolve (ksp=0x12b28c70, b=0xad77d50, x=0xad801c0) at /home/buildslave/software/petsc-src/src/ksp/ksp/interface/itfunc.c:612<br>
> #20 0x00007fb12db5dfc2 in libMesh::PetscLinearSolver<double>::solve(libMesh::SparseMatrix<double>&, libMesh::SparseMatrix<double>&, libMesh::NumericVector<double>&, libMesh::NumericVector<double>&, double, unsigned int) ()<br>
> from /mnt/fileserver/akselos-4.2.x/scrbe/build/bin/../../third_party/opt_real/libmesh_opt.so.0<br>
> #21 0x00007fb1338d0c06 in libMesh::PetscLinearSolver<double>::solve(libMesh::SparseMatrix<double>&, libMesh::NumericVector<double>&, libMesh::NumericVector<double>&, double, unsigned int) () from /mnt/fileserver/akselos-4.2.x/scrbe/build/bin/../lib/libscrbe-opt_real.so<br>
> #22 0x00007fb1335e8abd in std::pair<unsigned int, double> SolveHelper::try_linear_solve<libMesh::LinearSolver<double> >(libMesh::LinearSolver<double>&, libMesh::SolverConfiguration&, libMesh::SparseMatrix<double>&, libMesh::NumericVector<double>&, libMesh::NumericVector<double>&) ()<br>
> from /mnt/fileserver/akselos-4.2.x/scrbe/build/bin/../lib/libscrbe-opt_real.so<br>
> #23 0x00007fb133a70206 in <br>
<br>
</div></div></blockquote></div></div></div>