<div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">I am checking v4.1 now. I'll let you know when I fixed the problem.</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">Sherry</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jul 28, 2015 at 8:27 AM, Hong <span dir="ltr"><<a href="mailto:hzhang@mcs.anl.gov" target="_blank">hzhang@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Sherry,<div>I tested with superlu_dist v4.1. The extra printings are gone, but hang remains.</div><div>It hangs at </div><div><br></div><div><div>#5  0x00007fde5af1c818 in PMPI_Wait (request=0xb6e4e0, status=0x7fff9cd83d60)</div><div>    at src/mpi/pt2pt/wait.c:168</div><div>#6  0x00007fde602dd635 in pzgstrf (options=0x9202f0, m=4900, n=4900, </div><div>    anorm=13.738475134194639, LUstruct=0x9203c8, grid=0x9202c8, </div><div>    stat=0x7fff9cd84880, info=0x7fff9cd848bc) at pzgstrf.c:1308</div></div><div><br></div><div><div>                if (recv_req[0] != MPI_REQUEST_NULL) {</div><div> -->                   MPI_Wait (&recv_req[0], &status);</div></div><div><br></div><div>We will update petsc interface to superlu_dist v4.1.</div><span class="HOEnZb"><font color="#888888"><div><br></div></font></span><div><span class="HOEnZb"><font color="#888888">Hong</font></span><div><div class="h5"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jul 27, 2015 at 11:33 PM, Xiaoye S. Li <span dir="ltr"><<a href="mailto:xsli@lbl.gov" target="_blank">xsli@lbl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div style="font-family:arial,helvetica,sans-serif">​Hong,</div><div style="font-family:arial,helvetica,sans-serif">Thanks for trying out. </div><div style="font-family:arial,helvetica,sans-serif">The extra printings are not properly guarded by the print level.  I will fix that.   I will look into the hang problem soon. </div><div style="font-family:arial,helvetica,sans-serif"><br></div><div style="font-family:arial,helvetica,sans-serif">Sherry</div><div style="font-family:arial,helvetica,sans-serif">​</div></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jul 27, 2015 at 7:50 PM, Hong <span dir="ltr"><<a href="mailto:hzhang@mcs.anl.gov" target="_blank">hzhang@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Sherry,<div><br></div><div>I can repeat hang using petsc/src/ksp/ksp/examples/tutorials/ex10.c:</div><div>mpiexec -n 4 ./ex10 -f0 /homes/hzhang/tmp/Amat_binary.m -rhs 0 -pc_type lu -pc_factor_mat_solver_package superlu_dist -mat_superlu_dist_parsymbfact<br></div><div>...</div><div><div>.. Starting with 1 OpenMP threads</div><div>[0] .. BIG U size 1342464</div><div>[0] .. 
BIG V size 131072</div><div>  Max row size is 1311</div><div>  Using buffer_size of 5000000</div><div>  Threads per process 1</div></div><div>...</div><div><br></div><div>using a debugger (with petsc option '-start_in_debugger'), I find that hang occurs at</div><div><div>#0  0x00007f117d870998 in __GI___poll (fds=0x20da750, nfds=4, </div><div>    timeout=<optimized out>, timeout@entry=-1)</div><div>    at ../sysdeps/unix/sysv/linux/poll.c:83</div><div>#1  0x00007f117de9f7de in MPIDU_Sock_wait (sock_set=0x20da550, </div><div>    millisecond_timeout=millisecond_timeout@entry=-1, </div><div>    eventp=eventp@entry=0x7fff654930b0)</div><div>    at src/mpid/common/sock/poll/sock_wait.i:123</div><div>#2  0x00007f117de898b8 in MPIDI_CH3i_Progress_wait (</div><div>    progress_state=0x7fff65493120)</div><div>    at src/mpid/ch3/channels/sock/src/ch3_progress.c:218</div><div>#3  MPIDI_CH3I_Progress (blocking=blocking@entry=1, </div><div>    state=state@entry=0x7fff65493120)</div><div>    at src/mpid/ch3/channels/sock/src/ch3_progress.c:921</div><div>#4  0x00007f117de1a559 in MPIR_Wait_impl (request=request@entry=0x262df90, </div><div>    status=status@entry=0x7fff65493390) at src/mpi/pt2pt/wait.c:67</div><div>#5  0x00007f117de1a818 in PMPI_Wait (request=0x262df90, status=0x7fff65493390)</div><div>    at src/mpi/pt2pt/wait.c:168</div><div>#6  0x00007f11831da557 in pzgstrf (options=0x23dfda0, m=4900, n=4900, </div><div>    anorm=13.738475134194639, LUstruct=0x23dfe78, grid=0x23dfd78, </div><div>    stat=0x7fff65493ea0, info=0x7fff65493edc) at pzgstrf.c:1308</div><div><br></div><div>#7  0x00007f11831bf3bd in pzgssvx (options=0x23dfda0, A=0x23dfe30, </div><div>    ScalePermstruct=0x23dfe50, B=0x0, ldb=1225, nrhs=0, grid=0x23dfd78, </div><div>    LUstruct=0x23dfe78, SOLVEstruct=0x23dfe98, berr=0x0, stat=0x7fff65493ea0, </div><div>---Type <return> to continue, or q <return> to quit---</div><div>    info=0x7fff65493edc) at pzgssvx.c:1063</div><div><br></div><div>#8  0x00007f11825c2340 in MatLUFactorNumeric_SuperLU_DIST (F=0x23a0110, </div><div>    A=0x21bb7e0, info=0x2355068)</div><div>    at /sandbox/hzhang/petsc/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:411</div><div>#9  0x00007f1181c6c567 in MatLUFactorNumeric (fact=0x23a0110, mat=0x21bb7e0, </div><div>    info=0x2355068) at /sandbox/hzhang/petsc/src/mat/interface/matrix.c:2946</div><div>#10 0x00007f1182a56489 in PCSetUp_LU (pc=0x2353a10)</div><div>    at /sandbox/hzhang/petsc/src/ksp/pc/impls/factor/lu/lu.c:152</div><div>#11 0x00007f1182b16f24 in PCSetUp (pc=0x2353a10)</div><div>    at /sandbox/hzhang/petsc/src/ksp/pc/interface/precon.c:983</div><div>#12 0x00007f1182be61b5 in KSPSetUp (ksp=0x232c2a0)</div><div>    at /sandbox/hzhang/petsc/src/ksp/ksp/interface/itfunc.c:332</div><div>#13 0x0000000000405a31 in main (argc=11, args=0x7fff65499578)</div><div>    at /sandbox/hzhang/petsc/src/ksp/ksp/examples/tutorials/ex10.c:312</div></div><div><br></div><div>You may take a look at it. Sequential symbolic factorization works fine.</div><div><br></div><div>Why superlu_dist (v4.0) in complex precision displays</div><div><br></div><div><div>.. Starting with 1 OpenMP threads</div><div>[0] .. BIG U size 1342464</div><div>[0] .. BIG V size 131072</div><div>  Max row size is 1311</div><div>  Using buffer_size of 5000000</div><div>  Threads per process 1</div></div><div>...</div><div><br></div><div>I realize that I use superlu_dist v4.0. Would v4.1 works? 
I'll give it a try tomorrow.</div><span><font color="#888888"><div><br></div><div>Hong</div></font></span></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jul 27, 2015 at 1:25 PM, Anthony Paul Haas <span dir="ltr"><<a href="mailto:aph@email.arizona.edu" target="_blank">aph@email.arizona.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Hong,<div><br></div><div>No that is not the correct matrix. Note that I forgot to mention that it is a complex matrix. I tried loading the matrix I sent you this morning with:<span><br><br>
!...Load a Matrix in Binary Format<br></span>
      call PetscViewerBinaryOpen(PETSC_COMM_WORLD,"Amat_binary.m",FILE_MODE_READ,viewer,ierr)<br>
      call MatCreate(PETSC_COMM_WORLD,DLOAD,ierr)<br>
      call MatSetType(DLOAD,MATAIJ,ierr)<br>
      call MatLoad(DLOAD,viewer,ierr)<br>
      call PetscViewerDestroy(viewer,ierr)<br>
<br>
      call MatView(DLOAD,PETSC_VIEWER_STDOUT_WORLD,ierr)<br><br></div><div>The first 37 rows should look like this:<br></div><div><br>Mat Object: 2 MPI processes<br>  type: mpiaij<br>row 0: (0, 1) <br>row 1: (1, 1) <br>row 2: (2, 1) <br>row 3: (3, 1) <br>row 4: (4, 1) <br>row 5: (5, 1) <br>row 6: (6, 1) <br>row 7: (7, 1) <br>row 8: (8, 1) <br>row 9: (9, 1) <br>row 10: (10, 1) <br>row 11: (11, 1) <br>row 12: (12, 1) <br>row 13: (13, 1) <br>row 14: (14, 1) <br>row 15: (15, 1) <br>row 16: (16, 1) <br>row 17: (17, 1) <br>row 18: (18, 1) <br>row 19: (19, 1) <br>row 20: (20, 1) <br>row 21: (21, 1) <br>row 22: (22, 1) <br>row 23: (23, 1) <br>row 24: (24, 1) <br>row 25: (25, 1) <br>row 26: (26, 1) <br>row 27: (27, 1) <br>row 28: (28, 1) <br>row 29: (29, 1) <br>row 30: (30, 1) <br>row 31: (31, 1) <br>row 32: (32, 1) <br>row 33: (33, 1) <br>row 34: (34, 1) <br>row 35: (35, 1) <br>row 36: (1, -41.2444)  (35, -41.2444)  (36, 118.049 - 0.999271 i) (37, -21.447)  (38, 5.18873)  (39, -2.34856)  (40, 1.3607)  (41, -0.898206)  (42, 0.642715)  (43, -0.48593)  (44, 0.382471)  (45, -0.310476)  (46, 0.258302)  (47, -0.219268)  (48, 0.189304)  (49, -0.165815)  (50, 0.147076)  (51, -0.131907)  (52, 0.119478)  (53, -0.109189)  (54, 0.1006)  (55, -0.0933795)  (56, 0.0872779)  (57, -0.0821019)  (58, 0.0777011)  (59, -0.0739575)  (60, 0.0707775)  (61, -0.0680868)  (62, 0.0658258)  (63, -0.0639473)  (64, 0.0624137)  (65, -0.0611954)  (66, 0.0602698)  (67, -0.0596202)  (68, 0.0592349)  (69, -0.0295536)  (71, -21.447)  (106, 5.18873)  (141, -2.34856)  (176, 1.3607)  (211, -0.898206)  (246, 0.642715)  (281, -0.48593)  (316, 0.382471)  (351, -0.310476)  (386, 0.258302)  (421, -0.219268)  (456, 0.189304)  (491, -0.165815)  (526, 0.147076)  (561, -0.131907)  (596, 0.119478)  (631, -0.109189)  (666, 0.1006)  (701, -0.0933795)  (736, 0.0872779)  (771, -0.0821019)  (806, 0.0777011)  (841, -0.0739575)  (876, 0.0707775)  (911, -0.0680868)  (946, 0.0658258)  (981, -0.0639473)  (1016, 0.0624137)  (1051, -0.0611954)  (1086, 0.0602698)  (1121, -0.0596202)  (1156, 0.0592349)  (1191, -0.0295536)  (1261, 0)  (3676, 117.211)  (3711, -58.4801)  (3746, -78.3633)  (3781, 29.4911)  (3816, -15.8073)  (3851, 9.94324)  (3886, -6.87205)  (3921, 5.05774)  (3956, -3.89521)  (3991, 3.10522)  (4026, -2.54388)  (4061, 2.13082)  (4096, -1.8182)  (4131, 1.57606)  (4166, -1.38491)  (4201, 1.23155)  (4236, -1.10685)  (4271, 1.00428)  (4306, -0.919116)  (4341, 0.847829)  (4376, -0.787776)  (4411, 0.736933)  (4446, -0.693735)  (4481, 0.656958)  (4516, -0.625638)  (4551, 0.599007)  (4586, -0.576454)  (4621, 0.557491)  (4656, -0.541726)  (4691, 0.528849)  (4726, -0.518617)  (4761, 0.51084)  (4796, -0.50538)  (4831, 0.502142)  (4866, -0.250534) <br></div><div><br><br></div><div>Thanks,<br></div><div><br></div><div>Anthony</div><div><br></div><div><br></div><div><br><br></div></div><div class="gmail_extra"><br><div class="gmail_quote"><span>On Fri, Jul 24, 2015 at 7:56 PM, Hong <span dir="ltr"><<a href="mailto:hzhang@mcs.anl.gov" target="_blank">hzhang@mcs.anl.gov</a>></span> wrote:<br></span><div><div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">Anthony:</div><div class="gmail_quote">I test your Amat_binary.m using petsc/src/ksp/ksp/examples/tutorials/ex10.c. 
</div><div class="gmail_quote">Your matrix has many zero rows:</div><div class="gmail_quote">./ex10 -f0 ~/tmp/Amat_binary.m -rhs 0 -mat_view |more<br></div><div class="gmail_quote"><div class="gmail_quote">Mat Object: 1 MPI processes</div><div class="gmail_quote">  type: seqaij</div><div class="gmail_quote">row 0: (0, 1)</div><div class="gmail_quote">row 1: (1, 0)</div><div class="gmail_quote">row 2: (2, 1)</div><div class="gmail_quote">row 3: (3, 0)</div><div class="gmail_quote">row 4: (4, 1)</div><div class="gmail_quote">row 5: (5, 0)</div><div class="gmail_quote">row 6: (6, 1)</div><div class="gmail_quote">row 7: (7, 0)</div><div class="gmail_quote">row 8: (8, 1)</div><div class="gmail_quote">row 9: (9, 0)</div><div class="gmail_quote">...</div><div class="gmail_quote"><div class="gmail_quote">row 36: (1, 1)  (35, 0)  (36, 1)  (37, 0)  (38, 1)  (39, 0)  (40, 1)  (41, 0)  (42, 1)  (43, 0)  (44, 1)  (45,</div><div class="gmail_quote">0)  (46, 1)  (47, 0)  (48, 1)  (49, 0)  (50, 1)  (51, 0)  (52, 1)  (53, 0)  (54, 1)  (55, 0)  (56, 1)  (57, 0)</div><div class="gmail_quote"> (58, 1)  (59, 0)  (60, 1)  ...</div><div class="gmail_quote"><br></div><div class="gmail_quote">Do you send us correct matrix?</div></div></div><div class="gmail_quote"><span><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000"><div><br>
      I ran my code through valgrind and gdb as suggested by Barry. I am now coming back to a problem I have had while running with parallel symbolic factorization. I am attaching a test matrix (PETSc binary format) that I LU-decompose and then use to solve a linear system (see code below). I can run on 2 processors with parsymbfact, or on 4 processors without parsymbfact. However, if I run on 4 procs with parsymbfact, the code just hangs. Below is the simplified test case that I have used to test. The matrices A and B are built somewhere else in my program. The matrix I am attaching is A-sigma*B (see below).<br>
      <br>
      One thing is that I don't know, for sparse matrices, what the optimal number of processors is for an LU decomposition. Does it depend on the total number of nonzeros? Do you have an easy way
      to compute it?<br></div></div></blockquote><div><br></div></span><div>You have to experiment with your matrix on a target machine to find out. </div><span><font color="#888888"><div><br></div><div>Hong</div></font></span><div><div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000"><div>
      <br>
      <br>
      <br>
           Subroutine HowBigLUCanBe(rank)<br>
      <br>
            IMPLICIT NONE<br>
      <br>
            integer(i4b),intent(in) :: rank<br>
            integer(i4b)            :: i,ct<br>
            real(dp)                :: begin,endd<br>
            complex(dpc)            :: sigma<br>
      <br>
            PetscErrorCode ierr<br>
      <br>
            if (rank==0) call cpu_time(begin)<br>
      <br>
            if (rank==0) then<br>
               write(*,*)<br>
               write(*,*)'Testing How Big LU Can Be...'<br>
               write(*,*)'============================'<br>
               write(*,*)<br>
            endif<br>
      <br>
            sigma = (1.0d0,0.0d0)<br>
            call MatAXPY(A,-sigma,B,DIFFERENT_NONZERO_PATTERN,ierr) ! on exit A = A-sigma*B<br>
      <br>
      !.....Write Matrix to ASCII and Binary Format<br>
            !call PetscViewerASCIIOpen(PETSC_COMM_WORLD,"Amat.m",viewer,ierr)<br>
            !call MatView(DXX,viewer,ierr)<br>
            !call PetscViewerDestroy(viewer,ierr)<br>
      <br>
            call PetscViewerBinaryOpen(PETSC_COMM_WORLD,"Amat_binary.m",FILE_MODE_WRITE,viewer,ierr)<br>
            call MatView(A,viewer,ierr)<br>
            call PetscViewerDestroy(viewer,ierr)<br>
      <br>
      !.....Create Linear Solver Context<br>
            call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)<br>
      <br>
      !.....Set operators. Here the matrix that defines the linear system also serves as the preconditioning matrix.<br>
            !call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,ierr) !aha commented and replaced by next line<br>
            call KSPSetOperators(ksp,A,A,ierr) ! remember: here A = A-sigma*B<br>
      <br>
      !.....Set Relative and Absolute Tolerances and Uses Default for Divergence Tol<br>
            tol = 1.e-10<br>
            call KSPSetTolerances(ksp,tol,tol,PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,ierr)<br>
      <br>
      !.....Set the Direct (LU) Solver<br>
            call KSPSetType(ksp,KSPPREONLY,ierr)<br>
            call KSPGetPC(ksp,pc,ierr)<br>
            call PCSetType(pc,PCLU,ierr)<br>
            call PCFactorSetMatSolverPackage(pc,MATSOLVERSUPERLU_DIST,ierr) ! MATSOLVERSUPERLU_DIST MATSOLVERMUMPS<br>
      <br>
      !.....Create Right-Hand-Side Vector<br>
            call MatCreateVecs(A,frhs,PETSC_NULL_OBJECT,ierr)<br>
            call MatCreateVecs(A,sol,PETSC_NULL_OBJECT,ierr)<br>
      <br>
            allocate(xwork1(IendA-IstartA))<br>
            allocate(loc(IendA-IstartA))<br>
      <br>
            ct=0<br>
            do i=IstartA,IendA-1<br>
               ct=ct+1<br>
               loc(ct)=i<br>
               xwork1(ct)=(1.0d0,0.0d0)<br>
            enddo<br>
      <br>
            call VecSetValues(frhs,IendA-IstartA,loc,xwork1,INSERT_VALUES,ierr)<br>
            call VecZeroEntries(sol,ierr)<br>
      <br>
            deallocate(xwork1,loc)<br>
      <br>
      !.....Assemble Vectors<br>
            call VecAssemblyBegin(frhs,ierr)<br>
            call VecAssemblyEnd(frhs,ierr)<br>
      <br>
      !.....Solve the Linear System<br>
            call KSPSolve(ksp,frhs,sol,ierr)<br>
      <br>
            !call VecView(sol,PETSC_VIEWER_STDOUT_WORLD,ierr)<br>
      <br>
            if (rank==0) then<br>
               call cpu_time(endd)<br>
               write(*,*)<br>
               print '("Total time for HowBigLUCanBe = ",f21.3," seconds.")',endd-begin<br>
            endif<br>
      <br>
            call SlepcFinalize(ierr)<br>
      <br>
            STOP<br>
      <br>
          end Subroutine HowBigLUCanBe<div><div><br>
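      <br>
      (For reference, a sketch of the invocation that hangs for me, with a placeholder executable name: mpiexec -n 4 ./my_code -mat_superlu_dist_parsymbfact. The same run with -n 2, or with -n 4 but without -mat_superlu_dist_parsymbfact, completes.)<br>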
      <br>
      On 07/08/2015 11:23 AM, Xiaoye S. Li wrote:<br>
    </div></div></div><div><div>
    <blockquote type="cite">
      <div dir="ltr">
        <div style="font-family:arial,helvetica,sans-serif">Indeed, the parallel symbolic factorization routine needs a power-of-2 number of processes; however, you can use however many processes you need. Internally, we redistribute the matrix to the nearest power-of-2 number of processes, do the symbolic factorization, then redistribute back to all the processes to do the numerical factorization, triangular solves, etc. So there is no restriction from the user's viewpoint.<br>
          <br>
        </div>
        <div style="font-family:arial,helvetica,sans-serif">It's difficult to tell what the problem is. Do you think you can print your matrix? Then I can do some debugging by running superlu_dist standalone.<br>
          <br>
        </div>
        <div style="font-family:arial,helvetica,sans-serif">Sherry<br>
          <br>
        </div>
      </div>
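        <div style="font-family:arial,helvetica,sans-serif">For example (a sketch only, reusing the ex10 test case and the PETSc options shown elsewhere in this thread), a non-power-of-2 run would simply be launched as<br>
        mpiexec -n 6 ./ex10 -f0 Amat_binary.m -rhs 0 -pc_type lu -pc_factor_mat_solver_package superlu_dist -mat_superlu_dist_parsymbfact<br>
        with the power-of-2 redistribution handled internally by superlu_dist.<br>
        <br>
        </div>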
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Wed, Jul 8, 2015 at 10:34 AM,
          Anthony Paul Haas <span dir="ltr"><<a href="mailto:aph@email.arizona.edu" target="_blank">aph@email.arizona.edu</a>></span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div dir="ltr">
              <div>
                <div>
                  <div>
                    <div>Hi,<br>
                      <br>
                    </div>
                    I have used the switch -mat_superlu_dist_parsymbfact in my PBS script. However, although my program worked fine with sequential symbolic factorization, I get one of the following two behaviors when I run with parallel symbolic factorization (depending on the number of processors that I use):<br>
                    <br>
                  </div>
                  1) the program just hangs (it seems stuck in some
                  subroutine ==> see test.out-hangs)<br>
                </div>
                2) I get a floating point exception ==> see
                test.out-floating-point-exception<br>
                <br>
              </div>
              <div>Note that, as suggested in the SuperLU manual, I use a power-of-2 number of procs. Are there any tunable parameters for the parallel symbolic factorization? Note that when I build my sparse matrix, most elements I add are nonzero of course, but to simplify the programming I also add a few zero elements in the sparse matrix. I was thinking that maybe, if the parallel symbolic factorization proceeds by blocks, there could be some blocks where the pivot would be zero, hence creating the FPE??<br>
                <br>
              </div>
              <div>Thanks,<br>
                <br>
              </div>
              <div>Anthony<br>
              </div>
              <div><br>
              </div>
              <br>
            </div>
            <div>
              <div>
                <div class="gmail_extra"><br>
                  <div class="gmail_quote">On Wed, Jul 8, 2015 at 6:46
                    AM, Xiaoye S. Li <span dir="ltr"><<a href="mailto:xsli@lbl.gov" target="_blank">xsli@lbl.gov</a>></span>
                    wrote:<br>
                    <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                      <div dir="ltr">
                        <div style="font-family:arial,helvetica,sans-serif">Did you find out how to change the option to use parallel symbolic factorization? Perhaps the PETSc team can help.</div>
                        <div style="font-family:arial,helvetica,sans-serif"><br>
                        </div>
                        <div style="font-family:arial,helvetica,sans-serif">Sherry</div>
                        <div style="font-family:arial,helvetica,sans-serif"><br>
                        </div>
                      </div>
                      <div>
                        <div>
                          <div class="gmail_extra"><br>
                            <div class="gmail_quote">On Tue, Jul 7, 2015
                              at 3:58 PM, Xiaoye S. Li <span dir="ltr"><<a href="mailto:xsli@lbl.gov" target="_blank">xsli@lbl.gov</a>></span>
                              wrote:<br>
                              <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                                <div dir="ltr">
                                  <div style="font-family:arial,helvetica,sans-serif">Is there an inquiry function that tells you all the available options?<br>
                                    <br>
                                  </div>
                                  <div style="font-family:arial,helvetica,sans-serif">Sherry<br>
                                  </div>
                                </div>
                                <div>
                                  <div>
                                    <div class="gmail_extra"><br>
                                      <div class="gmail_quote">On Tue, Jul 7, 2015 at 3:25 PM, Anthony Paul Haas <span dir="ltr"><<a href="mailto:aph@email.arizona.edu" target="_blank">aph@email.arizona.edu</a>></span> wrote:<br>
                                        <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                                          <div dir="ltr">
                                            <div>
                                              <div>
                                                <div>
                                                  <div>
                                                    <div>Hi Sherry,<br>
                                                      <br>
                                                    </div>
                                                    <div>Thanks for your message. I have used the superlu_dist default options. I did not realize that I was doing serial symbolic factorization. That is probably the cause of my problem.<br>
                                                    </div>
                                                    Each node on Garnet has 60GB of usable memory, and I can run with 1, 2, 4, 8, 16, or 32 cores per node.<br>
                                                    <br>
                                                  </div>
                                                  So I should use: <br>
                                                  <br>
                                                  -mat_superlu_dist_r 20<br>
                                                  -mat_superlu_dist_c 32<b><br>
                                                    <br>
                                                  </b></div>
                                                How do you specify the parallel symbolic factorization option? Is it -mat_superlu_dist_matinput 1?<b><br>
                                                  <br>
                                                </b></div>
                                              Thanks,<br>
                                              <br>
                                            </div>
                                            Anthony<br>
                                            <div>
                                              <div>
                                                <div>
                                                  <div><br>
                                                  </div>
                                                </div>
                                              </div>
                                            </div>
                                          </div>
                                          <div>
                                            <div>
                                              <div class="gmail_extra"><br>
                                                <div class="gmail_quote">On Tue, Jul 7, 2015 at 3:08 PM, Xiaoye S. Li <span dir="ltr"><<a href="mailto:xsli@lbl.gov" target="_blank">xsli@lbl.gov</a>></span> wrote:<br>
                                                  <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                                                    <div dir="ltr">
                                                      <div style="font-family:arial,helvetica,sans-serif">For the superlu_dist failure: this occurs during symbolic factorization. Since you are using serial symbolic factorization, it requires the entire graph of A to be available in the memory of one MPI task. How much memory do you have for each MPI task?<br>
                                                        <br>
                                                      </div>
                                                      <div style="font-family:arial,helvetica,sans-serif">It won't help even if you use more processes. You should try the parallel symbolic factorization option.<br>
                                                        <br>
                                                      </div>
                                                      <div style="font-family:arial,helvetica,sans-serif">Another point: you set up the process grid as<br>
                                                               Process grid nprow 32 x npcol 20<br>
                                                      </div>
                                                      <div style="font-family:arial,helvetica,sans-serif">For better performance, you should swap the grid dimensions. That is, it's better to use 20 x 32; never make nprow larger than npcol.<br>
                                                        <br>
                                                        <br>
                                                      </div>
                                                      <div style="font-family:arial,helvetica,sans-serif">Sherry<br>
                                                        <br>
                                                      </div>
                                                    </div>
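                                                    <div style="font-family:arial,helvetica,sans-serif">In PETSc terms (a sketch, assuming the -mat_superlu_dist_r and -mat_superlu_dist_c options Anthony quotes above map to nprow and npcol, and a 640-process run), that grid would be requested with<br>
                                                    -mat_superlu_dist_r 20<br>
                                                    -mat_superlu_dist_c 32<br>
                                                    so that nprow (20) is not larger than npcol (32).<br>
                                                    <br>
                                                    </div>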
                                                    <div class="gmail_extra"><br>
                                                      <div class="gmail_quote"><span>On Tue, Jul 7, 2015 at 1:27 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br>
                                                        </span>
                                                        <div>
                                                          <div>
                                                          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
                                                             I would suggest running a sequence of problems, 101 by 101, 111 by 111, etc., and recording the memory usage in each case (when you run out of memory you can get NO useful information out about memory needs). You can then plot memory usage as a function of problem size to get a handle on how much memory it is using. You can also run on more and more processes (which have a total of more memory) to see how large a problem you may be able to reach.<br>
                                                           <br>
                                                             MUMPS also has an "out of core" version (which we have never used) that could in theory anyway let you get to large problems if you have lots of disk space, but you are on your own figuring out how to use it.<br>
                                                           <br>
                                                             Barry<br>
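                                                           <br>
                                                           (One way to record that number per run, as a sketch using the PETSc Fortran interface already used in module_petsc.F90: declare a PetscLogDouble mem, call PetscMemoryGetCurrentUsage(mem,ierr) after the factorization, and plot mem against problem size.)<br>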
                                                          <div>
                                                          <div><br>
                                                           > On Jul 7, 2015, at 2:37 PM, Anthony Paul Haas <<a href="mailto:aph@email.arizona.edu" target="_blank">aph@email.arizona.edu</a>> wrote:<br>
                                                           ><br>
                                                           > Hi Jose,<br>
                                                           ><br>
                                                           > In my code, I use PETSc once to solve a linear system to get the baseflow (without using SLEPc), and then I use SLEPc to do the stability analysis of that baseflow. This is why there are some SLEPc options that are not used in test.out-superlu_dist-151x151 (when I am solving for the baseflow with PETSc only). I have attached a 101x101 case for which I get the eigenvalues. That case works fine. However, if I increase to 151x151, I get the error that you can see in test.out-superlu_dist-151x151 (similar error with mumps: see test.out-mumps-151x151 line 2918). If you look at the very end of the files test.out-superlu_dist-151x151 and test.out-mumps-151x151, you will see that the last info message printed is:<br>
                                                           ><br>
                                                           > On Processor (after EPSSetFromOptions)  0    memory:    0.65073152000E+08        =====>  (see line 807 of module_petsc.F90)<br>
                                                           ><br>
                                                           > This means that the memory error probably occurs in the call to EPSSolve (see module_petsc.F90 line 810). I would like to evaluate how much memory is required by the most memory-intensive operation within EPSSolve. Since I am solving a generalized EVP, I would imagine that it would be the LU decomposition. But is there an accurate way of doing it?<br>
                                                           ><br>
                                                           > Before starting with iterative solvers, I would like to exploit direct solvers as much as I can. I tried GMRES with the default preconditioner at some point but I had convergence problems. What solver/preconditioner would you recommend for a generalized non-Hermitian (EPS_GNHEP) EVP?<br>
                                                           ><br>
                                                           > Thanks,<br>
                                                           ><br>
                                                           > Anthony<br>
                                                           ><br>
                                                           > On Tue, Jul 7, 2015 at 12:17 AM, Jose E. Roman <<a href="mailto:jroman@dsic.upv.es" target="_blank">jroman@dsic.upv.es</a>> wrote:<br>
                                                           ><br>
                                                           > On 07/07/2015, at 02:33, Anthony Haas wrote:<br>
                                                           ><br>
                                                           > > Hi,<br>
                                                           > ><br>
                                                           > > I am computing eigenvalues using PETSc/SLEPc and superlu_dist for the LU decomposition (my problem is a generalized eigenvalue problem). The code runs fine for a grid with 101x101, but when I increase to 151x151, I get the following error:<br>
                                                           > ><br>
                                                           > > Can't expand MemType 1: jcol 16104  (and then [NID 00037] 2015-07-06 19:19:17 Apid 31025976: OOM killer terminated this process.)<br>
                                                           > ><br>
                                                           > > It seems to be a memory problem. I monitor the memory usage as far as I can, and it seems that memory usage is pretty low. The most memory-intensive part of the program is probably the LU decomposition in the context of the generalized EVP. Is there a way to evaluate how much memory will be required for that step? I am currently running the debug version of the code, which I would assume would use more memory?<br>
                                                           > ><br>
                                                           > > I have attached the output of the job. Note that the program uses PETSc twice: 1) to solve a linear system, for which no problem occurs, and 2) to solve the generalized EVP with SLEPc, where I get the error.<br>
                                                           > ><br>
                                                           > > Thanks<br>
                                                           > ><br>
                                                           > > Anthony<br>
                                                           > > <test.out-superlu_dist-151x151><br>
                                                           ><br>
                                                           > In the output you are attaching there are no SLEPc objects in the report and SLEPc options are not used. It seems that SLEPc calls are skipped?<br>
                                                           ><br>
                                                           > Do you get the same error with MUMPS? Have you tried to solve linear systems with a preconditioned iterative solver?<br>
                                                           ><br>
                                                           > Jose<br>
                                                           ><br>
                                                           ><br>
                                                          </div>
                                                          </div>
                                                           > <module_petsc.F90><test.out-mumps-151x151><test.out_superlu_dist-101x101><test.out-superlu_dist-151x151><br>
                                                          <br>
                                                          </blockquote>
                                                          </div>
                                                        </div>
                                                      </div>
                                                      <br>
                                                    </div>
                                                  </blockquote>
                                                </div>
                                                <br>
                                              </div>
                                            </div>
                                          </div>
                                        </blockquote>
                                      </div>
                                      <br>
                                    </div>
                                  </div>
                                </div>
                              </blockquote>
                            </div>
                            <br>
                          </div>
                        </div>
                      </div>
                    </blockquote>
                  </div>
                  <br>
                </div>
              </div>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
    <br>
  </div></div></div>

</blockquote></div></div></div><br></div></div>
</blockquote></div></div></div><br></div>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div></div></div></div></div>
</blockquote></div><br></div>