<div dir="ltr">David,<div>The 1st solve calls LU factorization once which took 3.3686e+01 sec.</div><div>The remaining solves do not call LU factorization at all, thus fast.</div><div><br></div><div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div><div dir="ltr"><div><br>
MatSolve 8 1.0 2.8005e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 7 0 0 0 0 7 0 0 0 0 0
MatLUFactorSym 1 1.0 4.4394e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatLUFactorNum 1 1.0 3.3686e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 85 0 0 0 0 85 0 0 0 0 0

For the PETSc/SuperLU_DIST interface, MatLUFactorNum actually includes the time for MatLUFactorSym because of SuperLU_DIST's API design; i.e., the 3.3686e+01 sec includes the time spent in MatLUFactorSym. Can you send us '-log_summary' from MUMPS?

Hong
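For reference, the MUMPS run is the same job with only the solver-package option changed from the option table below (a sketch of the options, not a full command line):

  -pc_type lu -pc_factor_mat_solver_package mumps -log_summary
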
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div><div dir="ltr"><div>
MatAssemblyBegin 7 1.0 4.4827e-02 3.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01 0 0 0 0 5 0 0 0 0 5 0
MatAssemblyEnd 7 1.0 2.1890e-01 1.1 0.00e+00 0.0 2.6e+02 2.2e+04 8.0e+00 1 0 6 4 3 1 0 6 4 3 0
KSPGMRESOrthog 6 1.0 3.5851e-03 1.2 1.12e+06 1.0 0.0e+00 0.0e+00 6.0e+00 0 8 0 0 2 0 8 0 0 2 3760
KSPSetUp 1 1.0 8.0609e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 2 1.0 3.6561e+01 1.0 1.51e+07 1.7 7.9e+02 8.7e+04 2.2e+01 92 63 18 43 8 92 63 18 43 8 3
PCSetUp 1 1.0 3.3688e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 85 0 0 0 3 85 0 0 0 3 0
PCApply 8 1.0 2.8005e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 7 0 0 0 0 7 0 0 0 0 0
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
 Vector 50 50 6210712 0
 Vector Scatter 15 15 15540 0
 Index Set 32 32 465324 0
 Matrix 6 6 21032760 0
 Krylov Solver 1 1 18288 0
 Preconditioner 1 1 936 0
 Viewer 2 1 712 0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 5.38826e-06
Average time for zero size MPI_Send(): 9.57648e-06
#PETSc Option Table entries:
-Dmax 0.01
-LowerPML 0
-Mx 72
-My 60
-Mz 4
-Nc 3
-Npmlx 0
-Npmly 0
-Npmlz 3
-Nx 72
-Ny 60
-Nz 10
-dD 0.01
-epsfile eps3dhi.txt
-fproffile fprof3dhi.txt
-gamma 2.0
-hx 0.0625
-hy 0.061858957413174
-hz 0.2
-in0 pass
-log_summary
-norm 0.01
-out0 above
-pc_factor_mat_solver_package superlu_dist
-pc_type lu
-printnewton 1
-wa 1.5
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Wed Apr 17 13:30:40 2013
Configure options: --prefix=/sw/xt/petsc/3.3/cnl3.1_pgi11.9.0 --PETSC_ARCH=REAL --known-level1-dcache-assoc=2 --known-level1-dcache-size=65536 --known-level1-dcache-linesize=64 --with-cc=cc --with-fc=ftn --with-cxx=CC --with-fortran --with-fortran-interfaces=1
--with-single-library=1 --with-shared-libraries=0 --with-scalar-type=real --with-debugging=0 --known-mpi-shared-libraries=0 --with-clib-autodetect=0 --with-fortranlib-autodetect=0 --known-memcmp-ok=1 --COPTFLAGS=-fast -Mipa=fast --FOPTFLAGS=-fast -Mipa=fast
--with-blas-lapack-lib=/opt/xt-libsci/11.0.06/pgi/109/istanbul/lib/libsci_pgi.a --with-scalapack=1 --with-scalapack-lib=/opt/xt-libsci/11.0.06/pgi/109/istanbul/lib/libsci_pgi.a --with-scalapack-include=/opt/xt-libsci/11.0.06/pgi/109/istanbul/include --with-ptscotch=1
--download-ptscotch=yes --with-blacs=1 --with-blacs-lib=/opt/xt-libsci/11.0.06/pgi/109/istanbul/lib/libsci_pgi.a --with-blacs-include=/opt/xt-libsci/11.0.06/pgi/109/istanbul/include --with-superlu_dist=1 --download-superlu_dist=yes --with-metis=1 --download-metis=yes
--with-parmetis=1 --download-parmetis=yes --with-mumps=1 --download-mumps=yes --with-hypre=1 --download-hypre --with-fftw=1 --with-fftw-dir=/opt/fftw/3.3.0.0/x86_64 --with-hdf5=1 --with-hdf5-dir=/opt/cray/hdf5-parallel/1.8.6/pgi/109
-----------------------------------------
Libraries compiled on Wed Apr 17 13:30:40 2013 on krakenpf3
Machine characteristics: Linux-2.6.27.48-0.12.1_1.0301.5943-cray_ss_s-x86_64-with-SuSE-11-x86_64
Using PETSc directory: /nics/e/sw/xt-cle3.1/petsc/3.3/cnl3.1_pgi11.9.0/petsc-3.3-p4
Using PETSc arch: REAL
-----------------------------------------
Using C compiler: cc -fast ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: ftn -fast ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/nics/e/sw/xt-cle3.1/petsc/3.3/cnl3.1_pgi11.9.0/petsc-3.3-p4/REAL/include -I/nics/e/sw/xt-cle3.1/petsc/3.3/cnl3.1_pgi11.9.0/petsc-3.3-p4/include -I/nics/e/sw/xt-cle3.1/petsc/3.3/cnl3.1_pgi11.9.0/petsc-3.3-p4/include -I/nics/e/sw/xt-cle3.1/petsc/3.3/cnl3.1_pgi11.9.0/petsc-3.3-p4/REAL/include
-I/opt/fftw/3.3.0.0/x86_64/include -I/opt/cray/hdf5-parallel/1.8.6/pgi/109/include -I/opt/cray/portals/2.2.0-1.0301.26633.6.9.ss/include -I/opt/cray/pmi/2.1.4-1.0000.8596.15.1.ss/include -I/opt/cray/mpt/5.3.5/xt/seastar/mpich2-pgi/109/include
-I/opt/xt-libsci/11.0.06/pgi/109/istanbul/include -I/usr/include/alps
-----------------------------------------
Using C linker: cc
Using Fortran linker: ftn
Using libraries: -Wl,-rpath,/nics/e/sw/xt-cle3.1/petsc/3.3/cnl3.1_pgi11.9.0/petsc-3.3-p4/REAL/lib -L/nics/e/sw/xt-cle3.1/petsc/3.3/cnl3.1_pgi11.9.0/petsc-3.3-p4/REAL/lib -lpetsc -Wl,-rpath,/nics/e/sw/xt-cle3.1/petsc/3.3/cnl3.1_pgi11.9.0/petsc-3.3-p4/REAL/lib
-L/nics/e/sw/xt-cle3.1/petsc/3.3/cnl3.1_pgi11.9.0/petsc-3.3-p4/REAL/lib -lsuperlu_dist_3.1 -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lparmetis -lmetis -lptesmumps -lptscotch -lptscotcherr -Wl,-rpath,/opt/xt-libsci/11.0.06/pgi/109/istanbul/lib
-L/opt/xt-libsci/11.0.06/pgi/109/istanbul/lib -lsci_pgi -lsci_pgi -lpthread -Wl,-rpath,/opt/fftw/3.3.0.0/x86_64/lib -L/opt/fftw/3.3.0.0/x86_64/lib -lfftw3_mpi -lfftw3 -lHYPRE -L/opt/cray/portals/2.2.0-1.0301.26633.6.9.ss/lib64
-L/opt/cray/pmi/2.1.4-1.0000.8596.15.1.ss/lib64 -L/opt/cray/mpt/5.3.5/xt/seastar/mpich2-pgi/109/lib -L/opt/cray/hdf5-parallel/1.8.6/pgi/109/lib -L/usr/lib/alps -L/opt/pgi/11.9.0/linux86-64/11.9/lib -L/usr/lib64/gcc/x86_64-suse-linux/4.3 -ldl -L/opt/cray/atp/1.4.1/lib
-lAtpSigHandler -lhdf5_hl_pgi_parallel -lhdf5_pgi_parallel -lz -lscicpp_pgi -lsci_pgi_mp -lfftw3 -lfftw3f -lmpichcxx_pgi -lmpich_pgi -lmpl -lrt -lpmi -lportals -lalpslli -lalpsutil -lpthread -Wl,-rpath,/opt/pgi/11.9.0/linux86-64/11.9/lib -lzceh -lgcc_eh -lstdmpz
-lCmpz -lpgmp -lpgf90 -lpgf90_rpm1 -lpgf902 -lpgf90rtl -lpgftnrtl -lnspgc -lpgc -lsci_pgi -Wl,-rpath,/opt/cray/hdf5-parallel/1.8.6/pgi/109/lib -lhdf5_fortran -lhdf5 -lhdf5hl_fortran -lhdf5_hl -ldl -lAtpSigHandler -lhdf5_hl_pgi_parallel -lhdf5_pgi_parallel
-lz -lscicpp_pgi -lsci_pgi_mp -lfftw3 -lfftw3f -lmpichcxx_pgi -lmpich_pgi -lmpl -lrt -lpmi -lportals -lalpslli -lalpsutil -lpthread -lzceh -lgcc_eh -lstdmpz -lCmpz -lpgmp -lpgf90 -lpgf90_rpm1 -lpgf902 -lpgf90rtl -lpgftnrtl -lnspgc -lpgc -lrt -lm -lz -lz -ldl
-----------------------------------------
<div class="gmail_quote"><div class="im">On Tue, Jan 28, 2014 at 6:34 PM, Barry Smith <span dir="ltr">
<<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br>
</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class="im">
<br>
Ok, then use -log_summary and put the first solve in a separate stage (see PetscLogStageRegister()) and send the results of a run back demonstrating the slow first solve, and we may be able to see what the story is.
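A minimal sketch of that staging (error checking omitted; ksp, b, and x stand for the application's existing solver and vectors, and the stage names are arbitrary):

  #include <petscksp.h>

  PetscLogStage stage_first, stage_rest;

  PetscLogStageRegister("First solve", &stage_first);
  PetscLogStageRegister("Remaining solves", &stage_rest);

  PetscLogStagePush(stage_first);
  KSPSolve(ksp, b, x);              /* the solve that triggers the factorization */
  PetscLogStagePop();

  PetscLogStagePush(stage_rest);
  /* ... all subsequent solves ... */
  PetscLogStagePop();

With -log_summary, each event is then reported per stage, so the cost of the first solve is separated from the rest.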

Barry
On Jan 28, 2014, at 5:23 PM, David Liu <daveliu@mit.edu> wrote:

> Wow, that is news to me. I always assumed that this was normal.
>
> I'm pretty certain it's not the preallocation. I'm using MatCreateMPI, and to my knowledge I wouldn't even be able to set the values without crashing if I didn't preallocate. (If I'm not mistaken, setting values slowly without preallocating is only possible if you create the Mat using MatCreate + MatSetUp.)
>
> Also, I'm certain that the time is taken in the first solve, not in the setting of values, because I use the matrix in a MatMult first to get the RHS before solving, and that MatMult happens before the long first solve.
>
>
> On Tue, Jan 28, 2014 at 5:04 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
>
> On Jan 28, 2014, at 1:36 PM, David Liu <daveliu@mit.edu> wrote:
>
> > Hi, I'm writing an application that solves a sparse matrix many times using PaStiX. I notice that the first solve takes a very long time,
>
> Is it the first “solve” or the first time you put values into that matrix that “takes a long time”? If you are not properly preallocating the matrix then the initial setting of values will be slow and waste memory. See
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatXAIJSetPreallocation.html
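> For example, something along these lines (a sketch only: A is assumed to be an already created MPIAIJ matrix, and the per-row counts filled in below are placeholders for whatever your stencil/connectivity actually requires):
>
>   PetscInt m, n, i, *dnnz, *onnz;
>   MatGetLocalSize(A, &m, &n);                    /* m = number of local rows */
>   PetscMalloc(m*sizeof(PetscInt), &dnnz);
>   PetscMalloc(m*sizeof(PetscInt), &onnz);
>   for (i = 0; i < m; i++) { dnnz[i] = 7; onnz[i] = 7; }   /* placeholder upper bounds per row */
>   MatXAIJSetPreallocation(A, 1, dnnz, onnz, PETSC_NULL, PETSC_NULL);  /* NULLs: (S)BAIJ-only arguments */
>   PetscFree(dnnz);
>   PetscFree(onnz);
>
> As long as the counts are large enough, MatSetValues() will then not trigger any mallocs during assembly.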
>
> The symbolic factorization is usually much faster than a numeric factorization so that is not the cause of the slow “first solve”.
>
> Barry
>
>
>
> > while the subsequent solves are very fast. I don't fully understand what's going on behind the curtains, but I'm guessing it's because the very first solve has to read in the nonzero structure for the LU factorization, while the subsequent solves are faster because the nonzero structure doesn't change.
> >
> > My question is, is there any way to save the information obtained from the very first solve, so that the next time I run the application, the very first solve can be fast too (provided that I still have the same nonzero structure)?
>
>