[petsc-users] Improving efficiency of slepc usage

Matthew Knepley knepley at gmail.com
Tue Aug 24 10:59:23 CDT 2021


On Tue, Aug 24, 2021 at 8:47 AM dazza simplythebest <sayosale at hotmail.com>
wrote:

>
> Dear Matthew and Jose,
>    Apologies for the delayed reply, I had a couple of unforeseen days off
> this week.
> Firstly, regarding Jose's suggestion re: MUMPS, the program is already
> using MUMPS
> to solve linear systems (the code uses a distributed MPI matrix to
> solve the generalised
> non-Hermitian complex problem).
>
> I have tried the gdb debugger as per Matthew's suggestion.
> Just to note, in case someone else is following this: at first it
> didn't work (gdb couldn't 'attach'),
> but after some googling I found a tip suggesting the command
> echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
> which got it working.
>
> *I then first ran the debugger on the small matrix case that worked.*
> That stopped in gdb almost immediately after starting execution
> with a report regarding 'nanosleep.c':
> ../sysdeps/unix/sysv/linux/clock_nanosleep.c: No such file or directory.
> However, issuing the 'cont' command again caused the program to run
> through to the end of the
>  execution w/out any problems, and with correct looking results, so I am
> guessing this error
> is not particularly important.
>

We do that on purpose when the debugger starts up. Typing 'cont' is correct.
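For anyone else who hits the attach failure mentioned above, the relevant kernel setting can be inspected non-destructively before changing anything; this is a sketch assuming a kernel with Ubuntu's Yama security module:

```shell
# 0 means gdb may attach to any process you own;
# 1 (the Ubuntu default) restricts attaching to direct child processes.
cat /proc/sys/kernel/yama/ptrace_scope 2>/dev/null \
  || echo "Yama LSM not present on this kernel"
```

Note that the echo-0 tweak quoted above only lasts until reboot; on Ubuntu the persistent setting typically lives in /etc/sysctl.d/10-ptrace.conf.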


> *I then tried the same debugging procedure on the large matrix case that
> fails.*
> The code again stopped almost immediately after the start of execution
> with
> the same nanosleep error as before, and I was able to set the program
> running
>  again with 'cont' (see full output below). I was running the code with 4
> MPI processes,
>  and so had 4 gdb windows appear.  Thereafter the code ran for some time
> until completing the
> matrix construction, and then one of the gdb process windows printed a
> Program terminated with signal SIGKILL, Killed.
> The program no longer exists.
> message.  I then typed 'where' into this terminal but just received the
> message
> No stack.
>

I have only seen this behavior one other time, and it was with Fortran.
Fortran allows you to declare really big arrays
on the stack by putting them at the start of a function (rather than
allocating them on the heap with F90 ALLOCATE). When I had one of those
arrays exceed the stack space, I got this kind of error, where everything
is destroyed rather than just stopping. Could it be that you have a large
structure on the stack?
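If that hypothesis is right, one quick experiment is to raise the per-process stack limit before rerunning. This is a sketch assuming bash on Linux; under mpiexec you may need the limit raised on every rank (e.g. via shell startup files or the batch system):

```shell
# Show the current per-process stack limit; the Linux default is often
# 8192 kB, which a single large automatic array can easily exceed.
ulimit -s

# Raise the soft limit for this shell and its children, then rerun the
# program; if the failure disappears, a stack allocation was the likely
# culprit. A hard limit may forbid 'unlimited'.
ulimit -s unlimited 2>/dev/null \
  || echo "could not raise the stack limit (hard limit in effect)"
ulimit -s
```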

Second, you can at least look at the stack for the processes that were not
killed. Type Ctrl-C in each of those gdb windows, which should give you
the (gdb) prompt, and then type "where".
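A SIGKILL that arrives with no PETSc error output is also the classic signature of the kernel's out-of-memory (OOM) killer. A quick, non-destructive check for its footprint in the kernel log (assuming Linux; either command may need root):

```shell
# The OOM killer logs a line like "Out of memory: Killed process <pid>"
# when it SIGKILLs a process; search the kernel log for such records.
dmesg 2>/dev/null | grep -iE 'out of memory|oom-killer|killed process' \
  || journalctl -k --no-pager 2>/dev/null | grep -iE 'out of memory|oom' \
  || echo "no OOM records found (or no permission to read the kernel log)"
```

If memory does turn out to be the issue, note that shift-and-invert factors A - sigma*B with a direct solver, and sparse LU fill-in can need several times more memory than the Krylov iteration used by -eps_largest_real. MUMPS can print its memory estimates when its verbosity is raised (ICNTL(4) in MUMPS terms), exposed in PETSc through the -mat_mumps_icntl_* option family; check the MATSOLVERMUMPS documentation for the exact option prefix in your setup.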

  Thanks,

      Matt


> The other gdb windows basically seemed to be left in limbo until I issued
> the 'quit'
>  command in the SIGKILL window, and then they vanished.
>
> I paste the full output from the gdb window that recorded the SIGKILL
> below here.
> I guess it is necessary to somehow work out where the SIGKILL originates
> from ?
>
>  Thanks once again,
>                          Dan.
>
>
>  - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> GNU gdb (Ubuntu 9.2-0ubuntu1~20.04) 9.2
> Copyright (C) 2020 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <
> http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.
> Type "show copying" and "show warranty" for details.
> This GDB was configured as "x86_64-linux-gnu".
> Type "show configuration" for configuration details.
> For bug reporting instructions, please see:
> <http://www.gnu.org/software/gdb/bugs/>.
> Find the GDB manual and other documentation resources online at:
>     <http://www.gnu.org/software/gdb/documentation/>.
>
> For help, type "help".
> Type "apropos word" to search for commands related to "word"...
> Reading symbols from ./stab1.exe...
> Attaching to program:
> /data/work/rotplane/omega_to_zero/stability/test/tmp10/tmp6/stab1.exe,
> process 675919
> Reading symbols from
> /data/work/slepc/SLEPC/slepc-3.15.1/arch-omp_nodbug/lib/libslepc.so.3.15...
> Reading symbols from
> /data/work/slepc/PETSC/petsc-3.15.0/arch-omp_nodbug/lib/libpetsc.so.3.15...
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64_lin/libmkl_intel_lp64.so...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64_lin/libmkl_intel_lp64.so)
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64_lin/libmkl_core.so...
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64_lin/libmkl_intel_thread.so...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64_lin/libmkl_intel_thread.so)
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64_lin/libmkl_blacs_intelmpi_lp64.so...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64_lin/libmkl_blacs_intelmpi_lp64.so)
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libiomp5.so...
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libiomp5.dbg...
> Reading symbols from /lib/x86_64-linux-gnu/libdl.so.2...
> Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libdl-2.31.so...
> Reading symbols from /lib/x86_64-linux-gnu/libpthread.so.0...
> Reading symbols from
> /usr/lib/debug/.build-id/e5/4761f7b554d0fcc1562959665d93dffbebdaf0.debug...
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> Reading symbols from /usr/lib/x86_64-linux-gnu/libstdc++.so.6...
> (No debugging symbols found in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/libmpifort.so.12...
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12...
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.dbg...
> Reading symbols from /lib/x86_64-linux-gnu/librt.so.1...
> Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/librt-2.31.so...
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libifport.so.5...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libifport.so.5)
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libimf.so...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libimf.so)
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libsvml.so...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libsvml.so)
> Reading symbols from /lib/x86_64-linux-gnu/libm.so.6...
> Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libm-2.31.so...
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libirc.so...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libirc.so)
> Reading symbols from /lib/x86_64-linux-gnu/libgcc_s.so.1...
> (No debugging symbols found in /lib/x86_64-linux-gnu/libgcc_s.so.1)
> Reading symbols from /usr/lib/x86_64-linux-gnu/libquadmath.so.0...
> (No debugging symbols found in /usr/lib/x86_64-linux-gnu/libquadmath.so.0)
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/libmpi_ilp64.so...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/libmpi_ilp64.so)
> Reading symbols from /lib/x86_64-linux-gnu/libc.so.6...
> Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libc-2.31.so...
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libirng.so...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libirng.so)
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libintlc.so.5...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/compiler/lib/intel64_lin/libintlc.so.5)
> Reading symbols from /lib64/ld-linux-x86-64.so.2...
> Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/ld-2.31.so...
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/libfabric/lib/libfabric.so.1...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/libfabric/lib/libfabric.so.1)
> Reading symbols from /usr/lib/x86_64-linux-gnu/libnuma.so...
> (No debugging symbols found in /usr/lib/x86_64-linux-gnu/libnuma.so)
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/libfabric/lib/prov/libtcp-fi.so...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/libfabric/lib/prov/libtcp-fi.so)
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/libfabric/lib/prov/libsockets-fi.so...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/libfabric/lib/prov/libsockets-fi.so)
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/libfabric/lib/prov/librxm-fi.so...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/libfabric/lib/prov/librxm-fi.so)
> Reading symbols from
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/libfabric/lib/prov/libpsmx2-fi.so...
> (No debugging symbols found in
> /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/libfabric/lib/prov/libpsmx2-fi.so)
> Reading symbols from /usr/lib/x86_64-linux-gnu/libpsm2.so.2...
> (No debugging symbols found in /usr/lib/x86_64-linux-gnu/libpsm2.so.2)
> 0x00007fac4d0d8334 in __GI___clock_nanosleep (clock_id=<optimized out>,
> clock_id at entry=0, flags=flags at entry=0, req=req at entry=0x7ffdc641a9a0,
> rem=rem at entry=0x7ffdc641a9a0) at
> ../sysdeps/unix/sysv/linux/clock_nanosleep.c:78
> 78      ../sysdeps/unix/sysv/linux/clock_nanosleep.c: No such file or
> directory.
> (gdb) cont
> Continuing.
> [New Thread 0x7f9e49c02780 (LWP 676559)]
> [New Thread 0x7f9e49400800 (LWP 676560)]
> [New Thread 0x7f9e48bfe880 (LWP 676562)]
> [Thread 0x7f9e48bfe880 (LWP 676562) exited]
> [Thread 0x7f9e49400800 (LWP 676560) exited]
> [Thread 0x7f9e49c02780 (LWP 676559) exited]
>
> Program terminated with signal SIGKILL, Killed.
> The program no longer exists.
> (gdb) where
> No stack.
>
>  - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> - - - - - - - - - - - - -
>
> ------------------------------
> *From:* Matthew Knepley <knepley at gmail.com>
> *Sent:* Friday, August 20, 2021 2:12 PM
> *To:* dazza simplythebest <sayosale at hotmail.com>
> *Cc:* Jose E. Roman <jroman at dsic.upv.es>; PETSc <petsc-users at mcs.anl.gov>
> *Subject:* Re: [petsc-users] Improving efficiency of slepc usage
>
> On Fri, Aug 20, 2021 at 6:55 AM dazza simplythebest <sayosale at hotmail.com>
> wrote:
>
> Dear Jose,
>     Many thanks for your response, I have been investigating this issue
> with a few more calculations
> today, hence the slightly delayed response.
>
> The problem is actually derived from a fluid dynamics problem, so to allow
> an easier exploration of things
> I first downsized the resolution of the underlying fluid solver while
> keeping all the physical parameters
>  the same - i.e. I get a smaller matrix that solves the
> same physical problem as the original
>  larger matrix, but to lower accuracy.
>
> *Results*
>
> *Small matrix (N= 21168) - everything good!*
> This converged when using the -eps_largest_real approach (taking 92
> iterations for nev=10,
> tol= 5.0000E-06 and ncv = 300), and also when using the shift-invert
> approach, converging
> very impressively in a single iteration! Interestingly, it did this both
> for a non-zero -eps_target
> and also for a zero -eps_target.
>
> *Large matrix (N=50400) - works for -eps_largest_real, fails for -st_type
> sinvert*
> I have just double-checked that the code does run properly when we
> use the -eps_largest_real
> option - indeed I ran it with a small nev and a large tolerance (nev = 4,
> -eps_tol 5.0e-4, ncv = 300)
> and with these parameters convergence was obtained in 164 iterations,
> which took 6 hours on the
> machine I was running it on. Furthermore the eigenvalues seem to be
> ballpark correct; for this large
> higher resolution case (although with lower slepc tolerance) we obtain
> 1789.56816314173 -4724.51319554773i
>  as the eigenvalue with largest real part, while the smaller matrix (same
> physical problem but at lower resolution)
> found this eigenvalue to be 1831.11845726501 -4787.54519511345i , which
> means the agreement is in line
> with expectations.
>
> *Unfortunately, the code does still crash when I try to do
> shift-invert for the large matrix case*,
>  whether or not I use a non-zero -eps_target. For reference this is the
> command line used :
> -eps_nev 10    -eps_ncv 300  -log_view -eps_view   -eps_target 0.1
> -st_type sinvert -eps_monitor :monitor_output05.txt
> To be precise the code crashes soon after calling EPSSolve (it
> successfully calls
>  MatCreateVecs, EPSCreate,  EPSSetOperators, EPSSetProblemType and
> EPSSetFromOptions).
> By crashes I mean that I do not even get any error messages from
> slepc/PETSC, and do not even get the
> 'EPS Object: 16 MPI processes' message - I simply get an MPI/Fortran
> 'KILLED BY SIGNAL: 9 (Killed)' message
>  as soon as EPSSolve is called.
>
>
> Hi Dan,
>
> It would help track this error down if we had a stack trace. You can get a
> stack trace from the debugger. You run with
>
>   -start_in_debugger
>
> which should launch the debugger (usually), and then type
>
>   cont
>
> to continue, and then
>
>   where
>
> to get the stack trace when it crashes, or 'bt' on lldb.
>
>   Thanks,
>
>      Matt
>
>
> Do you have any ideas as to why this larger matrix case should fail when
> using shift-invert but succeed when using
> -eps_largest_real ? The fact that the program works and produces correct
> results
> when using the -eps_largest_real  option suggests that there is probably
> nothing wrong with the specification
> of the problem or the matrices? It is strange that there is no error
> message from slepc / PETSc ... the
> only idea I have at the moment is that perhaps the maximum memory has been
> exceeded, which could cause such a sudden
> shutdown? For your reference, when running the large matrix case with the
> -eps_largest_real option I am using
> about 36 GB of the 148 GB available on this machine - does the
> shift-invert approach require substantially
> more memory, for example?
>
>   I would be very grateful for any suggestions to resolve this
> issue, or even ways to clarify it further;
>  the performance I have seen with shift-invert for the small matrix is
> so impressive that it would be great to
>  get it working for the full-size problem.
>
>    Many thanks and best wishes,
>                                   Dan.
>
>
>
> ------------------------------
> *From:* Jose E. Roman <jroman at dsic.upv.es>
> *Sent:* Thursday, August 19, 2021 7:58 AM
> *To:* dazza simplythebest <sayosale at hotmail.com>
> *Cc:* PETSc <petsc-users at mcs.anl.gov>
> *Subject:* Re: [petsc-users] Improving efficiency of slepc usage
>
> In A) convergence may be slow, especially if the wanted eigenvalues have
> small magnitude. I would not say 600 iterations is a lot; you probably need
> many more. In most cases, approach B) is better because it improves
> convergence of eigenvalues close to the target, but it requires prior
> knowledge of your spectrum distribution in order to choose an appropriate
> target.
>
> In B), what do you mean by "it crashes"? If you get an error about the
> factorization, it means that your A-matrix is singular. In that case, try
> using a nonzero target, e.g. -eps_target 0.1
>
> Jose
>
>
> > El 19 ago 2021, a las 7:12, dazza simplythebest <sayosale at hotmail.com>
> escribió:
> >
> > Dear All,
> >             I am planning on using slepc to do a large number of
> eigenvalue calculations
> >  of a generalized eigenvalue problem, called from a program written in
> fortran using MPI.
> >  Thus far I have successfully installed the slepc/PETSc software, both
> locally and on a cluster,
> >  and on smaller test problems everything is working well; the matrices
> are efficiently and
> > correctly constructed and slepc returns the correct spectrum. I am just
> now starting to move
> towards solving the full-size 'production run' problems, and would
> appreciate some
> > general advice on how to improve the solver's performance.
> >
> > In particular, I am currently trying to solve the problem Ax = lambda Bx
> whose matrices
> > are of size 50000 (this is the smallest 'production run' problem I will
> be tackling), and are
> > complex, non-Hermitian.  In most cases I aim to find the eigenvalues
> with the largest real part,
> > although in other cases I will also be interested in finding the
> eigenvalues whose real part
> > is close to zero.
> >
> > A)
> > Calling slepc 's EPS solver with the following options:
> >
> > -eps_nev 10   -log_view -eps_view -eps_max_it 600 -eps_ncv 140  -eps_tol
> 5.0e-6  -eps_largest_real -eps_monitor :monitor_output.txt
> >
> >
> > led to the code successfully running, but failing to find any
> eigenvalues within the maximum 600 iterations
> > (examining the monitor output it did appear to be very slowly
> approaching convergence).
> >
> > B)
> > On the same problem I have also tried a shift-invert transformation
> using the options
> >
> > -eps_nev 10    -eps_ncv 140    -eps_target 0.0+0.0i  -st_type sinvert
> >
> > -in this case the code crashed at the point it tried to call slepc, so
> perhaps I have incorrectly specified these options ?
> >
> >
> > Does anyone have any suggestions as to how to improve this performance (
> or find out more about the problem) ?
> > In the case of A) I can see from watching the slepc   videos that
> increasing ncv
> > may help, but I am wondering, since 600 is a large number of
> iterations, whether there
> > may be something else going on - e.g. perhaps some alternative
> preconditioner may help ?
> > In the case of B), I guess there must be some mistake in these command
> line options?
> >  Again, any advice will be greatly appreciated.
> >      Best wishes,  Dan.
>
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>

