<div dir="ltr"><div><div><div>Hi Jose,<br><br></div>Follow up question: is it possible to call EPSKrylovSchurSetSubintervals from fortran? I don't see it in src/eps/impls/krylov/krylovschur/ftn-auto/krylovschurf.c.<br><br></div>Best wishes,<br><br></div>Jeff<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, May 19, 2016 at 2:44 PM, Jeff Steward <span dir="ltr"><<a href="mailto:jeffsteward@gmail.com" target="_blank">jeffsteward@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Thank you very much for your quick response Jose. I really appreciate it. Your point about load balancing even with the same number of pairs in the interval is well taken.<br><br></div><div>For now I can provide hints with EPSKrylovSchurSetSubintervals and that should work fine. However, my problem is reaching the upper limits of memory with plans to increase the size, so I'd like to work on an approach to spectrum slicing with iterative/shell methods for the long run.<br></div><div><br></div>The idea for spectrum slicing as I discussed in my previous message is based on the paper of Aktulga et al (2014):<br><div><div><br><a href="https://math.berkeley.edu/~linlin/publications/ParallelEig.pdf" target="_blank">https://math.berkeley.edu/~linlin/publications/ParallelEig.pdf</a><br><br></div><div>They address the issues involved with computing the (very expensive) matrix inertia. These experiments use PARPACK and FEAST to do a parallel spectrum slicing method. Of course PARPACK and FEAST have their own issues, and SLEPc is much easier to use and more flexible, extensible, etc. I think the MSIL approach they outline might be beneficial for my relatively large eigenproblem where I need many eigenpairs.<br><br>Computing the matrix inertia for such an approach is likely to be the bottleneck at least in terms of memory. How far do you think the incomplete Cholesky with MUMPS would scale?<br><br></div><div>I will send a separate e-mail to slepc-maint regarding my problem.<br><br></div><div>Thank you again!<br><br></div><div>Best wishes,<br><br></div><div>Jeff<br></div></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, May 19, 2016 at 2:19 PM, Jose E. Roman <span dir="ltr"><<a href="mailto:jroman@dsic.upv.es" target="_blank">jroman@dsic.upv.es</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span><br>

> On 19 May 2016, at 22:28, Jeff Steward <jeffsteward@gmail.com> wrote:
>
> I have some questions regarding spectrum slicing in SLEPc, especially in the new version 3.7, as I see a line in the Change Notes that I don't quite understand ("in spectrum slicing in multi-communicator mode now it is possible to update the problem matrices directly on the sub-communicators").

This is intended for applications that require solving a sequence of eigenproblems, where the problem matrices of a given problem are a modification of the previous ones. Instead of updating the matrices in the parent communicator, it is now possible to do the update in the partitioned sub-communicators. This avoids a lot of the communication required for moving data to/from the parent communicator. It is certainly an "advanced feature".
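
A rough (untested) sketch of how this could look, using EPSKrylovSchurGetSubcommMats() and the new EPSKrylovSchurUpdateSubcommMats(), for a standard problem where only A changes between solves; error checking is omitted and Au is a placeholder for the matrix holding the modification:

    Mat As,Au;

    /* work with the replicated matrix that lives on this process's sub-communicator */
    EPSKrylovSchurGetSubcommMats(eps,&As,NULL);
    MatDuplicate(As,MAT_DO_NOT_COPY_VALUES,&Au);
    /* ... fill Au with the modification for the next problem in the sequence ... */
    /* update A <- 1.0*A + 1.0*Au directly on the sub-communicators */
    EPSKrylovSchurUpdateSubcommMats(eps,1.0,1.0,Au,0.0,0.0,NULL,
                                    SAME_NONZERO_PATTERN,PETSC_TRUE);
    EPSSolve(eps);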

>
> 1) In light of this statement, how should I divide my Mat to best work with eps_krylovschur_partitions? Let's say I have 384 processors and eps_krylovschur_partitions=48, so there will be 8 processors in each group. Should I distribute the matrix over all 384 processors (so, say, 32 rows per processor) and have the entire communicator call EPSSolve, or should I (can I?) distribute the matrix over each group of 8 processors (giving, say, 1536 rows per processor) and have the 48 subcommunicators call EPSSolve? Am I correct in thinking that distributing over all 384 processors requires collecting and duplicating the matrix to the 48 different groups?

The idea is that you create the matrix in the initial communicator (typically PETSC_COMM_WORLD) and then EPS will automatically create the partitioned sub-communicators and replicate the matrices via MatCreateRedundantMatrix(). You have to choose the number of partitions and the number of processes in the original communicator that are best suited for your application. If your matrix is relatively small but you need to compute many eigenvalues, then you can consider setting as many partitions as processes in the original communicator (hence the size of each partition is 1). But this makes it necessary to communicate a lot when setting up data in the sub-communicators. The main point of sub-communicators is to work around the limited scalability of parallel direct linear solvers (e.g. MUMPS) when one wants to use many processes.
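
In code, the setup is the usual sequence. A sketch for your 384-process case (error checking and the problem-specific matrix assembly omitted; the ST/MUMPS settings can come from the command line via EPSSetFromOptions()):

    Mat A;       /* assembled by all 384 processes on PETSC_COMM_WORLD */
    EPS eps;

    EPSCreate(PETSC_COMM_WORLD,&eps);
    EPSSetOperators(eps,A,NULL);
    EPSSetProblemType(eps,EPS_HEP);
    EPSSetWhichEigenpairs(eps,EPS_ALL);   /* all eigenvalues in the interval */
    EPSSetInterval(eps,0.4,0.8);          /* the slicing interval */
    EPSSetType(eps,EPSKRYLOVSCHUR);
    EPSKrylovSchurSetPartitions(eps,48);  /* 48 groups of 8 processes each */
    EPSSetFromOptions(eps);
    EPSSolve(eps);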

> 2) If I understand correctly, the approach to Krylov-Schur spectrum slicing described in the SLEPc manual seems like a wasteful default. From what I gather, the interval is divided into equidistant subintervals, and different subintervals are bound to end up with different (potentially vastly different) numbers of eigenpairs. I understand the user can provide their own subintervals, but wouldn't it be better for SLEPc to first compute the matrix inertia at some first-guess points, interpolate, and then adjust the subinterval endpoints so that each contains approximately the same number of eigenvalues? An option for logarithmically spaced points rather than linearly spaced points would be helpful as well, since for the problem I am looking at the spectrum decays in this way (a few large eigenvalues with an exponential decay down to many smaller ones). I require eigenpairs whose eigenvalues vary by several orders of magnitude (1e-3 to 1e3), so the linear equidistant strategy is hopeless.

If you have a priori knowledge of the eigenvalue distribution then you can use EPSKrylovSchurSetSubintervals() to give hints. We do not compute inertia to get a rough estimate of the distribution because, in the general case, computing inertia is very costly (it requires a factorization in our implementation). The main intended use of EPSKrylovSchurSetSubintervals() is again the case where a sequence of eigenproblems must be solved, so that the solution of one problem provides an estimate of the eigenvalue distribution for the next one.
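
For the logarithmic spacing you mention, you can build the subinterval boundaries yourself and pass them as hints. An (untested) sketch for your 1e-3 to 1e3 range, assuming EPSSetInterval(eps,1e-3,1e3) and EPSKrylovSchurSetPartitions(eps,npart) are set consistently:

    PetscInt  i,npart = 48;
    PetscReal *subint;

    PetscMalloc1(npart+1,&subint);   /* npart+1 boundaries */
    for (i=0;i<=npart;i++) subint[i] = PetscPowReal(10.0,-3.0+6.0*i/npart);
    EPSKrylovSchurSetSubintervals(eps,subint);
    PetscFree(subint);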

Anyway, giving subintervals that roughly contain the same number of eigenvalues does not guarantee that the workload will be balanced, since convergence may be quite different from one subinterval to another.
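
To give an idea of what computing the inertia at a shift sigma entails (it is the number of eigenvalues below sigma, obtained from a symmetric factorization of A - sigma*I), here is a rough sketch with PETSc calls; A and sigma are placeholders, and in a real code the factorization would of course be reused for the shift-and-invert solves:

    Mat           B,F;
    MatFactorInfo info;
    PetscInt      nneg,nzero,npos;

    MatFactorInfoInitialize(&info);
    MatDuplicate(A,MAT_COPY_VALUES,&B);
    MatShift(B,-sigma);                     /* B = A - sigma*I */
    MatGetFactor(B,MATSOLVERMUMPS,MAT_FACTOR_CHOLESKY,&F);
    MatCholeskyFactorSymbolic(F,B,NULL,&info);
    MatCholeskyFactorNumeric(F,B,&info);
    MatGetInertia(F,&nneg,&nzero,&npos);    /* nneg = #eigenvalues < sigma */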

> 3) The example given for spectrum slicing in the user manual is
>
> mpiexec -n 20 ./ex25 -eps_interval 0.4,0.8 -eps_krylovschur_partitions 4
>   -st_type sinvert -st_ksp_type preonly -st_pc_type cholesky
>   -st_pc_factor_mat_solver_package mumps -mat_mumps_icntl_13 1
>
> which requires a direct solver. If I can compute the matrix inertia myself and come up with spectral regions and subcommunicators as described above, is there a way to efficiently use SLEPc with an iterative solver? How about with a matrix shell? (I'm getting greedy now ^_^)

I don't know how you could possibly compute the "exact" inertia cheaply. If you can, then yes, it would be worth using a shell matrix, but the solver is probably not prepared for this case. If you want us to have a look at this possibility, send more details to slepc-maint.
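
For reference, the shell-matrix part itself would be straightforward; a sketch follows (MyMult, n, and N are placeholders for your operator and its local/global sizes), the hard part being the slicing machinery around it:

    PetscErrorCode MyMult(Mat A,Vec x,Vec y)
    {
      /* apply your operator: y = A*x */
      return 0;
    }

    /* in the setup code: */
    Mat A;
    MatCreateShell(PETSC_COMM_WORLD,n,n,N,N,NULL,&A);
    MatShellSetOperation(A,MATOP_MULT,(void(*)(void))MyMult);
    EPSSetOperators(eps,A,NULL);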

Jose

>
> I would really appreciate any help on these questions. Thank you for your time.
>
> Best wishes,
>
> Jeff