<div dir="ltr"><div><div><div><div><div><div><div><div><div>Thanks for the answers!<br><br></div>First of all, I indeed didn't specify GHEP because if I do, it gives me the message "IPNorm: The inner product is not well defined!" (see below). If I do the same calculation with LAPACK, it does work. And if I compare the Eigenvalues of the GHEP LAPACK simulation and the NHEP (standard) Krylov-Schur simulation, there is a relative difference between both of about 1E-10 (while the solution states that the rel. error is about 1E-14, anyway)<br>
So, if I try to do the same thing as in ex. 13 and set the spectral transform type to STSINVERT, the error goes away for Krylov-Schur, but the solver does not find a solution. For Lanczos and Arnoldi with explicit restart, the error remains. The problem is that I don't really have any idea about what I'm doing!
Anyway, here is what I need to do: my matrices A and B correspond to the potential energy and the kinetic energy of a plasma, respectively. It is very similar to the calculation of energy levels of quantum systems: you perturb the energy of the system while holding the kinetic energy of the perturbation constant, and minimizing this leads to the discrete (and continuous) energy levels, in this case of the plasma system. If the eigenvalues are positive, this means that there is an exploding solution (the eigenvalues are the squares of the frequencies appearing in the time-evolution exponential e^(i sqrt(lambda) t)).
So, abstracting away the boundary conditions, and using spectral transforms in two dimensions and finite differences in the third, I get two matrices A and B, both Hermitian.

I do as follows (Fortran):
Matrices:
- call MatCreateAIJ, explicitly giving the sizes on each processor
- call MatGetOwnershipRange to find the ownership range and determine which blocks to put
- call MatSetValues with INSERT_VALUES to put these blocks in the matrices
- call MatAssemblyBegin and MatAssemblyEnd to assemble them

EPS:
- call EPSCreate
- call EPSSetOperators
- call EPSSetProblemType with GHEP -> gives the error about the IPNorm, so I leave it out
- call EPSSetWhichEigenpairs with EPS_LARGEST_REAL
- call EPSSetFromOptions (why do I need to do this?)
- call EPSSolve

That's it...
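In condensed form, the whole sequence looks roughly like this (only a sketch, not my actual code: the sizes n_loc/n_tot, the preallocation counts nz_d/nz_o, and the idxm/idxn/vals fill are placeholders for my discretization, the include headers are abbreviated, and error checking is omitted):

!     (the usual PETSc/SLEPc finclude headers go here)
      Mat            A, B
      EPS            eps
      PetscInt       n_loc, n_tot, nz_d, nz_o
      PetscInt       r_start, r_end, one
      PetscInt       idxm(1), idxn(1)
      PetscScalar    vals(1)
      PetscErrorCode ierr

      one = 1
!     matrices: distributed AIJ with the local sizes given explicitly
      call MatCreateAIJ(PETSC_COMM_WORLD, n_loc, n_loc, n_tot, n_tot,
     &     nz_d, PETSC_NULL_INTEGER, nz_o, PETSC_NULL_INTEGER, A, ierr)
      call MatGetOwnershipRange(A, r_start, r_end, ierr)
!     ... loop over the owned rows r_start..r_end-1, setting idxm, idxn
!     and vals for each block of the tridiagonal structure ...
      call MatSetValues(A, one, idxm, one, idxn, vals, INSERT_VALUES,
     &     ierr)
      call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
      call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)
!     (B is created, filled and assembled in the same way)

!     EPS
      call EPSCreate(PETSC_COMM_WORLD, eps, ierr)
      call EPSSetOperators(eps, A, B, ierr)
!     call EPSSetProblemType(eps, EPS_GHEP, ierr) ! -> IPNorm error, so left out
      call EPSSetWhichEigenpairs(eps, EPS_LARGEST_REAL, ierr)
      call EPSSetFromOptions(eps, ierr)
      call EPSSolve(eps, ierr)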
Thank you again in advance!
Toon

On 18 August 2014 22:20, Hong <hzhang@mcs.anl.gov> wrote:
Toon:
>
> I think I expressed myself wrong: I can indeed get it to work with just
> using AIJ matrices, as in example 13. This is the way that I am currently
> solving my problem. There are only two issues:
> 1. memory is indeed important so I would certainly like to decrease it by
> one third if possible :-) The goal is to make the simulations as fast and
> light as possible to be able to perform parameter studies (on the stability
> of MHD configurations).
We need to find out where the memory is taken. It seems you are using the
SLEPc defaults krylovschur/sinvert/preonly/lu; the LU factorization might
consume most of the memory. Run your code with '-eps_view' and find
out what solver you are using.
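If you want to trigger the same report from the code instead of the command line, a one-line sketch (assuming your solver object is named eps):

!     programmatic equivalent of the -eps_view runtime option
      call EPSView(eps, PETSC_VIEWER_STDOUT_WORLD, ierr)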
<div class="">><br>
> 2. I have played around a little bit with the different solvers but it<br>
> appears that the standard method and the Arnoldi with explicit restart<br>
> method are the best. Some of the others don't converge and some are slower.<br>
<br>
Your problem is symmetric, so you should use the generalized symmetric
(Hermitian) eigenvalue problem type. It seems you are using a
non-symmetric type; check it with '-eps_view'.
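In Fortran this is a single call (sketch below; the command-line equivalent is '-eps_gen_hermitian'):

!     declare the problem as generalized Hermitian, A x = lambda B x
      call EPSSetProblemType(eps, EPS_GHEP, ierr)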
<div class="">><br>
> The thing is that in the end the matrices that I use are large but they have<br>
> a very easy structure: hermitian tri-diagonal. That's why, I think, slepc<br>
> usually converges in a few iterations (correct me if I'm wrong).<br>
<br>
The number of iterations does not depend on the data structure of the
matrices, but on the location of the eigenvalues.
<div class="">><br>
> The problem is that sometimes, when I consider more grid points, the solver<br>
> doesn't work any more because apparently it uses the LU decomposition (not<br>
> sure for the matrix A or B in A x = lambda B x) and there is a zero pivot<br>
> element (see below for error message). In other words: the matrices become<br>
> almost singular. This is a characteristic of the numerical method I think.<br>
> Is there any way to fix this by setting a more precise threshold or<br>
> something?<br>
<br>
You need to carefully select the target of the shift.
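For example, a minimal sketch of shift-and-invert about a nonzero target (sigma = 1.0 is only a placeholder; the command-line equivalent is '-st_type sinvert -eps_target 1.0'):

      ST          st
      PetscScalar sigma

!     factor A - sigma*B instead of B alone; pick sigma away from zero,
!     near the eigenvalues of interest, so the factored matrix is not
!     (near-)singular
      sigma = 1.0
      call EPSGetST(eps, st, ierr)
      call STSetType(st, STSINVERT, ierr)
      call EPSSetTarget(eps, sigma, ierr)
      call EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE, ierr)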
<div class="">><br>
> Also, there is one more thing: is it possible and useful to use the non-zero<br>
</div>> structure of A when defining matrix B? Does this affect performance?<br>
<br>
Yes, using the same data structure for A and B would make MatAXPY faster,
but it may not affect performance much if you only use a single shift,
e.g., as in ex13.c.
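One way to do that, as a sketch: copy A's nonzero pattern into B without the values, then set B's entries as usual:

!     B gets the same nonzero structure as A; values are not copied
      call MatDuplicate(A, MAT_DO_NOT_COPY_VALUES, B, ierr)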
<span class="HOEnZb"><font color="#888888"><br>
Hong<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
>
> Error messages: identical for KRYLOV-SCHUR and ARNOLDI
>
> [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> [0]PETSC ERROR: Detected zero pivot in LU factorization:
> see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot!
> [0]PETSC ERROR: Zero pivot row 19 value 6.57877e-16 tolerance 2.22045e-14!
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: Petsc Release Version 3.4.5, Jun, 29, 2014
> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> [0]PETSC ERROR: See docs/index.html for manual pages.
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: ./PB3D on a debug-complex named toon-XPS-L501X by toon Mon
> Aug 18 21:47:41 2014
> [0]PETSC ERROR: Libraries linked from
> /opt/petsc/petsc-3.4.5/debug-complex/lib
> [0]PETSC ERROR: Configure run at Mon Aug 11 10:08:29 2014
> [0]PETSC ERROR: Configure options PETSC_ARCH=debug-complex
> --with-scalar-type=complex --with-debugging
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: MatPivotCheck_none() line 589 in
> /opt/petsc/petsc-3.4.5/include/petsc-private/matimpl.h
> [0]PETSC ERROR: MatPivotCheck() line 608 in
> /opt/petsc/petsc-3.4.5/include/petsc-private/matimpl.h
> [0]PETSC ERROR: MatLUFactorNumeric_SeqAIJ_Inode() line 1837 in
> /opt/petsc/petsc-3.4.5/src/mat/impls/aij/seq/inode.c
> [0]PETSC ERROR: MatLUFactorNumeric() line 2889 in
> /opt/petsc/petsc-3.4.5/src/mat/interface/matrix.c
> [0]PETSC ERROR: PCSetUp_LU() line 152 in
> /opt/petsc/petsc-3.4.5/src/ksp/pc/impls/factor/lu/lu.c
> [0]PETSC ERROR: PCSetUp() line 890 in
> /opt/petsc/petsc-3.4.5/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: KSPSetUp() line 278 in
> /opt/petsc/petsc-3.4.5/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: PCSetUp_Redundant() line 170 in
> /opt/petsc/petsc-3.4.5/src/ksp/pc/impls/redundant/redundant.c
> [0]PETSC ERROR: PCSetUp() line 890 in
> /opt/petsc/petsc-3.4.5/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: KSPSetUp() line 278 in
> /opt/petsc/petsc-3.4.5/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: STSetUp_Shift() line 126 in
> /opt/slepc/slepc-3.4.4/src/st/impls/shift/shift.c
> [0]PETSC ERROR: STSetUp() line 294 in
> /opt/slepc/slepc-3.4.4/src/st/interface/stsolve.c
> [0]PETSC ERROR: EPSSetUp() line 215 in
> /opt/slepc/slepc-3.4.4/src/eps/interface/setup.c
> [0]PETSC ERROR: EPSSolve() line 90 in
> /opt/slepc/slepc-3.4.4/src/eps/interface/solve.c
>
> On 18 August 2014 20:52, Barry Smith <bsmith@mcs.anl.gov> wrote:
>>
>>
>> I would recommend just using AIJ matrices and not worrying about SBAIJ at
>> this stage. SBAIJ can save you memory but not operations, and unless memory
>> is really tight it is not likely a useful optimization.
>>
>> What eigensolvers have you tried and which eigenvalues do you want?
>>
>> Barry
>>
>> On Aug 18, 2014, at 5:46 AM, Toon Weyens <tweyens@fis.uc3m.es> wrote:
>>
>> > Dear all,
>> >
>> > I am using PETSc and SLEPc to simulate a problem in MHD, described in
>> > http://scitation.aip.org/content/aip/journal/pop/21/4/10.1063/1.4871859.
>> >
>> > I have a bit of experience with MPI but not too much with PETSc and
>> > SLEPc. So after reading both user manuals and also the relevant chapters of
>> > the PETSc developers manual, I still can't get it to work.
>> >
>> > The problem that I have to solve is a large generalized eigenvalue
>> > system where the matrices are both Hermitian by blocks and tridiagonal,
>> > e.g.:
>> >
>> > ( A11  A12  0    0    0   )            ( B11  B12  0    0    0   )
>> > ( A12* A22  A23  0    0   )            ( B12* B22  B23  0    0   )
>> > ( 0    A23* A33  A34  0   ) = lambda * ( 0    B23* B33  B34  0   )
>> > ( 0    0    A34* A44  A45 )            ( 0    0    B34* B44  B45 )
>> > ( 0    0    0    A45* A55 )            ( 0    0    0    B45* B55 )
>> >
>> > where Aii = Aii*, with * the Hermitian conjugate. I apologize for the
>> > ugly representation.
>> >
>> > The dimensions of both A and B are around 50 to 100 blocks (as there is
>> > a block per discretized point) and the blocks themselves can vary from 1x1 to
>> > more than 100x100 as well (as they correspond to a spectral decomposition).
>> >
>> > Now, my question is: how to solve this economically?
>> >
>> > What I have been trying to do is to make use of the fact that the
>> > matrices are Hermitian, by using MatCreateSBAIJ and through the
>> > recommended MatCreate, MatSetType(MATSBAIJ), etc.
>> >
>> > Could someone help me out? All help would be greatly appreciated!
>> >
>> > Thank you in advance,
>> > Toon
>> > UC3M