<div dir="ltr"><div>Hi again,</div><div><br></div><div>I read the demo's and also consulted the src folder. Now, the PETSc part assembling the matrices is clear to me. I put together a very simple example by modifying the ex1.py. My code is attached. The problem now for me is that it seems I can't switch the SLEPc.EPS to a parallel one.</div><div><br></div><div>I tried to run the attached code on my laptop which has 2 physical cores. The results of parallel running are (I use mpirun -np 2 python matricesEigenvalue.py to run parallelly)</div><div><div><br></div><div> from 0 to 8192 on rank 0 # The matrix is of 128*128, the first half is allocated on rank 0</div><div> from 8192 to 16384 on rank 1 # so it's from 0 to 8192, the rank 1 for the rest</div><div> ******************************</div><div> Using SLEPc4py for solving the EVP</div><div> ******************************</div><div> </div><div> Elapsed time of EPS is 4.9312 on rank 0</div><div> </div><div> ******************************</div><div> *** SLEPc Solution Results ***</div><div> ******************************</div><div> </div><div> Number of iterations of the method: 90</div><div> Solution method: krylovschur</div><div> Number of requested eigenvalues: 3</div><div> Stopping condition: tol=1e-08, maxit=1820</div><div> Number of converged eigenpairs 3</div><div> </div><div> k ||Ax-kx||/||kx|| </div><div> ----------------- ------------------</div><div> 129012.869061 7.58376e-09</div><div> 128984.178324 7.86259e-09</div><div> 128955.487588 8.10532e-09</div></div><div><br></div><div><br></div><div>while, the results of serial running are</div><div><br></div><div><div> from 0 to 16384 on rank 0</div><div> ******************************</div><div> Using SLEPc4py for solving the EVP</div><div> ******************************</div><div> </div><div> Elapsed time of EPS is 4.5439 on rank 0</div><div> </div><div> ******************************</div><div> *** SLEPc Solution Results ***</div><div> ******************************</div><div> </div><div> Number of iterations of the method: 90</div><div> Solution method: krylovschur</div><div> Number of requested eigenvalues: 3</div><div> Stopping condition: tol=1e-08, maxit=1820</div><div> Number of converged eigenpairs 3</div><div> </div><div> k ||Ax-kx||/||kx|| </div><div> ----------------- ------------------</div><div> 129012.869061 7.58376e-09</div><div> 128984.178324 7.86259e-09</div><div> 128955.487588 8.10532e-09</div></div><div><br></div><div><br></div><div>It seems that the parallel running doesn't work, the elapsed time is roughly the same. You see that in the process of assembling the matrix in the parallel case, the two cores are working according to</div><div><br></div><div><div> from 0 to 8192 on rank 0</div><div> from 8192 to 16384 on rank 1</div></div><div><br></div><div>but in the SLEPc phase, only rank 0 is working (the print command only works for rank 0, but not for rank 1). I also tried to run the ex1.py in parallel and in serial, the computation times for a relative large matrix are also roughly the same for the two runnings, or in some cases, parallel running is even longer.</div><div><br></div><div>Could you help me on this? Thanks a lot.</div>
</div><div class="gmail_extra"><br><div class="gmail_quote">2014-10-29 15:37 GMT-02:00 Mengqi Zhang <span dir="ltr"><<a href="mailto:jollage@gmail.com" target="_blank">jollage@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thanks a lot, Jose. I'll try to read and I'll come back to you if I have any problem.<div><br></div><div>Mengqi</div></div><div class="gmail_extra"><br><div class="gmail_quote">2014-10-29 12:23 GMT-02:00 Jose E. Roman <span dir="ltr"><<a href="mailto:jroman@dsic.upv.es" target="_blank">jroman@dsic.upv.es</a>></span>:<div><div class="h5"><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
On 29/10/2014, at 14:56, Mengqi Zhang wrote:<br>
<span><br>
> Hi, Jose<br>
><br>
> Thanks for your reply.<br>
> I will specify the problem more exactly. I want to solve a big generalized eigenvalue problem. The matrices are sparse. My question is about how to distribute the matrices among the processors. Do I have to do it myself? Or is there a routine to do so? I notice that one should do something like<br>
> from petsc4py import PETSc<br>
><br>
><br>
> A = PETSc.Mat().create()<br>
><br>
><br>
> A.setType('aij')<br>
><br>
><br>
> A.setSizes([M,N])<br>
><br>
><br>
> A.setPreallocationNNZ([diag_nz, offdiag_nz]) # optional<br>
> A.setUp()<br>
> I have several question regarding these lines.<br>
> (1) I presume M and N are the dimensions of the matrix. Then how do the processors divide the matrix? I guess setPreallocationNNZ does the allocation of the matrix among the processors. What does nz mean here? Why do diag and offdiag appear here?<br>
> (2) I actually saw somewhere else that people use A.setPreallocationNNZ(5), with a single parameter 5. What does 5 mean here?<br>
> (3) I want to make sure that the matrix generated this way is sparse (since it uses aij). It is sparse, right? I find this tricky: if the matrix is stored as sparse, will the distribution/parallelization destroy the efficiency of the sparse format?<br>
><br>
><br>
> After the matrix is set up, I would like to use SLEPc4py to solve the generalized eigenvalue problem. The example code I found online looks like this:<br>
> E = SLEPc.EPS(); E.create()<br>
> E.setOperators(A)<br>
> E.setProblemType(SLEPc.EPS.ProblemType.GNHEP)<br>
> E.setFromOptions()<br>
> E.solve()<br>
> I'm afraid this script is not designed for parallel computation, since there is no indication of parallelization in the options. Do you know how to set it up?<br>
><br>
> Thank you very much for your time. I appreciate it very much.<br>
> Best,<br>
> Mengqi<br>
><br>
<br>
</span>Almost all examples under slepc4py/demo are parallel. You can take for instance ex1.py as a basis for your own scripts. In these examples, every process fills its part of the matrix (locally owned rows), as obtained from getOwnershipRange(). See PETSc's documentation here:<br>
<a href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetOwnershipRange.html" target="_blank">http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetOwnershipRange.html</a><br>
<br>
The meaning of the preallocation arguments is also explained in PETSc's manpages:<br>
<a href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html" target="_blank">http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html</a><br>
<br>
petsc4py/slepc4py are just python wrappers to PETSc/SLEPc, so you must understand how PETSc and SLEPc work, and dig into their manuals and manpages.<br>
<span><font color="#888888"><br>
Jose<br>
<br>
</font></span></blockquote></div></div></div><br></div>
</blockquote></div><br></div>