[petsc-users] Under what conditions is an MPI-based solution useful?

Jose E. Roman jroman at dsic.upv.es
Thu Jan 14 02:41:04 CST 2010


On 14/01/2010, at 01:04, Takuya Sekikawa wrote:

> Dear SLEPc/PETSc team,
> 
> On Wed, 13 Jan 2010 09:50:57 +0100
> "Jose E. Roman" <jroman at dsic.upv.es> wrote:
> 
>>> [Q2]
>>> Generally, is MPI only useful for very large matrices?
>>> I now have to solve an eigenvalue problem with a 1M x 1M matrix;
>>> should I use an MPI-based system?
>> 
>> For a 1M x 1M matrix I would suggest running in parallel on an MPI cluster. However, a single computer might be enough if the matrix is very sparse, you need very few eigenvalues, and/or the system has enough memory (but in that case, be prepared for very long response times, depending on how your problem converges).
>> Jose
> 
> Do you have any example of how many PCs are needed to solve a problem of
> this size, and how much memory each PC should have?
> 
> I would like to know roughly how many resources I need (PCs, memory)
> and how long it would take to solve (a rough estimate is enough, not a
> precise figure).
> 
> The problem is a 1M x 1M symmetric sparse matrix, and I need only a few
> eigenpairs (at least 1), so I currently plan to use the Lanczos or
> Krylov-Schur method with EPS_NEV=1.

For nev=1, the workspace used by the solver is moderate: maybe 20 vectors of length 1M, i.e. 160 Mbytes with 8-byte scalars. If the matrix is really sparse, say 30 nonzero elements per row, then the matrix is not a problem either (roughly 364 Mbytes in AIJ format: 30M nonzeros at 8 bytes per value plus 4 bytes per column index, plus about 4 Mbytes of row pointers). So one PC may be enough. If the matrix is much denser, you have convergence problems, or you need to do shift-and-invert, then things get worse.
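
In case it helps, here is a minimal sketch of a driver for this kind of problem. The placeholder diagonal assembly and the exact error-checking idiom are just illustrative assumptions (details such as EPSDestroy taking a pointer follow recent SLEPc releases and may need adjusting to your version); your application would assemble its own distributed 1M x 1M symmetric matrix.

#include <slepceps.h>

int main(int argc, char **argv)
{
  Mat            A;        /* problem matrix, distributed across MPI ranks */
  EPS            eps;      /* eigensolver context */
  PetscInt       n = 1000000, Istart, Iend, i;
  PetscErrorCode ierr;

  ierr = SlepcInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* Create a sparse MPI matrix; only the diagonal is filled as a placeholder */
  ierr = MatCreate(PETSC_COMM_WORLD, &A); CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n); CHKERRQ(ierr);
  ierr = MatSetFromOptions(A); CHKERRQ(ierr);
  ierr = MatSetUp(A); CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &Istart, &Iend); CHKERRQ(ierr);
  for (i = Istart; i < Iend; i++) {
    ierr = MatSetValue(A, i, i, (PetscScalar)(i+1), INSERT_VALUES); CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);

  /* Eigensolver: Krylov-Schur, symmetric (Hermitian) problem, one eigenpair */
  ierr = EPSCreate(PETSC_COMM_WORLD, &eps); CHKERRQ(ierr);
  ierr = EPSSetOperators(eps, A, NULL); CHKERRQ(ierr);
  ierr = EPSSetProblemType(eps, EPS_HEP); CHKERRQ(ierr);
  ierr = EPSSetType(eps, EPSKRYLOVSCHUR); CHKERRQ(ierr);
  ierr = EPSSetDimensions(eps, 1, PETSC_DEFAULT, PETSC_DEFAULT); CHKERRQ(ierr);
  ierr = EPSSetFromOptions(eps); CHKERRQ(ierr);  /* picks up -eps_monitor etc. */
  ierr = EPSSolve(eps); CHKERRQ(ierr);

  ierr = EPSDestroy(&eps); CHKERRQ(ierr);
  ierr = MatDestroy(&A); CHKERRQ(ierr);
  ierr = SlepcFinalize();
  return ierr;
}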

The execution time basically depends on convergence. For instance, ex1 with n=1M will have very bad convergence, but your problem may not. Run with -eps_monitor or -eps_monitor_draw to see how the solver is progressing.
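
For example, an illustrative run (the executable name ./eigsolve and the 8 processes are just assumptions; adjust them to your build and cluster) could be:

  mpiexec -n 8 ./eigsolve -eps_type krylovschur -eps_nev 1 -eps_monitor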

Jose


> 
> Takuya
> ---------------------------------------------------------------
>   Takuya Sekikawa
>         Mathematical Systems, Inc
>                    sekikawa at msi.co.jp
> ---------------------------------------------------------------
> 
> 


