[petsc-users] Trivial parallelizing in SLEPc

Jose E. Roman jroman at dsic.upv.es
Wed Sep 23 12:31:24 CDT 2015


> On 23/9/2015, at 17:37, Hong <hzhang at mcs.anl.gov> wrote:
> 
> Anders:
> Dear PETSc Users,
> 
> I want to find the smallest eigenpairs of a Hermitian operator, A, implemented as a matrix-less operator.
>  
> Do you mean matrix-free operator?
> There are SLEPc examples using a shell matrix:
> slepc/src/eps/examples/tutorials
> grep -i shell *.c 
> ex3.c
> ex9.c
> ex10.c 
> ...
> Take a look at these examples.
> 
> When computing y = Ax, I need the entire vector x, and I calculate the entire vector y; therefore I want to avoid having the ownership of x and y spread over different processes.
>  
> Do you mean you want  x and y owned by a single process, not distributed?
> I suggest doing the sequential computation first, then moving to parallel execution.
> 
> To what extent can this problem be parallelized in SLEPc? For instance, can I use a block Krylov or block JD algorithm where different processes compute different matrix-vector multiplications? If I provide the correct operations when constructing my MatShell, can I expect the FEAST algorithm to compute each contour point on a different process?
> 
> A SLEPc developer might answer this question.
> 
> Hong
> 

Parallelization in SLEPc is usually based on parallel matrix-vector and vector-vector operations, so if your matrix is sequential SLEPc cannot take advantage of parallelism. There are, however, two exceptions that can use hierarchical parallelism, namely spectrum slicing within Krylov-Schur and the contour-integral solver (CISS), but these are useful only in specific cases. The FEAST algorithm is similar to CISS, but the SLEPc interface to FEAST allows only matrix-based parallelism.

Jose
