[petsc-users] Mysterious error code 77

Quentin Chevalier quentin.chevalier at polytechnique.edu
Fri May 6 08:28:13 CDT 2022


Sorry for forgetting the list. Making two versions of the matrices was more of a precaution
than a carefully thought-out strategy.

It would seem the MWE as I provided it above (with a setDimensions call to
reduce calculation time) does work in sequential mode and gives the eigenvalue
118897.88711586884.

I had a working MUMPS solver configuration for a similar problem, so I simply
reused it here. This gives the following code.

However, running it, even in sequential mode, gives a "MatSolverType mumps does
not support matrix type python" error:
petsc4py.PETSc.Error: error code 92
[0] EPSSolve() at /usr/local/slepc/src/eps/interface/epssolve.c:136
[0] EPSSetUp() at /usr/local/slepc/src/eps/interface/epssetup.c:350
[0] STSetUp() at /usr/local/slepc/src/sys/classes/st/interface/stsolve.c:582
[0] STSetUp_Sinvert() at /usr/local/slepc/src/sys/classes/st/impls/sinvert/sinvert.c:123
[0] KSPSetUp() at /usr/local/petsc/src/ksp/ksp/interface/itfunc.c:408
[0] PCSetUp() at /usr/local/petsc/src/ksp/pc/interface/precon.c:1016
[0] PCSetUp_LU() at /usr/local/petsc/src/ksp/pc/impls/factor/lu/lu.c:82
[0] MatGetFactor() at /usr/local/petsc/src/mat/interface/matrix.c:4779
[0] See https://petsc.org/release/overview/linear_solve_table/ for possible LU and Cholesky solvers
[0] MatSolverType mumps does not support matrix type python

Did I do something wrong? I'm a bit confused by your FAQ entry; I have never
had a problem with this configuration in parallel.

from petsc4py import PETSc as pet
from slepc4py import SLEPc as slp
from mpi4py.MPI import COMM_WORLD

dir="./sanity_check_mats/"

# Global sizes
m=980143; m_local=m
n=904002; n_local=n
if COMM_WORLD.size>1:
    m_local=COMM_WORLD.rank*(490363-489780)+489780
    n_local=COMM_WORLD.rank*(452259-451743)+451743
    dir+="par/"
else:
    m_local=m
    n_local=n
    dir+="seq/"

QE=pet.Mat().createAIJ([[m_local,m],[n_local,n]])
L=pet.Mat().createAIJ([[m_local,m],[m_local,m]])
Mf=pet.Mat().createAIJ([[n_local,n],[n_local,n]])
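# createAIJ takes sizes as [[local rows, global rows], [local cols, global cols]];
# the local sizes above are hard-coded to match how the matrices were saved.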

viewerQE = pet.Viewer().createMPIIO(dir+"QE.dat", 'r', COMM_WORLD)
QE.load(viewerQE)
viewerL = pet.Viewer().createMPIIO(dir+"L.dat", 'r', COMM_WORLD)
L.load(viewerL)
viewerMf = pet.Viewer().createMPIIO(dir+"Mf.dat", 'r', COMM_WORLD)
Mf.load(viewerMf)

QE.assemble()
L.assemble()
Mf.assemble()

KSPs = []
# Useful solvers (here to put options for computing a smart R)
for Mat in [L,L.hermitianTranspose()]:
    KSP = pet.KSP().create()
    KSP.setOperators(Mat)
    KSP.setFromOptions()
    KSPs.append(KSP)

class LHS_class:
    def mult(self,A,x,y):
        w=pet.Vec().createMPI([m_local,m],comm=COMM_WORLD)
        z=pet.Vec().createMPI([m_local,m],comm=COMM_WORLD)
        QE.mult(x,w)
        KSPs[0].solve(w,z)
        KSPs[1].solve(z,w)
        QE.multTranspose(w,y)

# Matrix free operator
LHS=pet.Mat()
LHS.create(comm=COMM_WORLD)
LHS.setSizes([[n_local,n],[n_local,n]])
LHS.setType(pet.Mat.Type.PYTHON)
LHS.setPythonContext(LHS_class())
LHS.setUp()
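# PETSc forwards MatMult on LHS to LHS_class.mult above: only the action of the
# operator is available, never assembled entries.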

x, y = LHS.createVecs()
x.set(1)
LHS.mult(x,y)
print(y.norm())

# Eigensolver
EPS = slp.EPS(); EPS.create()
EPS.setOperators(LHS,Mf) # Solve QE^T*L^-1H*L^-1*QE*f=sigma^2*Mf*f (cheaper than a proper SVD)
EPS.setProblemType(slp.EPS.ProblemType.GHEP) # Specify that A is hermitian (by construction), but M is semi-positive
EPS.setWhichEigenpairs(EPS.Which.LARGEST_MAGNITUDE) # Find largest eigenvalues
# Spectral transform
ST = EPS.getST(); ST.setType('sinvert')
# Krylov subspace
KSP = ST.getKSP(); KSP.setType('preonly')
# Preconditioner
PC = KSP.getPC(); PC.setType('lu')
PC.setFactorSolverType('mumps')
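# Note: with ST type 'sinvert', this requests an LU factorization of the operator
# built from LHS, which has type PYTHON; judging from the traceback above, this is
# where the "MatSolverType mumps does not support matrix type python" error is raised.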
EPS.setDimensions(1,5)
EPS.setFromOptions()
EPS.solve()
print(EPS.getEigenvalue(0))
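
For reference, here is a minimal, self-contained sketch (the matrix, its size and
its values are placeholders, not my actual operators) of the same preonly + LU +
MUMPS configuration applied to a small assembled AIJ matrix, which is the kind of
matrix type MUMPS can actually factor:

from petsc4py import PETSc as pet

# Placeholder assembled AIJ matrix (diagonal), standing in for a real operator
A = pet.Mat().createAIJ([4,4], comm=pet.COMM_WORLD)
A.setUp()
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):
    A.setValue(i, i, 2.0)
A.assemble()

KSP = pet.KSP().create()
KSP.setOperators(A)
KSP.setType('preonly')
PC = KSP.getPC(); PC.setType('lu')
PC.setFactorSolverType('mumps') # accepted for AIJ; MatGetFactor fails for type PYTHON
KSP.setFromOptions()

b = A.createVecRight(); b.set(1)
x = A.createVecLeft()
KSP.solve(b, x)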

Quentin CHEVALIER – IA parcours recherche
LadHyX - Ecole polytechnique
__________



On Fri, 6 May 2022 at 14:47, Jose E. Roman <jroman at dsic.upv.es> wrote:
>
> Please respond to the list.
>
> Why would you need different matrices for sequential and parallel? PETSc takes care of loading matrices from binary files in parallel codes. If I use the 'seq' matrices for both sequential and parallel runs it works. I get the eigenvalue 128662.745858
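
(For illustration, a minimal sketch of that point, with the file name and the absence of preset local sizes assumed: the same load code runs unchanged in serial and parallel, and PETSc picks the row distribution.)

from petsc4py import PETSc as pet
from mpi4py.MPI import COMM_WORLD

# Let PETSc decide the parallel layout instead of hard-coding local sizes
A = pet.Mat().create(comm=COMM_WORLD)
A.setType(pet.Mat.Type.AIJ)
viewer = pet.Viewer().createBinary("QE.dat", 'r', comm=COMM_WORLD)
A.load(viewer)
print(COMM_WORLD.rank, A.getOwnershipRange())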
>
> Take into account that in parallel you need to use a parallel linear solver, e.g., configure PETSc with MUMPS. See the FAQ #10: https://slepc.upv.es/documentation/faq.htm#faq10
>
> Jose
>
> > On 5 May 2022, at 14:57, Quentin Chevalier <quentin.chevalier at polytechnique.edu> wrote:
> >
> > Thank you for your answer, Jose.
> >
> > It would appear my problem is slightly more complicated than it first seemed. With a slight modification of the original MWE to account for running in serial or parallel, and computing the matrices in either sequential or parallel (apologies, it's a large file), and including a slightly modified version of your test case, I obtain two different results in serial and parallel (commands python3 MWE.py and mpirun -n 2 python3 MWE.py).
> >
> > Serial gives me a finite norm (around 28) and parallel gives me two norms of 0 and an error code 77.
> >
> > This is still a problem, as I would really like my code to be parallel. I saved my matrices using:
> > viewerQE = pet.Viewer().createMPIIO("QE.dat", 'w', COMM_WORLD)
> > QE.view(viewerQE)
> > viewerL = pet.Viewer().createMPIIO("L.dat", 'w', COMM_WORLD)
> > L.view(viewerL)
> > viewerMq = pet.Viewer().createMPIIO("Mq.dat", 'w', COMM_WORLD)
> > Mq.view(viewerMq)
> > viewerMf = pet.Viewer().createMPIIO("Mf.dat", 'w', COMM_WORLD)
> > Mf.view(viewerMf)
> >
> > New MWE (also attached for convenience):
> > from petsc4py import PETSc as pet
> > from slepc4py import SLEPc as slp
> > from mpi4py.MPI import COMM_WORLD
> >
> > dir="./mats/"
> >
> > # Global sizes
> > m=980143; m_local=m
> > n=904002; n_local=n
> > if COMM_WORLD.size>1:
> >     m_local=COMM_WORLD.rank*(490363-489780)+489780
> >     n_local=COMM_WORLD.rank*(452259-451743)+451743
> >     dir+="par/"
> > else:
> >     m_local=m
> >     n_local=n
> >     dir+="seq/"
> >
> > QE=pet.Mat().createAIJ([[m_local,m],[n_local,n]])
> > L=pet.Mat().createAIJ([[m_local,m],[m_local,m]])
> > Mf=pet.Mat().createAIJ([[n_local,n],[n_local,n]])
> >
> > viewerQE = pet.Viewer().createMPIIO(dir+"QE.dat", 'r', COMM_WORLD)
> > QE.load(viewerQE)
> > viewerL = pet.Viewer().createMPIIO(dir+"L.dat", 'r', COMM_WORLD)
> > L.load(viewerL)
> > viewerMf = pet.Viewer().createMPIIO(dir+"Mf.dat", 'r', COMM_WORLD)
> > Mf.load(viewerMf)
> >
> > QE.assemble()
> > L.assemble()
> > Mf.assemble()
> >
> > KSPs = []
> > # Useful solvers (here to put options for computing a smart R)
> > for Mat in [L,L.hermitianTranspose()]:
> >     KSP = pet.KSP().create()
> >     KSP.setOperators(Mat)
> >     KSP.setFromOptions()
> >     KSPs.append(KSP)
> >
> > class LHS_class:
> >     def mult(self,A,x,y):
> >         w=pet.Vec().createMPI([m_local,m],comm=COMM_WORLD)
> >         z=pet.Vec().createMPI([m_local,m],comm=COMM_WORLD)
> >         QE.mult(x,w)
> >         KSPs[0].solve(w,z)
> >         KSPs[1].solve(z,w)
> >         QE.multTranspose(w,y)
> >
> > # Matrix free operator
> > LHS=pet.Mat()
> > LHS.create(comm=COMM_WORLD)
> > LHS.setSizes([[n_local,n],[n_local,n]])
> > LHS.setType(pet.Mat.Type.PYTHON)
> > LHS.setPythonContext(LHS_class())
> > LHS.setUp()
> >
> > x, y = LHS.createVecs()
> > x.set(1)
> > LHS.mult(x,y)
> > print(y.norm())
> >
> > # Eigensolver
> > EPS = slp.EPS(); EPS.create()
> > EPS.setOperators(LHS,Mf) # Solve QE^T*L^-1H*L^-1*QE*f=sigma^2*Mf*f (cheaper than a proper SVD)
> > EPS.setProblemType(slp.EPS.ProblemType.GHEP) # Specify that A is hermitian (by construction), but M is semi-positive
> > EPS.setWhichEigenpairs(EPS.Which.LARGEST_MAGNITUDE) # Find largest eigenvalues
> > EPS.setFromOptions()
> > EPS.solve()
> >
> >
> > Quentin CHEVALIER – IA parcours recherche
> > LadHyX - Ecole polytechnique
> > __________
> >
> >
> >
> >
> >
> > Quentin CHEVALIER – IA parcours recherche
> > LadHyX - Ecole polytechnique
> >
> > __________
> >
> >
> >
> > On Thu, 5 May 2022 at 12:05, Jose E. Roman <jroman at dsic.upv.es> wrote:
> > Your operator is not well formed. If you do this:
> >
> > x, y = LHS.createVecs()
> > x.set(1)
> > LHS.mult(x,y)
> > y.view()
> >
> > you will see that the output is all zeros. That is why SLEPc complains that "Initial vector is zero or belongs to the deflation space".
> >
> > Jose
> >
> >
> > > On 5 May 2022, at 10:46, Quentin Chevalier <quentin.chevalier at polytechnique.edu> wrote:
> > >
> > > Just a quick amendment to the previous statement: the problem arises in both sequential and parallel. The MWE as provided is for the parallel case, but imposing m_local=m makes it sequential.
> > >
> > > Cheers,
> > >
> > > Quentin CHEVALIER – IA parcours recherche
> > > LadHyX - Ecole polytechnique
> > > __________
> > >
> > >
> > >
> > > On Thu, 5 May 2022 at 10:34, Quentin Chevalier
> > > <quentin.chevalier at polytechnique.edu> wrote:
> > >>
> > >> Hello all, and thanks for your great work in bringing this very helpful package to the community!
> > >>
> > >> That said, I wouldn't need this mailing list if everything were running smoothly. I have a rather involved eigenvalue problem that I've been working on, and it's been throwing a mysterious error:
> > >>>
> > >>> petsc4py.PETSc.Error: error code 77
> > >>>
> > >>> [1] EPSSolve() at /usr/local/slepc/src/eps/interface/epssolve.c:149
> > >>> [1] EPSSolve_KrylovSchur_Default() at /usr/local/slepc/src/eps/impls/krylov/krylovschur/krylovschur.c:289
> > >>> [1] EPSGetStartVector() at /usr/local/slepc/src/eps/interface/epssolve.c:824
> > >>> [1] Petsc has generated inconsistent data
> > >>> [1] Initial vector is zero or belongs to the deflation space
> > >>
> > >>
> > >> This problem occurs in parallel with two processors, using the petsc4py library inside the dolfinx/dolfinx docker container. I have PETSc version 3.16.0 in complex mode, Python 3, and I'm running all of that on an openSUSE Leap 15.2 machine (but I think the docker container has an Ubuntu OS).
> > >>
> > >> I wrote a minimal working example below, but I'm afraid the process for building the matrices is involved, so I decided to directly share the matrices instead: https://seminaris.polytechnique.fr/share/s/ryJ6L2nR4ketDwP
> > >>
> > >> They are in binary format, but inside the container I hope someone can reproduce my issue. A word on the structure and intent behind these matrices:
> > >>
> > >> QE is a diagonal rectangular real matrix. Think of it as some sort of preconditioner.
> > >> L is the least dense of them all and the only one that is complex; in order to avoid inverting it, I'm using two KSPs to solve systems with it on the fly.
> > >> Mf is a diagonal square real matrix; it's on the right-hand side of the generalised Hermitian eigenvalue problem (I'm solving QE^H*L^-1H*L^-1*QE*x=lambda*Mf*x).
> > >>
> > >> Full MWE is below:
> > >>
> > >> from petsc4py import PETSc as pet
> > >> from slepc4py import SLEPc as slp
> > >> from mpi4py.MPI import COMM_WORLD
> > >>
> > >> # Global sizes
> > >> m_local=COMM_WORLD.rank*(490363-489780)+489780
> > >> n_local=COMM_WORLD.rank*(452259-451743)+451743
> > >> m=980143
> > >> n=904002
> > >>
> > >> QE=pet.Mat().createAIJ([[m_local,m],[n_local,n]])
> > >> L=pet.Mat().createAIJ([[m_local,m],[m_local,m]])
> > >> Mf=pet.Mat().createAIJ([[n_local,n],[n_local,n]])
> > >>
> > >> viewerQE = pet.Viewer().createMPIIO("QE.dat", 'r', COMM_WORLD)
> > >> QE.load(viewerQE)
> > >> viewerL = pet.Viewer().createMPIIO("L.dat", 'r', COMM_WORLD)
> > >> L.load(viewerL)
> > >> viewerMf = pet.Viewer().createMPIIO("Mf.dat", 'r', COMM_WORLD)
> > >> Mf.load(viewerMf)
> > >>
> > >> QE.assemble()
> > >> L.assemble()
> > >>
> > >> KSPs = []
> > >> # Useful solvers (here to put options for computing a smart R)
> > >> for Mat in [L,L.hermitianTranspose()]:
> > >>     KSP = pet.KSP().create()
> > >>     KSP.setOperators(Mat)
> > >>     KSP.setFromOptions()
> > >>     KSPs.append(KSP)
> > >> class LHS_class:
> > >>     def mult(self,A,x,y):
> > >>         w=pet.Vec().createMPI([m_local,m],comm=COMM_WORLD)
> > >>         z=pet.Vec().createMPI([m_local,m],comm=COMM_WORLD)
> > >>         QE.mult(x,w)
> > >>         KSPs[0].solve(w,z)
> > >>         KSPs[1].solve(z,w)
> > >>         QE.multTranspose(w,y)
> > >>
> > >> # Matrix free operator
> > >> LHS=pet.Mat()
> > >> LHS.create(comm=COMM_WORLD)
> > >> LHS.setSizes([[n_local,n],[n_local,n]])
> > >> LHS.setType(pet.Mat.Type.PYTHON)
> > >> LHS.setPythonContext(LHS_class())
> > >> LHS.setUp()
> > >>
> > >> # Eigensolver
> > >> EPS = slp.EPS(); EPS.create()
> > >> EPS.setOperators(LHS,Mf) # Solve QE^T*L^-1H*L^-1*QE*x=lambda*Mf*x
> > >> EPS.setProblemType(slp.EPS.ProblemType.GHEP) # Specify that A is hermitian (by construction), and B is semi-positive
> > >> EPS.setWhichEigenpairs(EPS.Which.LARGEST_MAGNITUDE) # Find largest eigenvalues
> > >> EPS.setFromOptions()
> > >> EPS.solve()
> > >>
> > >> Quentin CHEVALIER – IA parcours recherche
> > >>
> > >> LadHyX - Ecole polytechnique
> > >>
> > >> __________
> >
>