[petsc-users] KSP not solving correctly

bichinhoverde bichinhoverde at spwinternet.com.br
Thu Feb 5 04:38:16 CST 2015


If your linear system comes from a Laplace equation with Neumann boundary
conditions, the matrix is singular, which means that for a given b there are
infinitely many possible x.
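
In exact arithmetic all those solutions differ by a constant vector: if Ax = b,
then A(x + c*1) = b for any scalar c, because the constants span the null space
of the Neumann Laplacian. If you want the Krylov method to handle this
explicitly, you can attach that null space to the operator so the solver
projects it out. A minimal sketch, assuming the shell matrix _C2 from the code
quoted below:

    MatNullSpace nsp;
    /* PETSC_TRUE: the null space contains the constant vector */
    MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,NULL,&nsp);
    MatSetNullSpace(_C2,nsp);   /* lets KSP project the constants out */
    MatNullSpaceDestroy(&nsp);

(Depending on your PETSc version, KSPSetNullSpace on the KSP does the same job.)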

So you take one x and compute the corresponding b. When you then solve Ax = b,
you might not get back the exact same x, since infinitely many different x
yield the same b.
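
In practice this means the check should compare the two vectors modulo a
constant shift rather than entry by entry. A sketch of such a comparison using
standard Vec operations (here x is the result of KSPSolve and xRef the vector
b was built from; both names are placeholders):

    PetscScalar sum;
    PetscInt    n;
    PetscReal   diff;

    VecGetSize(x,&n);
    VecSum(x,&sum);
    VecShift(x,-sum/n);        /* remove the mean of x */
    VecSum(xRef,&sum);
    VecShift(xRef,-sum/n);     /* remove the mean of the reference solution */

    VecAXPY(x,-1.0,xRef);      /* x <- x - xRef */
    VecNorm(x,NORM_2,&diff);   /* near zero if they agree up to a constant */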

On Thu, Feb 5, 2015 at 8:14 AM, Massimiliano Leoni <
leoni.massimiliano1 at gmail.com> wrote:

> Hi petsc-users,
>   I stumbled across a curious issue while trying to set up a solver for
> Navier-Stokes.
> In one sentence: I tested a solver by computing b = Ax with x known, then
> solving the system with b to recover the original x, but it's not working.
>
> In detail, I want to implement a matrix-free method for the matrix C2 = D^T
> M_L^{-1} D, where M_L^{-1} is the inverse of the lumped mass matrix M;
> here's what I do
>
> [I'm sorry for posting code, but I suspect the error could be quite
> subtle...]
>
> ////// define utility struct
> typedef struct _C2ctx
> {
>     Mat D;
>     Vec Mdiag;
> } C2ctx;
>
> ////// define multiplication
> PetscErrorCode C2mult(Mat A,Vec x,Vec y)
> {
>     C2ctx* ctx;
>     MatShellGetContext(A,&ctx);
>     Mat D = ctx->D;
>     Vec Mdiag = ctx->Mdiag;
>     PetscInt Nu;                       /* VecGetSize expects a PetscInt */
>     VecGetSize(Mdiag,&Nu);             /* global size of the velocity space */
>
>     Vec tmp;
>     VecCreate(PETSC_COMM_WORLD,&tmp);
>     VecSetSizes(tmp,PETSC_DECIDE,Nu);  /* set the sizes, then the type */
>     VecSetType(tmp,VECSTANDARD);
>
>     MatMultTranspose(D,x,tmp);         /* tmp = D^T x */
>     VecPointwiseDivide(tmp,tmp,Mdiag); /* tmp = M_L^{-1} tmp */
>     MatMult(D,tmp,y);                  /* y = D tmp */
>
>     VecDestroy(&tmp);
>     return 0;
> }
>
> Later on, I assemble D and M, and then
>
> ////// lump M into _Mdiag
>     Vec _Mdiag;
>     VecCreate(PETSC_COMM_WORLD,&_Mdiag);
>     VecSetSizes(_Mdiag,PETSC_DECIDE,Nu);
>     VecSetType(_Mdiag,VECSTANDARD);
>     MatGetRowSum(_M,_Mdiag);           /* row sums of M = lumped diagonal */
>
> ////// set context
>     C2ctx ctx;
>     ctx.D = _D;
>     ctx.Mdiag = _Mdiag;
>
> ////// create _C2 [= A]
>     Mat _C2;
>
>     MatCreateShell(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,Np,Np,
>                    &ctx,&_C2);         /* Np is the global size, as for the Vecs */
>     MatShellSetOperation(_C2,MATOP_MULT,(void(*)(void))C2mult);
>
> ////// create _b2 [= b]
>     Vec _b2;
>     VecCreate(PETSC_COMM_WORLD,&_b2);
>     VecSetSizes(_b2,PETSC_DECIDE,Np);
>     VecSetType(_b2,VECSTANDARD);
>
> ////// create _deltaP [= x]
>     Vec _deltaP;
>     VecCreate(PETSC_COMM_WORLD,&_deltaP);
>     VecSetSizes(_deltaP,PETSC_DECIDE,Np);
>     VecSetType(_deltaP,VECSTANDARD);
>
> ////// set _deltaP = 1 and _b2 = _C2*_deltaP, then change _deltaP for check
>     VecSet(_deltaP,1.0);
>     MatMult(_C2,_deltaP,_b2);
>     VecSet(_deltaP,2.0);  /* overwritten below: KSP uses a zero initial guess by default */
>
> ////// setup KSP
>     KSP _ksp2;
>     KSPCreate(PETSC_COMM_WORLD,&_ksp2);
>     KSPSetOperators(_ksp2,_C2,_C2,DIFFERENT_NONZERO_PATTERN);
>     KSPSetType(_ksp2,"cg");
>     /* the shell matrix has no entries to factor, so use no preconditioner */
>     PC pc;
>     KSPGetPC(_ksp2,&pc);
>     PCSetType(pc,PCNONE);
>
>     KSPSolve(_ksp2,_b2,_deltaP);
>
> According to my understanding, _deltaP should now be a nice row of 1s, but
> what happens is that it has some [apparently] random zeros here and there.
>
> Additional info:
> [*] the problem comes from the cavity problem for Navier-Stokes. This
> particular step solves for the pressure, and the matrix _C2 is singular
> [it's a Laplacian] with the constants as its null space. I was told I can
> ignore this and avoid setting the null space as long as I use a CG or GMRES
> solver.
>
> [*] the number of "random zero entries" is very high with P2-P1 elements,
> and significantly lower with P1+Bubble-P1.
>
> [*] Matrices M and D are automatically assembled by FEniCS, not by me.
>
> Can anybody please advise on what I am doing wrong? What could be causing
> this?
>
> Thanks in advance
> Massimiliano
>