Schur system + MatShell

Matthew Knepley knepley at gmail.com
Tue Apr 22 07:16:23 CDT 2008


On Tue, Apr 22, 2008 at 7:11 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
>  On Apr 22, 2008, at 7:06 AM, tribur at vision.ee.ethz.ch wrote:
>
>
> > Dear Satish, dear Barry, dear rest,
> >
> > Thank you for your response.
> >
> > > > Being concerned with b at the moment, I managed to set up and solve
> > > > the global Schur system using MATDENSE. The solving works well with,
> > > > e.g., gmres+jacobi, but the assembling of the global Schur matrix
> > > > takes too long.
> > >
> > > Hmm - with dense - if you have some other efficient way of assembling
> > > the matrix - you can specify this directly to MatCreateMPIDense() - [or
> > > use MatGetArray() - and set the values directly into this array]
> >
> > I don't see an alternative, as the partitioning of PETSc has nothing to
> > do with my partitioning (unstructured mesh, partitioned with Metis).
> > Moreover, in the case of 2 subdomains, e.g., the local Schur complements
> > S1 and S2 have the same size as the global one, S = S1 + S2, and there
> > is no matrix format in PETSc supporting this, is there?

This does not make sense to me. You decide how PETSc partitions things (if
you want). And I really do not understand what you want in parallel. If you
mean that you solve the local Schur complements independently, then use a
local matrix for each one. The important thing is to work out the linear
algebra prior to coding. Then wrapping it with PETSc Mat/Vec is easy.

   Matt
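
As a concrete sketch of the MatGetArray() route Satish mentions above, the
following is untested and written against the PETSc 2.3.3-era API used in
this thread. BuildDenseSchur, mloc, and the entry() callback are
illustrative placeholders, not names from these mails; error checking is
omitted for brevity.

#include "petscmat.h"

/* Hypothetical helper: fill the locally owned rows of a dense Schur
   complement directly through the raw array. entry(i,j) stands in for
   whatever computes S(i,j); mloc is this process's share of the rows. */
Mat BuildDenseSchur(MPI_Comm comm, PetscInt mloc, PetscInt N,
                    PetscScalar (*entry)(PetscInt, PetscInt))
{
  Mat          S;
  PetscScalar *a;
  PetscInt     i, j, rstart;

  MatCreateMPIDense(comm, mloc, PETSC_DECIDE, PETSC_DETERMINE, N,
                    PETSC_NULL, &S);
  MatGetOwnershipRange(S, &rstart, PETSC_NULL);
  MatGetArray(S, &a);            /* local rows, column-major, lda = mloc */
  for (j = 0; j < N; j++)        /* global column index                  */
    for (i = 0; i < mloc; i++)   /* local row index                      */
      a[i + j*mloc] = entry(rstart + i, j);
  MatRestoreArray(S, &a);
  MatAssemblyBegin(S, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(S, MAT_FINAL_ASSEMBLY);
  return S;
}

MPIDense keeps all N columns of the locally owned rows as one column-major
block, which is why entry (i, j) lives at a[i + j*mloc].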

> >
> > > >
> > > > 2) Using KSPBICG, it iterates without an error message, but the
> > > > result is wrong (norm of the residual is 1.42768 instead of something
> > > > like 1.0e-10), although my Mat-functions PETSC_SchurMatMult and
> > > > PETSC_SchurMatMultTranspose seem to be correct. I tested them by
> > > > comparing the vectors y1 and y2 computed by, e.g., MatMult(S,x,y1)
> > > > and PETSC_SchurMatMult(S,x,y2). Norm(y1-y2) was < 1e-15 for both
> > > > functions.
> > > >
> > >
> > > Not sure what the problem could be. Can you confirm that the code is
> > > valgrind clean? That could explain issue 1 as well.
> > >
> >
> > Valgrind didn't find an error in my PETSC_SchurMatMult, but PETSc has
> > now also given me an error message when running with KSPBICG (same
> > MatShell code as in my previous e-mail):
> >
> > [1]PETSC ERROR: MatMult() line 1632 in src/mat/interface/matrix.c
> > [1]PETSC ERROR: KSPSolve_BiCG() line 95 in src/ksp/ksp/impls/bicg/bicg.c
> > [1]PETSC ERROR: KSPSolve() line 379 in src/ksp/ksp/interface/itfunc.c
> >
> >
>
>   What is the error message? This just tells you there was a problem in
> MatMult(). You need to send EVERYTHING that was printed when the program
> stopped with an error. Send it to petsc-maint at mcs.anl.gov
>
>
>    Barry
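
A side note on KSPBICG: it applies both S and its transpose, so a shell
matrix must have MATOP_MULT and MATOP_MULT_TRANSPOSE registered, and the
multiply routines should return PetscErrorCode rather than void so that an
error inside them unwinds with a full trace. Below is a minimal sketch of
the registration, assuming Kathrin's function names and a placeholder local
size nloc (her attached setup code is not shown in this message):

extern PetscErrorCode PETSC_SchurMatMult(Mat, Vec, Vec);
extern PetscErrorCode PETSC_SchurMatMultTranspose(Mat, Vec, Vec);

Mat       Stot;
localData ctx;    /* NPb, is, and Sloc filled in elsewhere           */
PetscInt  nloc;   /* placeholder: number of locally owned unknowns   */

MatCreateShell(PETSC_COMM_WORLD, nloc, nloc, PETSC_DETERMINE,
               PETSC_DETERMINE, (void*)&ctx, &Stot);
MatShellSetOperation(Stot, MATOP_MULT,
                     (void(*)(void))PETSC_SchurMatMult);
MatShellSetOperation(Stot, MATOP_MULT_TRANSPOSE,
                     (void(*)(void))PETSC_SchurMatMultTranspose);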
>
>
>
>
> > The error seems to occur at the second call of PETSC_SchurMatMult.
> > I attached the related source files (PETSc version petsc-2.3.3-p8,
> > downloaded about 4 months ago), and below you will additionally find my
> > MatMult function, PETSC_SchurMatMult (maybe there is a problem with
> > ctx?).
> >
> > I'm stuck and will be very grateful for any help,
> > Kathrin
> >
> >
> > PS: My user-defined MatMult function:
> >
> > typedef struct {
> >   int     NPb;
> >   IS      is;       // int *uBId_global;
> >   double *Sloc;
> > } localData;
> >
> > void PETSC_SchurMatMult(Mat Stot, Vec xtot, Vec ytot){
> >   localData *ctx;
> >   MatShellGetContext(Stot, (void**) &ctx);
> >   int     NPb  = ctx->NPb;
> >   IS      is   = ctx->is;
> >   double *Sloc = ctx->Sloc;
> >
> >   // extract the local vector xloc
> >   Vec xloc;
> >   VecCreateSeq(PETSC_COMM_SELF, NPb, &xloc);
> >   VecScatter ctx2;
> >   VecScatterCreate(xtot, is, xloc, PETSC_NULL, &ctx2);
> >   VecScatterBegin(ctx2, xtot, xloc, INSERT_VALUES, SCATTER_FORWARD);
> >   VecScatterEnd(ctx2, xtot, xloc, INSERT_VALUES, SCATTER_FORWARD);
> >   VecScatterDestroy(ctx2);
> >
> >   // local matrix multiplication: yloc = Sloc * xloc
> >   vector<double> yloc_array(NPb, 0);
> >   PetscScalar *xloc_array;
> >   VecGetArray(xloc, &xloc_array);
> >   for (int k = 0; k < NPb; k++)
> >     for (int l = 0; l < NPb; l++)
> >       yloc_array[k] += Sloc[k*NPb + l] * xloc_array[l];
> >   VecRestoreArray(xloc, &xloc_array);
> >   VecDestroy(xloc);
> >
> >   // scatter yloc into ytot, adding the subdomain contributions
> >   Vec yloc;
> >   VecCreateSeqWithArray(PETSC_COMM_SELF, NPb, PETSC_NULL, &yloc);
> >   VecPlaceArray(yloc, &yloc_array[0]);
> >   VecScatter ctx3;
> >   VecScatterCreate(yloc, PETSC_NULL, ytot, is, &ctx3);
> >   VecScatterBegin(ctx3, yloc, ytot, ADD_VALUES, SCATTER_FORWARD);
> >   VecScatterEnd(ctx3, yloc, ytot, ADD_VALUES, SCATTER_FORWARD);
> >   VecScatterDestroy(ctx3);
> >   VecDestroy(yloc);
> > }
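
For comparison, below is an untested sketch of the same multiply written in
conventional PETSc 2.3.3 style; it is a sketch, not a confirmed fix. Two
differences from the code above are worth checking while debugging: the
routine returns PetscErrorCode and wraps every call in CHKERRQ(), so an
error inside MatMult() prints a complete stack trace, and ytot is zeroed
before the ADD_VALUES scatter, so each call computes y = S*x instead of
accumulating into whatever ytot already held.

PetscErrorCode SchurMatMultSketch(Mat Stot, Vec xtot, Vec ytot)
{
  localData      *ctx;
  Vec             xloc, yloc;
  VecScatter      sc;
  PetscScalar    *xa, *ya;
  PetscInt        k, l, NPb;
  PetscErrorCode  ierr;

  PetscFunctionBegin;
  ierr = MatShellGetContext(Stot, (void**)&ctx);CHKERRQ(ierr);
  NPb  = ctx->NPb;

  // gather this subdomain's part of x
  ierr = VecCreateSeq(PETSC_COMM_SELF, NPb, &xloc);CHKERRQ(ierr);
  ierr = VecScatterCreate(xtot, ctx->is, xloc, PETSC_NULL, &sc);CHKERRQ(ierr);
  ierr = VecScatterBegin(sc, xtot, xloc, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(sc, xtot, xloc, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterDestroy(sc);CHKERRQ(ierr);

  // dense local product yloc = Sloc * xloc
  ierr = VecCreateSeq(PETSC_COMM_SELF, NPb, &yloc);CHKERRQ(ierr);
  ierr = VecGetArray(xloc, &xa);CHKERRQ(ierr);
  ierr = VecGetArray(yloc, &ya);CHKERRQ(ierr);
  for (k = 0; k < NPb; k++) {
    ya[k] = 0.0;
    for (l = 0; l < NPb; l++) ya[k] += ctx->Sloc[k*NPb + l] * xa[l];
  }
  ierr = VecRestoreArray(yloc, &ya);CHKERRQ(ierr);
  ierr = VecRestoreArray(xloc, &xa);CHKERRQ(ierr);

  // add the subdomain contributions: ytot = sum over subdomains of S_i x_i
  ierr = VecSet(ytot, 0.0);CHKERRQ(ierr);   // start from zero
  ierr = VecScatterCreate(yloc, PETSC_NULL, ytot, ctx->is, &sc);CHKERRQ(ierr);
  ierr = VecScatterBegin(sc, yloc, ytot, ADD_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(sc, yloc, ytot, ADD_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterDestroy(sc);CHKERRQ(ierr);
  ierr = VecDestroy(xloc);CHKERRQ(ierr);
  ierr = VecDestroy(yloc);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}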



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener



