[petsc-users] MATBDIAG in > 3.1
Parker, Andrew (UK Filton)
Andrew.Parker2 at baesystems.com
Fri Dec 2 06:40:31 CST 2011
Ok, so I've tried what you suggested, and I get this:
[0]PETSC ERROR: MatInvertBlockDiagonal_SeqBAIJ() line 129 in
src/mat/impls/baij/seq/baij.c not supported for block size 7
I'm using these settings:
-ksp_type preonly
-pc_type pbjacobi
I have created a SeqBAIJ matrix.
Any thoughts? I must be missing something...
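For reference, here is roughly how the matrix gets built (illustrative
only; ncells and block[] are placeholders for my own data, not my exact
code):

  Mat         A;
  PetscInt    bs = 7, ncells = 100, row;   /* ncells stands in for my cell count */
  PetscScalar block[49];                   /* one dense 7x7 block, row-major */

  MatCreateSeqBAIJ(PETSC_COMM_SELF, bs, bs*ncells, bs*ncells, 1, PETSC_NULL, &A);
  for (row = 0; row < ncells; row++) {
    /* fill block[] for this cell, then insert it on the block diagonal */
    MatSetValuesBlocked(A, 1, &row, 1, &row, block, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);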
Many thanks,
Andy
-----Original Message-----
From: petsc-users-bounces at mcs.anl.gov
[mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith
Sent: 01 December 2011 19:57
To: PETSc users list
Subject: Re: [petsc-users] MATBDIAG in > 3.1
My approach will be a million times faster.
Barry
On Dec 1, 2011, at 8:50 AM, Matthew Knepley wrote:
> On Thu, Dec 1, 2011 at 8:44 AM, Parker, Andrew (UK Filton)
<Andrew.Parker2 at baesystems.com> wrote:
> I have tried to use the following; this was before I had both of your
> emails (Barry, Matthew) ;-)
>
> 1) First I set this up
>
> PetscInt size = 7;
> MatCreate(MPI_COMM_SELF,&LHS);
> MatSetSizes(LHS,size,size,size,size);
> MatSetType(LHS,MATSEQDENSE); //**************NOTE THIS LINE
> MatAssemblyBegin(LHS,MAT_FINAL_ASSEMBLY);
> MatAssemblyEnd(LHS,MAT_FINAL_ASSEMBLY);
>
> VecCreate(PETSC_COMM_SELF,&RHS);
> VecSetSizes(RHS,PETSC_DECIDE, size);
> VecSetFromOptions(RHS);
> VecAssemblyBegin(RHS);
> VecAssemblyEnd(RHS);
>
> VecCreate(PETSC_COMM_SELF,&Updates);
> VecSetSizes(Updates,PETSC_DECIDE, size);
> VecSetFromOptions(Updates);
> VecAssemblyBegin(Updates);
> VecAssemblyEnd(Updates);
>
> KSPCreate(PETSC_COMM_SELF,&Solver);
> KSPSetOperators(Solver, LHS, LHS, SAME_NONZERO_PATTERN);
> KSPSetType(Solver, KSPPREONLY); //**************NOTE THIS LINE
> KSPGetPC(Solver, &prec);
> PCSetType(prec, PCLU); //**************NOTE THIS LINE
> KSPSetFromOptions(Solver);
> KSPSetUp(Solver);
>
> For each cell I then do:
>
> 2) I copy out the appropriate data from global structures into LHS and
> RHS
>
> 3) KSPSolve(Solver, RHS, Updates);
>
> 4) Stick the local Updates back into my global data structure
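> Roughly, the per-cell loop looks something like this (a sketch only;
> fillLocalBlock, fillLocalRhs and storeLocalUpdate stand in for my own
> copy routines and ncells for my cell count; they are not PETSc calls):
>
>   PetscInt    idx[7] = {0,1,2,3,4,5,6};
>   PetscScalar block[49], rhs[7], *upd;
>   PetscInt    cell;
>
>   for (cell = 0; cell < ncells; cell++) {
>     fillLocalBlock(cell, block);                /* 2) copy the 7x7 block out */
>     fillLocalRhs(cell, rhs);
>     MatSetValues(LHS, 7, idx, 7, idx, block, INSERT_VALUES);
>     MatAssemblyBegin(LHS, MAT_FINAL_ASSEMBLY);
>     MatAssemblyEnd(LHS, MAT_FINAL_ASSEMBLY);
>     VecSetValues(RHS, 7, idx, rhs, INSERT_VALUES);
>     VecAssemblyBegin(RHS);
>     VecAssemblyEnd(RHS);
>     /* flag the new values so the LU factors are recomputed */
>     KSPSetOperators(Solver, LHS, LHS, SAME_NONZERO_PATTERN);
>     KSPSolve(Solver, RHS, Updates);             /* 3) direct 7x7 solve */
>     VecGetArray(Updates, &upd);
>     storeLocalUpdate(cell, upd);                /* 4) copy the update back */
>     VecRestoreArray(Updates, &upd);
>   }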
>
> Could I have your thoughts on whether the above is ok, in terms of
> efficiency and being the correct thing to do? Would I be correct in
> assuming that Barry's method and mine are identical, with the exception
> that I've got some copies going on to set up these small 7x7 matrices
> and 1x7 vectors, whereas Barry's method does this inline by doing a
> preonly LU (direct) solve on each 7x7?
>
> With Barry's method you can do this from the command line, so you can
> easily compare with other solvers without changing your code.
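> For example, something along these lines (illustrative option sets; the
> second line is just one possible alternative to compare against):
>
>   -ksp_type preonly -pc_type pbjacobi
>   -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu -ksp_monitor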
>
> Matt
>
> Many thanks,
> Andy
>
> -----Original Message-----
> From: petsc-users-bounces at mcs.anl.gov
> [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith
> Sent: 01 December 2011 14:09
> To: PETSc users list
> Subject: Re: [petsc-users] MATBDIAG in > 3.1
>
>
>
>
> On Dec 1, 2011, at 7:05 AM, Matthew Knepley wrote:
>
> > You can get this effect using PCBJACOBI, setting the number of local
> > blocks, and using -sub_pc_type lu -sub_ksp_type preonly.
>
> Do not do this. It will give bad performance since it explicitly
> pulls out each block with MatGetSubMatrices() and solves each small
> matrix as a separate PETSc matrix.
>
> Instead use the BAIJ matrix format with a block size of 7 and then
> use PCPBJACOBI (note the P in the name, for point block; each point
> block is that little 7 by 7 block, which it solves efficiently using a
> tuned kernel) with -ksp_type preonly. This should be very fast and
> does not do any unnecessary scatters or communication.
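> A minimal sketch of that setup (illustrative; here A is the SeqBAIJ
> matrix with block size 7, b and x are the work vectors, and error
> checking is omitted):
>
>   KSP ksp;
>   PC  pc;
>
>   KSPCreate(PETSC_COMM_SELF, &ksp);
>   KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);
>   KSPSetType(ksp, KSPPREONLY);
>   KSPGetPC(ksp, &pc);
>   PCSetType(pc, PCPBJACOBI);  /* inverts each 7x7 diagonal block with a tuned kernel */
>   KSPSetFromOptions(ksp);     /* so -ksp_type/-pc_type can still be changed at run time */
>   KSPSolve(ksp, b, x);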
>
> Barry
>
> >
> > Matt
> >
> > On Thu, Dec 1, 2011 at 4:28 AM, Parker, Andrew (UK Filton)
> <Andrew.Parker2 at baesystems.com> wrote:
> > Hi,
> >
> >
> >
> > I've been using PETSc for a conventional full parallel implicit
> > solver, currently in MPIBAIJ format. However, I'd like to understand
> > the performance versus memory and speed trade-offs of just having a
> > block diagonal system. In this case I do not scatter to the
> > off-diagonal blocks. Each block is 7x7 in size and fully dense. In my
> > view this should be a point implicit solution of the system. I am
> > looking for the best way to solve this in PETSc. Clearly, because of
> > the lack of coupling between neighbouring cells, having only the block
> > diagonal means it's actually a 7x7 dense linear solve, but to test my
> > theory at present I'd prefer to just not scatter to off-diagonals, and
> > use:
> >
> >
> >
> > MATBDIAG with my current KSP solver and take it from there. BDIAG
> > seems to have been removed as of 3.0:
> >
> >
> >
> > http://www.mcs.anl.gov/petsc/documentation/changes/300.html
> >
> >
> >
> > Can anybody help me with this? Your thoughts on the above, and on
> > what I should use to solve this reduced system, would be appreciated.
> >
> >
> > Cheers,
> >
> > Andy
> >
> >
> >
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which
> > their experiments lead.
> > -- Norbert Wiener
>
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener