Multilevel solver
Matthew Knepley
knepley at gmail.com
Thu Apr 24 09:19:47 CDT 2008
On Thu, Apr 24, 2008 at 8:58 AM, <Amit.Itagi at seagate.com> wrote:
> Barry,
>
> I have been trying out PCFIELDSPLIT. I have not yet gotten it to work.
> I have some follow-up questions which might help solve my problem.
>
> Consider the simple case of a 4x4 matrix equation being solved on two
> processes. I have vector elements 0 and 1 belonging to rank 0, and elements
> 2 and 3 belonging to rank 1.
>
> 1) For my example, can the index sets have staggered indices, i.e. is1 -> {0,2}
> and is2 -> {1,3} (each IS spans both ranks)?
Yes.
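For your 4x4 example, a minimal sketch of building those staggered ISs
(assuming each rank passes only the indices it owns; the exact
ISCreateGeneral() signature may differ between PETSc versions):

    PetscMPIInt rank;
    PetscInt    i1[1], i2[1];
    IS          is1, is2;

    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
    i1[0] = (rank == 0) ? 0 : 2;   /* locally owned index of split 0 */
    i2[0] = (rank == 0) ? 1 : 3;   /* locally owned index of split 1 */
    ISCreateGeneral(PETSC_COMM_WORLD, 1, i1, &is1);
    ISCreateGeneral(PETSC_COMM_WORLD, 1, i2, &is2);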
> 2) When I provide the -fieldsplit_<n>_pc_type option on the command line,
> is the index <n> determined by the order in which the PCFieldSplitSetIS
> calls are made? So if I call PCFieldSplitSetIS(pc,is2) before
> PCFieldSplitSetIS(pc,is1), will -fieldsplit_0_... correspond to is2 and
> -fieldsplit_1_... to is1?
Yes.
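Concretely, the split numbering just follows the call order:

    PCFieldSplitSetIS(pc, is2);   /* becomes split 0: -fieldsplit_0_... */
    PCFieldSplitSetIS(pc, is1);   /* becomes split 1: -fieldsplit_1_... */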
> 3) Since I want to set the PC type to lu for field 0, and I want to use MUMPS
> for parallel LU, where do I set the submatrix type to MATAIJMUMPS? In this
> case, will a second copy of the submatrix be generated - one of type
> MATAIJMUMPS for the PC and the other of the original MATAIJ type for the KSP?
I will have to check. However, if we are consistent, then it should be
-fieldsplit_0_mat_type aijmumps
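If that holds, the full set of options for split 0 would look something like
this (untested, so treat it as a guess):

    -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type lu -fieldsplit_0_mat_type aijmumps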
> 4) How is the PC applied when I do PC_COMPOSITE_SYMMETRIC_MULTIPLICATIVE?
It is just the composition of the preconditioners, which is what you want here.
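Spelled out for your two splits, with B1 and B2 the solvers for A11 and A22,
a symmetric multiplicative sweep is (a sketch of the standard block
Gauss-Seidel composition; with exact block solves it reduces to exactly your
Steps 1-3):

    e1 = B1 * b1                  (solve on split 0)
    e2 = B2 * (b2 - A21 * e1)     (solve on split 1)
    e1 = B1 * (b1 - A12 * e2)     (solve on split 0 again)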
Matt
> Thanks
>
> Rgds,
> Amit
>
>
>
>
> On 04/22/2008 10:08 PM, Barry Smith <bsmith at mcs.anl.gov> wrote
> to petsc-users at mcs.anl.gov (Subject: Re: Multilevel solver):
> Amit,
>
> Using a PCSHELL should be fine (it can be used with GMRES);
> my guess is there is a memory corruption error somewhere that is
> causing the crash. This could be tracked down with valgrind (www.valgrind.org).
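> For an MPI run that might look something like this (launcher syntax varies;
> ./your_app and its options are placeholders):
>
>     mpirun -np 2 valgrind --tool=memcheck ./your_app <your usual options>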
>
> Another way you could implement this is with some very recent
> additions I made to PCFIELDSPLIT that are in petsc-dev
> (http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html).
> With this you would choose
> PCSetType(pc,PCFIELDSPLIT);
> PCFieldSplitSetIS(pc,is1);
> PCFieldSplitSetIS(pc,is2);
> PCFieldSplitSetType(pc,PC_COMPOSITE_SYMMETRIC_MULTIPLICATIVE);
> To use LU on A11, use the command line options
> -fieldsplit_0_pc_type lu -fieldsplit_0_ksp_type preonly
> and SOR on A22
> -fieldsplit_1_pc_type sor -fieldsplit_1_ksp_type preonly
> -fieldsplit_1_pc_sor_lits <lits>
> where <lits> is the number of SOR iterations you want to use on block A22.
>
> is1 is the IS that contains the indices of all the vector entries in
> the 1 block, while is2 contains all the indices in the vector for the
> 2 block. You can use ISCreateGeneral() to create these.
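> Putting those pieces together, a minimal sketch of the whole setup
> (assuming A, b, x, is1, and is2 already exist, and using the petsc-dev
> API of this vintage; error checking omitted):
>
>     KSP ksp;
>     PC  pc;
>
>     KSPCreate(PETSC_COMM_WORLD, &ksp);
>     KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);
>     KSPSetType(ksp, KSPGMRES);
>     KSPGetPC(ksp, &pc);
>     PCSetType(pc, PCFIELDSPLIT);
>     PCFieldSplitSetIS(pc, is1);   /* split 0 */
>     PCFieldSplitSetIS(pc, is2);   /* split 1 */
>     PCFieldSplitSetType(pc, PC_COMPOSITE_SYMMETRIC_MULTIPLICATIVE);
>     KSPSetFromOptions(ksp);       /* picks up the -fieldsplit_* options */
>     KSPSolve(ksp, b, x);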
>
> Probably it is easiest just to try this out.
>
> Barry
>
>
> On Apr 22, 2008, at 8:45 PM, Amit.Itagi at seagate.com wrote:
>
> >
> > Hi,
> >
> > I am trying to implement a multilevel method for an EM problem. The
> > reference is: "Comparison of hierarchical basis functions for efficient
> > multilevel solvers", P. Ingelstrom, V. Hill and R. Dyczij-Edlinger,
> > IET Sci. Meas. Technol. 2007, 1(1), pp. 48-52.
> >
> > Here is the summary:
> >
> > The matrix equation Ax=b is solved using GMRES with a multilevel
> > pre-conditioner. A has a block structure.
> >
> >     [ A11  A12 ] [ x1 ]   [ b1 ]
> >     [ A21  A22 ] [ x2 ] = [ b2 ]
> >
> > A11 is m x m and A22 is n x n, where m is not equal to n.
> >
> > Step 1: Solve A11 * e1 = b1              (parallel LU using SuperLU or MUMPS)
> >
> > Step 2: Solve A22 * e2 = b2 - A21 * e1   (might use either a SOR solver or a parallel LU)
> >
> > Step 3: Solve A11 * e1 = b1 - A12 * e2   (parallel LU)
> >
> > This gives the approximate solution to
> >
> >     [ A11  A12 ] [ e1 ]   [ b1 ]
> >     [ A21  A22 ] [ e2 ] = [ b2 ]
> >
> > and is used as the pre-conditioner for the GMRES.
> >
> >
> > Which PETSc method can implement this pre-conditioner? I tried a
> > PCSHELL type PC. With Hong's help, I also got the parallel LU to work
> > with SuperLU/MUMPS. My program runs successfully on multiple processes
> > on a single machine. But when I submit the program over multiple
> > machines, I get a crash in the PCApply routine after several GMRES
> > iterations. I think this has to do with using PCSHELL with GMRES (which
> > is not a good idea). Is there a different way to implement this? Does
> > this resemble the usage pattern of one of the AMG preconditioners?
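> >
> > For concreteness, the core of such a PCSHELL apply routine might look
> > like the following sketch (hypothetical context struct; the scatters
> > between the full vector and the block vectors, and the setup of the two
> > sub-KSPs, are assumed to happen elsewhere; the PCShellSetApply callback
> > signature varies across PETSc versions):
> >
> >     typedef struct {
> >       Mat A12, A21;                 /* off-diagonal blocks */
> >       KSP ksp11, ksp22;             /* LU on A11, SOR (or LU) on A22 */
> >       Vec b1, b2, e1, e2, r1, r2;   /* block and work vectors */
> >     } MLCtx;                        /* hypothetical name */
> >
> >     /* Step 1: A11 e1 = b1 */
> >     KSPSolve(ctx->ksp11, ctx->b1, ctx->e1);
> >     /* Step 2: A22 e2 = b2 - A21 e1 */
> >     MatMult(ctx->A21, ctx->e1, ctx->r2);
> >     VecAYPX(ctx->r2, -1.0, ctx->b2);   /* r2 = b2 - r2 */
> >     KSPSolve(ctx->ksp22, ctx->r2, ctx->e2);
> >     /* Step 3: A11 e1 = b1 - A12 e2 */
> >     MatMult(ctx->A12, ctx->e2, ctx->r1);
> >     VecAYPX(ctx->r1, -1.0, ctx->b1);   /* r1 = b1 - r1 */
> >     KSPSolve(ctx->ksp11, ctx->r1, ctx->e1);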
> >
> >
> > Thanks
> >
> > Rgds,
> > Amit
> >
>
>
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener