<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <font face="Ubuntu">Hi Karli,<br>
      <br>
      thank you for your hint: it works now. <br>
      Now I would like to speed up the solution: I was counting on
      increasing the number of levels and/or the number of processors, but
      now I see I cannot do that.<br>
      Do you have any hints for achieving better speed?<br>
      Thanks!<br>
      <br>
      Best,<br>
      Michele<br>
      <br>
    </font>
    <div class="moz-cite-prefix">On 08/13/2013 01:33 PM, Karl Rupp
      wrote:<br>
    </div>
    <blockquote cite="mid:520A9815.7030400@mcs.anl.gov" type="cite">Hi
      Michele,
      <br>
      <br>
      I suggest you try a different decomposition of your grid. With k
      levels, you should have at least 2^{k-1} grid nodes per coordinate
      direction so that the coarser meshes can be built correctly.
      In your case, you should have at least 8 nodes (leading to coarser
      levels of size 4, 2, and 1) in the z direction.
      <br>
      <br>
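      In other words, if n_loc is the smallest number of local (per-process)
      grid nodes in any direction, the largest usable number of levels is
      roughly
      <br>
      <br>
          max_levels = floor( log_2(n_loc) ) + 1
      <br>
      <br>
      (n_loc and max_levels are just illustrative names here, not PETSc
      quantities). With n_loc = 4 in the z direction this gives 3 levels,
      which is why -pc_mg_levels 4 fails for the current decomposition.
      <br>
      <br>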
      Best regards,
      <br>
      Karli
      <br>
      <br>
      <br>
      On 08/13/2013 02:28 PM, Michele Rosso wrote:
      <br>
      <blockquote type="cite">Hi Barry,
        <br>
        <br>
        I was finally able to try multigrid with a singular system and a
        finer grid.
        <br>
        GAMG works perfectly and has no problem handling the singular
        system.
        <br>
        On the other hand, MG is giving me problems:
        <br>
        <br>
        [0]PETSC ERROR: --------------------- Error Message
        <br>
        ------------------------------------
        <br>
        [0]PETSC ERROR: Argument out of range!
        <br>
        [0]PETSC ERROR: Partition in x direction is too fine! 32 64!
        <br>
        [0]PETSC ERROR:
        <br>
------------------------------------------------------------------------
        <br>
        [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013
        <br>
        [0]PETSC ERROR: See docs/changes/index.html for recent updates.
        <br>
        [0]PETSC ERROR: See docs/faq.html for hints about trouble
        shooting.
        <br>
        [0]PETSC ERROR: See docs/index.html for manual pages.
        <br>
        [0]PETSC ERROR:
        <br>
------------------------------------------------------------------------
        <br>
        [0]PETSC ERROR: ./hit on a arch-cray-xt5-pkgs-opt named nid01332
        by
        <br>
        Unknown Tue Aug 13 15:06:21 2013
        <br>
        [0]PETSC ERROR: Libraries linked from
        <br>
        /nics/c/home/mrosso/LIBS/petsc-3.4.2/arch-cray-xt5-pkgs-opt/lib
        <br>
        [0]PETSC ERROR: Configure run at Wed Jul 31 22:48:06 2013
        <br>
        <br>
        The input I used is:
        <br>
        -ksp_monitor -ksp_converged_reason -pc_type mg  -pc_mg_galerkin
        <br>
        -pc_mg_levels 4 -options_left
        <br>
        <br>
        I am simulating a 256^3 grid with 256 processors. Since I am
        using a 2D
        <br>
        domain decomposition, each sub-domain contains 256x64x4 grid
        points.
        <br>
        To be consistent with my code indexing, I had to initialize the DMDA
        with
        <br>
        reverse ordering, that is z,y,x, so when the error message says
        "x is
        <br>
        too fine" it actually means "z is too fine".
        <br>
        I was wondering what the minimum number of nodes per
        direction is that
        <br>
        would avoid this problem, and how the number of levels is
        related
        <br>
        to the minimum grid size required.
        <br>
        Thank you!
        <br>
        <br>
        Michele
        <br>
        <br>
        <br>
        On 08/02/2013 03:11 PM, Barry Smith wrote:
        <br>
        <blockquote type="cite">On Aug 2, 2013, at 4:52 PM, Michele
          Rosso<a class="moz-txt-link-rfc2396E" href="mailto:mrosso@uci.edu"><mrosso@uci.edu></a>  wrote:
          <br>
          <br>
          <blockquote type="cite">Barry,
            <br>
            <br>
            thank you very much for your help. I was trying to debug the
            error with no success!
            <br>
            Now it works like a charm for me too!
            <br>
            I still have two questions for you:
            <br>
            <br>
            1) How did you choose the number of levels to use: trial and
            error?
            <br>
          </blockquote>
              I just used 2 because it is more than one level :-).  When
          you use a finer mesh you can use more levels.
          <br>
          <br>
          <blockquote type="cite">2) For a singular system (periodic),
            besides the nullspace removal, should I change any
            parameter?
            <br>
          </blockquote>
              I don't know of anything.
          <br>
          <br>
              But there is a possible problem with -pc_mg_galerkin:
          PETSc does not transfer the null space information from the
          fine mesh to the other meshes. Technically we really want
          the multigrid to remove the null space on all the levels, but
          usually it will work without worrying about that.
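          <br>
          <br>
              For illustration only (a sketch, not something you are
          required to add; "nullsp" and "izero" are just names for this
          example), attaching the constant null space of the periodic
          problem on the fine level in 3.4 Fortran would look like
          <br>
          <br>
              MatNullSpace nullsp
          <br>
              PetscInt     izero
          <br>
              izero = 0
          <br>
              ! the constant vector is the null space of the periodic Poisson operator
          <br>
              call MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,izero,PETSC_NULL_OBJECT,nullsp,ierr)
          <br>
              call KSPSetNullSpace(ksp,nullsp,ierr)
          <br>
              call MatNullSpaceDestroy(nullsp,ierr)
          <br>
          <br>
              but, as noted above, with -pc_mg_galerkin that information
          is only used on the finest level.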
          <br>
          <br>
              Barry
          <br>
          <br>
          <blockquote type="cite">Again, thank you very much!
            <br>
            <br>
            Michele
            <br>
            <br>
            On 08/02/2013 02:38 PM, Barry Smith wrote:
            <br>
            <blockquote type="cite">    Finally got it. My failing
              memory. I had to add the line
              <br>
              <br>
                  call KSPSetDMActive(ksp,PETSC_FALSE,ierr)
              <br>
              <br>
                  immediately after KSPSetDM()   and
              <br>
              <br>
                  change
              <br>
              <br>
                   call DMCreateMatrix(da,MATMPIAIJ,A,ierr)
              <br>
              <br>
                  to
              <br>
              <br>
                   call DMCreateMatrix(da,MATAIJ,A,ierr)
              <br>
              <br>
                   so it will work both in parallel and sequentially; then
              <br>
              <br>
              -ksp_monitor -ksp_converged_reason -pc_type mg -ksp_view
              -pc_mg_galerkin -pc_mg_levels 2
              <br>
              <br>
              works great with 2 levels.
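              <br>
              <br>
              For reference, the resulting setup order (a sketch pieced
              together from the changes above, reusing the names da, ksp
              and A from the attached code) is
              <br>
              <br>
                   call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
              <br>
                   call KSPSetDM(ksp,da,ierr)
              <br>
                   ! keep the user-assembled matrix instead of one built by the DM
              <br>
                   call KSPSetDMActive(ksp,PETSC_FALSE,ierr)
              <br>
                   ! MATAIJ works both sequentially and in parallel
              <br>
                   call DMCreateMatrix(da,MATAIJ,A,ierr)
              <br>
                   call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,ierr)
              <br>
                   call KSPSetFromOptions(ksp,ierr)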
              <br>
              <br>
                  Barry
              <br>
              <br>
              <br>
              <br>
              <br>
              On Aug 1, 2013, at 6:29 PM, Michele Rosso
              <br>
              <a class="moz-txt-link-rfc2396E" href="mailto:mrosso@uci.edu"><mrosso@uci.edu></a>
              <br>
                wrote:
              <br>
              <br>
              <br>
              <blockquote type="cite">Barry,
                <br>
                <br>
                no problem. I attached the full code in
                test_poisson_solver.tar.gz.
                <br>
                My test code is a very reduced version of my production
                code (an incompressible DNS code), thus fftw3 and the
                2decomp&fft library are needed to run it.
                <br>
                I attached the 2decomp&fft version I used: it takes only
                a few minutes to install, so you should not have any
                problem.
                <br>
                Please contact me with any questions/suggestions.
                <br>
                In the meantime I will try to debug it.
                <br>
                <br>
                Michele
                <br>
                <br>
                <br>
                <br>
                <br>
                On 08/01/2013 04:19 PM, Barry Smith wrote:
                <br>
                <br>
                <blockquote type="cite">    Run on one process until
                  this is debugged. You can try the option
                  <br>
                  <br>
                  -start_in_debugger noxterm
                  <br>
                  <br>
                  and then call VecView(vec,0) in the debugger when it
                  gives the error below. It seems like some objects are
                  not getting their initial values set properly. Are you
                  able to email the code so we can run it and figure out
                  what is going on?
                  <br>
                  <br>
                      Barry
                  <br>
                  <br>
                  On Aug 1, 2013, at 5:52 PM, Michele Rosso
                  <br>
                  <br>
                  <a class="moz-txt-link-rfc2396E" href="mailto:mrosso@uci.edu"><mrosso@uci.edu></a>
                  <br>
                  <br>
                    wrote:
                  <br>
                  <br>
                  <br>
                  <br>
                  <blockquote type="cite">Barry,
                    <br>
                    <br>
                    I checked the matrix: the element (0,0) is not zero,
                    nor is any other diagonal element.
                    <br>
                    The matrix is symmetric positive definite (i.e. the
                    standard Poisson matrix).
                    <br>
                    Also, -da_refine is never used (see previous
                    output).
                    <br>
                    I tried to run with -pc_type mg -pc_mg_galerkin
                    -mg_levels_pc_type jacobi -mg_levels_ksp_type
                    chebyshev
                    -mg_levels_ksp_chebyshev_estimate_eigenvalues 
                    -ksp_view -options_left
                    <br>
                    <br>
                    and now the error is different:
                    <br>
                    0]PETSC ERROR: [1]PETSC ERROR: ---------------------
                    Error Message ------------------------------------
                    <br>
                    [1]PETSC ERROR: Floating point exception!
                    <br>
                    [1]PETSC ERROR: Vec entry at local location 0 is
                    not-a-number or infinite at beginning of function:
                    Parameter number 2!
                    <br>
                    [1]PETSC ERROR:
                    ------------------------------------------------------------------------
                    <br>
                    [1]PETSC ERROR: Petsc Release Version 3.4.2, Jul,
                    02, 2013
                    <br>
                    [1]PETSC ERROR: See docs/changes/index.html for
                    recent updates.
                    <br>
                    [1]PETSC ERROR: See docs/faq.html for hints about
                    trouble shooting.
                    <br>
                    [2]PETSC ERROR: --------------------- Error Message
                    ------------------------------------
                    <br>
                    [2]PETSC ERROR: Floating point exception!
                    <br>
                    [2]PETSC ERROR: Vec entry at local location 0 is
                    not-a-number or infinite at beginning of function:
                    Parameter number 2!
                    <br>
                    [2]PETSC ERROR:
                    ------------------------------------------------------------------------
                    <br>
                    [2]PETSC ERROR: Petsc Release Version 3.4.2, Jul,
                    02, 2013
                    <br>
                    [2]PETSC ERROR: See docs/changes/index.html for
                    recent updates.
                    <br>
                    [2]PETSC ERROR: See docs/faq.html for hints about
                    trouble shooting.
                    <br>
                    [2]PETSC ERROR: [3]PETSC ERROR:
                    --------------------- Error Message
                    ------------------------------------
                    <br>
                    [3]PETSC ERROR: Floating point exception!
                    <br>
                    [3]PETSC ERROR: Vec entry at local location 0 is
                    not-a-number or infinite at beginning of function:
                    Parameter number 2!
                    <br>
                    [3]PETSC ERROR:
                    ------------------------------------------------------------------------
                    <br>
                    [3]PETSC ERROR: Petsc Release Version 3.4.2, Jul,
                    02, 2013
                    <br>
                    [3]PETSC ERROR: See docs/changes/index.html for
                    recent updates.
                    <br>
                    [3]PETSC ERROR: See docs/faq.html for hints about
                    trouble shooting.
                    <br>
                    [3]PETSC ERROR: See docs/index.html for manual
                    pages.
                    <br>
                    [3]PETSC ERROR:
                    ------------------------------------------------------------------------
                    <br>
                    [1]PETSC ERROR: See docs/index.html for manual
                    pages.
                    <br>
                    [1]PETSC ERROR:
                    ------------------------------------------------------------------------
                    <br>
                    [1]PETSC ERROR: ./test on a linux-gnu-dbg named
                    enterprise-A by mic Thu Aug  1 15:43:16 2013
                    <br>
                    [1]PETSC ERROR: Libraries linked from
                    /opt/petsc/petsc-3.4.2/linux-gnu-dbg/lib
                    <br>
                    [1]PETSC ERROR: See docs/index.html for manual
                    pages.
                    <br>
                    [2]PETSC ERROR:
                    ------------------------------------------------------------------------
                    <br>
                    [2]PETSC ERROR: ./test on a linux-gnu-dbg named
                    enterprise-A by mic Thu Aug  1 15:43:16 2013
                    <br>
                    [2]PETSC ERROR: Libraries linked from
                    /opt/petsc/petsc-3.4.2/linux-gnu-dbg/lib
                    <br>
                    [2]PETSC ERROR: Configure run at Thu Aug  1 12:01:44
                    2013
                    <br>
                    [2]PETSC ERROR: [3]PETSC ERROR: ./test on a
                    linux-gnu-dbg named enterprise-A by mic Thu Aug  1
                    15:43:16 2013
                    <br>
                    [3]PETSC ERROR: Libraries linked from
                    /opt/petsc/petsc-3.4.2/linux-gnu-dbg/lib
                    <br>
                    [3]PETSC ERROR: Configure run at Thu Aug  1 12:01:44
                    2013
                    <br>
                    [3]PETSC ERROR: Configure options
                    <br>
                    Configure run at Thu Aug  1 12:01:44 2013
                    <br>
                    [1]PETSC ERROR: Configure options
                    <br>
                    [1]PETSC ERROR:
                    ------------------------------------------------------------------------
                    <br>
                    [1]PETSC ERROR: VecValidValues() line 28 in
                    src/vec/vec/interface/rvector.c
                    <br>
                    Configure options
                    <br>
                    [2]PETSC ERROR:
                    ------------------------------------------------------------------------
                    <br>
                    [2]PETSC ERROR: VecValidValues() line 28 in
                    src/vec/vec/interface/rvector.c
                    <br>
                    [3]PETSC ERROR:
                    ------------------------------------------------------------------------
                    <br>
                    [3]PETSC ERROR: VecValidValues() line 28 in
                    src/vec/vec/interface/rvector.c
                    <br>
                    [3]PETSC ERROR: [1]PETSC ERROR: MatMult() line 2174
                    in src/mat/interface/matrix.c
                    <br>
                    [1]PETSC ERROR: [2]PETSC ERROR: MatMult() line 2174
                    in src/mat/interface/matrix.c
                    <br>
                    [2]PETSC ERROR: KSP_MatMult() line 204 in
src/ksp/ksp/impls/cheby//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                    MatMult() line 2174 in src/mat/interface/matrix.c
                    <br>
                    [3]PETSC ERROR: KSP_MatMult() line 204 in
src/ksp/ksp/impls/cheby//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                    [3]PETSC ERROR: KSP_MatMult() line 204 in
src/ksp/ksp/impls/cheby//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                    [1]PETSC ERROR: KSPSolve_Chebyshev() line 504 in
                    src/ksp/ksp/impls/cheby/cheby.c
                    <br>
                    [2]PETSC ERROR: KSPSolve_Chebyshev() line 504 in
                    src/ksp/ksp/impls/cheby/cheby.c
                    <br>
                    [2]PETSC ERROR: KSPSolve_Chebyshev() line 504 in
                    src/ksp/ksp/impls/cheby/cheby.c
                    <br>
                    [3]PETSC ERROR: KSPSolve() line 441 in
                    src/ksp/ksp/interface/itfunc.c
                    <br>
                    [1]PETSC ERROR: KSPSolve() line 441 in
                    src/ksp/ksp/interface/itfunc.c
                    <br>
                    [1]PETSC ERROR: KSPSolve() line 441 in
                    src/ksp/ksp/interface/itfunc.c
                    <br>
                    [2]PETSC ERROR: PCMGMCycle_Private() line 19 in
                    src/ksp/pc/impls/mg/mg.c
                    <br>
                    [3]PETSC ERROR: PCMGMCycle_Private() line 19 in
                    src/ksp/pc/impls/mg/mg.c
                    <br>
                    PCMGMCycle_Private() line 19 in
                    src/ksp/pc/impls/mg/mg.c
                    <br>
                    [1]PETSC ERROR: PCApply_MG() line 330 in
                    src/ksp/pc/impls/mg/mg.c
                    <br>
                    [2]PETSC ERROR: PCApply_MG() line 330 in
                    src/ksp/pc/impls/mg/mg.c
                    <br>
                    [2]PETSC ERROR: PCApply() line 442 in
                    src/ksp/pc/interface/precon.c
                    <br>
                    [3]PETSC ERROR: PCApply_MG() line 330 in
                    src/ksp/pc/impls/mg/mg.c
                    <br>
                    [3]PETSC ERROR: PCApply() line 442 in
                    src/ksp/pc/interface/precon.c
                    <br>
                    [1]PETSC ERROR: PCApply() line 442 in
                    src/ksp/pc/interface/precon.c
                    <br>
                    [1]PETSC ERROR: KSP_PCApply() line 227 in
src/ksp/ksp/impls/cg//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                    [2]PETSC ERROR: KSP_PCApply() line 227 in
src/ksp/ksp/impls/cg//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                    [2]PETSC ERROR: [3]PETSC ERROR: KSP_PCApply() line
                    227 in
src/ksp/ksp/impls/cg//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                    [3]PETSC ERROR: KSPSolve_CG() line 175 in
                    src/ksp/ksp/impls/cg/cg.c
                    <br>
                    [1]PETSC ERROR: KSPSolve_CG() line 175 in
                    src/ksp/ksp/impls/cg/cg.c
                    <br>
                    [1]PETSC ERROR: KSPSolve() line 441 in
                    src/ksp/ksp/interface/itfunc.c
                    <br>
                    KSPSolve_CG() line 175 in src/ksp/ksp/impls/cg/cg.c
                    <br>
                    [2]PETSC ERROR: KSPSolve() line 441 in
                    src/ksp/ksp/interface/itfunc.c
                    <br>
                    [3]PETSC ERROR: KSPSolve() line 441 in
                    src/ksp/ksp/interface/itfunc.c
                    <br>
                    --------------------- Error Message
                    ------------------------------------
                    <br>
                    [0]PETSC ERROR: Floating point exception!
                    <br>
                    [0]PETSC ERROR: Vec entry at local location 0 is
                    not-a-number or infinite at beginning of function:
                    Parameter number 2!
                    <br>
                    [0]PETSC ERROR:
                    ------------------------------------------------------------------------
                    <br>
                    [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul,
                    02, 2013
                    <br>
                    [0]PETSC ERROR: See docs/changes/index.html for
                    recent updates.
                    <br>
                    [0]PETSC ERROR: See docs/faq.html for hints about
                    trouble shooting.
                    <br>
                    [0]PETSC ERROR: See docs/index.html for manual
                    pages.
                    <br>
                    [0]PETSC ERROR:
                    ------------------------------------------------------------------------
                    <br>
                    [0]PETSC ERROR: ./test on a linux-gnu-dbg named
                    enterprise-A by mic Thu Aug  1 15:43:16 2013
                    <br>
                    [0]PETSC ERROR: Libraries linked from
                    /opt/petsc/petsc-3.4.2/linux-gnu-dbg/lib
                    <br>
                    [0]PETSC ERROR: Configure run at Thu Aug  1 12:01:44
                    2013
                    <br>
                    [0]PETSC ERROR: Configure options
                    <br>
                    [0]PETSC ERROR:
                    ------------------------------------------------------------------------
                    <br>
                    [0]PETSC ERROR: VecValidValues() line 28 in
                    src/vec/vec/interface/rvector.c
                    <br>
                    [0]PETSC ERROR: MatMult() line 2174 in
                    src/mat/interface/matrix.c
                    <br>
                    [0]PETSC ERROR: KSP_MatMult() line 204 in
src/ksp/ksp/impls/cheby//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                    [0]PETSC ERROR: KSPSolve_Chebyshev() line 504 in
                    src/ksp/ksp/impls/cheby/cheby.c
                    <br>
                    [0]PETSC ERROR: KSPSolve() line 441 in
                    src/ksp/ksp/interface/itfunc.c
                    <br>
                    [0]PETSC ERROR: PCMGMCycle_Private() line 19 in
                    src/ksp/pc/impls/mg/mg.c
                    <br>
                    [0]PETSC ERROR: PCApply_MG() line 330 in
                    src/ksp/pc/impls/mg/mg.c
                    <br>
                    [0]PETSC ERROR: PCApply() line 442 in
                    src/ksp/pc/interface/precon.c
                    <br>
                    [0]PETSC ERROR: KSP_PCApply() line 227 in
src/ksp/ksp/impls/cg//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                    [0]PETSC ERROR: KSPSolve_CG() line 175 in
                    src/ksp/ksp/impls/cg/cg.c
                    <br>
                    [0]PETSC ERROR: KSPSolve() line 441 in
                    src/ksp/ksp/interface/itfunc.c
                    <br>
                    <br>
                    #PETSc Option Table entries:
                    <br>
                    -ksp_view
                    <br>
                    -mg_levels_ksp_chebyshev_estimate_eigenvalues
                    <br>
                    -mg_levels_ksp_type chebyshev
                    <br>
                    -mg_levels_pc_type jacobi
                    <br>
                    -options_left
                    <br>
                    -pc_mg_galerkin
                    <br>
                    -pc_type mg
                    <br>
                    #End of PETSc Option Table entries
                    <br>
                    There are no unused options.
                    <br>
                    <br>
                    Michele
                    <br>
                    <br>
                    <br>
                    On 08/01/2013 03:27 PM, Barry Smith wrote:
                    <br>
                    <br>
                    <br>
                    <blockquote type="cite">   Do a MatView() on A
                      before the solve (remove the -da_refine 4) so it
                      is small. Is the 0,0 entry 0? If the matrix has
                      zero on the diagonals you cannot us Gauss-Seidel
                      as the smoother.  You can start with
                      -mg_levels_pc_type jacobi -mg_levels_ksp_type
                      chebychev
                      -mg_levels_ksp_chebyshev_estimate_eigenvalues
                      <br>
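                      <br>
                      (For example, assuming the matrix is called A as
                      in the attached code, a one-process run can dump
                      it with
                      <br>
                      <br>
                          call MatView(A,PETSC_VIEWER_STDOUT_WORLD,ierr)
                      <br>
                      <br>
                      placed just before the KSPSolve call.)
                      <br>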
                      <br>
                          Is the matrix a Stokes-like matrix? If so then
                      different preconditioners are in order.
                      <br>
                      <br>
                          Barry
                      <br>
                      <br>
                      On Aug 1, 2013, at 5:21 PM, Michele Rosso
                      <br>
                      <br>
                      <br>
                      <a class="moz-txt-link-rfc2396E" href="mailto:mrosso@uci.edu"><mrosso@uci.edu></a>
                      <br>
                      <br>
                      <br>
                        wrote:
                      <br>
                      <br>
                      <br>
                      <br>
                      <br>
                      <blockquote type="cite">Barry,
                        <br>
                        <br>
                        here is the fragment of code where I set the
                        rhs term and the matrix.
                        <br>
                        <br>
                                ! Create matrix
                        <br>
                                call form_matrix( A , qrho, lsf, head )
                        <br>
                                call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
                        <br>
                                call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)
                        <br>
                        <br>
                                ! Create rhs term
                        <br>
                                call form_rhs(work, qrho, lsf, b , head)
                        <br>
                        <br>
                                ! Solve system
                        <br>
                                call KSPSetFromOptions(ksp,ierr)
                        <br>
                                call KSPSetUp(ksp,ierr)
                        <br>
                                call KSPSolve(ksp,b,x,ierr)
                        <br>
                                call KSPGetIterationNumber(ksp, iiter, ierr)
                        <br>
                        <br>
                        The subroutine form_matrix returns the Mat
                        object A, which is filled using
                        MatSetValuesStencil.
                        <br>
                        qrho, lsf and head are additional arguments
                        needed to compute the matrix values.
                        <br>
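                        <br>
                        For reference, a single call inside form_matrix
                        looks roughly like the sketch below (an
                        illustration only, not the actual routine: i, j,
                        k stand for the global indices of the grid point
                        and val for the coefficient computed from qrho,
                        lsf and head):
                        <br>
                        <br>
                                MatStencil  row(4)
                        <br>
                                PetscScalar val
                        <br>
                                PetscInt    ione
                        <br>
                                ione = 1
                        <br>
                                ! diagonal entry of the row belonging to grid point (i,j,k)
                        <br>
                                row(MatStencil_i) = i
                        <br>
                                row(MatStencil_j) = j
                        <br>
                                row(MatStencil_k) = k
                        <br>
                                call MatSetValuesStencil(A,ione,row,ione,row,val,INSERT_VALUES,ierr)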
                        <br>
                        <br>
                        Michele
                        <br>
                        <br>
                        <br>
                        <br>
                        On 08/01/2013 03:11 PM, Barry Smith wrote:
                        <br>
                        <br>
                        <br>
                        <br>
                        <blockquote type="cite">   Where are you putting
                          the values into the matrix? It seems the
                          matrix has no values in it? The code is
                          stopping because in the Gauss-Seidel smoothing
                          it has detected zero diagonals.
                          <br>
                          <br>
                              Barry
                          <br>
                          <br>
                          <br>
                          On Aug 1, 2013, at 4:47 PM, Michele Rosso
                          <br>
                          <br>
                          <br>
                          <a class="moz-txt-link-rfc2396E" href="mailto:mrosso@uci.edu"><mrosso@uci.edu></a>
                          <br>
                          <br>
                          <br>
                            wrote:
                          <br>
                          <br>
                          <br>
                          <br>
                          <br>
                          <blockquote type="cite">Barry,
                            <br>
                            <br>
                            I ran with:  -pc_type mg -pc_mg_galerkin
                            -da_refine 4  -ksp_view -options_left
                            <br>
                            <br>
                            For this test I used a 64^3 grid and 4
                            processors.
                            <br>
                            <br>
                            The output is:
                            <br>
                            <br>
                            [2]PETSC ERROR: --------------------- Error
                            Message ------------------------------------
                            <br>
                            [2]PETSC ERROR: Arguments are incompatible!
                            <br>
                            [2]PETSC ERROR: Zero diagonal on row 0!
                            <br>
                            [2]PETSC ERROR:
                            ------------------------------------------------------------------------
                            <br>
                            [2]PETSC ERROR: Petsc Release Version 3.4.2,
                            Jul, 02, 2013
                            <br>
                            [2]PETSC ERROR: See docs/changes/index.html
                            for recent updates.
                            <br>
                            [2]PETSC ERROR: See docs/faq.html for hints
                            about trouble shooting.
                            <br>
                            [2]PETSC ERROR: See docs/index.html for
                            manual pages.
                            <br>
                            [2]PETSC ERROR:
                            ------------------------------------------------------------------------
                            <br>
                            [2]PETSC ERROR: ./test on a linux-gnu-dbg
                            named enterprise-A by mic Thu Aug  1
                            14:44:04 2013
                            <br>
                            [0]PETSC ERROR: [2]PETSC ERROR: Libraries
                            linked from
                            /opt/petsc/petsc-3.4.2/linux-gnu-dbg/lib
                            <br>
                            [2]PETSC ERROR: Configure run at Thu Aug  1
                            12:01:44 2013
                            <br>
                            [2]PETSC ERROR: Configure options
                            <br>
                            [2]PETSC ERROR:
                            ------------------------------------------------------------------------
                            <br>
                            [2]PETSC ERROR: MatInvertDiagonal_SeqAIJ()
                            line 1457 in src/mat/impls/aij/seq/aij.c
                            <br>
                            [2]PETSC ERROR: MatSOR_SeqAIJ() line 1489 in
                            src/mat/impls/aij/seq/aij.c
                            <br>
                            --------------------- Error Message
                            ------------------------------------
                            <br>
                            [2]PETSC ERROR: MatSOR_MPIAIJ() line 1623 in
                            src/mat/impls/aij/mpi/mpiaij.c
                            <br>
                            [2]PETSC ERROR: MatSOR() line 3649 in
                            src/mat/interface/matrix.c
                            <br>
                            [2]PETSC ERROR: [0]PETSC ERROR:
                            PCApply_SOR() line 35 in
                            src/ksp/pc/impls/sor/sor.c
                            <br>
                            [2]PETSC ERROR: PCApply() line 442 in
                            src/ksp/pc/interface/precon.c
                            <br>
                            [2]PETSC ERROR: KSP_PCApply() line 227 in
src/ksp/ksp/interface//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                            Arguments are incompatible!
                            <br>
                            [2]PETSC ERROR: KSPInitialResidual() line 64
                            in src/ksp/ksp/interface/itres.c
                            <br>
                            [2]PETSC ERROR: KSPSolve_GMRES() line 239 in
                            src/ksp/ksp/impls/gmres/gmres.c
                            <br>
                            [2]PETSC ERROR: KSPSolve() line 441 in
                            src/ksp/ksp/interface/itfunc.c
                            <br>
                            [2]PETSC ERROR: [0]PETSC ERROR:
                            KSPSolve_Chebyshev() line 409 in
                            src/ksp/ksp/impls/cheby/cheby.c
                            <br>
                            [2]PETSC ERROR: KSPSolve() line 441 in
                            src/ksp/ksp/interface/itfunc.c
                            <br>
                            [2]PETSC ERROR: PCMGMCycle_Private() line 19
                            in src/ksp/pc/impls/mg/mg.c
                            <br>
                            Zero diagonal on row 0!
                            <br>
                            [2]PETSC ERROR: PCApply_MG() line 330 in
                            src/ksp/pc/impls/mg/mg.c
                            <br>
                            [2]PETSC ERROR: PCApply() line 442 in
                            src/ksp/pc/interface/precon.c
                            <br>
                            [0]PETSC ERROR: [2]PETSC ERROR:
                            KSP_PCApply() line 227 in
src/ksp/ksp/impls/cg//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                            [2]PETSC ERROR: KSPSolve_CG() line 175 in
                            src/ksp/ksp/impls/cg/cg.c
                            <br>
------------------------------------------------------------------------
                            <br>
                            [2]PETSC ERROR: KSPSolve() line 441 in
                            src/ksp/ksp/interface/itfunc.c
                            <br>
                            [0]PETSC ERROR: Petsc Release Version 3.4.2,
                            Jul, 02, 2013
                            <br>
                            [0]PETSC ERROR: See docs/changes/index.html
                            for recent updates.
                            <br>
                            [3]PETSC ERROR: [0]PETSC ERROR: See
                            docs/faq.html for hints about trouble
                            shooting.
                            <br>
                            [0]PETSC ERROR: --------------------- Error
                            Message ------------------------------------
                            <br>
                            [3]PETSC ERROR: Arguments are incompatible!
                            <br>
                            [3]PETSC ERROR: Zero diagonal on row 0!
                            <br>
                            [3]PETSC ERROR:
                            ------------------------------------------------------------------------
                            <br>
                            [3]PETSC ERROR: Petsc Release Version 3.4.2,
                            Jul, 02, 2013
                            <br>
                            [3]PETSC ERROR: See docs/changes/index.html
                            for recent updates.
                            <br>
                            [3]PETSC ERROR: See docs/faq.html for hints
                            about trouble shooting.
                            <br>
                            [3]PETSC ERROR: See docs/index.html for
                            manual pages.
                            <br>
                            [3]PETSC ERROR:
                            ------------------------------------------------------------------------
                            <br>
                            See docs/index.html for manual pages.
                            <br>
                            [3]PETSC ERROR: ./test on a linux-gnu-dbg
                            named enterprise-A by mic Thu Aug  1
                            14:44:04 2013
                            <br>
                            [3]PETSC ERROR: Libraries linked from
                            /opt/petsc/petsc-3.4.2/linux-gnu-dbg/lib
                            <br>
                            [1]PETSC ERROR: [3]PETSC ERROR: Configure
                            run at Thu Aug  1 12:01:44 2013
                            <br>
                            [3]PETSC ERROR: Configure options
                            <br>
                            [3]PETSC ERROR:
                            ------------------------------------------------------------------------
                            <br>
                            [3]PETSC ERROR: --------------------- Error
                            Message ------------------------------------
                            <br>
                            MatInvertDiagonal_SeqAIJ() line 1457 in
                            src/mat/impls/aij/seq/aij.c
                            <br>
                            [3]PETSC ERROR: MatSOR_SeqAIJ() line 1489 in
                            src/mat/impls/aij/seq/aij.c
                            <br>
                            [3]PETSC ERROR: [0]PETSC ERROR:
                            MatSOR_MPIAIJ() line 1623 in
                            src/mat/impls/aij/mpi/mpiaij.c
                            <br>
                            [1]PETSC ERROR: Arguments are incompatible!
                            <br>
                            [1]PETSC ERROR: Zero diagonal on row 0!
                            <br>
                            [1]PETSC ERROR:
                            ------------------------------------------------------------------------
                            <br>
                            [1]PETSC ERROR: Petsc Release Version 3.4.2,
                            Jul, 02, 2013
                            <br>
                            [1]PETSC ERROR: See docs/changes/index.html
                            for recent updates.
                            <br>
                            [1]PETSC ERROR: See docs/faq.html for hints
                            about trouble shooting.
                            <br>
                            [1]PETSC ERROR: See docs/index.html for
                            manual pages.
                            <br>
                            [1]PETSC ERROR:
                            ------------------------------------------------------------------------
                            <br>
                            [1]PETSC ERROR: ./test on a linux-gnu-dbg
                            named enterprise-A by mic Thu Aug  1
                            14:44:04 2013
                            <br>
                            [1]PETSC ERROR: Libraries linked from
                            /opt/petsc/petsc-3.4.2/linux-gnu-dbg/lib
                            <br>
                            [1]PETSC ERROR: Configure run at Thu Aug  1
                            12:01:44 2013
                            <br>
                            [1]PETSC ERROR: Configure options
                            <br>
                            [1]PETSC ERROR:
                            ------------------------------------------------------------------------
                            <br>
                            [1]PETSC ERROR: MatInvertDiagonal_SeqAIJ()
                            line 1457 in src/mat/impls/aij/seq/aij.c
                            <br>
                            [1]PETSC ERROR: [3]PETSC ERROR: MatSOR()
                            line 3649 in src/mat/interface/matrix.c
                            <br>
                            [3]PETSC ERROR: PCApply_SOR() line 35 in
                            src/ksp/pc/impls/sor/sor.c
                            <br>
                            [3]PETSC ERROR: PCApply() line 442 in
                            src/ksp/pc/interface/precon.c
                            <br>
                            [3]PETSC ERROR: KSP_PCApply() line 227 in
src/ksp/ksp/interface//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                            [3]PETSC ERROR: KSPInitialResidual() line 64
                            in src/ksp/ksp/interface/itres.c
                            <br>
                            [3]PETSC ERROR: KSPSolve_GMRES() line 239 in
                            src/ksp/ksp/impls/gmres/gmres.c
                            <br>
                            [3]PETSC ERROR: KSPSolve() line 441 in
                            src/ksp/ksp/interface/itfunc.c
                            <br>
                            [3]PETSC ERROR: KSPSolve_Chebyshev() line
                            409 in src/ksp/ksp/impls/cheby/cheby.c
                            <br>
                            [3]PETSC ERROR: KSPSolve() line 441 in
                            src/ksp/ksp/interface/itfunc.c
                            <br>
                            [3]PETSC ERROR: PCMGMCycle_Private() line 19
                            in src/ksp/pc/impls/mg/mg.c
                            <br>
                            [3]PETSC ERROR: PCApply_MG() line 330 in
                            src/ksp/pc/impls/mg/mg.c
                            <br>
                            [3]PETSC ERROR: PCApply() line 442 in
                            src/ksp/pc/interface/precon.c
                            <br>
                            [3]PETSC ERROR: KSP_PCApply() line 227 in
src/ksp/ksp/impls/cg//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                            [3]PETSC ERROR: KSPSolve_CG() line 175 in
                            src/ksp/ksp/impls/cg/cg.c
                            <br>
                            [3]PETSC ERROR: KSPSolve() line 441 in
                            src/ksp/ksp/interface/itfunc.c
                            <br>
                            MatSOR_SeqAIJ() line 1489 in
                            src/mat/impls/aij/seq/aij.c
                            <br>
                            [1]PETSC ERROR: MatSOR_MPIAIJ() line 1623 in
                            src/mat/impls/aij/mpi/mpiaij.c
                            <br>
                            [1]PETSC ERROR: MatSOR() line 3649 in
                            src/mat/interface/matrix.c
                            <br>
                            [1]PETSC ERROR: PCApply_SOR() line 35 in
                            src/ksp/pc/impls/sor/sor.c
                            <br>
                            [1]PETSC ERROR: PCApply() line 442 in
                            src/ksp/pc/interface/precon.c
                            <br>
                            [1]PETSC ERROR: KSP_PCApply() line 227 in
src/ksp/ksp/interface//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                            [1]PETSC ERROR: KSPInitialResidual() line 64
                            in src/ksp/ksp/interface/itres.c
                            <br>
                            [1]PETSC ERROR: KSPSolve_GMRES() line 239 in
                            src/ksp/ksp/impls/gmres/gmres.c
                            <br>
                            [1]PETSC ERROR: KSPSolve() line 441 in
                            src/ksp/ksp/interface/itfunc.c
                            <br>
                            [1]PETSC ERROR: KSPSolve_Chebyshev() line
                            409 in src/ksp/ksp/impls/cheby/cheby.c
                            <br>
                            [1]PETSC ERROR: KSPSolve() line 441 in
                            src/ksp/ksp/interface/itfunc.c
                            <br>
                            [1]PETSC ERROR: PCMGMCycle_Private() line 19
                            in src/ksp/pc/impls/mg/mg.c
                            <br>
                            [1]PETSC ERROR: PCApply_MG() line 330 in
                            src/ksp/pc/impls/mg/mg.c
                            <br>
                            [1]PETSC ERROR: PCApply() line 442 in
                            src/ksp/pc/interface/precon.c
                            <br>
                            [1]PETSC ERROR: KSP_PCApply() line 227 in
src/ksp/ksp/impls/cg//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                            [1]PETSC ERROR: KSPSolve_CG() line 175 in
                            src/ksp/ksp/impls/cg/cg.c
                            <br>
                            [1]PETSC ERROR: KSPSolve() line 441 in
                            src/ksp/ksp/interface/itfunc.c
                            <br>
------------------------------------------------------------------------
                            <br>
                            [0]PETSC ERROR: ./test on a linux-gnu-dbg
                            named enterprise-A by mic Thu Aug  1
                            14:44:04 2013
                            <br>
                            [0]PETSC ERROR: Libraries linked from
                            /opt/petsc/petsc-3.4.2/linux-gnu-dbg/lib
                            <br>
                            [0]PETSC ERROR: Configure run at Thu Aug  1
                            12:01:44 2013
                            <br>
                            [0]PETSC ERROR: Configure options
                            <br>
                            [0]PETSC ERROR:
                            ------------------------------------------------------------------------
                            <br>
                            [0]PETSC ERROR: MatInvertDiagonal_SeqAIJ()
                            line 1457 in src/mat/impls/aij/seq/aij.c
                            <br>
                            [0]PETSC ERROR: MatSOR_SeqAIJ() line 1489 in
                            src/mat/impls/aij/seq/aij.c
                            <br>
                            [0]PETSC ERROR: MatSOR_MPIAIJ() line 1623 in
                            src/mat/impls/aij/mpi/mpiaij.c
                            <br>
                            [0]PETSC ERROR: MatSOR() line 3649 in
                            src/mat/interface/matrix.c
                            <br>
                            [0]PETSC ERROR: PCApply_SOR() line 35 in
                            src/ksp/pc/impls/sor/sor.c
                            <br>
                            [0]PETSC ERROR: PCApply() line 442 in
                            src/ksp/pc/interface/precon.c
                            <br>
                            [0]PETSC ERROR: KSP_PCApply() line 227 in
src/ksp/ksp/interface//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                            [0]PETSC ERROR: KSPInitialResidual() line 64
                            in src/ksp/ksp/interface/itres.c
                            <br>
                            [0]PETSC ERROR: KSPSolve_GMRES() line 239 in
                            src/ksp/ksp/impls/gmres/gmres.c
                            <br>
                            [0]PETSC ERROR: KSPSolve() line 441 in
                            src/ksp/ksp/interface/itfunc.c
                            <br>
                            [0]PETSC ERROR: KSPSolve_Chebyshev() line
                            409 in src/ksp/ksp/impls/cheby/cheby.c
                            <br>
                            [0]PETSC ERROR: KSPSolve() line 441 in
                            src/ksp/ksp/interface/itfunc.c
                            <br>
                            [0]PETSC ERROR: PCMGMCycle_Private() line 19
                            in src/ksp/pc/impls/mg/mg.c
                            <br>
                            [0]PETSC ERROR: PCApply_MG() line 330 in
                            src/ksp/pc/impls/mg/mg.c
                            <br>
                            [0]PETSC ERROR: PCApply() line 442 in
                            src/ksp/pc/interface/precon.c
                            <br>
                            [0]PETSC ERROR: KSP_PCApply() line 227 in
src/ksp/ksp/impls/cg//opt/petsc/petsc-3.4.2/include/petsc-private/kspimpl.h<br>
                            [0]PETSC ERROR: KSPSolve_CG() line 175 in
                            src/ksp/ksp/impls/cg/cg.c
                            <br>
                            [0]PETSC ERROR: KSPSolve() line 441 in
                            src/ksp/ksp/interface/itfunc.c
                            <br>
                            #PETSc Option Table entries:
                            <br>
                            -da_refine 4
                            <br>
                            -ksp_view
                            <br>
                            -options_left
                            <br>
                            -pc_mg_galerkin
                            <br>
                            -pc_type mg
                            <br>
                            #End of PETSc Option Table entries
                            <br>
                            There is one unused database option. It is:
                            <br>
                            Option left: name:-da_refine value: 4
                            <br>
                            <br>
                            <br>
                            Here is the code I use to set up the DMDA
                            and KSP:
                            <br>
                                 call DMDACreate3d( PETSC_COMM_WORLD ,                            &
                            <br>
                                      & DMDA_BOUNDARY_PERIODIC , DMDA_BOUNDARY_PERIODIC ,         &
                            <br>
                                      & DMDA_BOUNDARY_PERIODIC , DMDA_STENCIL_STAR ,              &
                            <br>
                                      & N_Z , N_Y , N_X , N_B3 , N_B2 , 1_ip , 1_ip , 1_ip ,      &
                            <br>
                                      & int(NNZ,ip) , int(NNY,ip) , NNX , da , ierr)
                            <br>
                                      ! Create Global Vectors
                            <br>
                                 call DMCreateGlobalVector(da,b,ierr)
                            <br>
                                 call VecDuplicate(b,x,ierr)
                            <br>
                                      ! Set initial guess for first use of the module to 0
                            <br>
                                 call VecSet(x,0.0_rp,ierr)
                            <br>
                                      ! Create matrix
                            <br>
                                 call DMCreateMatrix(da,MATMPIAIJ,A,ierr)
                            <br>
                                      ! Create solver
                            <br>
                                 call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
                            <br>
                                 call KSPSetDM(ksp,da,ierr)
                            <br>
                                 call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,ierr)
                            <br>
                            !    call KSPSetOperators(ksp,A,A,SAME_NONZERO_PATTERN,ierr)
                            <br>
                                 call KSPSetType(ksp,KSPCG,ierr)
                            <br>
                                 call KSPSetNormType(ksp,KSP_NORM_UNPRECONDITIONED,ierr) ! Real residual
                            <br>
                                 call KSPSetInitialGuessNonzero(ksp,PETSC_TRUE,ierr)
                            <br>
                                 call KSPSetTolerances(ksp, tol , PETSC_DEFAULT_DOUBLE_PRECISION ,   &
                            <br>
                                      & PETSC_DEFAULT_DOUBLE_PRECISION , PETSC_DEFAULT_INTEGER , ierr)
                            <br>
                            <br>
                                 ! To allow using options from the command line
                            <br>
                                 call KSPSetFromOptions(ksp,ierr)
                            <br>
                            <br>
                            <br>
                            Michele
                            <br>
                            <br>
                            <br>
                            <br>
                            <br>
                            On 08/01/2013 01:04 PM, Barry Smith wrote:
                            <br>
                            <br>
                            <br>
                            <br>
                            <blockquote type="cite">    You can use the
                              option -pc_mg_galerkin  and then MG will
                              compute the coarser matrices with a sparse
                              matrix matrix matrix product so you should
                              not need to change your code to try it
                              out.  You still need to use the KSPSetDM()
                              and -da_refine n to get it working
                              <br>
                              <br>
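                              For example, with n = 3 refinements (a
                              sketch; the launcher, executable name and
                              process count are illustrative, replace
                              them with your own):
                              <br>
                              <br>
                                  mpiexec -n 4 ./your_code -pc_type mg -pc_mg_galerkin -da_refine 3 -ksp_view -options_left
                              <br>
                              <br>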
                                  If it doesn't work, send us all the
                              output.
                              <br>
                              <br>
                                  Barry
                              <br>
                              <br>
                              <br>
                              On Aug 1, 2013, at 2:47 PM, Michele Rosso
                              <br>
                              <br>
                              <br>
                              <br>
                              <a class="moz-txt-link-rfc2396E" href="mailto:mrosso@uci.edu"><mrosso@uci.edu></a>
                              <br>
                              <br>
                              <br>
                              <br>
                                wrote:
                              <br>
                              <br>
                              <br>
                              <br>
                              <br>
                              <br>
                              <blockquote type="cite">Barry,
                                <br>
                                you are correct, I did not use it. I
                                think I now see where the problem is.
                                Correct me if I am wrong, but for the
                                <br>
                                geometric multigrid to work, ksp must be
                                provided with subroutines that compute the
                                matrix and the rhs at each level, through
                                <br>
                                KSPSetComputeOperators and
                                KSPSetComputeRHS.
                                <br>
                                I do not do that, I simply build a rhs
                                vector and a matrix and then I solve the
                                system.
                                <br>
                                If you confirm what I just wrote, I will
                                try to modify my code accordingly and
                                get back to you.
                                <br>
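A rough sketch of the registration this would involve, assuming the Fortran bindings follow the same pattern as the DM-based Fortran tutorial ex22f (ComputeMatrix and ComputeRHS are placeholder names for user subroutines that assemble the operator and the right-hand side on whatever level the KSP passes in):
<br>
<br>
     external ComputeMatrix, ComputeRHS
<br>
     ! ... existing KSP creation and tolerance settings ...
<br>
     call KSPSetDM(ksp,da,ierr)
<br>
     call KSPSetComputeRHS(ksp,ComputeRHS,PETSC_NULL_OBJECT,ierr)
<br>
     call KSPSetComputeOperators(ksp,ComputeMatrix,PETSC_NULL_OBJECT,ierr)
<br>
     call KSPSetFromOptions(ksp,ierr)
<br>
<br>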
                                Thank you,
                                <br>
                                Michele
                                <br>
                                  On 08/01/2013 11:48 AM, Barry Smith
                                wrote:
                                <br>
                                <blockquote type="cite">   Do you use
                                  KSPSetDM(ksp,da);  ?  See
                                  src/ksp/ksp/examples/tutorials/ex19.c
                                  <br>
                                  <br>
                                      Barry
                                  <br>
                                  <br>
On Aug 1, 2013, at 1:35 PM, Michele Rosso <a class="moz-txt-link-rfc2396E" href="mailto:mrosso@uci.edu"><mrosso@uci.edu></a> wrote:
<br>
                                  <br>
                                  <blockquote type="cite">Barry,
                                    <br>
                                    <br>
I am using a finite-difference uniform Cartesian grid with DMDA, and so far it has not given me any problems.
<br>
I am using a KSP solver (not SNES). In a previous thread, I was told an odd number of grid points is needed for geometric multigrid; is that correct?
<br>
                                    I tried to run my case with
                                    <br>
                                    <br>
                                    <br>
                                    -pc_type mg -da_refine 4
                                    <br>
                                    <br>
                                    <br>
                                    <br>
                                    but it does not seem to use the
                                    -da_refine option:
                                    <br>
                                    <br>
                                    mpiexec   -np 4 ./test  -pc_type mg
                                    -da_refine 4  -ksp_view
                                    -options_left
                                    <br>
                                    <br>
                                    <br>
                                    KSP Object: 4 MPI processes
                                    <br>
                                      type: cg
                                    <br>
                                      maximum iterations=10000
                                    <br>
                                      tolerances:  relative=1e-08,
                                    absolute=1e-50, divergence=10000
                                    <br>
                                      left preconditioning
                                    <br>
                                      using nonzero initial guess
                                    <br>
                                      using UNPRECONDITIONED norm type
                                    for convergence test
                                    <br>
                                    PC Object: 4 MPI processes
                                    <br>
                                      type: mg
                                    <br>
                                        MG: type is MULTIPLICATIVE,
                                    levels=1 cycles=v
                                    <br>
                                          Cycles per PCApply=1
                                    <br>
                                          Not using Galerkin computed
                                    coarse grid matrices
                                    <br>
                                      Coarse grid solver -- level
                                    -------------------------------
                                    <br>
                                        KSP Object:   
                                    (mg_levels_0_)     4 MPI processes
                                    <br>
                                          type: chebyshev
                                    <br>
                                            Chebyshev: eigenvalue
                                    estimates:  min = 0.134543, max =
                                    1.47998
                                    <br>
                                            Chebyshev: estimated using: 
                                    [0 0.1; 0 1.1]
                                    <br>
                                            KSP Object:       
                                    (mg_levels_0_est_)         4 MPI
                                    processes
                                    <br>
                                              type: gmres
                                    <br>
                                                GMRES: restart=30, using
                                    Classical (unmodified) Gram-Schmidt
                                    Orthogonalization with no iterative
                                    refinement
                                    <br>
                                                GMRES: happy breakdown
                                    tolerance 1e-30
                                    <br>
                                              maximum iterations=10,
                                    initial guess is zero
                                    <br>
                                              tolerances: 
                                    relative=1e-05, absolute=1e-50,
                                    divergence=10000
                                    <br>
                                              left preconditioning
                                    <br>
                                              using NONE norm type for
                                    convergence test
                                    <br>
                                            PC Object:       
                                    (mg_levels_0_)         4 MPI
                                    processes
                                    <br>
                                              type: sor
                                    <br>
                                                SOR: type =
                                    local_symmetric, iterations = 1,
                                    local iterations = 1, omega = 1
                                    <br>
                                              linear system matrix =
                                    precond matrix:
                                    <br>
                                              Matrix Object:           4
                                    MPI processes
                                    <br>
                                                type: mpiaij
                                    <br>
                                                rows=262144, cols=262144
                                    <br>
                                                total: nonzeros=1835008,
                                    allocated nonzeros=1835008
                                    <br>
                                                total number of mallocs
                                    used during MatSetValues calls =0
                                    <br>
                                          maximum iterations=1, initial
                                    guess is zero
                                    <br>
                                          tolerances:  relative=1e-05,
                                    absolute=1e-50, divergence=10000
                                    <br>
                                          left preconditioning
                                    <br>
                                          using NONE norm type for
                                    convergence test
                                    <br>
                                        PC Object:    (mg_levels_0_)    
                                    4 MPI processes
                                    <br>
                                          type: sor
                                    <br>
                                            SOR: type = local_symmetric,
                                    iterations = 1, local iterations =
                                    1, omega = 1
                                    <br>
                                          linear system matrix = precond
                                    matrix:
                                    <br>
                                          Matrix Object:       4 MPI
                                    processes
                                    <br>
                                            type: mpiaij
                                    <br>
                                            rows=262144, cols=262144
                                    <br>
                                            total: nonzeros=1835008,
                                    allocated nonzeros=1835008
                                    <br>
                                            total number of mallocs used
                                    during MatSetValues calls =0
                                    <br>
                                      linear system matrix = precond
                                    matrix:
                                    <br>
                                      Matrix Object:   4 MPI processes
                                    <br>
                                        type: mpiaij
                                    <br>
                                        rows=262144, cols=262144
                                    <br>
                                        total: nonzeros=1835008,
                                    allocated nonzeros=1835008
                                    <br>
                                        total number of mallocs used
                                    during MatSetValues calls =0
                                    <br>
                                    Solution       =    1.53600013    
                                    sec
                                    <br>
                                    #PETSc Option Table entries:
                                    <br>
                                    -da_refine 4
                                    <br>
                                    -ksp_view
                                    <br>
                                    -options_left
                                    <br>
                                    -pc_type mg
                                    <br>
                                    #End of PETSc Option Table entries
                                    <br>
                                    There is one unused database option.
                                    It is:
                                    <br>
                                    Option left: name:-da_refine value:
                                    4
                                    <br>
                                    <br>
                                    Michele
                                    <br>
                                    <br>
                                    On 08/01/2013 11:21 AM, Barry Smith
                                    wrote:
                                    <br>
                                    <blockquote type="cite">    What
                                      kind of mesh are you using? Are
                                      you using DMDA? If you are using
                                      DMDA (and have written your code
                                      to use it "correctly") then it
                                      should be trivial to run with
                                      geometric multigrid and geometric
                                      multigrid should be a bit faster.
                                      <br>
                                      <br>
                                          For example on
                                      src/snes/examples/tutorials/ex19.c  
                                      I run with ./ex19 -pc_type mg
                                      -da_refine 4 and it refines the
                                      original DMDA 4 times and uses
                                      geometric multigrid with 5 levels.
                                      <br>
                                      <br>
                                      <br>
                                          Barry
                                      <br>
                                      <br>
                                      <br>
On Aug 1, 2013, at 1:14 PM, Michele Rosso <a class="moz-txt-link-rfc2396E" href="mailto:mrosso@uci.edu"><mrosso@uci.edu></a> wrote:
<br>
                                      <br>
                                      <blockquote type="cite">Hi,
                                        <br>
                                        <br>
I am successfully using PETSc (v3.4.2) to solve a 3D Poisson equation with CG + GAMG, as was suggested in a previous thread.
                                        <br>
                                        So far I am using GAMG with the
                                        default settings, i.e.
                                        <br>
                                        <br>
                                        -pc_type gamg
                                        -pc_gamg_agg_nsmooths 1
                                        <br>
                                        <br>
The speed of the solution is satisfactory, but I would like to know if you have any suggestions to speed it up further, particularly whether there are any parameters worth tuning to achieve an even faster solution, for example the number of levels and so on.
<br>
So far I am using Dirichlet BCs for my test case, but I will soon have periodic conditions: in this case, does GAMG require any particular settings?
                                        <br>
                                        Finally, I did not try geometric
                                        multigrid: do you think it is
                                        worth a shot?
                                        <br>
                                        <br>
                                        Here are my current settings:
                                        <br>
                                        <br>
                                        I run with
                                        <br>
                                        <br>
                                        -pc_type gamg
                                        -pc_gamg_agg_nsmooths 1
                                        -ksp_view -options_left
                                        <br>
                                        <br>
                                        and the output is:
                                        <br>
                                        <br>
                                        KSP Object: 4 MPI processes
                                        <br>
                                           type: cg
                                        <br>
                                           maximum iterations=10000
                                        <br>
                                           tolerances:  relative=1e-08,
                                        absolute=1e-50, divergence=10000
                                        <br>
                                           left preconditioning
                                        <br>
                                           using nonzero initial guess
                                        <br>
                                           using UNPRECONDITIONED norm
                                        type for convergence test
                                        <br>
                                        PC Object: 4 MPI processes
                                        <br>
                                           type: gamg
                                        <br>
                                             MG: type is MULTIPLICATIVE,
                                        levels=3 cycles=v
                                        <br>
                                               Cycles per PCApply=1
                                        <br>
                                               Using Galerkin computed
                                        coarse grid matrices
                                        <br>
                                           Coarse grid solver -- level
                                        -------------------------------
                                        <br>
                                             KSP Object:   
                                        (mg_coarse_)     4 MPI processes
                                        <br>
                                               type: preonly
                                        <br>
                                               maximum iterations=1,
                                        initial guess is zero
                                        <br>
                                               tolerances: 
                                        relative=1e-05, absolute=1e-50,
                                        divergence=10000
                                        <br>
                                               left preconditioning
                                        <br>
                                               using NONE norm type for
                                        convergence test
                                        <br>
                                             PC Object:   
                                        (mg_coarse_)     4 MPI processes
                                        <br>
                                               type: bjacobi
                                        <br>
                                                 block Jacobi: number of
                                        blocks = 4
                                        <br>
                                                 Local solve info for
                                        each block is in the following
                                        KSP and PC objects:
                                        <br>
                                               [0] number of local
                                        blocks = 1, first local block
                                        number = 0
                                        <br>
                                                         [0] local block
                                        number 0
                                        <br>
                                        KSP Object:         
                                        (mg_coarse_sub_)         1 MPI
                                        processes
                                        <br>
                                                   type: preonly
                                        <br>
                                                   maximum iterations=1,
                                        initial guess is zero
                                        <br>
                                                         tolerances: 
                                        relative=1e-05, absolute=1e-50,
                                        divergence=10000
                                        <br>
                                        KSP Object:       
                                        (mg_coarse_sub_)            left
                                        preconditioning
                                        <br>
                                                   using NONE norm type
                                        for convergence test
                                        <br>
                                                   PC Object:       
                                        (mg_coarse_sub_)       1 MPI
                                        processes
                                        <br>
                                                   type: preonly
                                        <br>
                                                  1 MPI processes
                                        <br>
                                                   type: lu
                                        <br>
                                                   maximum iterations=1,
                                        initial guess is zero
                                        <br>
                                                   tolerances: 
                                        relative=1e-05, absolute=1e-50,
                                        divergence=10000
                                        <br>
                                                   LU: out-of-place
                                        factorization
                                        <br>
                                                     left
                                        preconditioning
                                        <br>
                                                   using NONE norm type
                                        for convergence test
                                        <br>
                                                   PC Object:       
                                        (mg_coarse_sub_)         1 MPI
                                        processes
                                        <br>
                                                   type: lu
                                        <br>
                                                   tolerance for zero
                                        pivot 2.22045e-14
                                        <br>
                                                     using diagonal
                                        shift on blocks to prevent zero
                                        pivot
                                        <br>
                                                     matrix ordering: nd
                                        <br>
                                                     LU: out-of-place
                                        factorization
                                        <br>
                                                     tolerance for zero
                                        pivot 2.22045e-14
                                        <br>
                                                     using diagonal
                                        shift on blocks to prevent zero
                                        pivot
                                        <br>
                                                     matrix ordering: nd
                                        <br>
                                                     factor fill ratio
                                        given 5, needed 0
                                        <br>
                                                       Factored matrix
                                        follows:
                                        <br>
                                                     factor fill ratio
                                        given 5, needed 4.13207
                                        <br>
                                                       Factored matrix
                                        follows:
                                        <br>
                                                           Matrix
                                        Object:              Matrix
                                        Object:                 1 MPI
                                        processes
                                        <br>
                                                           type: seqaij
                                        <br>
                                                             rows=395,
                                        cols=395
                                        <br>
                                                             package
                                        used to perform factorization:
                                        petsc
                                        <br>
                                                           total:
                                        nonzeros=132379, allocated
                                        nonzeros=132379
                                        <br>
                                                           total number
                                        of mallocs used during
                                        MatSetValues calls =0
                                        <br>
                                                                 not
                                        using I-node routines
                                        <br>
                                                    1 MPI processes
                                        <br>
                                                           type: seqaij
                                        <br>
                                                   linear system matrix
                                        = precond matrix:
                                        <br>
                                                             rows=0,
                                        cols=0
                                        <br>
                                                             package
                                        used to perform factorization:
                                        petsc
                                        <br>
                                                           total:
                                        nonzeros=1, allocated nonzeros=1
                                        <br>
                                                             total
                                        number of mallocs used during
                                        MatSetValues calls =0
                                        <br>
                                                               not using
                                        I-node routines
                                        <br>
                                                       linear system
                                        matrix = precond matrix:
                                        <br>
                                           Matrix Object:             1
                                        MPI processes
                                        <br>
                                                     type: seqaij
                                        <br>
                                                   Matrix Object:KSP
                                        Object:           1 MPI
                                        processes
                                        <br>
                                                     type: seqaij
                                        <br>
                                                     rows=0, cols=0
                                        <br>
                                                     total: nonzeros=0,
                                        allocated nonzeros=0
                                        <br>
                                                     total number of
                                        mallocs used during MatSetValues
                                        calls =0
                                        <br>
                                                         not using
                                        I-node routines
                                        <br>
                                                   rows=395, cols=395
                                        <br>
                                                     total:
                                        nonzeros=32037, allocated
                                        nonzeros=32037
                                        <br>
                                                     total number of
                                        mallocs used during MatSetValues
                                        calls =0
                                        <br>
                                                       not using I-node
                                        routines
                                        <br>
                                                   - - - - - - - - - - -
                                        - - - - - - -
                                        <br>
                                                   KSP Object:       
                                        (mg_coarse_sub_)         1 MPI
                                        processes
                                        <br>
                                                   type: preonly
                                        <br>
                                                   maximum iterations=1,
                                        initial guess is zero
                                        <br>
                                                   tolerances: 
                                        relative=1e-05, absolute=1e-50,
                                        divergence=10000
                                        <br>
                                                   left preconditioning
                                        <br>
                                                   using NONE norm type
                                        for convergence test
                                        <br>
                                                 PC Object:       
                                        (mg_coarse_sub_)         1 MPI
                                        processes
                                        <br>
                                                   type: lu
                                        <br>
                                                     LU: out-of-place
                                        factorization
                                        <br>
                                                     tolerance for zero
                                        pivot 2.22045e-14
                                        <br>
                                                     using diagonal
                                        shift on blocks to prevent zero
                                        pivot
                                        <br>
                                                     matrix ordering: nd
                                        <br>
                                                     factor fill ratio
                                        given 5, needed 0
                                        <br>
                                                       Factored matrix
                                        follows:
                                        <br>
                                                         Matrix
                                        Object:                 1 MPI
                                        processes
                                        <br>
                                                           type: seqaij
                                        <br>
                                                           rows=0,
                                        cols=0
                                        <br>
                                                           package used
                                        to perform factorization: petsc
                                        <br>
                                                           total:
                                        nonzeros=1, allocated nonzeros=1
                                        <br>
                                                           total number
                                        of mallocs used during
                                        MatSetValues calls =0
                                        <br>
                                                             not using
                                        I-node routines
                                        <br>
                                                   linear system matrix
                                        = precond matrix:
                                        <br>
                                                   Matrix
                                        Object:           1 MPI
                                        processes
                                        <br>
                                                     type: seqaij
                                        <br>
                                                     rows=0, cols=0
                                        <br>
                                                     total: nonzeros=0,
                                        allocated nonzeros=0
                                        <br>
                                                     total number of
                                        mallocs used during MatSetValues
                                        calls =0
                                        <br>
                                                       not using I-node
                                        routines
                                        <br>
                                           (mg_coarse_sub_)         1
                                        MPI processes
                                        <br>
                                                   type: preonly
                                        <br>
                                                   maximum iterations=1,
                                        initial guess is zero
                                        <br>
                                                   tolerances: 
                                        relative=1e-05, absolute=1e-50,
                                        divergence=10000
                                        <br>
                                                   left preconditioning
                                        <br>
                                                   using NONE norm type
                                        for convergence test
                                        <br>
                                                 PC Object:       
                                        (mg_coarse_sub_)         1 MPI
                                        processes
                                        <br>
                                                   type: lu
                                        <br>
                                                     LU: out-of-place
                                        factorization
                                        <br>
                                                     tolerance for zero
                                        pivot 2.22045e-14
                                        <br>
                                                     using diagonal
                                        shift on blocks to prevent zero
                                        pivot
                                        <br>
                                                     matrix ordering: nd
                                        <br>
                                                     factor fill ratio
                                        given 5, needed 0
                                        <br>
                                                       Factored matrix
                                        follows:
                                        <br>
                                                         Matrix
                                        Object:                 1 MPI
                                        processes
                                        <br>
                                                           type: seqaij
                                        <br>
                                                           rows=0,
                                        cols=0
                                        <br>
                                                           package used
                                        to perform factorization: petsc
                                        <br>
                                                           total:
                                        nonzeros=1, allocated nonzeros=1
                                        <br>
                                                           total number
                                        of mallocs used during
                                        MatSetValues calls =0
                                        <br>
                                                             not using
                                        I-node routines
                                        <br>
                                                   linear system matrix
                                        = precond matrix:
                                        <br>
                                                   Matrix
                                        Object:           1 MPI
                                        processes
                                        <br>
                                                     type: seqaij
                                        <br>
                                                     rows=0, cols=0
                                        <br>
                                                     total: nonzeros=0,
                                        allocated nonzeros=0
                                        <br>
                                                     total number of
                                        mallocs used during MatSetValues
                                        calls =0
                                        <br>
                                                       not using I-node
                                        routines
                                        <br>
                                               [1] number of local
                                        blocks = 1, first local block
                                        number = 1
                                        <br>
                                                 [1] local block number
                                        0
                                        <br>
                                                 - - - - - - - - - - - -
                                        - - - - - -
                                        <br>
                                               [2] number of local
                                        blocks = 1, first local block
                                        number = 2
                                        <br>
                                                 [2] local block number
                                        0
                                        <br>
                                                 - - - - - - - - - - - -
                                        - - - - - -
                                        <br>
                                               [3] number of local
                                        blocks = 1, first local block
                                        number = 3
                                        <br>
                                                 [3] local block number
                                        0
                                        <br>
                                                 - - - - - - - - - - - -
                                        - - - - - -
                                        <br>
                                               linear system matrix =
                                        precond matrix:
                                        <br>
                                               Matrix Object:       4
                                        MPI processes
                                        <br>
                                                 type: mpiaij
                                        <br>
                                                 rows=395, cols=395
                                        <br>
                                                 total: nonzeros=32037,
                                        allocated nonzeros=32037
                                        <br>
                                                 total number of mallocs
                                        used during MatSetValues calls
                                        =0
                                        <br>
                                                   not using I-node (on
                                        process 0) routines
                                        <br>
                                           Down solver (pre-smoother) on
                                        level 1
                                        -------------------------------
                                        <br>
                                             KSP Object:   
                                        (mg_levels_1_)     4 MPI
                                        processes
                                        <br>
                                               type: chebyshev
                                        <br>
                                                 Chebyshev: eigenvalue
                                        estimates:  min = 0.0636225, max
                                        = 1.33607
                                        <br>
                                               maximum iterations=2
                                        <br>
                                               tolerances: 
                                        relative=1e-05, absolute=1e-50,
                                        divergence=10000
                                        <br>
                                               left preconditioning
                                        <br>
                                               using nonzero initial
                                        guess
                                        <br>
                                               using NONE norm type for
                                        convergence test
                                        <br>
                                             PC Object:   
                                        (mg_levels_1_)     4 MPI
                                        processes
                                        <br>
                                               type: jacobi
                                        <br>
                                               linear system matrix =
                                        precond matrix:
                                        <br>
                                               Matrix Object:       4
                                        MPI processes
                                        <br>
                                                 type: mpiaij
                                        <br>
                                                 rows=23918, cols=23918
                                        <br>
                                                 total: nonzeros=818732,
                                        allocated nonzeros=818732
                                        <br>
                                                 total number of mallocs
                                        used during MatSetValues calls
                                        =0
                                        <br>
                                                   not using I-node (on
                                        process 0) routines
                                        <br>
                                           Up solver (post-smoother)
                                        same as down solver
                                        (pre-smoother)
                                        <br>
                                           Down solver (pre-smoother) on
                                        level 2
                                        -------------------------------
                                        <br>
                                             KSP Object:   
                                        (mg_levels_2_)     4 MPI
                                        processes
                                        <br>
                                               type: chebyshev
                                        <br>
                                                 Chebyshev: eigenvalue
                                        estimates:  min = 0.0971369, max
                                        = 2.03987
                                        <br>
                                               maximum iterations=2
                                        <br>
                                               tolerances: 
                                        relative=1e-05, absolute=1e-50,
                                        divergence=10000
                                        <br>
                                               left preconditioning
                                        <br>
                                               using nonzero initial
                                        guess
                                        <br>
                                               using NONE norm type for
                                        convergence test
                                        <br>
                                             PC Object:   
                                        (mg_levels_2_)     4 MPI
                                        processes
                                        <br>
                                               type: jacobi
                                        <br>
                                               linear system matrix =
                                        precond matrix:
                                        <br>
                                               Matrix Object:       4
                                        MPI processes
                                        <br>
                                                 type: mpiaij
                                        <br>
                                                 rows=262144,
                                        cols=262144
                                        <br>
                                                 total:
                                        nonzeros=1835008, allocated
                                        nonzeros=1835008
                                        <br>
                                                 total number of mallocs
                                        used during MatSetValues calls
                                        =0
                                        <br>
                                           Up solver (post-smoother)
                                        same as down solver
                                        (pre-smoother)
                                        <br>
                                           linear system matrix =
                                        precond matrix:
                                        <br>
                                           Matrix Object:   4 MPI
                                        processes
                                        <br>
                                             type: mpiaij
                                        <br>
                                             rows=262144, cols=262144
                                        <br>
                                             total: nonzeros=1835008,
                                        allocated nonzeros=1835008
                                        <br>
                                             total number of mallocs
                                        used during MatSetValues calls
                                        =0
                                        <br>
                                        #PETSc Option Table entries:
                                        <br>
                                        -ksp_view
                                        <br>
                                        -options_left
                                        <br>
                                        -pc_gamg_agg_nsmooths 1
                                        <br>
                                        -pc_type gamg
                                        <br>
                                        #End of PETSc Option Table
                                        entries
                                        <br>
                                        There are no unused options.
                                        <br>
                                        <br>
                                        <br>
                                        Thank you,
                                        <br>
                                        Michele
                                        <br>
                                        <br>
                                        <br>
                                        <br>
                                        <br>
                                        <br>
                                      </blockquote>
                                    </blockquote>
                                  </blockquote>
                                </blockquote>
                              </blockquote>
                            </blockquote>
                          </blockquote>
                        </blockquote>
                      </blockquote>
                    </blockquote>
                  </blockquote>
                </blockquote>
<test_poisson_solver.tar.gz><2decomp_fft-1.5.847-modified.tar.gz>
                <br>
                <br>
              </blockquote>
            </blockquote>
          </blockquote>
          <br>
        </blockquote>
        <br>
      </blockquote>
      <br>
    </blockquote>
    <br>
  </body>
</html>