[petsc-users] Increasing norm with finer mesh
Weizhuo Wang
weizhuo2 at illinois.edu
Tue Oct 2 13:24:37 CDT 2018
The example code and makefile are attached below. The whole thing started
when I tried to build a Helmholtz solver: the mean error (calculated as
sum( |numerical_sol - analytical_sol| / analytical_sol )) increases as I
use finer and finer grids. I then looked at example 12 (the Laplacian
solver), which is similar to what I did, to see if I had missed something.
That example uses the 2-norm. I have made some minor modifications (3
places) to the code; you can search for 'Modified' in the code to see them.
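In case it clarifies what I mean by mean error, here is a rough sketch of
how I compute it with PETSc calls (just an illustration, assuming a
hypothetical vector "ua" holding the analytical solution with no zero
entries; "x" is the numerical solution):

  Vec         e;
  PetscScalar s;
  PetscInt    N;
  ierr = VecDuplicate(x,&e);CHKERRQ(ierr);
  ierr = VecWAXPY(e,-1.0,ua,x);CHKERRQ(ierr);      /* e = x - ua (assumes ua exists) */
  ierr = VecPointwiseDivide(e,e,ua);CHKERRQ(ierr); /* e_i = e_i / ua_i               */
  ierr = VecAbs(e);CHKERRQ(ierr);                  /* e_i = |e_i|                    */
  ierr = VecSum(e,&s);CHKERRQ(ierr);               /* sum of relative errors         */
  ierr = VecGetSize(e,&N);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"Mean error %g\n",(double)(PetscRealPart(s)/N));CHKERRQ(ierr);
  ierr = VecDestroy(&e);CHKERRQ(ierr);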
If this helps: I configured PETSc to use real scalars and double
precision, and I renamed the example code from ex12.c to ex12c.c.
Thanks for all your replies!
Weizhuo
Smith, Barry F. <bsmith at mcs.anl.gov>
> Please send your version of the example that computes the mean norm of
> the grid; I suspect we are talking apples and oranges
>
> Barry
>
>
>
> > On Oct 1, 2018, at 7:51 PM, Weizhuo Wang <weizhuo2 at illinois.edu> wrote:
> >
> > I also tried dividing the norm by m*n, which is the number of grid
> > points, but the trend of the norm still increases.
> >
> > Thanks!
> >
> > Weizhuo
> >
> > Matthew Knepley <knepley at gmail.com>
> > On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang <weizhuo2 at illinois.edu> wrote:
> > Hi!
> >
> > I've recently been trying out the example code provided with the KSP
> > solver (ex12.c). I noticed that the mean norm of the grid increases as
> > I use finer meshes. For example, the mean norm is 5.72e-8 at m=10,
> > n=10, but at m=100, n=100 it increases to 9.55e-6. This seems
> > counterintuitive, since the error should usually decrease on a finer
> > grid. Am I doing this wrong?
> >
> > The norm is misleading in that it is the l_2 norm, meaning just the
> > sqrt of the sum of the squares of the vector entries. It should be
> > scaled by the volume element to approximate a scale-independent norm
> > (like the L_2 norm).
> >
> > Thanks,
> >
> > Matt
> >
> > Thanks!
> > --
> > Wang Weizhuo
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which
> > their experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
> >
> >
> > --
> > Wang Weizhuo
>
>
--
Wang Weizhuo
-------------- next part --------------
static char help[] = "Solves a linear system in parallel with KSP.\n\
Input parameters include:\n\
-m <mesh_x> : number of mesh points in x-direction\n\
-n <mesh_y> : number of mesh points in y-direction\n\n";
/*T
Concepts: KSP^solving a system of linear equations
Concepts: KSP^Laplacian, 2d
Concepts: PC^registering preconditioners
Processors: n
T*/
/*
Demonstrates registering a new preconditioner (PC) type.
To register a PC type, whether its code is linked into the
executable or resides in a dynamic library, use PCRegister().
Also provide the prototype for your PCCreate_XXX() function. In
this example we use the PETSc implementation of the Jacobi method,
PCCreate_Jacobi(), just as an example.
See the file src/ksp/pc/impls/jacobi/jacobi.c for details on how to
write a new PC component.
See the manual page PCRegister() for details on how to register a method.
*/
/*
Include "petscksp.h" so that we can use KSP solvers. Note that this file
automatically includes:
petscsys.h    - base PETSc routines
petscvec.h    - vectors
petscmat.h    - matrices
petscis.h     - index sets
petscksp.h    - Krylov subspace methods
petscviewer.h - viewers
petscpc.h     - preconditioners
*/
#include <petscksp.h>
PETSC_EXTERN PetscErrorCode PCCreate_Jacobi(PC);
int main(int argc,char **args)
{
Vec x,b,u; /* approx solution, RHS, exact solution */
Mat A; /* linear system matrix */
KSP ksp; /* linear solver context */
PetscReal norm; /* norm of solution error */
PetscInt i,j,Ii,J,Istart,Iend,m = 8,n = 7,its;
PetscErrorCode ierr;
PetscScalar v,one = 1.0;
PC pc; /* preconditioner context */
ierr = PetscInitialize(&argc,&args,(char*)0,help);if (ierr) return ierr;
ierr = PetscOptionsGetInt(NULL,NULL,"-m",&m,NULL);CHKERRQ(ierr);
ierr = PetscOptionsGetInt(NULL,NULL,"-n",&n,NULL);CHKERRQ(ierr);
/* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Compute the matrix and right-hand-side vector that define
the linear system, Ax = b.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */
/*
Create parallel matrix, specifying only its global dimensions.
When using MatCreate(), the matrix format can be specified at
runtime. Also, the parallel partitioning of the matrix can be
determined by PETSc at runtime.
*/
ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,m*n,m*n);CHKERRQ(ierr);
ierr = MatSetFromOptions(A);CHKERRQ(ierr);
ierr = MatSetUp(A);CHKERRQ(ierr);
/*
Currently, all PETSc parallel matrix formats are partitioned by
contiguous chunks of rows across the processors. Determine which
rows of the matrix are locally owned.
*/
ierr = MatGetOwnershipRange(A,&Istart,&Iend);CHKERRQ(ierr);
/*
Set matrix elements for the 2-D, five-point stencil in parallel.
- Each processor needs to insert only elements that it owns
locally (but any non-local elements will be sent to the
appropriate processor during matrix assembly).
- Always specify global rows and columns of matrix entries.
*/
for (Ii=Istart; Ii<Iend; Ii++) {
v = -1.0; i = Ii/n; j = Ii - i*n;
if (i>0) {J = Ii - n; ierr = MatSetValues(A,1,&Ii,1,&J,&v,INSERT_VALUES);CHKERRQ(ierr);}
if (i<m-1) {J = Ii + n; ierr = MatSetValues(A,1,&Ii,1,&J,&v,INSERT_VALUES);CHKERRQ(ierr);}
if (j>0) {J = Ii - 1; ierr = MatSetValues(A,1,&Ii,1,&J,&v,INSERT_VALUES);CHKERRQ(ierr);}
if (j<n-1) {J = Ii + 1; ierr = MatSetValues(A,1,&Ii,1,&J,&v,INSERT_VALUES);CHKERRQ(ierr);}
v = 4.0; ierr = MatSetValues(A,1,&Ii,1,&Ii,&v,INSERT_VALUES);CHKERRQ(ierr);
}
/*
Assemble matrix, using the 2-step process:
MatAssemblyBegin(), MatAssemblyEnd()
Computations can be done while messages are in transition
by placing code between these two statements.
*/
ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
/*
Create parallel vectors.
- When using VecCreate(), VecSetSizes() and VecSetFromOptions(),
we specify only the vector's global
dimension; the parallel partitioning is determined at runtime.
- When solving a linear system, the vectors and matrices MUST
be partitioned accordingly. PETSc automatically generates
appropriately partitioned matrices and vectors when MatCreate()
and VecCreate() are used with the same communicator.
- Note: We form 1 vector from scratch and then duplicate as needed.
*/
ierr = VecCreate(PETSC_COMM_WORLD,&u);CHKERRQ(ierr);
ierr = VecSetSizes(u,PETSC_DECIDE,m*n);CHKERRQ(ierr);
ierr = VecSetFromOptions(u);CHKERRQ(ierr);
ierr = VecDuplicate(u,&b);CHKERRQ(ierr);
ierr = VecDuplicate(b,&x);CHKERRQ(ierr);
/*
Set exact solution; then compute right-hand-side vector.
Use an exact solution of a vector with all elements of 1.0;
*/
ierr = VecSet(u,one);CHKERRQ(ierr);
ierr = MatMult(A,u,b);CHKERRQ(ierr);
/* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Create the linear solver and set various options
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */
/*
Create linear solver context
*/
ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
/*
Set operators. Here the matrix that defines the linear system
also serves as the preconditioning matrix.
*/
ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
// Modified: Make sure it uses GMRES
ierr = KSPSetType(ksp,KSPGMRES);CHKERRQ(ierr);
// Modified: Make sure it uses the desired tolerances. Note that
// KSPSetOptionsPrefix() only renames the option keys; KSPSetTolerances()
// is the call that actually sets them (equivalently, pass
// -ksp_rtol 1e-9 -ksp_atol 1e-12 on the command line).
ierr = KSPSetTolerances(ksp,1e-9,1e-12,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);
/*
First register a new PC type with the command PCRegister()
*/
ierr = PCRegister("ourjacobi",PCCreate_Jacobi);CHKERRQ(ierr);
/*
Set the PC type to be the new method
*/
ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
ierr = PCSetType(pc,"ourjacobi");CHKERRQ(ierr);
/*
Set runtime options, e.g.,
-ksp_type <type> -pc_type <type> -ksp_monitor -ksp_rtol <rtol>
These options will override those specified above as long as
KSPSetFromOptions() is called _after_ any other customization
routines.
*/
ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
/* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Solve the linear system
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */
ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
/* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Check solution and clean up
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */
/*
Check the error
*/
ierr = VecAXPY(x,-1.0,u);CHKERRQ(ierr);
ierr = VecNorm(x,NORM_2,&norm);CHKERRQ(ierr);
ierr = KSPGetIterationNumber(ksp,&its);CHKERRQ(ierr);
/*
Print convergence information. PetscPrintf() produces a single
print statement from all processes that share a communicator.
*/
// Modified: replaced (double)norm with (double)norm/(m*n)
ierr = PetscPrintf(PETSC_COMM_WORLD,"Mean norm of error %g iterations %D\n",(double)norm/(m*n),its);CHKERRQ(ierr);
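/*
   A sketch of the scaling suggested in the thread: the raw l_2 norm
   grows with the number of unknowns even when the pointwise error
   shrinks, so multiply by the square root of the cell volume to
   approximate the mesh-independent L_2 norm. This assumes a unit
   square domain, so hx = 1/(m+1) and hy = 1/(n+1); adjust for the
   actual domain size.
*/
{
  PetscReal hx = 1.0/(PetscReal)(m+1),hy = 1.0/(PetscReal)(n+1);
  PetscReal normL2 = norm*PetscSqrtReal(hx*hy);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"Approximate L_2 norm of error %g\n",(double)normL2);CHKERRQ(ierr);
}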
/*
Free work space. All PETSc objects should be destroyed when they
are no longer needed.
*/
ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
ierr = VecDestroy(&u);CHKERRQ(ierr); ierr = VecDestroy(&x);CHKERRQ(ierr);
ierr = VecDestroy(&b);CHKERRQ(ierr); ierr = MatDestroy(&A);CHKERRQ(ierr);
/*
Always call PetscFinalize() before exiting a program. This routine
- finalizes the PETSc libraries as well as MPI
- provides summary and diagnostic information if certain runtime
options are chosen (e.g., -log_view).
*/
ierr = PetscFinalize();
return ierr;
}
/*TEST
test:
args: -ksp_gmres_cgs_refinement_type refine_always
TEST*/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: makefile
Type: application/octet-stream
Size: 959 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20181002/049c9268/attachment.obj>