[petsc-users] understanding MatNullSpaceTest

Matthew Knepley knepley at gmail.com
Tue Jul 3 12:51:52 CDT 2012


On Tue, Jul 3, 2012 at 6:18 AM, Klaij, Christiaan <C.Klaij at marin.nl> wrote:

> I'm trying to understand the use of null spaces. Whatever I do, it
> always seems to pass the null space test. Could you please tell me
> what's wrong with this example (C++, petsc-3.3-p1):
>

Yes, there was a bug in 3.3. I pushed a fix, which should go out in the
next patch release, and it's already in petsc-dev.

  Thanks,

     Matt
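
For reference, a minimal sketch of the check MatNullSpaceTest() is meant to
perform for each supplied vector (a sketch of the idea only, not the actual
PETSc source; the tolerance below is illustrative):

  /* Is x in the null space of A?  It is iff ||A*x|| is (numerically) zero. */
  Vec       Ax;
  PetscReal nrm;
  ierr = VecDuplicate(x,&Ax); CHKERRQ(ierr);
  ierr = MatMult(A,x,Ax); CHKERRQ(ierr);          /* Ax = A*x      */
  ierr = VecNorm(Ax,NORM_2,&nrm); CHKERRQ(ierr);  /* nrm = ||A*x|| */
  isNull = (nrm < 1.e-8) ? PETSC_TRUE : PETSC_FALSE;
  ierr = VecDestroy(&Ax); CHKERRQ(ierr);

With the identity matrix and the random x from the example below, ||Ax|| =
||x|| > 0, so with the fix the test reports PETSC_FALSE. The usual next step
is to attach the null space with KSPSetNullSpace(ksp,nullsp) so the Krylov
solver projects it out of the solution.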


> $ cat nullsp.cc
> // test null space check
>
> #include <petscksp.h>
>
> int main(int argc, char **argv) {
>
>   PetscErrorCode ierr;
>   PetscInt row,start,end;
>   PetscScalar val[1];
>   PetscReal norm;
>   Mat A;
>   Vec x,y;
>   MatNullSpace nullsp;
>   PetscBool isNull;
>
>   ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL); CHKERRQ(ierr);
>
>   // diagonal matrix
>   ierr = MatCreate(PETSC_COMM_WORLD,&A); CHKERRQ(ierr);
>   ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,24,24); CHKERRQ(ierr);
>   ierr = MatSetType(A,MATMPIAIJ); CHKERRQ(ierr);
>   ierr = MatMPIAIJSetPreallocation(A,1,PETSC_NULL,1,PETSC_NULL); CHKERRQ(ierr);
>   ierr = MatGetOwnershipRange(A,&start,&end); CHKERRQ(ierr);
>   for (row=start; row<end; row++) {
>     val[0] = 1.0;
>     ierr = MatSetValues(A,1,&row,1,&row,val,INSERT_VALUES); CHKERRQ(ierr);
>   }
>   ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
>   ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
>
>   // random vector
>   ierr = VecCreate(PETSC_COMM_WORLD,&x); CHKERRQ(ierr);
>   ierr = VecSetSizes(x,PETSC_DECIDE,24); CHKERRQ(ierr);
>   ierr = VecSetType(x,VECMPI); CHKERRQ(ierr);
>   ierr = VecSetRandom(x,PETSC_NULL); CHKERRQ(ierr);
>
>   // null space
>   ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&x,&nullsp); CHKERRQ(ierr);
>   ierr = MatNullSpaceTest(nullsp,A,&isNull); CHKERRQ(ierr);
>   if (isNull==PETSC_TRUE) {
>     ierr = PetscPrintf(PETSC_COMM_WORLD,"null space check passed\n"); CHKERRQ(ierr);
>   }
>   else {
>     ierr = PetscPrintf(PETSC_COMM_WORLD,"null space check failed\n"); CHKERRQ(ierr);
>   }
>
>   // check null space
>   ierr = VecDuplicate(x,&y); CHKERRQ(ierr);
>   ierr = MatMult(A,x,y); CHKERRQ(ierr);
>   ierr = VecNorm(y,NORM_2,&norm); CHKERRQ(ierr);
>   ierr = PetscPrintf(PETSC_COMM_WORLD,"|Ax| = %G\n",norm); CHKERRQ(ierr);
>
>   ierr = PetscFinalize(); CHKERRQ(ierr);
>
>   return 0;
> }
>
> $ mpiexec -n 2 ./nullsp -log_summary
> null space check passed
> |Ax| = 2.51613
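
[Worked check: since A is the identity, |Ax| = |x|. Assuming the default
uniform [0,1) PetscRandom, each of the 24 entries has E x_i^2 = 1/3, so
E |x|^2 = 24/3 = 8 and |x| should be roughly sqrt(8) ~ 2.8. The printed
2.51613 is consistent with that and clearly nonzero, so x is not a null
vector of A, and the "passed" verdict above is exactly the bug.]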
>
> ************************************************************************************************************************
> ***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
> ************************************************************************************************************************
>
> ---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
>
> ./nullsp on a linux_64b named lin0133 with 2 processors, by cklaij Tue Jul  3 14:03:34 2012
> Using Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012
>
>                          Max       Max/Min        Avg      Total
> Time (sec):           4.794e-03      1.00000   4.794e-03
> Objects:              1.400e+01      1.00000   1.400e+01
> Flops:                7.200e+01      1.00000   7.200e+01  1.440e+02
> Flops/sec:            1.502e+04      1.00000   1.502e+04  3.004e+04
> Memory:               5.936e+04      1.00000              1.187e+05
> MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
> MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
> MPI Reductions:       4.100e+01      1.00000
>
> Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
>                             e.g., VecAXPY() for real vectors of length N --> 2N flops
>                             and VecAXPY() for complex vectors of length N --> 8N flops
>
> Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
>                         Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
>  0:      Main Stage: 4.7846e-03  99.8%  1.4400e+02 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  4.000e+01  97.6%
>
>
> ------------------------------------------------------------------------------------------------------------------------
> See the 'Profiling' chapter of the users' manual for details on interpreting output.
> Phase summary info:
>    Count: number of times phase was executed
>    Time and Flops: Max - maximum over all processors
>                    Ratio - ratio of maximum to minimum over all processors
>    Mess: number of messages sent
>    Avg. len: average message length
>    Reduct: number of global reductions
>    Global: entire computation
>    Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
>       %T - percent time in this phase         %f - percent flops in this phase
>       %M - percent messages in this phase     %L - percent message lengths in this phase
>       %R - percent reductions in this phase
>    Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
>
> ------------------------------------------------------------------------------------------------------------------------
>
>
>       ##########################################################
>       #                                                        #
>       #                          WARNING!!!                    #
>       #                                                        #
>       #   This code was compiled with a debugging option,      #
>       #   To get timing results run ./configure                #
>       #   using --with-debugging=no, the performance will      #
>       #   be generally two or three times faster.              #
>       #                                                        #
>       ##########################################################
>
>
> Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
>                    Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
> ------------------------------------------------------------------------------------------------------------------------
>
> --- Event Stage 0: Main Stage
>
> MatMult                1 1.0 4.0054e-05 1.1 1.20e+01 1.0 0.0e+00 0.0e+00 0.0e+00  1 17  0  0  0   1 17  0  0  0     1
> MatAssemblyBegin       1 1.0 3.7909e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  1  0  0  0  5   1  0  0  0  5     0
> MatAssemblyEnd         1 1.0 4.3201e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.9e+01  9  0  0  0 46   9  0  0  0 48     0
> VecNorm                2 1.0 2.9700e-03 1.0 4.80e+01 1.0 0.0e+00 0.0e+00 2.0e+00 62 67  0  0  5  62 67  0  0  5     0
> VecSet                 1 1.0 5.0068e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> VecScatterBegin        2 1.0 7.8678e-06 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> VecScatterEnd          2 1.0 4.0531e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> VecSetRandom           1 1.0 9.0599e-06 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
>
> ------------------------------------------------------------------------------------------------------------------------
>
> Memory usage is given in bytes:
>
> Object Type          Creations   Destructions     Memory  Descendants' Mem.
> Reports information only for process 0.
>
> --- Event Stage 0: Main Stage
>
>               Matrix     3              0            0     0
>    Matrix Null Space     1              0            0     0
>               Vector     5              1         1504     0
>       Vector Scatter     1              0            0     0
>            Index Set     2              2         1496     0
>          PetscRandom     1              1          616     0
>               Viewer     1              0            0     0
>
> ========================================================================================================================
> Average time to get PetscTime(): 9.53674e-08
> Average time for MPI_Barrier(): 4.29153e-07
> Average time for zero size MPI_Send(): 8.58307e-06
> #PETSc Option Table entries:
> -log_summary
> #End of PETSc Option Table entries
> Compiled without FORTRAN kernels
> Compiled with full precision matrices (default)
> sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
> Configure run at: Wed Jun 20 12:08:20 2012
> Configure options: --with-mpi-dir=/opt/refresco/libraries_cklaij/openmpi-1.4.5 --with-clanguage=c++ --with-x=1 --with-debugging=1 --with-hypre-include=/opt/refresco/libraries_cklaij/hypre-2.7.0b/include --with-hypre-lib=/opt/refresco/libraries_cklaij/hypre-2.7.0b/lib/libHYPRE.a --with-ml-include=/opt/refresco/libraries_cklaij/ml-6.2/include --with-ml-lib=/opt/refresco/libraries_cklaij/ml-6.2/lib/libml.a --with-blas-lapack-dir=/opt/intel/mkl
> -----------------------------------------
> Libraries compiled on Wed Jun 20 12:08:20 2012 on lin0133
> Machine characteristics: Linux-2.6.32-41-generic-x86_64-with-Ubuntu-10.04-lucid
> Using PETSc directory: /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1
> Using PETSc arch: linux_64bit_debug
> -----------------------------------------
>
> Using C compiler: /opt/refresco/libraries_cklaij/openmpi-1.4.5/bin/mpicxx  -wd1572 -g     ${COPTFLAGS} ${CFLAGS}
> Using Fortran compiler: /opt/refresco/libraries_cklaij/openmpi-1.4.5/bin/mpif90  -g   ${FOPTFLAGS} ${FFLAGS}
> -----------------------------------------
>
> Using include paths: -I/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/include -I/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/include -I/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/include -I/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/include -I/opt/refresco/libraries_cklaij/hypre-2.7.0b/include -I/opt/refresco/libraries_cklaij/ml-6.2/include -I/opt/refresco/libraries_cklaij/openmpi-1.4.5/include
> -----------------------------------------
>
> Using C linker: /opt/refresco/libraries_cklaij/openmpi-1.4.5/bin/mpicxx
> Using Fortran linker: /opt/refresco/libraries_cklaij/openmpi-1.4.5/bin/mpif90
> Using libraries: -Wl,-rpath,/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/lib -L/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/opt/refresco/libraries_cklaij/hypre-2.7.0b/lib -L/opt/refresco/libraries_cklaij/hypre-2.7.0b/lib -lHYPRE -Wl,-rpath,/opt/refresco/libraries_cklaij/ml-6.2/lib -L/opt/refresco/libraries_cklaij/ml-6.2/lib -lml -Wl,-rpath,/opt/intel/mkl -L/opt/intel/mkl -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -L/opt/refresco/libraries_cklaij/openmpi-1.4.5/lib -L/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 -L/usr/lib/gcc/x86_64-linux-gnu/4.4.3 -L/usr/lib/x86_64-linux-gnu -lmpi_f90 -lmpi_f77 -lifport -lifcore -lm -lm -lmpi_cxx -ldl -lmpi -lopen-rte -lopen-pal -lnsl -lutil -limf -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s -ldl
> -----------------------------------------
>
>
> dr. ir. Christiaan Klaij
> CFD Researcher
> Research & Development
> E mailto:C.Klaij at marin.nl
> T +31 317 49 33 44
>
> MARIN
> 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
> T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener