From mirzadeh at gmail.com Sat Sep 1 00:21:40 2012 From: mirzadeh at gmail.com (Mohammad Mirzadeh) Date: Fri, 31 Aug 2012 22:21:40 -0700 Subject: [petsc-users] Petsc crashes with intel compiler In-Reply-To: <7EA29707-596C-4F49-AB71-3E477AEF7223@mcs.anl.gov> References: <7EA29707-596C-4F49-AB71-3E477AEF7223@mcs.anl.gov> Message-ID: Tried OpenMPI and still have the same problem! Barry, unfortunately there are no other intel compilers and pgi ... well, I never really figured out how to get that thing working! Also, I tried one of the petsc examples and that one seems to work with petsc builds using intel ... I'm really baffled here. On the one hand petsc examples work; on the other hand my code also works if I use gcc and is valgrind clean. It could be a bug in the intel compiler that does not show itself in the petsc examples, or it could be a bug in my code that does not show itself under gcc; who knows! For the moment I'm just gonna use gcc hoping my simulation is not gonna crash next month! I hate memory issues ... On Fri, Aug 31, 2012 at 7:27 PM, Barry Smith wrote: > > Try a different version of the Intel compiler; there is a good chance it is > an intel compiler bug. > > Barry > > On Aug 31, 2012, at 6:53 PM, Mohammad Mirzadeh wrote: > > > I'll try it right away > > > > On Fri, Aug 31, 2012 at 4:51 PM, Jed Brown wrote: > > Can you remove pthreadclasses from your configure? If the problem is > still there, it looks a bit like a misconfigured MPI. But get rid of the > pthreadclasses first. > > > > > > On Fri, Aug 31, 2012 at 6:38 PM, Mohammad Mirzadeh > wrote: > > hi guys, > > > > After doing many, many tests across different machines (a Linux box and a > Mac) and clusters (Gordon and Lonestar) I'm able to reproduce a test in > which my code crashes if I use a petsc version that is compiled with the intel > compiler. This does not happen if I use gcc. When I use the BiCGSTAB solver > along with hypre's BoomerAMG, I get the following error message: > > > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > > [0]PETSC ERROR: Error in external library! > > [0]PETSC ERROR: Error in HYPRE solver, error code 1! > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 > 15:42:00 CDT 2012 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages.
> > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: ./2D_cylinders on a arch-linu named > c341-116.ls4.tacc.utexas.edu by mmirzade Fri Aug 31 18:13:15 2012 > > [0]PETSC ERROR: Libraries linked from > /work/02032/mmirzade/soft/petsc-3.3-p2/arch-linux-intel-debug/lib > > [0]PETSC ERROR: Configure run at Fri Aug 31 17:22:42 2012 > > [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-intel-debug > --with-clanguage=cxx --with-pthreadclasses=1 > --with-mpi-dir=/opt/apps/intel11_1/mvapich2/1.6 > --with-blas-lapack-dir=/opt/apps/intel/11.1/mkl --download-hypre=1 > --download-hdf5=1 --download-metis=1 --download-parmetis=1 > --download-superlu_dist=1 > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: PCApply_HYPRE() line 160 in > src/ksp/pc/impls/hypre/hypre.c > > [0]PETSC ERROR: PCApply() line 384 in src/ksp/pc/interface/precon.c > > [0]PETSC ERROR: KSPInitialResidual() line 57 in > src/ksp/ksp/interface/itres.c > > [0]PETSC ERROR: KSPSolve_BCGS() line 65 in src/ksp/ksp/impls/bcgs/bcgs.c > > [0]PETSC ERROR: KSPSolve() line 446 in src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: solve() line 402 in > "unknowndirectory/"/work/02032/mmirzade/codes/CASL/lib/algebra/petscLinearSolver.cpp > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > > [0]PETSC ERROR: Error in external library! > > [0]PETSC ERROR: Error in HYPRE_IJMatrixDestroy()! > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 > 15:42:00 CDT 2012 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. 
> > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: ./2D_cylinders on a arch-linu named > c341-116.ls4.tacc.utexas.edu by mmirzade Fri Aug 31 18:13:15 2012 > > [0]PETSC ERROR: Libraries linked from > /work/02032/mmirzade/soft/petsc-3.3-p2/arch-linux-intel-debug/lib > > [0]PETSC ERROR: Configure run at Fri Aug 31 17:22:42 2012 > > [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-intel-debug > --with-clanguage=cxx --with-pthreadclasses=1 > --with-mpi-dir=/opt/apps/intel11_1/mvapich2/1.6 > --with-blas-lapack-dir=/opt/apps/intel/11.1/mkl --download-hypre=1 > --download-hdf5=1 --download-metis=1 --download-parmetis=1 > --download-superlu_dist=1 > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: PCDestroy_HYPRE() line 178 in > src/ksp/pc/impls/hypre/hypre.c > > [0]PETSC ERROR: PCDestroy() line 119 in src/ksp/pc/interface/precon.c > > [0]PETSC ERROR: KSPDestroy() line 786 in src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: clear() line 438 in > "unknowndirectory/"/work/02032/mmirzade/codes/CASL/lib/algebra/petscLinearSolver.cpp > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > > [0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > > [0]PETSC ERROR: likely location of problem given in stack below > > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > > [0]PETSC ERROR: INSTEAD the line number of the start of the > function > > [0]PETSC ERROR: is given. > > [0]PETSC ERROR: [0] VecDestroy line 574 src/vec/vec/interface/vector.c > > [0]PETSC ERROR: [0] KSPReset_BCGS line 194 src/ksp/ksp/impls/bcgs/bcgs.c > > [0]PETSC ERROR: [0] KSPReset line 729 src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: [0] KSPDestroy line 768 src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: [0] PetscError line 343 src/sys/error/err.c > > [0]PETSC ERROR: [0] HYPRE_IJMatrixDestroy line 0 unknownunknown > > [0]PETSC ERROR: [0] PCDestroy_HYPRE line 177 > src/ksp/pc/impls/hypre/hypre.c > > [0]PETSC ERROR: [0] PCDestroy line 110 src/ksp/pc/interface/precon.c > > [0]PETSC ERROR: [0] KSPDestroy line 768 src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: [0] PetscError line 343 src/sys/error/err.c > > [0]PETSC ERROR: [0] PCApply_HYPRE line 149 src/ksp/pc/impls/hypre/hypre.c > > [0]PETSC ERROR: [0] PCApply line 373 src/ksp/pc/interface/precon.c > > [0]PETSC ERROR: [0] KSPInitialResidual line 45 > src/ksp/ksp/interface/itres.c > > [0]PETSC ERROR: [0] KSPSolve_BCGS line 54 src/ksp/ksp/impls/bcgs/bcgs.c > > [0]PETSC ERROR: [0] KSPSolve line 351 src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > > [0]PETSC ERROR: Signal received! > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 > 15:42:00 CDT 2012 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. 
> > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: ./2D_cylinders on a arch-linu named > c341-116.ls4.tacc.utexas.edu by mmirzade Fri Aug 31 18:13:15 2012 > > [0]PETSC ERROR: Libraries linked from > /work/02032/mmirzade/soft/petsc-3.3-p2/arch-linux-intel-debug/lib > > [0]PETSC ERROR: Configure run at Fri Aug 31 17:22:42 2012 > > [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-intel-debug > --with-clanguage=cxx --with-pthreadclasses=1 > --with-mpi-dir=/opt/apps/intel11_1/mvapich2/1.6 > --with-blas-lapack-dir=/opt/apps/intel/11.1/mkl --download-hypre=1 > --download-hdf5=1 --download-metis=1 --download-parmetis=1 > --download-superlu_dist=1 > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > > > > > Initially I thought this is somehow related to hypre, but even when I > run the code with -pc_type none I get: > > > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > > [0]PETSC ERROR: Floating point exception! > > [0]PETSC ERROR: Infinite or not-a-number generated in norm! > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 > 15:42:00 CDT 2012 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: ./2D_cylinders on a arch-linu named > c341-116.ls4.tacc.utexas.edu by mmirzade Fri Aug 31 18:15:04 2012 > > [0]PETSC ERROR: Libraries linked from > /work/02032/mmirzade/soft/petsc-3.3-p2/arch-linux-intel-debug/lib > > [0]PETSC ERROR: Configure run at Fri Aug 31 17:22:42 2012 > > [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-intel-debug > --with-clanguage=cxx --with-pthreadclasses=1 > --with-mpi-dir=/opt/apps/intel11_1/mvapich2/1.6 > --with-blas-lapack-dir=/opt/apps/intel/11.1/mkl --download-hypre=1 > --download-hdf5=1 --download-metis=1 --download-parmetis=1 > --download-superlu_dist=1 > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: VecNorm() line 169 in src/vec/vec/interface/rvector.c > > [0]PETSC ERROR: KSPSolve_BCGS() line 78 in src/ksp/ksp/impls/bcgs/bcgs.c > > [0]PETSC ERROR: KSPSolve() line 446 in src/ksp/ksp/interface/itfunc.c > > [0]PETSC ERROR: solve() line 402 in > "unknowndirectory/"/work/02032/mmirzade/codes/CASL/lib/algebra/petscLinearSolver.cpp > > N5PETSc9ExceptionE > > > > > > In both cases the solution to the linear system diverges. None of these > issues happen when I build petsc with gcc. > > > > Couple of points: > > > > 1) The gcc version is clean under valgrind (except the usual > PetscInitialize errors ...) but the intel versions generates all sort of > errors in valgrind. 
Please see below (this is during one Poisson solve) > > 2) These are my petsc configure options: > > > > ./configure PETSC_ARCH=arch-linux-gcc-debug --with-clanguage=cxx > --with-pthreadclasses=1 --with-mpi-dir=$MPICH_HOME > --with-blas-lapack-dir=$TACC_MKL_DIR --download-hypre=1 --download-hdf5=1 > --download-metis=1 --download-parmetis=1 --download-superlu_dist=1 > > > > ./configure PETSC_ARCH=arch-linux-intel-debug --with-clanguage=cxx > --with-pthreadclasses=1 --with-mpi-dir=$MPICH_HOME > --with-blas-lapack-dir=$TACC_MKL_DIR --download-hypre=1 --download-hdf5=1 > --download-metis=1 --download-parmetis=1 --download-superlu_dist=1 > > > > > > In both cases $TACC_MKL_DIR points to an mkl library compiled with the intel > compiler. I don't think that has to do with any of this (looking at > the valgrind output) but then again that's the only version of mkl available on > the cluster (Lonestar). > > > > Do you think this is somehow a compiler bug(?) or something more like > undefined behavior in my own code? > > > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74B236A: calloc (mvapich_malloc.c:3756) > > ==8404== by 0x747D79D: MPIU_Handle_obj_alloc_unsafe (handlemem.c:363) > > ==8404== by 0x747D665: MPIU_Handle_obj_alloc (handlemem.c:307) > > ==8404== by 0x744F596: MPIR_Comm_create (commutil.c:101) > > ==8404== by 0x744E84D: MPIR_Comm_copy (commutil.c:969) > > ==8404== by 0x7447F14: PMPI_Comm_dup (comm_dup.c:177) > > ==8404== by 0xB2292A: PCCreate_HYPRE (hypre.c:1018) > > ==8404== by 0xA64120: PCSetType(_p_PC*, char const*) (pcset.c:83) > > ==8404== by 0x46B3FE: PetscLinearSolver::setupSolver(MatStructure, > char const*, char const*) (petscLinearSolver.cpp:331) > > ==8404== by 0x46E7D1: amr2D::fvm::pnpSolver::initialize() > (pnpSolver_2d_FVM.cpp:46) > > ==8404== by 0x40D464: main (main_2D_cylinders.cpp:299) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74B21CD: calloc (mvapich_malloc.c:3825) > > ==8404== by 0x747D79D: MPIU_Handle_obj_alloc_unsafe (handlemem.c:363) > > ==8404== by 0x747D665: MPIU_Handle_obj_alloc (handlemem.c:307) > > ==8404== by 0x744F596: MPIR_Comm_create (commutil.c:101) > > ==8404== by 0x744E84D: MPIR_Comm_copy (commutil.c:969) > > ==8404== by 0x7447F14: PMPI_Comm_dup (comm_dup.c:177) > > ==8404== by 0xB2292A: PCCreate_HYPRE (hypre.c:1018) > > ==8404== by 0xA64120: PCSetType(_p_PC*, char const*) (pcset.c:83) > > ==8404== by 0x46B3FE: PetscLinearSolver::setupSolver(MatStructure, > char const*, char const*) (petscLinearSolver.cpp:331) > > ==8404== by 0x46E7D1: amr2D::fvm::pnpSolver::initialize() > (pnpSolver_2d_FVM.cpp:46) > > ==8404== by 0x40D464: main (main_2D_cylinders.cpp:299) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x8283154: __intel_new_memset (in > /opt/apps/intel/11.1/lib/intel64/libirc.so) > > ==8404== by 0x8261535: _intel_fast_memset.J (in > /opt/apps/intel/11.1/lib/intel64/libirc.so) > > ==8404== by 0x74B221A: calloc (mvapich_malloc.c:3826) > > ==8404== by 0x747D79D: MPIU_Handle_obj_alloc_unsafe (handlemem.c:363) > > ==8404== by 0x747D665: MPIU_Handle_obj_alloc (handlemem.c:307) > > ==8404== by 0x744F596: MPIR_Comm_create (commutil.c:101) > > ==8404== by 0x744E84D: MPIR_Comm_copy (commutil.c:969) > > ==8404== by 0x7447F14: PMPI_Comm_dup (comm_dup.c:177) > > ==8404== by 0xB2292A: PCCreate_HYPRE (hypre.c:1018) > > ==8404== by 0xA64120: PCSetType(_p_PC*, char const*) (pcset.c:83) > > ==8404== by 0x46B3FE:
PetscLinearSolver::setupSolver(MatStructure, > char const*, char const*) (petscLinearSolver.cpp:331) > > ==8404== by 0x46E7D1: amr2D::fvm::pnpSolver::initialize() > (pnpSolver_2d_FVM.cpp:46) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74B236A: calloc (mvapich_malloc.c:3756) > > ==8404== by 0x747D714: MPIU_Handle_obj_alloc_unsafe (handlemem.c:363) > > ==8404== by 0x747D665: MPIU_Handle_obj_alloc (handlemem.c:307) > > ==8404== by 0x744F596: MPIR_Comm_create (commutil.c:101) > > ==8404== by 0x744E84D: MPIR_Comm_copy (commutil.c:969) > > ==8404== by 0x7447F14: PMPI_Comm_dup (comm_dup.c:177) > > ==8404== by 0xB2292A: PCCreate_HYPRE (hypre.c:1018) > > ==8404== by 0xA64120: PCSetType(_p_PC*, char const*) (pcset.c:83) > > ==8404== by 0x46B3FE: PetscLinearSolver::setupSolver(MatStructure, > char const*, char const*) (petscLinearSolver.cpp:331) > > ==8404== by 0x46E7D1: amr2D::fvm::pnpSolver::initialize() > (pnpSolver_2d_FVM.cpp:46) > > ==8404== by 0x40D464: main (main_2D_cylinders.cpp:299) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74B21CD: calloc (mvapich_malloc.c:3825) > > ==8404== by 0x747D714: MPIU_Handle_obj_alloc_unsafe (handlemem.c:363) > > ==8404== by 0x747D665: MPIU_Handle_obj_alloc (handlemem.c:307) > > ==8404== by 0x744F596: MPIR_Comm_create (commutil.c:101) > > ==8404== by 0x744E84D: MPIR_Comm_copy (commutil.c:969) > > ==8404== by 0x7447F14: PMPI_Comm_dup (comm_dup.c:177) > > ==8404== by 0xB2292A: PCCreate_HYPRE (hypre.c:1018) > > ==8404== by 0xA64120: PCSetType(_p_PC*, char const*) (pcset.c:83) > > ==8404== by 0x46B3FE: PetscLinearSolver::setupSolver(MatStructure, > char const*, char const*) (petscLinearSolver.cpp:331) > > ==8404== by 0x46E7D1: amr2D::fvm::pnpSolver::initialize() > (pnpSolver_2d_FVM.cpp:46) > > ==8404== by 0x40D464: main (main_2D_cylinders.cpp:299) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x8283136: __intel_new_memset (in > /opt/apps/intel/11.1/lib/intel64/libirc.so) > > ==8404== by 0x8261535: _intel_fast_memset.J (in > /opt/apps/intel/11.1/lib/intel64/libirc.so) > > ==8404== by 0x74B221A: calloc (mvapich_malloc.c:3826) > > ==8404== by 0x747D714: MPIU_Handle_obj_alloc_unsafe (handlemem.c:363) > > ==8404== by 0x747D665: MPIU_Handle_obj_alloc (handlemem.c:307) > > ==8404== by 0x744F596: MPIR_Comm_create (commutil.c:101) > > ==8404== by 0x744E84D: MPIR_Comm_copy (commutil.c:969) > > ==8404== by 0x7447F14: PMPI_Comm_dup (comm_dup.c:177) > > ==8404== by 0xB2292A: PCCreate_HYPRE (hypre.c:1018) > > ==8404== by 0xA64120: PCSetType(_p_PC*, char const*) (pcset.c:83) > > ==8404== by 0x46B3FE: PetscLinearSolver::setupSolver(MatStructure, > char const*, char const*) (petscLinearSolver.cpp:331) > > ==8404== by 0x46E7D1: amr2D::fvm::pnpSolver::initialize() > (pnpSolver_2d_FVM.cpp:46) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74B236A: calloc (mvapich_malloc.c:3756) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x14E6385: hypre_BoomerAMGCreate (par_amg.c:203) > > ==8404== by 0x14E2E10: HYPRE_BoomerAMGCreate (HYPRE_parcsr_amg.c:31) > > ==8404== by 0xB1C33B: PCHYPRESetType_HYPRE (hypre.c:817) > > ==8404== by 0xB2117F: PCSetFromOptions_HYPRE(_p_PC*) (hypre.c:895) > > ==8404== by 0xA65705: PCSetFromOptions(_p_PC*) (pcset.c:196) > > ==8404== by 0x46B487: PetscLinearSolver::setupSolver(MatStructure, > char const*, char const*) 
(petscLinearSolver.cpp:332) > > ==8404== by 0x46E7D1: amr2D::fvm::pnpSolver::initialize() > (pnpSolver_2d_FVM.cpp:46) > > ==8404== by 0x40D464: main (main_2D_cylinders.cpp:299) > > ==8404== > > Solving a nonlinear PB to get steady state solution ... > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74B21CD: calloc (mvapich_malloc.c:3825) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x14DB097: hypre_IJMatrixSetDiagOffdSizesParCSR > (IJMatrix_parcsr.c:179) > > ==8404== by 0x14D97C9: HYPRE_IJMatrixSetDiagOffdSizes > (HYPRE_IJMatrix.c:818) > > ==8404== by 0x11A2728: MatHYPRE_IJMatrixPreallocate(_p_Mat*, _p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:48) > > ==8404== by 0x11A3E9C: MatHYPRE_IJMatrixCreate(_p_Mat*, > hypre_IJMatrix_struct**) (mhyp.c:80) > > ==8404== by 0xAFDA92: PCSetUp_HYPRE(_p_PC*) (hypre.c:100) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== by 0x472A92: amr2D::fvm::pbSolver::SolveNonLinearPB(double, > int) (pbSolver_2d_FVM.cpp:148) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74AECA4: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== > > ==8404== Syscall param mmap(length) contains uninitialised byte(s) > > ==8404== at 0x3E3CED10EA: mmap (in /lib64/libc-2.5.so) > > ==8404== by 0x74AECD1: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74AEA3E: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: 
PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74AEA70: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AEA8A: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74AEA98: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > 
==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74AD6E6: _int_free (mvapich_malloc.c:4366) > > ==8404== by 0x74AEAA8: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AD723: _int_free (mvapich_malloc.c:4389) > > ==8404== by 0x74AEAA8: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74AD755: _int_free (mvapich_malloc.c:4400) > > ==8404== by 0x74AEAA8: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AD75B: _int_free (mvapich_malloc.c:4402) > > ==8404== by 0x74AEAA8: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 
0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AD77E: _int_free (mvapich_malloc.c:4409) > > ==8404== by 0x74AEAA8: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AD7A9: _int_free (mvapich_malloc.c:4425) > > ==8404== by 0x74AEAA8: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74AD7B4: _int_free (mvapich_malloc.c:4455) > > ==8404== by 0x74AEAA8: _int_malloc (mvapich_malloc.c:4332) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5FA4BA: VecCreate_Seq (bvec3.c:40) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADDAE: _int_malloc (mvapich_malloc.c:4116) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign 
(mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5E9616: VecCreate_Seq_Private(_p_Vec*, double const*) > (bvec2.c:887) > > ==8404== by 0x5FA62A: VecCreate_Seq (bvec3.c:42) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADDD9: _int_malloc (mvapich_malloc.c:4122) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5E9616: VecCreate_Seq_Private(_p_Vec*, double const*) > (bvec2.c:887) > > ==8404== by 0x5FA62A: VecCreate_Seq (bvec3.c:42) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADDEC: _int_malloc (mvapich_malloc.c:4122) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5E9616: VecCreate_Seq_Private(_p_Vec*, double const*) > (bvec2.c:887) > > ==8404== by 0x5FA62A: VecCreate_Seq (bvec3.c:42) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADDFF: _int_malloc (mvapich_malloc.c:4122) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5E9616: VecCreate_Seq_Private(_p_Vec*, double const*) > (bvec2.c:887) > > ==8404== by 0x5FA62A: VecCreate_Seq (bvec3.c:42) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: 
VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74ADE42: _int_malloc (mvapich_malloc.c:4124) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5E9616: VecCreate_Seq_Private(_p_Vec*, double const*) > (bvec2.c:887) > > ==8404== by 0x5FA62A: VecCreate_Seq (bvec3.c:42) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADE4C: _int_malloc (mvapich_malloc.c:4126) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5E9616: VecCreate_Seq_Private(_p_Vec*, double const*) > (bvec2.c:887) > > ==8404== by 0x5FA62A: VecCreate_Seq (bvec3.c:42) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74ADE88: _int_malloc (mvapich_malloc.c:4148) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5E9616: VecCreate_Seq_Private(_p_Vec*, double const*) > (bvec2.c:887) > > ==8404== by 0x5FA62A: VecCreate_Seq (bvec3.c:42) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74ADE9C: _int_malloc (mvapich_malloc.c:4152) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > 
const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5E9616: VecCreate_Seq_Private(_p_Vec*, double const*) > (bvec2.c:887) > > ==8404== by 0x5FA62A: VecCreate_Seq (bvec3.c:42) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADF56: _int_malloc (mvapich_malloc.c:4220) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5E9616: VecCreate_Seq_Private(_p_Vec*, double const*) > (bvec2.c:887) > > ==8404== by 0x5FA62A: VecCreate_Seq (bvec3.c:42) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADF72: _int_malloc (mvapich_malloc.c:4227) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5E9616: VecCreate_Seq_Private(_p_Vec*, double const*) > (bvec2.c:887) > > ==8404== by 0x5FA62A: VecCreate_Seq (bvec3.c:42) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADF7E: _int_malloc (mvapich_malloc.c:4227) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0x5E9616: VecCreate_Seq_Private(_p_Vec*, double const*) > (bvec2.c:887) > > ==8404== by 0x5FA62A: VecCreate_Seq (bvec3.c:42) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) > > ==8404== by 0x614EA3: VecCreate_Standard (pbvec.c:264) > > ==8404== by 0x5B36DC: VecSetType(_p_Vec*, char const*) (vecreg.c:53) 
> > ==8404== by 0x703809: MatGetVecs(_p_Mat*, _p_Vec**, _p_Vec**) > (matrix.c:8149) > > ==8404== by 0xAFDB5E: PCSetUp_HYPRE(_p_PC*) (hypre.c:104) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADED9: _int_malloc (mvapich_malloc.c:4168) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B62C5: hypre_SeqVectorInitialize (vector.c:94) > > ==8404== by 0x158CF25: hypre_ParVectorInitialize (par_vector.c:150) > > ==8404== by 0x14E0E57: hypre_IJVectorInitializePar > (IJVector_parcsr.c:124) > > ==8404== by 0x14DA2B9: HYPRE_IJVectorInitialize (HYPRE_IJVector.c:229) > > ==8404== by 0xE77A78: VecHYPRE_IJVectorCreate(_p_Vec*, > hypre_IJVector_struct**) (vhyp.c:21) > > ==8404== by 0xAFDC0C: PCSetUp_HYPRE(_p_PC*) (hypre.c:105) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADEE6: _int_malloc (mvapich_malloc.c:4165) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B62C5: hypre_SeqVectorInitialize (vector.c:94) > > ==8404== by 0x158CF25: hypre_ParVectorInitialize (par_vector.c:150) > > ==8404== by 0x14E0E57: hypre_IJVectorInitializePar > (IJVector_parcsr.c:124) > > ==8404== by 0x14DA2B9: HYPRE_IJVectorInitialize (HYPRE_IJVector.c:229) > > ==8404== by 0xE77A78: VecHYPRE_IJVectorCreate(_p_Vec*, > hypre_IJVector_struct**) (vhyp.c:21) > > ==8404== by 0xAFDC0C: PCSetUp_HYPRE(_p_PC*) (hypre.c:105) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADF31: _int_malloc (mvapich_malloc.c:4216) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B62C5: hypre_SeqVectorInitialize (vector.c:94) > > ==8404== by 0x158CF25: hypre_ParVectorInitialize (par_vector.c:150) > > ==8404== by 0x14E0E57: hypre_IJVectorInitializePar > (IJVector_parcsr.c:124) > > ==8404== by 0x14DA2B9: HYPRE_IJVectorInitialize (HYPRE_IJVector.c:229) > > ==8404== by 0xE77A78: VecHYPRE_IJVectorCreate(_p_Vec*, > hypre_IJVector_struct**) (vhyp.c:21) > > ==8404== by 0xAFDC0C: PCSetUp_HYPRE(_p_PC*) (hypre.c:105) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AD75B: _int_free (mvapich_malloc.c:4402) > > ==8404== by 0x74AF651: free (mvapich_malloc.c:3497) > > ==8404== by 0x494CCA: PetscFreeAlign(void*, int, char const*, char > const*, char const*) (mal.c:75) > > ==8404== by 0x497F91: PetscTrFreeDefault(void*, int, char const*, > char const*, char const*) (mtr.c:322) > > ==8404== by 0x5E866E: VecDestroy_Seq(_p_Vec*) (bvec2.c:777) > > ==8404== by 0x59F7CD: VecDestroy(_p_Vec**) (vector.c:580) > > ==8404== by 0xAFDD5D: PCSetUp_HYPRE(_p_PC*) (hypre.c:107) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== 
by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== by 0x472A92: amr2D::fvm::pbSolver::SolveNonLinearPB(double, > int) (pbSolver_2d_FVM.cpp:148) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AD775: _int_free (mvapich_malloc.c:4406) > > ==8404== by 0x74AF651: free (mvapich_malloc.c:3497) > > ==8404== by 0x494CCA: PetscFreeAlign(void*, int, char const*, char > const*, char const*) (mal.c:75) > > ==8404== by 0x497F91: PetscTrFreeDefault(void*, int, char const*, > char const*, char const*) (mtr.c:322) > > ==8404== by 0x5E866E: VecDestroy_Seq(_p_Vec*) (bvec2.c:777) > > ==8404== by 0x59F7CD: VecDestroy(_p_Vec**) (vector.c:580) > > ==8404== by 0xAFDD5D: PCSetUp_HYPRE(_p_PC*) (hypre.c:107) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== by 0x472A92: amr2D::fvm::pbSolver::SolveNonLinearPB(double, > int) (pbSolver_2d_FVM.cpp:148) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AD7A9: _int_free (mvapich_malloc.c:4425) > > ==8404== by 0x74AF651: free (mvapich_malloc.c:3497) > > ==8404== by 0x494CCA: PetscFreeAlign(void*, int, char const*, char > const*, char const*) (mal.c:75) > > ==8404== by 0x497F91: PetscTrFreeDefault(void*, int, char const*, > char const*, char const*) (mtr.c:322) > > ==8404== by 0x5E866E: VecDestroy_Seq(_p_Vec*) (bvec2.c:777) > > ==8404== by 0x59F7CD: VecDestroy(_p_Vec**) (vector.c:580) > > ==8404== by 0xAFDD5D: PCSetUp_HYPRE(_p_PC*) (hypre.c:107) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== by 0x472A92: amr2D::fvm::pbSolver::SolveNonLinearPB(double, > int) (pbSolver_2d_FVM.cpp:148) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADDA1: _int_malloc (mvapich_malloc.c:4106) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3AA1: hypre_CSRMatrixInitialize (csr_matrix.c:91) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADDAE: _int_malloc (mvapich_malloc.c:4116) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 
0x15B3AA1: hypre_CSRMatrixInitialize (csr_matrix.c:91) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADDD9: _int_malloc (mvapich_malloc.c:4122) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3AA1: hypre_CSRMatrixInitialize (csr_matrix.c:91) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADDEC: _int_malloc (mvapich_malloc.c:4122) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3AA1: hypre_CSRMatrixInitialize (csr_matrix.c:91) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADDFF: _int_malloc (mvapich_malloc.c:4122) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3AA1: hypre_CSRMatrixInitialize (csr_matrix.c:91) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) 
(itfunc.c:278) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADE12: _int_malloc (mvapich_malloc.c:4122) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3AA1: hypre_CSRMatrixInitialize (csr_matrix.c:91) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74ADE42: _int_malloc (mvapich_malloc.c:4124) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3AA1: hypre_CSRMatrixInitialize (csr_matrix.c:91) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADE4C: _int_malloc (mvapich_malloc.c:4126) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3AA1: hypre_CSRMatrixInitialize (csr_matrix.c:91) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74ADE88: _int_malloc (mvapich_malloc.c:4148) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3AA1: hypre_CSRMatrixInitialize (csr_matrix.c:91) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > 
==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74ADE9C: _int_malloc (mvapich_malloc.c:4152) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3AA1: hypre_CSRMatrixInitialize (csr_matrix.c:91) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AE1E7: _int_malloc (mvapich_malloc.c:4170) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3B06: hypre_CSRMatrixInitialize (csr_matrix.c:97) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74AE1EF: _int_malloc (mvapich_malloc.c:4173) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3B06: hypre_CSRMatrixInitialize (csr_matrix.c:97) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AE266: _int_malloc (mvapich_malloc.c:4188) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3B06: hypre_CSRMatrixInitialize (csr_matrix.c:97) > > ==8404== by 0x158732D: hypre_ParCSRMatrixInitialize > (par_csr_matrix.c:200) > > ==8404== by 0x14DB366: 
hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:272) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADF87: _int_malloc (mvapich_malloc.c:4237) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x1626536: hypre_AuxParCSRMatrixInitialize > (aux_parcsr_matrix.c:177) > > ==8404== by 0x14DB372: hypre_IJMatrixInitializeParCSR > (IJMatrix_parcsr.c:273) > > ==8404== by 0x14D8D44: HYPRE_IJMatrixInitialize (HYPRE_IJMatrix.c:303) > > ==8404== by 0x11A7FFF: MatHYPRE_IJMatrixFastCopy_MPIAIJ(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:211) > > ==8404== by 0x11A4BFD: MatHYPRE_IJMatrixCopy(_p_Mat*, > hypre_IJMatrix_struct*) (mhyp.c:122) > > ==8404== by 0xAFE3DC: PCSetUp_HYPRE(_p_PC*) (hypre.c:118) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADECC: _int_malloc (mvapich_malloc.c:4165) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B62C5: hypre_SeqVectorInitialize (vector.c:94) > > ==8404== by 0x158CF25: hypre_ParVectorInitialize (par_vector.c:150) > > ==8404== by 0x14ECF47: hypre_BoomerAMGSetup (par_amg_setup.c:464) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADF72: _int_malloc (mvapich_malloc.c:4227) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x14FC3F9: hypre_BoomerAMGCoarsenRuge (par_coarsen.c:940) > > ==8404== by 0x14FF1EE: hypre_BoomerAMGCoarsenFalgout > (par_coarsen.c:1918) > > ==8404== by 0x14EDE45: hypre_BoomerAMGSetup (par_amg_setup.c:758) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADF7E: _int_malloc (mvapich_malloc.c:4227) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: 
hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x14FC3F9: hypre_BoomerAMGCoarsenRuge (par_coarsen.c:940) > > ==8404== by 0x14FF1EE: hypre_BoomerAMGCoarsenFalgout > (par_coarsen.c:1918) > > ==8404== by 0x14EDE45: hypre_BoomerAMGSetup (par_amg_setup.c:758) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AE293: _int_malloc (mvapich_malloc.c:4254) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x153D557: hypre_BoomerAMGBuildCoarseOperator > (par_rap.c:1012) > > ==8404== by 0x14F1CCB: hypre_BoomerAMGSetup (par_amg_setup.c:1650) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== by 0x472A92: amr2D::fvm::pbSolver::SolveNonLinearPB(double, > int) (pbSolver_2d_FVM.cpp:148) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74AE29B: _int_malloc (mvapich_malloc.c:4257) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x153D557: hypre_BoomerAMGBuildCoarseOperator > (par_rap.c:1012) > > ==8404== by 0x14F1CCB: hypre_BoomerAMGSetup (par_amg_setup.c:1650) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== by 0x472A92: amr2D::fvm::pbSolver::SolveNonLinearPB(double, > int) (pbSolver_2d_FVM.cpp:148) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AE327: _int_malloc (mvapich_malloc.c:4278) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x153D557: hypre_BoomerAMGBuildCoarseOperator > (par_rap.c:1012) > > ==8404== by 0x14F1CCB: hypre_BoomerAMGSetup (par_amg_setup.c:1650) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== by 0x472A92: amr2D::fvm::pbSolver::SolveNonLinearPB(double, > int) (pbSolver_2d_FVM.cpp:148) > > ==8404== > > ==8404== Use of 
uninitialised value of size 8 > > ==8404== at 0x74ADE4E: _int_malloc (mvapich_malloc.c:4129) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x153DAD9: hypre_BoomerAMGBuildCoarseOperator > (par_rap.c:1122) > > ==8404== by 0x14F1CCB: hypre_BoomerAMGSetup (par_amg_setup.c:1650) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== by 0x472A92: amr2D::fvm::pbSolver::SolveNonLinearPB(double, > int) (pbSolver_2d_FVM.cpp:148) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADE56: _int_malloc (mvapich_malloc.c:4129) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x153DAD9: hypre_BoomerAMGBuildCoarseOperator > (par_rap.c:1122) > > ==8404== by 0x14F1CCB: hypre_BoomerAMGSetup (par_amg_setup.c:1650) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== by 0x472A92: amr2D::fvm::pbSolver::SolveNonLinearPB(double, > int) (pbSolver_2d_FVM.cpp:148) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74ADE98: _int_malloc (mvapich_malloc.c:4151) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x153DAD9: hypre_BoomerAMGBuildCoarseOperator > (par_rap.c:1122) > > ==8404== by 0x14F1CCB: hypre_BoomerAMGSetup (par_amg_setup.c:1650) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== by 0x472A92: amr2D::fvm::pbSolver::SolveNonLinearPB(double, > int) (pbSolver_2d_FVM.cpp:148) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AD771: _int_free (mvapich_malloc.c:4406) > > ==8404== by 0x74AF651: free (mvapich_malloc.c:3497) > > ==8404== by 0x16240BA: hypre_Free (hypre_memory.c:196) > > ==8404== by 0x153F2EA: hypre_BoomerAMGBuildCoarseOperator > (par_rap.c:1685) > > ==8404== by 0x14F1CCB: hypre_BoomerAMGSetup (par_amg_setup.c:1650) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: 
PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== by 0x472A92: amr2D::fvm::pbSolver::SolveNonLinearPB(double, > int) (pbSolver_2d_FVM.cpp:148) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADF56: _int_malloc (mvapich_malloc.c:4220) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x1586F83: hypre_ParCSRMatrixCreate (par_csr_matrix.c:60) > > ==8404== by 0x1554BE0: hypre_BoomerAMGCreateS (par_strength.c:156) > > ==8404== by 0x14EDD1B: hypre_BoomerAMGSetup (par_amg_setup.c:736) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x828313B: __intel_new_memset (in > /opt/apps/intel/11.1/lib/intel64/libirc.so) > > ==8404== by 0x8261535: _intel_fast_memset.J (in > /opt/apps/intel/11.1/lib/intel64/libirc.so) > > ==8404== by 0x74B221A: calloc (mvapich_malloc.c:3826) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x14FC57D: hypre_BoomerAMGCoarsenRuge (par_coarsen.c:988) > > ==8404== by 0x14FF1EE: hypre_BoomerAMGCoarsenFalgout > (par_coarsen.c:1918) > > ==8404== by 0x14EDE45: hypre_BoomerAMGSetup (par_amg_setup.c:758) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x8283145: __intel_new_memset (in > /opt/apps/intel/11.1/lib/intel64/libirc.so) > > ==8404== by 0x8261535: _intel_fast_memset.J (in > /opt/apps/intel/11.1/lib/intel64/libirc.so) > > ==8404== by 0x74B221A: calloc (mvapich_malloc.c:3826) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x153E2B7: hypre_BoomerAMGBuildCoarseOperator > (par_rap.c:1354) > > ==8404== by 0x14F1CCB: hypre_BoomerAMGSetup (par_amg_setup.c:1650) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AE1F1: _int_malloc (mvapich_malloc.c:4174) > > ==8404== by 0x74B2189: calloc (mvapich_malloc.c:3765) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x15B3457: hypre_CSRMatrixTranspose (csr_matop.c:354) > > ==8404== by 0x153B7B8: hypre_BoomerAMGBuildCoarseOperator > (par_rap.c:372) > > ==8404== by 0x14F1CCB: hypre_BoomerAMGSetup (par_amg_setup.c:1650) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: 
PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x74AD77E: _int_free (mvapich_malloc.c:4409) > > ==8404== by 0x74AF651: free (mvapich_malloc.c:3497) > > ==8404== by 0x16240BA: hypre_Free (hypre_memory.c:196) > > ==8404== by 0x15B3A2F: hypre_CSRMatrixDestroy (csr_matrix.c:69) > > ==8404== by 0x153F6C2: hypre_BoomerAMGBuildCoarseOperator > (par_rap.c:1767) > > ==8404== by 0x14F1CCB: hypre_BoomerAMGSetup (par_amg_setup.c:1650) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== > > ==8404== Use of uninitialised value of size 8 > > ==8404== at 0x8283131: __intel_new_memset (in > /opt/apps/intel/11.1/lib/intel64/libirc.so) > > ==8404== by 0x8261535: _intel_fast_memset.J (in > /opt/apps/intel/11.1/lib/intel64/libirc.so) > > ==8404== by 0x74B221A: calloc (mvapich_malloc.c:3826) > > ==8404== by 0x1623FF2: hypre_CAlloc (hypre_memory.c:121) > > ==8404== by 0x153E2DD: hypre_BoomerAMGBuildCoarseOperator > (par_rap.c:1355) > > ==8404== by 0x14F1CCB: hypre_BoomerAMGSetup (par_amg_setup.c:1650) > > ==8404== by 0x14E2EA6: HYPRE_BoomerAMGSetup (HYPRE_parcsr_amg.c:58) > > ==8404== by 0xAFF31D: PCSetUp_HYPRE(_p_PC*) (hypre.c:122) > > ==8404== by 0x12293CE: PCSetUp(_p_PC*) (precon.c:832) > > ==8404== by 0xC1D524: KSPSetUp(_p_KSP*) (itfunc.c:278) > > ==8404== by 0xC1EFD9: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:402) > > ==8404== by 0x46C59F: PetscLinearSolver::solve(ArrayV&, > PetscBool) (petscLinearSolver.cpp:402) > > ==8404== > > ==8404== Conditional jump or move depends on uninitialised value(s) > > ==8404== at 0x74ADED9: _int_malloc (mvapich_malloc.c:4168) > > ==8404== by 0x74AEE8D: malloc (mvapich_malloc.c:3409) > > ==8404== by 0x74AFB0F: memalign (mvapich_malloc.c:3626) > > ==8404== by 0x494BF3: PetscMallocAlign(unsigned long, int, char > const*, char const*, char const*, void**) (mal.c:30) > > ==8404== by 0x496DFD: PetscTrMallocDefault(unsigned long, int, char > const*, char const*, char const*, void**) (mtr.c:190) > > ==8404== by 0xE71ECC: VecStashCreate_Private(int, int, VecStash*) > (vecstash.c:37) > > ==8404== by 0x6145A1: VecCreate_MPI_Private(_p_Vec*, PetscBool, int, > double const*) (pbvec.c:207) > > ==8404== by 0x613475: VecDuplicate_MPI(_p_Vec*, _p_Vec**) (pbvec.c:70) > > ==8404== by 0x59EF10: VecDuplicate(_p_Vec*, _p_Vec**) (vector.c:551) > > ==8404== by 0xC4F687: KSPDefaultConverged(_p_KSP*, int, double, > KSPConvergedReason*, void*) (iterativ.c:585) > > ==8404== by 0xC6CB92: KSPSolve_BCGS(_p_KSP*) (bcgs.c:86) > > ==8404== by 0xC1FC32: KSPSolve(_p_KSP*, _p_Vec*, _p_Vec*) > (itfunc.c:446) > > ==8404== > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kenway at utias.utoronto.ca Sat Sep 1 12:04:57 2012 From: kenway at utias.utoronto.ca (Gaetan Kenway) Date: Sat, 1 Sep 2012 13:04:57 -0400 Subject: [petsc-users] MG Preconditioning In-Reply-To: References: Message-ID: I believe I partially tracked down my problem. The issue arises at line 644 in mg.c with a call to KSPGetVecs(). This is used to get the coarse grid vectors for the RHS. The issue is that up until this point, the actual operator for mglevels[i]->smoothd has not actually been set and therefore it cannot get the vectors. I am using call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) with the intention of having the coarse matrices generated automatically. However, from looking over the code, it appears that when it is used in this manner, the coarse grid operators never get set. From what I can tell, it is only possible to create these matrices automatically if (pc->dm) is true, which I think means that the preconditioner is derived from a distributed matrix. I've set my own restriction operator, but mg->galerkin has a value of -1 in mg.c and none of the KSPSetOperators() calls are executed. I've also tried to generate the matrices myself using MatRARt() or MatPtAP(), but I receive error code 56, "/* no support for requested operation */". Is this operation only implemented for matrices derived from DM objects? Any other suggestions for creating the coarse grid operators manually? Thank you, Gaetan On Sat, Sep 1, 2012 at 12:29 AM, Jed Brown wrote: > On Fri, Aug 31, 2012 at 11:22 PM, Gaetan Kenway wrote: > >> Hi Again >> >> I also tried the petsc-3.2 version and I still get the same backtrace. >> >> If it's not possible to figure out where the communicator segfault is >> coming from it's not a huge deal...I've just set the option using >> PetscOptionsSetValue() and then use PCSetFromOptions() to pull it back out. >> That seems to work fine. >> >> Even avoiding the above problem, with the PetscOptionsSetValue I'm still >> receiving an error code 73 when I run the multigrid solver. I've included >> the backtrace output below but it's not a lot of help since the code exited >> cleanly using my error checking procedure >> >> ================================================================= >> PETSc or MPI Error. Error Code 73. Detected on Proc 0 >> Error at line: 122 in file: solveADjointTransposePETSc.F90 >> ================================================================= >> >> Program received signal SIGSEGV, Segmentation fault. >> 0xb68c98f0 in PMPI_Abort () from /usr/local/lib/libmpi.so.0 >> (gdb) bt >> #0 0xb68c98f0 in PMPI_Abort () from /usr/local/lib/libmpi.so.0 >> #1 0xb519f190 in pmpi_abort__ () from /usr/local/lib/libmpi_f77.so.0 >> #2 0xb44c9e4c in echk (ierr=@0x49, file=..., line=@0x7a, >> .tmp.FILE.len_V$eb=30) at terminate.f90:154 >> #3 0xb44bd68f in solveadjointtransposepetsc () at >> solveADjointTransposePETSc.F90:122 >> #4 0xb44138a9 in f2py_rout_sumb_solveadjointtransposepetsc () from >> /tmp/tmpKYF_DT/sumb.so >> #5 0xb440fd35 in fortran_call () from /tmp/tmpKYF_DT/sumb.so >> #6 0x0805fd6a in PyObject_Call () >> #7 0x080dd5b0 in PyEval_EvalFrameEx () >> #8 0x080dfbb2 in PyEval_EvalCodeEx () >> #9 0x08168f1f in ?? 
() >> #10 0x0805fd6a in PyObject_Call () >> #11 0x080dcbeb in PyEval_EvalFrameEx () >> #12 0x080dfbb2 in PyEval_EvalCodeEx () >> #13 0x080de145 in PyEval_EvalFrameEx () >> #14 0x080dfbb2 in PyEval_EvalCodeEx () >> #15 0x080dfca7 in PyEval_EvalCode () >> #16 0x080fd956 in PyRun_FileExFlags () >> #17 0x080fdbb2 in PyRun_SimpleFileExFlags () >> #18 0x0805b6d3 in Py_Main () >> #19 0x0805a8ab in main () >> > > This stack doesn't involve PETSc at all. > > >> >> Valgrid was clean right up until the end where I get the normal error >> message: >> >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory >> corruption errors >> [0]PETSC ERROR: likely location of problem given in stack below >> [0]PETSC ERROR: --------------------- Stack Frames >> ------------------------------------ >> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >> available, >> [0]PETSC ERROR: INSTEAD the line number of the start of the function >> [0]PETSC ERROR: is given. >> [0]PETSC ERROR: [0] MatGetVecs line 8142 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/mat/interface/matrix.c >> [0]PETSC ERROR: [0] KSPGetVecs line 774 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/iterativ.c >> [0]PETSC ERROR: [0] PCSetUp_MG line 508 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c >> [0]PETSC ERROR: [0] PCSetUp line 810 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: [0] KSPSetUp line 182 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: [0] KSPSolve line 351 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/itfunc.c >> >> I will try it on a different system tomorrow to see if I have any more >> luck. >> >> Thanks, >> >> Gaetan >> >> >> >> On Fri, Aug 31, 2012 at 11:08 PM, Jed Brown wrote: >> >>> On Fri, Aug 31, 2012 at 10:06 PM, Matthew Knepley wrote: >>> >>>> True, but the backtrace also shows that comm = 0x0 on the call to >>>> KSPCreate(), which >>>> leads me to believe that your petsc4py has not initialized PETSc, and >>>> therefor not >>>> initialized PETSC_COMM_WORLD. >>>> >>> >>> Could catch, maybe PetscFunctionBegin() should check that PETSc has been >>> initialized. >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From five9a2 at gmail.com Sat Sep 1 14:27:28 2012 From: five9a2 at gmail.com (Jed Brown) Date: Sat, 1 Sep 2012 14:27:28 -0500 Subject: [petsc-users] MG Preconditioning In-Reply-To: References: Message-ID: Can you send the whole calling sequence and the full error message? It's not clear from your email how to reproduce. You might also look at src/ksp/ksp/examples/tutorials/ex42.c which uses PCMGSetGalerkin. Did you set the fine grid operators before calling PCSetUp()? The coarse grid operators should have been constructed earlier in PCSetUp_MG(). On Sep 1, 2012 10:05 AM, "Gaetan Kenway" wrote: > I believe I partially tracked down my problem. The issue arises at line > 644 in mg.c with a call to KSPGetVecs(). This is used to get the coarse > grid vectors for the RHS. 
The issue is that up until this point, the actual > operator for mglevels[i]->smoothd have not actually been set and therefore > it cannot get the vectors. > > I am using > call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > with the intention of having the coarse matrices approximately > automatically. However, from looking over the code, it appears when it is > used in this manner, the coarse grid operators never get set. From what I > can tell, it is only possible to create these matrices automatically if > (pc->dm) is True which I think means that the preconditioner is derived > from a distributed matrix. > > I've set my own restriction operator, but mg->galerkin has a value of -1 > in mg.c and none of the KSPSetOperators() are executed. > > I've also tried to generate the matrices myself using MatRARt() or > MatPtAP, but I receive error code 56 "* /* no support for requested > operation */" . *Is this op only implemented from matrices derived from > from DM objects? > > Any other suggestions for creating the coarse grid operators manually? > > Thank you, > > Gaetan > > > On Sat, Sep 1, 2012 at 12:29 AM, Jed Brown wrote: > >> On Fri, Aug 31, 2012 at 11:22 PM, Gaetan Kenway > > wrote: >> >>> Hi Again >>> >>> I also tried petsc-3.2 version and I still get the same backtrace. >>> >>> If its not possible to figure out where the communicator segfault is >>> coming from its not a huge deal...I've just set the option using >>> PetscOptionsSetValue() and then use PCSetFromOptions() to pull it back out. >>> That seems to work fine. >>> >>> Even avoiding the above problem, with the PetscOptionsSetValue I'm still >>> receiving an error code 73 when I run the multigrid solver. I've included >>> the backtrace output below but its not a lot of help since the code exited >>> cleaning using my error checking procedure >>> >>> ================================================================= >>> PETSc or MPI Error. Error Code 73. Detected on Proc 0 >>> Error at line: 122 in file: solveADjointTransposePETSc.F90 >>> ================================================================= >>> >>> Program received signal SIGSEGV, Segmentation fault. >>> 0xb68c98f0 in PMPI_Abort () from /usr/local/lib/libmpi.so.0 >>> (gdb) bt >>> #0 0xb68c98f0 in PMPI_Abort () from /usr/local/lib/libmpi.so.0 >>> #1 0xb519f190 in pmpi_abort__ () from /usr/local/lib/libmpi_f77.so.0 >>> #2 0xb44c9e4c in echk (ierr=@0x49, file=..., line=@0x7a, >>> .tmp.FILE.len_V$eb=30) at terminate.f90:154 >>> #3 0xb44bd68f in solveadjointtransposepetsc () at >>> solveADjointTransposePETSc.F90:122 >>> #4 0xb44138a9 in f2py_rout_sumb_solveadjointtransposepetsc () from >>> /tmp/tmpKYF_DT/sumb.so >>> #5 0xb440fd35 in fortran_call () from /tmp/tmpKYF_DT/sumb.so >>> #6 0x0805fd6a in PyObject_Call () >>> #7 0x080dd5b0 in PyEval_EvalFrameEx () >>> #8 0x080dfbb2 in PyEval_EvalCodeEx () >>> #9 0x08168f1f in ?? () >>> #10 0x0805fd6a in PyObject_Call () >>> #11 0x080dcbeb in PyEval_EvalFrameEx () >>> #12 0x080dfbb2 in PyEval_EvalCodeEx () >>> #13 0x080de145 in PyEval_EvalFrameEx () >>> #14 0x080dfbb2 in PyEval_EvalCodeEx () >>> #15 0x080dfca7 in PyEval_EvalCode () >>> #16 0x080fd956 in PyRun_FileExFlags () >>> #17 0x080fdbb2 in PyRun_SimpleFileExFlags () >>> #18 0x0805b6d3 in Py_Main () >>> #19 0x0805a8ab in main () >>> >> >> This stack doesn't involve PETSc at all. 
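To make the ordering Jed describes above concrete, the sketch below shows one way the pieces are meant to fit together. It is an illustration added here, not code from this thread: the names A (fine-grid matrix), P (interpolation), b and x are assumptions, the layout is a two-level PCMG without a DM, and the KSPSetOperators() calling convention is the PETSc 3.3 one used elsewhere in this thread. The thread itself sets a restriction rather than an interpolation; PCMGSetRestriction() is the analogous call. The commented-out block at the end is the manual MatPtAP() route Gaetan asks about, again only as a sketch.

#include <petscksp.h>

/* Minimal sketch (assumptions noted above): fine-grid matrix A and an
   interpolation P (fine rows, coarse columns) are assembled elsewhere. */
PetscErrorCode TwoLevelMGSketch(Mat A, Mat P, Vec b, Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  /* Attach the fine-grid operator before the PC is set up */
  ierr = KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCMG);CHKERRQ(ierr);
  ierr = PCMGSetLevels(pc, 2, PETSC_NULL);CHKERRQ(ierr);  /* level 0 = coarse, level 1 = fine */
  ierr = PCMGSetInterpolation(pc, 1, P);CHKERRQ(ierr);    /* restriction defaults to the transpose of P */
  ierr = PCMGSetGalerkin(pc, PETSC_TRUE);CHKERRQ(ierr);   /* ask PCSetUp_MG to build the coarse operator */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);

  /* Manual alternative, building the coarse operator yourself and handing it
     to the coarse-level smoother (sketch only):
       Mat Acoarse;  KSP kspcoarse;
       ierr = MatPtAP(A, P, MAT_INITIAL_MATRIX, 1.0, &Acoarse);CHKERRQ(ierr);
       ierr = PCMGGetSmoother(pc, 0, &kspcoarse);CHKERRQ(ierr);
       ierr = KSPSetOperators(kspcoarse, Acoarse, Acoarse, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  */

  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Whether the Galerkin path is taken without a DM in PETSc 3.3 is exactly what is being debugged in this thread, so the sketch shows the intended call order rather than a configuration guaranteed to work.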
>> >> >>> >>> Valgrid was clean right up until the end where I get the normal error >>> message: >>> >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>> probably memory access out of range >>> [0]PETSC ERROR: Try option -start_in_debugger or >>> -on_error_attach_debugger >>> [0]PETSC ERROR: or see >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >>> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory >>> corruption errors >>> [0]PETSC ERROR: likely location of problem given in stack below >>> [0]PETSC ERROR: --------------------- Stack Frames >>> ------------------------------------ >>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >>> available, >>> [0]PETSC ERROR: INSTEAD the line number of the start of the >>> function >>> [0]PETSC ERROR: is given. >>> [0]PETSC ERROR: [0] MatGetVecs line 8142 >>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/mat/interface/matrix.c >>> [0]PETSC ERROR: [0] KSPGetVecs line 774 >>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/iterativ.c >>> [0]PETSC ERROR: [0] PCSetUp_MG line 508 >>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c >>> [0]PETSC ERROR: [0] PCSetUp line 810 >>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/interface/precon.c >>> [0]PETSC ERROR: [0] KSPSetUp line 182 >>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: [0] KSPSolve line 351 >>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/itfunc.c >>> >>> I will try it on a different system tomorrow to see if I have any more >>> luck. >>> >>> Thanks, >>> >>> Gaetan >>> >>> >>> >>> On Fri, Aug 31, 2012 at 11:08 PM, Jed Brown wrote: >>> >>>> On Fri, Aug 31, 2012 at 10:06 PM, Matthew Knepley wrote: >>>> >>>>> True, but the backtrace also shows that comm = 0x0 on the call to >>>>> KSPCreate(), which >>>>> leads me to believe that your petsc4py has not initialized PETSc, and >>>>> therefor not >>>>> initialized PETSC_COMM_WORLD. >>>>> >>>> >>>> Could catch, maybe PetscFunctionBegin() should check that PETSc has >>>> been initialized. >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Sep 1 15:05:21 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 1 Sep 2012 15:05:21 -0500 Subject: [petsc-users] pctype hmpi In-Reply-To: References: <560D180A-A4E6-4D84-BFBB-FA3AF21535B1@mcs.anl.gov> <8A172C6B-2B46-49BC-BB96-DA069CF8A624@lbl.gov> <396C9322-5C95-485A-9235-786ABFC3F190@mcs.anl.gov> <2EABC8B5-DF3A-4B05-A49E-E26566C31979@mcs.anl.gov> <8883F05E-51DE-4B37-82BB-102A3925505E@mcs.anl.gov> Message-ID: <29523A51-0806-49DE-A780-EDEA732EC21E@mcs.anl.gov> On Aug 31, 2012, at 11:04 PM, George Pau wrote: > Hi Barry, > > It works now. Thank you so much for the help. Excellant. Sorry about the delays. Barry > > Thanks, > George > > > On Fri, Aug 31, 2012 at 7:57 PM, Barry Smith wrote: > > Ok, another bug. Put the attached file in src/mat/impls/aij/mpi and run make in that directory then relink program and run again. > > Barry > > [see attached file: mpiaij.c] > > On Aug 31, 2012, at 5:26 PM, George Pau wrote: > > > hi Barry, > > > > The hmpi option is read in properly now. The error is now different when I use -hmpi_spawn_size 3 with mpiexec -n 1. My printout suggests this is now happening in the KSPSolve. 
> > > > George > > > > > > [0] petscinitialize_(): (Fortran):PETSc successfully started: procs 1 > > [0] PetscGetHostName(): Rejecting domainname, likely is NIS gilbert.(none) > > [0] petscinitialize_(): Running on machine: gilbert > > [0] petscinitialize_(): (Fortran):PETSc successfully started: procs 2 > > [0] PetscGetHostName(): Rejecting domainname, likely is NIS gilbert.(none) > > [0] petscinitialize_(): Running on machine: gilbert > > [1] petscinitialize_(): (Fortran):PETSc successfully started: procs 2 > > [1] PetscGetHostName(): Rejecting domainname, likely is NIS gilbert.(none) > > [1] petscinitialize_(): Running on machine: gilbert > > [0] PetscHMPISpawn(): PETSc HMPI successfully spawned: number of nodes = 1 node size = 3 > > > > [0] PetscCommDuplicate(): Duplicating a communicator 1140850688 -2080374781 max tags = 2147483647 > > [0] MatSetUp(): Warning not preallocating matrix storage > > [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 360 X 360; storage space: 3978 unneeded,3222 used > > [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 360 > > [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 9 > > [0] Mat_CheckInode(): Found 120 nodes of 360. Limit used: 5. Using Inode routines > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374781 > > start ksp > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374781 > > [0] PCSetUp(): Setting up new PC > > [0] PetscCommDuplicate(): Duplicating a communicator -2080374782 -2080374780 max tags = 2147483647 > > [0] PetscCommDuplicate(): Duplicating a communicator -1006632960 -1006632959 max tags = 2147483647 > > [1] PetscCommDuplicate(): Duplicating a communicator -2080374779 -2080374778 max tags = 2147483647 > > [0] PetscCommDuplicate(): Using internal PETSc communicator -2080374782 -2080374780 > > [0] PetscCommDuplicate(): Using internal PETSc communicator -1006632960 -1006632959 > > [1] PetscCommDuplicate(): Using internal PETSc communicator -2080374779 -2080374778 > > [0] PetscCommDuplicate(): Duplicating a communicator 1140850689 -2080374779 max tags = 2147483647 > > [0] PetscCommDuplicate(): Duplicating a communicator 1140850689 -1006632958 max tags = 2147483647 > > [1] PetscCommDuplicate(): Duplicating a communicator 1140850689 -2080374777 max tags = 2147483647 > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374779 > > [1] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374777 > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -1006632958 > > [0] VecScatterCreate(): Special case: processor zero gets entire parallel vector, rest get none > > [0] Petsc_DelComm(): Removing reference to PETSc communicator imbedded in a user MPI_Comm m -2080374779 > > [0] Petsc_DelComm(): User MPI_Comm m 1140850689 is being freed, removing reference from inner PETSc comm to this outer comm > > [0] PetscCommDestroy(): Deleting PETSc MPI_Comm -2080374779 > > [0] Petsc_DelCounter(): Deleting counter data in an MPI_Comm -2080374779 > > [0] PetscCommDuplicate(): Using internal PETSc communicator -2080374782 -2080374780 > > [0] PetscCommDuplicate(): Using internal PETSc communicator -1006632960 -1006632959 > > [1] PetscCommDuplicate(): Using internal PETSc communicator -2080374779 -2080374778 > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -1006632958 > > [1] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374777 > > > > [0]PETSC ERROR: 
------------------------------------------------------------------------ > > [0]PETSC ERROR: [1]PETSC ERROR: ------------------------------------------------------------------------ > > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > > [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > > [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > > Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > > [1]PETSC ERROR: likely location of problem given in stack below > > [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > > [0]PETSC ERROR: likely location of problem given in stack below > > [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > > [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > > [1]PETSC ERROR: INSTEAD the line number of the start of the function > > [1]PETSC ERROR: is given. > > [1]PETSC ERROR: [1] MatDistribute_MPIAIJ line 192 /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/mat/impls/aij/mpi/mpiaij.c > > [1]PETSC ERROR: [1] PCSetUp_HMPI_MP line 90 /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/pc/impls/openmp/hpc.c > > [1]PETSC ERROR: [1] PetscHMPIHandle line 253 /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/sys/objects/mpinit.c > > [1]PETSC ERROR: [1] PetscHMPISpawn line 71 /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/sys/objects/mpinit.c > > [1]PETSC ERROR: --------------------- Error Message ------------------------------------ > > [1]PETSC ERROR: Signal received! > > [1]PETSC ERROR: ------------------------------------------------------------------------ > > [1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 > > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [1]PETSC ERROR: See docs/index.html for manual pages. > > [1]PETSC ERROR: ------------------------------------------------------------------------ > > [1]PETSC ERROR: ../../esd-tough2/xt2_eos4 on a arch-linu named gilbert by gpau Fri Aug 31 15:20:27 2012 > > [1]PETSC ERROR: [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > > [0]PETSC ERROR: INSTEAD the line number of the start of the function > > [0]PETSC ERROR: is given. 
> > Libraries linked from /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/lib > > [1]PETSC ERROR: Configure run at Fri Aug 31 15:16:04 2012 > > [1]PETSC ERROR: Configure options --with-debugging=1 --with-mpi-dir=/usr/lib/mpich2 --download-hypre=1 --prefix=/home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib > > [1]PETSC ERROR: ------------------------------------------------------------------------ > > [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 > > [0]PETSC ERROR: [0] MatDistribute_MPIAIJ line 192 /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/mat/impls/aij/mpi/mpiaij.c > > [0]PETSC ERROR: [0] PCSetUp_HMPI_MP line 90 /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/pc/impls/openmp/hpc.c > > [0]PETSC ERROR: [0] PetscHMPIHandle line 253 /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/sys/objects/mpinit.c > > [0]PETSC ERROR: [0] PetscHMPISpawn line 71 /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/sys/objects/mpinit.c > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > > [0]PETSC ERROR: Signal received! > > [0]PETSC ERROR: ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. > > [0]PETSC ERROR: ------------------------------------------------------------------------ > > [0]PETSC ERROR: ../../esd-tough2/xt2_eos4 on a arch-linu named gilbert by gpau Fri Aug 31 15:20:27 2012 > > [0]PETSC ERROR: Libraries linked from /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/lib > > [0]PETSC ERROR: Configure run at Fri Aug 31 15:16:04 2012 > > [0]PETSC ERROR: Configure options --with-debugging=1 --with-mpi-dir=/usr/lib/mpich2 --download-hypre=1 --prefix=/home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib > > [0]PETSC ERROR: ------------------------------------------------------------------------ > > [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > Fatal error in MPI_Allreduce: Other MPI error, error stack: > > MPI_Allreduce(855)........: MPI_Allreduce(sbuf=0x7fff6315ead0, rbuf=0x7fff6315eae0, count=2, MPI_INT, MPI_MAX, comm=0x84000004) failed > > MPIR_Allreduce_impl(712)..: > > MPIR_Allreduce_intra(534).: > > dequeue_and_set_error(596): Communication error with rank 0 > > > > > > On Fri, Aug 31, 2012 at 2:09 PM, Barry Smith wrote: > > [see attached file: zstart.c] > > > > On Aug 31, 2012, at 4:07 PM, George Pau wrote: > > > > > Hi Barry, > > > > > > You forgot the file ... > > > > > > George > > > > > > On Fri, Aug 31, 2012 at 2:04 PM, Barry Smith wrote: > > > > > > Yikes. It is totally my fault. The handling of these merge and spawn options is done only for PetscInitialize() for C. Not for Fortran, hence the arguments just got ignored. 
> > > > > > Please find attached a file zstart.c put it in the directory src/sys/ftn-custom and run make in that directory (with appropriate PETSC_DIR and PETSC_ARCH set) > > > > > > Then link and run the example again. > > > > > > > > > Barry > > > > > > > > > On Aug 31, 2012, at 3:30 PM, George Pau wrote: > > > > > > > Sorry, it was a cut and paste error. I tried running the code with all the options in the command line: > > > > > > > > mpiexec.mpich2 -n 1 xt2_eos4 -hmpi_spawn_size 3 -pc_type hmpi -ksp_type preonly -hmpi_ksp_type cg -hmpi_pc_type hypre -hmpi_pc_hypre boomeramg > > > > > > > > mpiexec.mpich2 -n 2 xt2_eos4 -hmpi_merge_size 2 -pc_type hmpi -ksp_type preonly -hmpi_ksp_type cg -hmpi_pc_type hypre -hmpi_pc_hypre boomeramg > > > > > > > > but I get the exact same outputs. > > > > > > > > George > > > > > > > > > > > > > > > > On Fri, Aug 31, 2012 at 1:18 PM, Barry Smith wrote: > > > > > > > > On Aug 31, 2012, at 3:09 PM, George Pau wrote: > > > > > > > > > Hi Barry, > > > > > > > > > > For the hmpi_spawn_size, the options in my .petscrc are > > > > > -info > > > > > -pc_view > > > > > pc_type hmpi > > > > > > > > How come there is no - in front of this one? > > > > > > > > > -ksp_type preonly > > > > > -ksp_view > > > > > -hmpi_pc_monitor > > > > > -hmpi_ksp_monitor > > > > > -hmpi_ksp_type cg > > > > > -hmpi_pc_type hypre > > > > > -hmpi_pc_hypre_type boomeramg > > > > > -hmpi_spawn_size 3 > > > > > > > > > > mpiexec.mpich2 -n 1 myprogram > > > > > > > > > > [0] petscinitialize_(): (Fortran):PETSc successfully started: procs 1 > > > > > [0] PetscGetHostName(): Rejecting domainname, likely is NIS gilbert.(none) > > > > > [0] petscinitialize_(): Running on machine: gilbert > > > > > > > > > > [0] PetscCommDuplicate(): Duplicating a communicator 1140850688 -2080374784 max tags = 2147483647 > > > > > [0] MatSetUp(): Warning not preallocating matrix storage > > > > > [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 360 X 360; storage space: 3978 unneeded,3222 used > > > > > [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 360 > > > > > [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 9 > > > > > [0] Mat_CheckInode(): Found 120 nodes of 360. Limit used: 5. Using Inode routines > > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374784 > > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374784 > > > > > > > > > > Fatal error in PMPI_Bcast: Invalid communicator, error stack: > > > > > PMPI_Bcast(1478): MPI_Bcast(buf=0x7fff30dacecc, count=1, MPI_INT, root=0, comm=0x0) failed > > > > > PMPI_Bcast(1418): Invalid communicator > > > > > > > > > > I inserted some print statement between the ksp calls and found that the error occurs in > > > > > > > > > > call KSPSetFromOptions(ksp, pierr) > > > > > > > > > > 2. If I change hmpi_spawn_size 3 to hmpi_merge_size 2 and launch my job by > > > > > > > > How come there is no - in front of hmpi_merge_size 2? > > > > > > > > > > > > Can you try putting all the arguments as command line arguments instead of in a file? It shouldn't matter but it seems like some of the arguments are being ignored. 
> > > > > > > > Barry > > > > > > > > > > > > > > > > > > mpiexec.mpich2 -n 2 myprogram > > > > > > > > > > [0] petscinitialize_(): (Fortran):PETSc successfully started: procs 2 > > > > > [0] PetscGetHostName(): Rejecting domainname, likely is NIS gilbert.(none) > > > > > [0] petscinitialize_(): Running on machine: gilbert > > > > > [1] petscinitialize_(): (Fortran):PETSc successfully started: procs 2 > > > > > [1] PetscGetHostName(): Rejecting domainname, likely is NIS gilbert.(none) > > > > > [1] petscinitialize_(): Running on machine: gilbert > > > > > > > > > > [0] PetscCommDuplicate(): Duplicating a communicator 1140850688 -2080374780 max tags = 2147483647 > > > > > [0] MatSetUp(): Warning not preallocating matrix storage > > > > > [1] PetscCommDuplicate(): Duplicating a communicator 1140850688 -2080374782 max tags = 2147483647 > > > > > [0] PetscCommDuplicate(): Duplicating a communicator 1140850689 -2080374777 max tags = 2147483647 > > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374777 > > > > > [1] PetscCommDuplicate(): Duplicating a communicator 1140850689 -2080374780 max tags = 2147483647 > > > > > [1] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374780 > > > > > [0] MatStashScatterBegin_Private(): No of messages: 1 > > > > > [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 12896 > > > > > [0] MatAssemblyBegin_MPIAIJ(): Stash has 1611 entries, uses 0 mallocs. > > > > > [1] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs. > > > > > [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 180 X 180; storage space: 1998 unneeded,1602 used > > > > > [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 180 > > > > > [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 9 > > > > > [0] Mat_CheckInode(): Found 60 nodes of 180. Limit used: 5. Using Inode routines > > > > > [1] MatAssemblyEnd_SeqAIJ(): Matrix size: 180 X 180; storage space: 1998 unneeded,1602 used > > > > > [1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 180 > > > > > [1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 9 > > > > > [1] Mat_CheckInode(): Found 60 nodes of 180. Limit used: 5. Using Inode routines > > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374777 > > > > > [1] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374780 > > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374777 > > > > > [1] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374780 > > > > > [0] VecScatterCreateCommon_PtoS(): Using blocksize 1 scatter > > > > > [0] VecScatterCreate(): General case: MPI to Seq > > > > > [1] MatAssemblyEnd_SeqAIJ(): Matrix size: 180 X 3; storage space: 396 unneeded,9 used > > > > > [1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 3 > > > > > [1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 3 > > > > > [1] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374782 > > > > > [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 180 X 3; storage space: 396 unneeded,9 used > > > > > [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 3 > > > > > [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 3 > > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374780 > > > > > [0] VecAssemblyBegin_MPI(): Stash has 180 entries, uses 1 mallocs. 
> > > > > [0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs. > > > > > [1] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374782 > > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374780 > > > > > [0] PCSetUp(): Setting up new PC > > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374780 > > > > > > > > > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > > > > > [0]PETSC ERROR: Nonconforming object sizes! > > > > > [0]PETSC ERROR: HMPI preconditioner only works for sequential solves! > > > > > [0]PETSC ERROR: ------------------------------------------------------------------------ > > > > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 > > > > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > > > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > > > > [0]PETSC ERROR: See docs/index.html for manual pages. > > > > > [0]PETSC ERROR: ------------------------------------------------------------------------ > > > > > [0]PETSC ERROR: ../../esd-tough2/xt2_eos4 on a arch-linu named gilbert by gpau Fri Aug 31 13:00:31 2012 > > > > > [0]PETSC ERROR: Libraries linked from /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/lib > > > > > [0]PETSC ERROR: Configure run at Thu Aug 30 15:27:17 2012 > > > > > [0]PETSC ERROR: Configure options --with-debugging=0 --with-mpi-dir=/usr/lib/mpich2 --download-hypre=1 --prefix=/home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib > > > > > [0]PETSC ERROR: ------------------------------------------------------------------------ > > > > > [0]PETSC ERROR: PCCreate_HMPI() line 283 in /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/pc/impls/openmp/hpc.c > > > > > [0]PETSC ERROR: PCSetType() line 83 in /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/pc/interface/pcset.c > > > > > [0]PETSC ERROR: PCSetFromOptions() line 188 in /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/pc/interface/pcset.c > > > > > [0]PETSC ERROR: KSPSetFromOptions() line 287 in /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/ksp/interface/itcl.c > > > > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > > > > > [0]PETSC ERROR: No support for this operation for this object type! > > > > > [0]PETSC ERROR: PC does not have apply! > > > > > [0]PETSC ERROR: ------------------------------------------------------------------------ > > > > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 > > > > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > > > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > > > > [0]PETSC ERROR: See docs/index.html for manual pages. 
> > > > > [0]PETSC ERROR: ------------------------------------------------------------------------ > > > > > [0]PETSC ERROR: ../../esd-tough2/xt2_eos4 on a arch-linu named gilbert by gpau Fri Aug 31 13:00:31 2012 > > > > > [0]PETSC ERROR: Libraries linked from /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/lib > > > > > [0]PETSC ERROR: Configure run at Thu Aug 30 15:27:17 2012 > > > > > [0]PETSC ERROR: Configure options --with-debugging=0 --with-mpi-dir=/usr/lib/mpich2 --download-hypre=1 --prefix=/home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib > > > > > [0]PETSC ERROR: ------------------------------------------------------------------------ > > > > > [0]PETSC ERROR: PCApply() line 382 in /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/pc/interface/precon.c > > > > > [0]PETSC ERROR: KSPInitialResidual() line 64 in /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/ksp/interface/itres.c > > > > > [0]PETSC ERROR: KSPSolve_GMRES() line 230 in /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/ksp/impls/gmres/gmres.c > > > > > [0]PETSC ERROR: KSPSolve() line 446 in /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/ksp/interface/itfunc.c > > > > > > > > > > I note that the error appears to occur at the same point. > > > > > > > > > > George > > > > > > > > > > > > > > > On Fri, Aug 31, 2012 at 11:31 AM, Barry Smith wrote: > > > > > > > > > > On Aug 31, 2012, at 1:27 PM, George Pau wrote: > > > > > > > > > > > Hi Barry, > > > > > > > > > > > > 1. It is the exact same error related to MPI_ERR_COMM and MPI_Bcast. > > > > > > > > > > That should not happen. Please run and send all the output including the exact command line used > > > > > > > > > > > > > > > > I am currently using the MPICH2 distribution provided by ubuntu but if MPICH version that Petsc download with -download-mpich works, I can use that. > > > > > > 2. If I use hmpi_merge_size, I will need to launch mpiexec with more than 1 cpus. But, petsc will complain that the pctype hmpi can only be used in serial. > > > > > > > > > > That should not happen. Run with 2 MPI processes and -hmpi_merge_size 2 and send the complete error message. > > > > > > > > > > > > > > > Barry > > > > > > > > > > > > > > > > > George > > > > > > > > > > > > > > > > > > On Aug 31, 2012, at 11:17 AM, Barry Smith wrote: > > > > > > > > > > > >> > > > > > >> On Aug 30, 2012, at 10:02 PM, George Pau wrote: > > > > > >> > > > > > >>> Hi Barry, > > > > > >>> > > > > > >>> I tried with the addition of > > > > > >>> > > > > > >>> -hmpi_spawn_size 3 > > > > > >>> > > > > > >>> but I am still getting the same error though. > > > > > >> > > > > > >> The EXACT same error? Or some other error? > > > > > >> > > > > > >> What happens if you run with the -hmpi_merge_size option instead? > > > > > >> > > > > > >> Barry > > > > > >> > > > > > >> 1) I am getting a crash with the spawn version I suspect is due to bugs in the MPICH version I am using related to spawn. > > > > > >> > > > > > >> 2) I am getting errors with the merge version due to Apple's ASLR which they make hard to turn off. > > > > > >> > > > > > >> > > > > > >>> I am using mpich2. Any other options to try? 
> > > > > >>> > > > > > >>> George > > > > > >>> > > > > > >>> > > > > > >>> On Aug 30, 2012, at 7:28 PM, Barry Smith wrote: > > > > > >>> > > > > > >>>> > > > > > >>>> On Aug 30, 2012, at 7:24 PM, George Pau wrote: > > > > > >>>> > > > > > >>>>> Hi, > > > > > >>>>> > > > > > >>>>> I have some issues using the -pctype hmpi. I used the same setting found at > > > > > >>>>> > > > > > >>>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCHMPI.html > > > > > >>>>> > > > > > >>>>> i.e. > > > > > >>>>> -pc_type hmpi > > > > > >>>>> -ksp_type preonly > > > > > >>>>> -hmpi_ksp_type cg > > > > > >>>>> -hmpi_pc_type hypre > > > > > >>>>> -hmpi_pc_hypre_type boomeramg > > > > > >>>>> > > > > > >>>>> My command is > > > > > >>>>> > > > > > >>>>> mpiexec -n 1 myprogram > > > > > >>>> > > > > > >>>> Sorry the documentation doesn't make this clearer. You need to start PETSc with special options to get the "worker" processes initialized. From the manual page for PCHMPI it has > > > > > >>>> > > > > > >>>> See PetscHMPIMerge() and PetscHMPISpawn() for two ways to start up MPI for use with this preconditioner > > > > > >>>> > > > > > >>>> This will tell you want option to start PETSc up with. > > > > > >>>> > > > > > >>>> I will fix the PC so that it prints a far more useful error message. > > > > > >>>> > > > > > >>>> > > > > > >>>> > > > > > >>>> Barry > > > > > >>>> > > > > > >>>> > > > > > >>>>> > > > > > >>>>> But, I get > > > > > >>>>> > > > > > >>>>> [gilbert:4041] *** An error occurred in MPI_Bcast > > > > > >>>>> [gilbert:4041] *** on communicator MPI_COMM_WORLD > > > > > >>>>> [gilbert:4041] *** MPI_ERR_COMM: invalid communicator > > > > > >>>>> [gilbert:4041] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort) > > > > > >>>>> > > > > > >>>>> with openmpi. I get similar error with mpich2 > > > > > >>>>> > > > > > >>>>> Fatal error in PMPI_Bcast: Invalid communicator, error stack: > > > > > >>>>> PMPI_Bcast(1478): MPI_Bcast(buf=0x7fffb683479c, count=1, MPI_INT, root=0, comm=0x0) failed > > > > > >>>>> PMPI_Bcast(1418): Invalid communicator > > > > > >>>>> > > > > > >>>>> I couldn't figure out what is wrong. My petsc is version 3.3.3 and the configuration is -with-debugging=0 --with-mpi-dir=/usr/lib/openmpi --download-hypre=1 and I am on a Ubuntu machine. > > > > > >>>>> > > > > > >>>>> Note that with the default pc_type and ksp_type, everything is fine. It was also tested with multiple processors. I wondering whether there are some options that I am not specifying correctly? 
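Putting together the option list from the PCHMPI manual page with Barry's startup suggestions above, the two launch styles under discussion look roughly as follows; the executable name, spawn size and process count are simply the ones already used in this thread and would need to be adapted:

mpiexec -n 1 myprogram -hmpi_spawn_size 3 -ksp_type preonly -pc_type hmpi -hmpi_ksp_type cg -hmpi_pc_type hypre -hmpi_pc_hypre_type boomeramg

mpiexec -n 2 myprogram -hmpi_merge_size 2 -ksp_type preonly -pc_type hmpi -hmpi_ksp_type cg -hmpi_pc_type hypre -hmpi_pc_hypre_type boomeramg

If this reading of the thread is correct, the spawn form keeps a single visible rank and lets PETSc create the worker processes itself, while the merge form starts the job on several ranks and merges them into worker groups, which is why Barry asks for 2 MPI processes together with -hmpi_merge_size 2.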
> > > > > >>>>> > > > > > >>>>> -- > > > > > >>>>> George Pau > > > > > >>>>> Earth Sciences Division > > > > > >>>>> Lawrence Berkeley National Laboratory > > > > > >>>>> One Cyclotron, MS 74-120 > > > > > >>>>> Berkeley, CA 94720 > > > > > >>>>> > > > > > >>>>> (510) 486-7196 > > > > > >>>>> gpau at lbl.gov > > > > > >>>>> http://esd.lbl.gov/about/staff/georgepau/ > > > > > >>>>> > > > > > >>>> > > > > > >>> > > > > > >> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > George Pau > > > > > Earth Sciences Division > > > > > Lawrence Berkeley National Laboratory > > > > > One Cyclotron, MS 74-120 > > > > > Berkeley, CA 94720 > > > > > > > > > > (510) 486-7196 > > > > > gpau at lbl.gov > > > > > http://esd.lbl.gov/about/staff/georgepau/ > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > George Pau > > > > Earth Sciences Division > > > > Lawrence Berkeley National Laboratory > > > > One Cyclotron, MS 74-120 > > > > Berkeley, CA 94720 > > > > > > > > (510) 486-7196 > > > > gpau at lbl.gov > > > > http://esd.lbl.gov/about/staff/georgepau/ > > > > > > > > > > > > > > > > > > > -- > > > George Pau > > > Earth Sciences Division > > > Lawrence Berkeley National Laboratory > > > One Cyclotron, MS 74-120 > > > Berkeley, CA 94720 > > > > > > (510) 486-7196 > > > gpau at lbl.gov > > > http://esd.lbl.gov/about/staff/georgepau/ > > > > > > > > > > > -- > > George Pau > > Earth Sciences Division > > Lawrence Berkeley National Laboratory > > One Cyclotron, MS 74-120 > > Berkeley, CA 94720 > > > > (510) 486-7196 > > gpau at lbl.gov > > http://esd.lbl.gov/about/staff/georgepau/ > > > > > > -- > George Pau > Earth Sciences Division > Lawrence Berkeley National Laboratory > One Cyclotron, MS 74-120 > Berkeley, CA 94720 > > (510) 486-7196 > gpau at lbl.gov > http://esd.lbl.gov/about/staff/georgepau/ > From kenway at utias.utoronto.ca Sat Sep 1 15:54:15 2012 From: kenway at utias.utoronto.ca (Gaetan Kenway) Date: Sat, 1 Sep 2012 16:54:15 -0400 Subject: [petsc-users] MG Preconditioning In-Reply-To: References: Message-ID: The full calling sequence is: call KSPCreate(SUMB_COMM_WORLD, ksp, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) useAD = .False. useTranspose = .True. usePC = .True. call setupStateResidualMatrix(drdwpret,useAD,usePC,useTranspose) call KSPSetOperators(ksp, dRdWT, dRdwpreT, & DIFFERENT_NONZERO_PATTERN, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call KSPSetFromOptions(ksp, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call KSPSetType(ksp, KSPFGMRES, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call KSPGMRESSetRestart(ksp, adjRestart, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call KSPGMRESSetCGSRefinementType(ksp, & KSP_GMRES_CGS_REFINE_IFNEEDED, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call kspgetpc(ksp, pc, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call pcsettype(pc, PCMG, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call PetscOptionsSetValue('-pc_mg_levels','2',PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) ! Set command line options call PCSetFromOptions(pc, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) ! 
Create the restriction operator between finest and one below: call createRestrictOperator(RL1, 1) call PCMGSetRestriction(pc, 1, RL1, PETScierr) call EChk(PETScIerr,__FILE__,__LINE__) call PCSetup(pc, PETScierr) call EChk(PETScIerr,__FILE__,__LINE__) On Sat, Sep 1, 2012 at 3:27 PM, Jed Brown wrote: > Can you send the whole calling sequence and the full error message? It's > not clear from your email how to reproduce. You might also look at > src/ksp/ksp/examples/tutorials/ex42.c which uses PCMGSetGalerkin. Did you > set the fine grid operators before calling PCSetUp()? The coarse grid > operators should have been constructed earlier in PCSetUp_MG(). > On Sep 1, 2012 10:05 AM, "Gaetan Kenway" wrote: > >> I believe I partially tracked down my problem. The issue arises at line >> 644 in mg.c with a call to KSPGetVecs(). This is used to get the coarse >> grid vectors for the RHS. The issue is that up until this point, the actual >> operator for mglevels[i]->smoothd have not actually been set and therefore >> it cannot get the vectors. >> >> I am using >> call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> with the intention of having the coarse matrices approximately >> automatically. However, from looking over the code, it appears when it is >> used in this manner, the coarse grid operators never get set. From what I >> can tell, it is only possible to create these matrices automatically if >> (pc->dm) is True which I think means that the preconditioner is derived >> from a distributed matrix. >> >> I've set my own restriction operator, but mg->galerkin has a value of -1 >> in mg.c and none of the KSPSetOperators() are executed. >> >> I've also tried to generate the matrices myself using MatRARt() or >> MatPtAP, but I receive error code 56 "* /* no support for requested >> operation */" . *Is this op only implemented from matrices derived from >> from DM objects? >> >> Any other suggestions for creating the coarse grid operators manually? >> >> Thank you, >> >> Gaetan >> >> >> On Sat, Sep 1, 2012 at 12:29 AM, Jed Brown wrote: >> >>> On Fri, Aug 31, 2012 at 11:22 PM, Gaetan Kenway < >>> kenway at utias.utoronto.ca> wrote: >>> >>>> Hi Again >>>> >>>> I also tried petsc-3.2 version and I still get the same backtrace. >>>> >>>> If its not possible to figure out where the communicator segfault is >>>> coming from its not a huge deal...I've just set the option using >>>> PetscOptionsSetValue() and then use PCSetFromOptions() to pull it back out. >>>> That seems to work fine. >>>> >>>> Even avoiding the above problem, with the PetscOptionsSetValue I'm >>>> still receiving an error code 73 when I run the multigrid solver. I've >>>> included the backtrace output below but its not a lot of help since the >>>> code exited cleaning using my error checking procedure >>>> >>>> ================================================================= >>>> PETSc or MPI Error. Error Code 73. Detected on Proc 0 >>>> Error at line: 122 in file: solveADjointTransposePETSc.F90 >>>> ================================================================= >>>> >>>> Program received signal SIGSEGV, Segmentation fault. 
>>>> 0xb68c98f0 in PMPI_Abort () from /usr/local/lib/libmpi.so.0 >>>> (gdb) bt >>>> #0 0xb68c98f0 in PMPI_Abort () from /usr/local/lib/libmpi.so.0 >>>> #1 0xb519f190 in pmpi_abort__ () from /usr/local/lib/libmpi_f77.so.0 >>>> #2 0xb44c9e4c in echk (ierr=@0x49, file=..., line=@0x7a, >>>> .tmp.FILE.len_V$eb=30) at terminate.f90:154 >>>> #3 0xb44bd68f in solveadjointtransposepetsc () at >>>> solveADjointTransposePETSc.F90:122 >>>> #4 0xb44138a9 in f2py_rout_sumb_solveadjointtransposepetsc () from >>>> /tmp/tmpKYF_DT/sumb.so >>>> #5 0xb440fd35 in fortran_call () from /tmp/tmpKYF_DT/sumb.so >>>> #6 0x0805fd6a in PyObject_Call () >>>> #7 0x080dd5b0 in PyEval_EvalFrameEx () >>>> #8 0x080dfbb2 in PyEval_EvalCodeEx () >>>> #9 0x08168f1f in ?? () >>>> #10 0x0805fd6a in PyObject_Call () >>>> #11 0x080dcbeb in PyEval_EvalFrameEx () >>>> #12 0x080dfbb2 in PyEval_EvalCodeEx () >>>> #13 0x080de145 in PyEval_EvalFrameEx () >>>> #14 0x080dfbb2 in PyEval_EvalCodeEx () >>>> #15 0x080dfca7 in PyEval_EvalCode () >>>> #16 0x080fd956 in PyRun_FileExFlags () >>>> #17 0x080fdbb2 in PyRun_SimpleFileExFlags () >>>> #18 0x0805b6d3 in Py_Main () >>>> #19 0x0805a8ab in main () >>>> >>> >>> This stack doesn't involve PETSc at all. >>> >>> >>>> >>>> Valgrid was clean right up until the end where I get the normal error >>>> message: >>>> >>>> [0]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>>> probably memory access out of range >>>> [0]PETSC ERROR: Try option -start_in_debugger or >>>> -on_error_attach_debugger >>>> [0]PETSC ERROR: or see >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >>>> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory >>>> corruption errors >>>> [0]PETSC ERROR: likely location of problem given in stack below >>>> [0]PETSC ERROR: --------------------- Stack Frames >>>> ------------------------------------ >>>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >>>> available, >>>> [0]PETSC ERROR: INSTEAD the line number of the start of the >>>> function >>>> [0]PETSC ERROR: is given. >>>> [0]PETSC ERROR: [0] MatGetVecs line 8142 >>>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/mat/interface/matrix.c >>>> [0]PETSC ERROR: [0] KSPGetVecs line 774 >>>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/iterativ.c >>>> [0]PETSC ERROR: [0] PCSetUp_MG line 508 >>>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c >>>> [0]PETSC ERROR: [0] PCSetUp line 810 >>>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/interface/precon.c >>>> [0]PETSC ERROR: [0] KSPSetUp line 182 >>>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/itfunc.c >>>> [0]PETSC ERROR: [0] KSPSolve line 351 >>>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/itfunc.c >>>> >>>> I will try it on a different system tomorrow to see if I have any more >>>> luck. >>>> >>>> Thanks, >>>> >>>> Gaetan >>>> >>>> >>>> >>>> On Fri, Aug 31, 2012 at 11:08 PM, Jed Brown wrote: >>>> >>>>> On Fri, Aug 31, 2012 at 10:06 PM, Matthew Knepley wrote: >>>>> >>>>>> True, but the backtrace also shows that comm = 0x0 on the call to >>>>>> KSPCreate(), which >>>>>> leads me to believe that your petsc4py has not initialized PETSc, and >>>>>> therefor not >>>>>> initialized PETSC_COMM_WORLD. >>>>>> >>>>> >>>>> Could catch, maybe PetscFunctionBegin() should check that PETSc has >>>>> been initialized. 
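The comm = 0x0 observation above suggests a cheap guard in application code: verify that PETSc is initialized before the first object is created, instead of letting KSPCreate() work with an invalid PETSC_COMM_WORLD. A minimal C sketch of such a guard, using only the public API; the function name CreateSolver is made up for illustration and does not come from the code in this thread:

#include <petscksp.h>

PetscErrorCode CreateSolver(KSP *ksp)
{
  PetscBool      initialized;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* Guard against the failure mode discussed above: PETSc (and therefore
     PETSC_COMM_WORLD) not yet initialized when the solver is built. */
  ierr = PetscInitialized(&initialized);CHKERRQ(ierr);
  if (!initialized) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ORDER,"PetscInitialize() must be called before creating solver objects");
  ierr = KSPCreate(PETSC_COMM_WORLD,ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

When the solver is driven through f2py/petsc4py as in the backtrace above, a check of this kind should turn the failure into an ordinary PETSc error return rather than a crash inside MPI_Bcast/MPI_Abort.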
>>>>> >>>> >>>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kenway at utias.utoronto.ca Sat Sep 1 16:20:49 2012 From: kenway at utias.utoronto.ca (Gaetan Kenway) Date: Sat, 1 Sep 2012 17:20:49 -0400 Subject: [petsc-users] MG Preconditioning In-Reply-To: References: Message-ID: Sorry about the last email...I hit sent it too early. The full calling sequence is: call KSPCreate(SUMB_COMM_WORLD, ksp, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) useAD = .False. useTranspose = .True. usePC = .True. call setupStateResidualMatrix(drdwpret,useAD,usePC,useTranspose) ! dRdwT has already been created call KSPSetOperators(ksp, dRdWT, dRdwpreT, & DIFFERENT_NONZERO_PATTERN, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call KSPSetFromOptions(ksp, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call KSPSetType(ksp, KSPFGMRES, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call KSPGMRESSetRestart(ksp, adjRestart, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call KSPGMRESSetCGSRefinementType(ksp, & KSP_GMRES_CGS_REFINE_IFNEEDED, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call kspgetpc(ksp, pc, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call pcsettype(pc, PCMG, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call PetscOptionsSetValue('-pc_mg_levels','2',PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) ! Set command line options call PCSetFromOptions(pc, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) ! Create the restriction operator between finest and one below: call createRestrictOperator(RL1, 1) call PCMGSetRestriction(pc, 1, RL1, PETScierr) call EChk(PETScIerr,__FILE__,__LINE__) call PCSetup(pc, PETScierr) call EChk(PETScIerr,__FILE__,__LINE__) The error occurs in PCSetup() which calls PCSetup_MG() and the issue is in KSPGetVecs() on line 644 in mg.c. Just to clarify, this isn't a memory problem/segfault. PETSc returns control cleanly and I my code exits when the error is detected. Therefore a backtrace or valgrind doesn't yield any additional information. ( I Ran both). The terminal output is: [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ ================================================================= PETSc or MPI Error. Error Code 73. Detected on Proc 0 Error at line: 130 in file: setupPETScKsp.F90 ================================================================= [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. 
[0]PETSC ERROR: [0] MatGetVecs line 8142 /nfs/mica/home/kenway/Downloads/petsc-3.3/src/mat/interface/matrix.c [0]PETSC ERROR: [0] KSPGetVecs line 774 /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/iterativ.c [0]PETSC ERROR: [0] PCSetUp_MG line 508 /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: [0] PCSetUp line 810 /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/interface/precon.c I had a look at ex42.c but it uses a DM which I am not doing. From the example there is the following: ierr = PCMGSetLevels(pc,nlevels,PETSC_NULL);CHKERRQ(ierr); ierr = PCMGSetType(pc,PC_MG_MULTIPLICATIVE);CHKERRQ(ierr); ierr = PCMGSetGalerkin(pc,PETSC_TRUE);CHKERRQ(ierr); for (k=1; kdm is True. I guess I'll just create the operators myself. Even if I modify the code to make it enter the if statement at line 577 if (mg->galerkin == 1) { Mat B; /* currently only handle case where mat and pmat are the same on coarser levels */ ierr = KSPGetOperators(mglevels[n-1]->smoothd,&dA,&dB,&uflag);CHKERRQ(ierr); if (!pc->setupcalled) { for (i=n-2; i>-1; i--) { ierr = MatPtAP(dB,mglevels[i+1]->interpolate,MAT_INITIAL_MATRIX,1.0,&B);CHKERRQ(ierr); ierr = KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); if (i != n-2) {ierr = PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} dB = B; } if (n > 1) {ierr = PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} } else { for (i=n-2; i>-1; i--) { ierr = KSPGetOperators(mglevels[i]->smoothd,PETSC_NULL,&B,PETSC_NULL);CHKERRQ(ierr); ierr = MatPtAP(dB,mglevels[i+1]->interpolate,MAT_REUSE_MATRIX,1.0,&B);CHKERRQ(ierr); ierr = KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); dB = B; } } which seems like where it should be done....the MatPTAP will not run. I'm using a mpibaij matrix for 'dRdwPreT' and my restriction matrix 'RL1' is a mpiaij matrix. When I try to run MatRART myself with the code: call MatRART(dRdwT, RL1, MAT_INITIAL_MATRIX, 1.0, newMat, PETScIERR) call EChk(PETScIerr,__FILE__,__LINE__) I receive error code 56: /* no support for requested operation */ (this is a clean exit from PETSc) In conclusion, I have two issues: 1) The coarse grid operators are not being set automatically in mg.c. 2) Using MatRART() with a mpibaij matrix and a mpiaij matrix is unsupported. Is there a PCMG example that doesn't use a DM to generate the restriction/prolongation/matrix information I can have a look at? Thanks, Gaetan On Sat, Sep 1, 2012 at 4:54 PM, Gaetan Kenway wrote: > The full calling sequence is: > > > call KSPCreate(SUMB_COMM_WORLD, ksp, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > useAD = .False. > useTranspose = .True. > usePC = .True. > call setupStateResidualMatrix(drdwpret,useAD,usePC,useTranspose) > > call KSPSetOperators(ksp, dRdWT, dRdwpreT, & > DIFFERENT_NONZERO_PATTERN, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call KSPSetFromOptions(ksp, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call KSPSetType(ksp, KSPFGMRES, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call KSPGMRESSetRestart(ksp, adjRestart, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call KSPGMRESSetCGSRefinementType(ksp, & > KSP_GMRES_CGS_REFINE_IFNEEDED, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call kspgetpc(ksp, pc, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call pcsettype(pc, PCMG, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call PetscOptionsSetValue('-pc_mg_levels','2',PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > ! 
Set command line options > > > call PCSetFromOptions(pc, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > ! Create the restriction operator between finest and one below: > > > call createRestrictOperator(RL1, 1) > > call PCMGSetRestriction(pc, 1, RL1, PETScierr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call PCSetup(pc, PETScierr) > call EChk(PETScIerr,__FILE__,__LINE__) > > > > On Sat, Sep 1, 2012 at 3:27 PM, Jed Brown wrote: > >> Can you send the whole calling sequence and the full error message? It's >> not clear from your email how to reproduce. You might also look at >> src/ksp/ksp/examples/tutorials/ex42.c which uses PCMGSetGalerkin. Did you >> set the fine grid operators before calling PCSetUp()? The coarse grid >> operators should have been constructed earlier in PCSetUp_MG(). >> On Sep 1, 2012 10:05 AM, "Gaetan Kenway" >> wrote: >> >>> I believe I partially tracked down my problem. The issue arises at line >>> 644 in mg.c with a call to KSPGetVecs(). This is used to get the coarse >>> grid vectors for the RHS. The issue is that up until this point, the actual >>> operator for mglevels[i]->smoothd have not actually been set and therefore >>> it cannot get the vectors. >>> >>> I am using >>> call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) >>> call EChk(PETScIerr,__FILE__,__LINE__) >>> >>> with the intention of having the coarse matrices approximately >>> automatically. However, from looking over the code, it appears when it is >>> used in this manner, the coarse grid operators never get set. From what I >>> can tell, it is only possible to create these matrices automatically if >>> (pc->dm) is True which I think means that the preconditioner is derived >>> from a distributed matrix. >>> >>> I've set my own restriction operator, but mg->galerkin has a value of -1 >>> in mg.c and none of the KSPSetOperators() are executed. >>> >>> I've also tried to generate the matrices myself using MatRARt() or >>> MatPtAP, but I receive error code 56 "* /* no support for requested >>> operation */" . *Is this op only implemented from matrices derived from >>> from DM objects? >>> >>> Any other suggestions for creating the coarse grid operators manually? >>> >>> Thank you, >>> >>> Gaetan >>> >>> >>> On Sat, Sep 1, 2012 at 12:29 AM, Jed Brown wrote: >>> >>>> On Fri, Aug 31, 2012 at 11:22 PM, Gaetan Kenway < >>>> kenway at utias.utoronto.ca> wrote: >>>> >>>>> Hi Again >>>>> >>>>> I also tried petsc-3.2 version and I still get the same backtrace. >>>>> >>>>> If its not possible to figure out where the communicator segfault is >>>>> coming from its not a huge deal...I've just set the option using >>>>> PetscOptionsSetValue() and then use PCSetFromOptions() to pull it back out. >>>>> That seems to work fine. >>>>> >>>>> Even avoiding the above problem, with the PetscOptionsSetValue I'm >>>>> still receiving an error code 73 when I run the multigrid solver. I've >>>>> included the backtrace output below but its not a lot of help since the >>>>> code exited cleaning using my error checking procedure >>>>> >>>>> ================================================================= >>>>> PETSc or MPI Error. Error Code 73. Detected on Proc 0 >>>>> Error at line: 122 in file: solveADjointTransposePETSc.F90 >>>>> ================================================================= >>>>> >>>>> Program received signal SIGSEGV, Segmentation fault. 
>>>>> 0xb68c98f0 in PMPI_Abort () from /usr/local/lib/libmpi.so.0 >>>>> (gdb) bt >>>>> #0 0xb68c98f0 in PMPI_Abort () from /usr/local/lib/libmpi.so.0 >>>>> #1 0xb519f190 in pmpi_abort__ () from /usr/local/lib/libmpi_f77.so.0 >>>>> #2 0xb44c9e4c in echk (ierr=@0x49, file=..., line=@0x7a, >>>>> .tmp.FILE.len_V$eb=30) at terminate.f90:154 >>>>> #3 0xb44bd68f in solveadjointtransposepetsc () at >>>>> solveADjointTransposePETSc.F90:122 >>>>> #4 0xb44138a9 in f2py_rout_sumb_solveadjointtransposepetsc () from >>>>> /tmp/tmpKYF_DT/sumb.so >>>>> #5 0xb440fd35 in fortran_call () from /tmp/tmpKYF_DT/sumb.so >>>>> #6 0x0805fd6a in PyObject_Call () >>>>> #7 0x080dd5b0 in PyEval_EvalFrameEx () >>>>> #8 0x080dfbb2 in PyEval_EvalCodeEx () >>>>> #9 0x08168f1f in ?? () >>>>> #10 0x0805fd6a in PyObject_Call () >>>>> #11 0x080dcbeb in PyEval_EvalFrameEx () >>>>> #12 0x080dfbb2 in PyEval_EvalCodeEx () >>>>> #13 0x080de145 in PyEval_EvalFrameEx () >>>>> #14 0x080dfbb2 in PyEval_EvalCodeEx () >>>>> #15 0x080dfca7 in PyEval_EvalCode () >>>>> #16 0x080fd956 in PyRun_FileExFlags () >>>>> #17 0x080fdbb2 in PyRun_SimpleFileExFlags () >>>>> #18 0x0805b6d3 in Py_Main () >>>>> #19 0x0805a8ab in main () >>>>> >>>> >>>> This stack doesn't involve PETSc at all. >>>> >>>> >>>>> >>>>> Valgrid was clean right up until the end where I get the normal error >>>>> message: >>>>> >>>>> [0]PETSC ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>>>> probably memory access out of range >>>>> [0]PETSC ERROR: Try option -start_in_debugger or >>>>> -on_error_attach_debugger >>>>> [0]PETSC ERROR: or see >>>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >>>>> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory >>>>> corruption errors >>>>> [0]PETSC ERROR: likely location of problem given in stack below >>>>> [0]PETSC ERROR: --------------------- Stack Frames >>>>> ------------------------------------ >>>>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >>>>> available, >>>>> [0]PETSC ERROR: INSTEAD the line number of the start of the >>>>> function >>>>> [0]PETSC ERROR: is given. >>>>> [0]PETSC ERROR: [0] MatGetVecs line 8142 >>>>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/mat/interface/matrix.c >>>>> [0]PETSC ERROR: [0] KSPGetVecs line 774 >>>>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/iterativ.c >>>>> [0]PETSC ERROR: [0] PCSetUp_MG line 508 >>>>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c >>>>> [0]PETSC ERROR: [0] PCSetUp line 810 >>>>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/interface/precon.c >>>>> [0]PETSC ERROR: [0] KSPSetUp line 182 >>>>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/itfunc.c >>>>> [0]PETSC ERROR: [0] KSPSolve line 351 >>>>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/itfunc.c >>>>> >>>>> I will try it on a different system tomorrow to see if I have any more >>>>> luck. >>>>> >>>>> Thanks, >>>>> >>>>> Gaetan >>>>> >>>>> >>>>> >>>>> On Fri, Aug 31, 2012 at 11:08 PM, Jed Brown wrote: >>>>> >>>>>> On Fri, Aug 31, 2012 at 10:06 PM, Matthew Knepley wrote: >>>>>> >>>>>>> True, but the backtrace also shows that comm = 0x0 on the call to >>>>>>> KSPCreate(), which >>>>>>> leads me to believe that your petsc4py has not initialized PETSc, >>>>>>> and therefor not >>>>>>> initialized PETSC_COMM_WORLD. 
>>>>>>> >>>>>> >>>>>> Could catch, maybe PetscFunctionBegin() should check that PETSc has >>>>>> been initialized. >>>>>> >>>>> >>>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Sep 1 16:34:57 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 1 Sep 2012 16:34:57 -0500 Subject: [petsc-users] MG Preconditioning In-Reply-To: References: Message-ID: On Sat, Sep 1, 2012 at 4:20 PM, Gaetan Kenway wrote: > Sorry about the last email...I hit sent it too early. > > The full calling sequence is: > > > call KSPCreate(SUMB_COMM_WORLD, ksp, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > useAD = .False. > useTranspose = .True. > usePC = .True. > call setupStateResidualMatrix(drdwpret,useAD,usePC,useTranspose) > > ! dRdwT has already been created > call KSPSetOperators(ksp, dRdWT, dRdwpreT, & > DIFFERENT_NONZERO_PATTERN, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call KSPSetFromOptions(ksp, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call KSPSetType(ksp, KSPFGMRES, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call KSPGMRESSetRestart(ksp, adjRestart, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call KSPGMRESSetCGSRefinementType(ksp, & > KSP_GMRES_CGS_REFINE_IFNEEDED, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call kspgetpc(ksp, pc, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call pcsettype(pc, PCMG, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call PetscOptionsSetValue('-pc_mg_levels','2',PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > ! Set command line options > > > call PCSetFromOptions(pc, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > ! Create the restriction operator between finest and one below: > > > call createRestrictOperator(RL1, 1) > > call PCMGSetRestriction(pc, 1, RL1, PETScierr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call PCSetup(pc, PETScierr) > call EChk(PETScIerr,__FILE__,__LINE__) > > The error occurs in PCSetup() which calls PCSetup_MG() and the issue is > in KSPGetVecs() on line 644 in mg.c. Just to clarify, this isn't a memory > problem/segfault. > The output below is a SEGV. > PETSc returns control cleanly and I my code exits when the error is > detected. Therefore a backtrace or valgrind doesn't yield any additional > information. ( I Ran both). > You can still set a breakpoint in PetscError (-start_in_debugger / -on_error_attach_debugger does this automatically). Then you can examine the stack. > The terminal output is: > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > ================================================================= > PETSc or MPI Error. Error Code 73. 
Detected on Proc 0 > Error at line: 130 in file: setupPETScKsp.F90 > ================================================================= > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] MatGetVecs line 8142 > /nfs/mica/home/kenway/Downloads/petsc-3.3/src/mat/interface/matrix.c > [0]PETSC ERROR: [0] KSPGetVecs line 774 > /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/iterativ.c > [0]PETSC ERROR: [0] PCSetUp_MG line 508 > /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: [0] PCSetUp line 810 > /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/interface/precon.c > > I had a look at ex42.c but it uses a DM which I am not doing. It uses the DM for convenience to build the operators, but it does *not* tell the KSP about the DM. That is, it does not call KSPSetDM() or PCSetDM(). So the code in mg.c does not know about the DM. > From the example there is the following: > > ierr = PCMGSetLevels(pc,nlevels,PETSC_NULL);CHKERRQ(ierr); > ierr = PCMGSetType(pc,PC_MG_MULTIPLICATIVE);CHKERRQ(ierr); > ierr = PCMGSetGalerkin(pc,PETSC_TRUE);CHKERRQ(ierr); > > for (k=1; k ierr = > DMCreateInterpolation(da_list[k-1],da_list[k],&R,PETSC_NULL);CHKERRQ(ierr); > ierr = PCMGSetInterpolation(pc,k,R);CHKERRQ(ierr); > ierr = MatDestroy(&R);CHKERRQ(ierr); > } > > Is there anything special about "R" in this case? > Nothing special, it is just an MAIJ matrix that performs the interpolation. > As I mentioned before, I know what the issue is, the coarse grid operators > are not set at all. From what I can tell, they should be set in the routine > PCSetup_MG in mg.c, but the code is never run unless pc->dm is True. > What code is not run unless pc->dm? > I guess I'll just create the operators myself. Even if I modify the code > to make it enter the if statement at line 577 > You called PCMGSetGalerkin() so the code below should run, constructing the coarse levels. You could try setting a breakpoint in there to see why it's not running. > > if (mg->galerkin == 1) { > Mat B; > /* currently only handle case where mat and pmat are the same on > coarser levels */ > ierr = > KSPGetOperators(mglevels[n-1]->smoothd,&dA,&dB,&uflag);CHKERRQ(ierr); > if (!pc->setupcalled) { > for (i=n-2; i>-1; i--) { > ierr = > MatPtAP(dB,mglevels[i+1]->interpolate,MAT_INITIAL_MATRIX,1.0,&B);CHKERRQ(ierr); > ierr = > KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); > if (i != n-2) {ierr = > PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} > dB = B; > } > if (n > 1) {ierr = > PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} > } else { > for (i=n-2; i>-1; i--) { > ierr = > KSPGetOperators(mglevels[i]->smoothd,PETSC_NULL,&B,PETSC_NULL);CHKERRQ(ierr); > ierr = > MatPtAP(dB,mglevels[i+1]->interpolate,MAT_REUSE_MATRIX,1.0,&B);CHKERRQ(ierr); > ierr = > KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); > dB = B; > } > } > > which seems like where it should be done....the MatPTAP will not run. > What do you mean "will not run"? > I'm using a mpibaij matrix for 'dRdwPreT' and my restriction matrix 'RL1' > is a mpiaij matrix. 
When I try to run MatRART myself with the code: > > call MatRART(dRdwT, RL1, MAT_INITIAL_MATRIX, 1.0, newMat, PETScIERR) > call EChk(PETScIerr,__FILE__,__LINE__) > > I receive error code 56: /* no support for requested operation */ (this is > a clean exit from PETSc) > MatRARt (or MatPtAP, for that matter) is not implemented for BAIJ with an AIJ projection. The AIJ projection will lose the block structure anyway, so just use AIJ for both. > > In conclusion, I have two issues: 1) The coarse grid operators are not > being set automatically in mg.c. 2) Using MatRART() with a mpibaij matrix > and a mpiaij matrix is unsupported. > > Is there a PCMG example that doesn't use a DM to generate the > restriction/prolongation/matrix information I can have a look at? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kenway at utias.utoronto.ca Sat Sep 1 17:16:10 2012 From: kenway at utias.utoronto.ca (Gaetan Kenway) Date: Sat, 1 Sep 2012 18:16:10 -0400 Subject: [petsc-users] MG Preconditioning In-Reply-To: References: Message-ID: Hi Jed After probing why the following code snippit isn't run: if (mg->galerkin == 1) { Mat B; /* currently only handle case where mat and pmat are the same on coarser levels */ ierr = KSPGetOperators(mglevels[n-1]->smoothd,&dA,&dB,&uflag);CHKERRQ(ierr); if (!pc->setupcalled) { for (i=n-2; i>-1; i--) { ierr = MatPtAP(dB,mglevels[i+1]->interpolate,MAT_INITIAL_MATRIX,1.0,&B);CHKERRQ(ierr); ierr = KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); if (i != n-2) {ierr = PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} dB = B; } if (n > 1) {ierr = PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} } else { for (i=n-2; i>-1; i--) { ierr = KSPGetOperators(mglevels[i]->smoothd,PETSC_NULL,&B,PETSC_NULL);CHKERRQ(ierr); ierr = MatPtAP(dB,mglevels[i+1]->interpolate,MAT_REUSE_MATRIX,1.0,&B);CHKERRQ(ierr); ierr = KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); dB = B; } } It is because after I use PCMGSetGalerkin(), mg->galerkin has a value of -1 not 1. From what I can tell the value of PETSC_TRUE is architecture/implementation dependant. I can't see where/how the value of mg->galerkin obtains a value of 1 and runs the code above. As for the other issue, after converting my baij matrix to aij, running MatRART() is ok. So we just have to figure out why calling call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) call EChk(PETScIerr,__FILE__,__LINE__) sets mg->galerkin to a value of -1 and the code above is expecting a value of 1. Gaetan On Sat, Sep 1, 2012 at 5:34 PM, Jed Brown wrote: > On Sat, Sep 1, 2012 at 4:20 PM, Gaetan Kenway wrote: > >> Sorry about the last email...I hit sent it too early. >> >> The full calling sequence is: >> >> >> call KSPCreate(SUMB_COMM_WORLD, ksp, PETScIerr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> useAD = .False. >> useTranspose = .True. >> usePC = .True. >> call setupStateResidualMatrix(drdwpret,useAD,usePC,useTranspose) >> >> ! 
dRdwT has already been created >> call KSPSetOperators(ksp, dRdWT, dRdwpreT, & >> DIFFERENT_NONZERO_PATTERN, PETScIerr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> call KSPSetFromOptions(ksp, PETScIerr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> call KSPSetType(ksp, KSPFGMRES, PETScIerr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> call KSPGMRESSetRestart(ksp, adjRestart, PETScIerr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> call KSPGMRESSetCGSRefinementType(ksp, & >> KSP_GMRES_CGS_REFINE_IFNEEDED, PETScIerr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> call kspgetpc(ksp, pc, PETScIerr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> call pcsettype(pc, PCMG, PETScIerr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> call PetscOptionsSetValue('-pc_mg_levels','2',PETScIerr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> ! Set command line options >> >> >> call PCSetFromOptions(pc, PETScIerr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> ! Create the restriction operator between finest and one below: >> >> >> call createRestrictOperator(RL1, 1) >> >> call PCMGSetRestriction(pc, 1, RL1, PETScierr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> call PCSetup(pc, PETScierr) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> The error occurs in PCSetup() which calls PCSetup_MG() and the issue is >> in KSPGetVecs() on line 644 in mg.c. Just to clarify, this isn't a memory >> problem/segfault. >> > > The output below is a SEGV. > > >> PETSc returns control cleanly and I my code exits when the error is >> detected. Therefore a backtrace or valgrind doesn't yield any additional >> information. ( I Ran both). >> > > You can still set a breakpoint in PetscError (-start_in_debugger / > -on_error_attach_debugger does this automatically). Then you can examine > the stack. > > >> The terminal output is: >> >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory >> corruption errors >> [0]PETSC ERROR: likely location of problem given in stack below >> [0]PETSC ERROR: --------------------- Stack Frames >> ------------------------------------ >> ================================================================= >> PETSc or MPI Error. Error Code 73. Detected on Proc 0 >> Error at line: 130 in file: setupPETScKsp.F90 >> ================================================================= >> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >> available, >> [0]PETSC ERROR: INSTEAD the line number of the start of the function >> [0]PETSC ERROR: is given. >> [0]PETSC ERROR: [0] MatGetVecs line 8142 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/mat/interface/matrix.c >> [0]PETSC ERROR: [0] KSPGetVecs line 774 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/iterativ.c >> [0]PETSC ERROR: [0] PCSetUp_MG line 508 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c >> [0]PETSC ERROR: [0] PCSetUp line 810 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/interface/precon.c >> >> I had a look at ex42.c but it uses a DM which I am not doing. 
> > > It uses the DM for convenience to build the operators, but it does *not* > tell the KSP about the DM. That is, it does not call KSPSetDM() or > PCSetDM(). So the code in mg.c does not know about the DM. > > >> From the example there is the following: >> >> ierr = PCMGSetLevels(pc,nlevels,PETSC_NULL);CHKERRQ(ierr); >> ierr = PCMGSetType(pc,PC_MG_MULTIPLICATIVE);CHKERRQ(ierr); >> ierr = PCMGSetGalerkin(pc,PETSC_TRUE);CHKERRQ(ierr); >> >> for (k=1; k> ierr = >> DMCreateInterpolation(da_list[k-1],da_list[k],&R,PETSC_NULL);CHKERRQ(ierr); >> ierr = PCMGSetInterpolation(pc,k,R);CHKERRQ(ierr); >> ierr = MatDestroy(&R);CHKERRQ(ierr); >> } >> >> Is there anything special about "R" in this case? >> > > Nothing special, it is just an MAIJ matrix that performs the interpolation. > > >> As I mentioned before, I know what the issue is, the coarse grid >> operators are not set at all. From what I can tell, they should be set in >> the routine PCSetup_MG in mg.c, but the code is never run unless pc->dm is >> True. >> > > What code is not run unless pc->dm? > > >> I guess I'll just create the operators myself. Even if I modify the code >> to make it enter the if statement at line 577 >> > > You called PCMGSetGalerkin() so the code below should run, constructing > the coarse levels. You could try setting a breakpoint in there to see why > it's not running. > > >> >> if (mg->galerkin == 1) { >> Mat B; >> /* currently only handle case where mat and pmat are the same on >> coarser levels */ >> ierr = >> KSPGetOperators(mglevels[n-1]->smoothd,&dA,&dB,&uflag);CHKERRQ(ierr); >> if (!pc->setupcalled) { >> for (i=n-2; i>-1; i--) { >> ierr = >> MatPtAP(dB,mglevels[i+1]->interpolate,MAT_INITIAL_MATRIX,1.0,&B);CHKERRQ(ierr); >> ierr = >> KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); >> if (i != n-2) {ierr = >> PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} >> dB = B; >> } >> if (n > 1) {ierr = >> PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} >> } else { >> for (i=n-2; i>-1; i--) { >> ierr = >> KSPGetOperators(mglevels[i]->smoothd,PETSC_NULL,&B,PETSC_NULL);CHKERRQ(ierr); >> ierr = >> MatPtAP(dB,mglevels[i+1]->interpolate,MAT_REUSE_MATRIX,1.0,&B);CHKERRQ(ierr); >> ierr = >> KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); >> dB = B; >> } >> } >> >> which seems like where it should be done....the MatPTAP will not run. >> > > What do you mean "will not run"? > > >> I'm using a mpibaij matrix for 'dRdwPreT' and my restriction matrix >> 'RL1' is a mpiaij matrix. When I try to run MatRART myself with the code: >> >> call MatRART(dRdwT, RL1, MAT_INITIAL_MATRIX, 1.0, newMat, PETScIERR) >> call EChk(PETScIerr,__FILE__,__LINE__) >> >> I receive error code 56: /* no support for requested operation */ (this >> is a clean exit from PETSc) >> > > MatRARt (or MatPtAP, for that matter) is not implemented for BAIJ with an > AIJ projection. The AIJ projection will lose the block structure anyway, so > just use AIJ for both. > > >> >> In conclusion, I have two issues: 1) The coarse grid operators are not >> being set automatically in mg.c. 2) Using MatRART() with a mpibaij matrix >> and a mpiaij matrix is unsupported. >> >> Is there a PCMG example that doesn't use a DM to generate the >> restriction/prolongation/matrix information I can have a look at? >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
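To collect the points from this reply in one place, here is a rough C sketch of driving PCMG by hand for two levels without a DM; A, Apre and P are placeholders for the fine-grid operator, the (BAIJ) preconditioning matrix and the coarse-to-fine interpolation, and none of these names come from the code in this thread. The conversion to AIJ is there because, as noted above, MatPtAP()/MatRARt() have no implementation for a BAIJ matrix with an AIJ projection:

#include <petscksp.h>

/* Sketch only: two-level PCMG with Galerkin coarse operators and no DM.
   Mirrors the PCMGSetLevels/PCMGSetGalerkin/PCMGSetInterpolation calls
   quoted from ex42.c above, minus the DMCreateInterpolation() part. */
PetscErrorCode SetupTwoLevelMG(KSP ksp,Mat A,Mat Apre,Mat P)
{
  PC             pc;
  Mat            ApreAIJ;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* MatPtAP(), which builds the Galerkin coarse operator, is not implemented
     for BAIJ combined with an AIJ projection, so convert to AIJ first. */
  ierr = MatConvert(Apre,MATAIJ,MAT_INITIAL_MATRIX,&ApreAIJ);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,ApreAIJ,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = MatDestroy(&ApreAIJ);CHKERRQ(ierr);              /* the KSP keeps its own reference */

  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCMG);CHKERRQ(ierr);
  ierr = PCMGSetLevels(pc,2,PETSC_NULL);CHKERRQ(ierr);    /* level 0 = coarse, level 1 = fine */
  ierr = PCMGSetGalerkin(pc,PETSC_TRUE);CHKERRQ(ierr);    /* coarse operator = P^T Apre P via MatPtAP() */

  /* P is an ordinary parallel AIJ matrix mapping coarse vectors to fine
     vectors; if only a restriction is at hand, PCMGSetRestriction() can be
     used instead and PCMG applies the transpose where the other is needed. */
  ierr = PCMGSetInterpolation(pc,1,P);CHKERRQ(ierr);

  ierr = PCSetUp(pc);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

One further caveat that comes up later in the thread: when Galerkin coarse operators are requested, the interpolation/restriction must be an explicitly assembled matrix, since MatPtAP() needs its entries; a MatCreateTranspose() wrapper will not do, so build an explicit transpose with MatTranspose() if that is what you start from.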
URL: From bsmith at mcs.anl.gov Sat Sep 1 17:33:11 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 1 Sep 2012 17:33:11 -0500 Subject: [petsc-users] MG Preconditioning In-Reply-To: References: Message-ID: <6E8512EF-39E3-47FA-A994-E4D4E47DD36E@mcs.anl.gov> On Sep 1, 2012, at 5:16 PM, Gaetan Kenway wrote: > Hi Jed > > After probing why the following code snippit isn't run: > > if (mg->galerkin == 1) { > Mat B; > /* currently only handle case where mat and pmat are the same on coarser levels */ > dB = B; > } > } > > It is because after I use PCMGSetGalerkin(), mg->galerkin has a value of -1 not 1. From what I can tell the value of PETSC_TRUE is architecture/implementation dependant. I can't see where/how the value of mg->galerkin obtains a value of 1 and runs the code above. > > As for the other issue, after converting my baij matrix to aij, running MatRART() is ok. > > So we just have to figure out why calling > > > call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > sets mg->galerkin to a value of -1 and the code above is expecting a value of 1. This is a mistake with our Fortran interface. Some Fortran compilers (for what crazy reason I don't know) use -1 as bool for true. Normally this doesn't matter because we have if (boolvalue) { checks and so negative one and one work the same way; But the code mg->galerkin = (PetscInt)use; in PCMGSetGalekin() fails because we then check the numerical value of the result, instead of just if it is zero or nonzero. I can change this conversion to mg->galerkin = use ? 1 : 0 so the the Fortran use of -1 gets converted to one and this should resolve the problem. I have pushed this fix into the petsc-3.3 and petsc-dev repositories. If you are using 3.3 I've attached a new src/ksp/pc/impls/mg/mg.c that you can drop into that directory and run make in that directory to update your library. If you are using petsc-dev then do an hg pull -u and rerun make in that directory.[see attached file: mg.c] Thanks for determining the cause of this rather nasty bug. Barry > > Gaetan > > > > > On Sat, Sep 1, 2012 at 5:34 PM, Jed Brown wrote: > On Sat, Sep 1, 2012 at 4:20 PM, Gaetan Kenway wrote: > Sorry about the last email...I hit sent it too early. > > The full calling sequence is: > > > call KSPCreate(SUMB_COMM_WORLD, ksp, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > useAD = .False. > useTranspose = .True. > usePC = .True. > call setupStateResidualMatrix(drdwpret,useAD,usePC,useTranspose) > > ! dRdwT has already been created > call KSPSetOperators(ksp, dRdWT, dRdwpreT, & > DIFFERENT_NONZERO_PATTERN, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call KSPSetFromOptions(ksp, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call KSPSetType(ksp, KSPFGMRES, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call KSPGMRESSetRestart(ksp, adjRestart, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call KSPGMRESSetCGSRefinementType(ksp, & > KSP_GMRES_CGS_REFINE_IFNEEDED, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call kspgetpc(ksp, pc, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call pcsettype(pc, PCMG, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call PetscOptionsSetValue('-pc_mg_levels','2',PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > ! Set command line options > call PCSetFromOptions(pc, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) > call EChk(PETScIerr,__FILE__,__LINE__) > > ! 
Create the restriction operator between finest and one below: > call createRestrictOperator(RL1, 1) > > call PCMGSetRestriction(pc, 1, RL1, PETScierr) > call EChk(PETScIerr,__FILE__,__LINE__) > > call PCSetup(pc, PETScierr) > call EChk(PETScIerr,__FILE__,__LINE__) > > The error occurs in PCSetup() which calls PCSetup_MG() and the issue is in KSPGetVecs() on line 644 in mg.c. Just to clarify, this isn't a memory problem/segfault. > > The output below is a SEGV. > > PETSc returns control cleanly and I my code exits when the error is detected. Therefore a backtrace or valgrind doesn't yield any additional information. ( I Ran both). > > You can still set a breakpoint in PetscError (-start_in_debugger / -on_error_attach_debugger does this automatically). Then you can examine the stack. > > The terminal output is: > > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > ================================================================= > PETSc or MPI Error. Error Code 73. Detected on Proc 0 > Error at line: 130 in file: setupPETScKsp.F90 > ================================================================= > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] MatGetVecs line 8142 /nfs/mica/home/kenway/Downloads/petsc-3.3/src/mat/interface/matrix.c > [0]PETSC ERROR: [0] KSPGetVecs line 774 /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/iterativ.c > [0]PETSC ERROR: [0] PCSetUp_MG line 508 /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: [0] PCSetUp line 810 /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/interface/precon.c > > I had a look at ex42.c but it uses a DM which I am not doing. > > It uses the DM for convenience to build the operators, but it does *not* tell the KSP about the DM. That is, it does not call KSPSetDM() or PCSetDM(). So the code in mg.c does not know about the DM. > > From the example there is the following: > > ierr = PCMGSetLevels(pc,nlevels,PETSC_NULL);CHKERRQ(ierr); > ierr = PCMGSetType(pc,PC_MG_MULTIPLICATIVE);CHKERRQ(ierr); > ierr = PCMGSetGalerkin(pc,PETSC_TRUE);CHKERRQ(ierr); > > for (k=1; k ierr = DMCreateInterpolation(da_list[k-1],da_list[k],&R,PETSC_NULL);CHKERRQ(ierr); > ierr = PCMGSetInterpolation(pc,k,R);CHKERRQ(ierr); > ierr = MatDestroy(&R);CHKERRQ(ierr); > } > > Is there anything special about "R" in this case? > > Nothing special, it is just an MAIJ matrix that performs the interpolation. > > As I mentioned before, I know what the issue is, the coarse grid operators are not set at all. From what I can tell, they should be set in the routine PCSetup_MG in mg.c, but the code is never run unless pc->dm is True. > > What code is not run unless pc->dm? > > I guess I'll just create the operators myself. 
Even if I modify the code to make it enter the if statement at line 577 > > You called PCMGSetGalerkin() so the code below should run, constructing the coarse levels. You could try setting a breakpoint in there to see why it's not running. > > > if (mg->galerkin == 1) { > Mat B; > /* currently only handle case where mat and pmat are the same on coarser levels */ > ierr = KSPGetOperators(mglevels[n-1]->smoothd,&dA,&dB,&uflag);CHKERRQ(ierr); > if (!pc->setupcalled) { > for (i=n-2; i>-1; i--) { > ierr = MatPtAP(dB,mglevels[i+1]->interpolate,MAT_INITIAL_MATRIX,1.0,&B);CHKERRQ(ierr); > ierr = KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); > if (i != n-2) {ierr = PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} > dB = B; > } > if (n > 1) {ierr = PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} > } else { > for (i=n-2; i>-1; i--) { > ierr = KSPGetOperators(mglevels[i]->smoothd,PETSC_NULL,&B,PETSC_NULL);CHKERRQ(ierr); > ierr = MatPtAP(dB,mglevels[i+1]->interpolate,MAT_REUSE_MATRIX,1.0,&B);CHKERRQ(ierr); > ierr = KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); > dB = B; > } > } > > which seems like where it should be done....the MatPTAP will not run. > > What do you mean "will not run"? > > I'm using a mpibaij matrix for 'dRdwPreT' and my restriction matrix 'RL1' is a mpiaij matrix. When I try to run MatRART myself with the code: > > call MatRART(dRdwT, RL1, MAT_INITIAL_MATRIX, 1.0, newMat, PETScIERR) > call EChk(PETScIerr,__FILE__,__LINE__) > > I receive error code 56: /* no support for requested operation */ (this is a clean exit from PETSc) > > MatRARt (or MatPtAP, for that matter) is not implemented for BAIJ with an AIJ projection. The AIJ projection will lose the block structure anyway, so just use AIJ for both. > > > In conclusion, I have two issues: 1) The coarse grid operators are not being set automatically in mg.c. 2) Using MatRART() with a mpibaij matrix and a mpiaij matrix is unsupported. > > Is there a PCMG example that doesn't use a DM to generate the restriction/prolongation/matrix information I can have a look at? > > -------------- next part -------------- A non-text attachment was scrubbed... Name: mg.c Type: application/octet-stream Size: 41758 bytes Desc: not available URL: From kenway at utias.utoronto.ca Sat Sep 1 18:02:39 2012 From: kenway at utias.utoronto.ca (Gaetan Kenway) Date: Sat, 1 Sep 2012 19:02:39 -0400 Subject: [petsc-users] MG Preconditioning In-Reply-To: <6E8512EF-39E3-47FA-A994-E4D4E47DD36E@mcs.anl.gov> References: <6E8512EF-39E3-47FA-A994-E4D4E47DD36E@mcs.anl.gov> Message-ID: Hi Jed That commit fixes the mg->galerkin issue. I'm still having issues with the projection/restriction matrices. Are the projection/restriction matrices supposed to be serial matrices operating on the unknowns on each processor even when running in parallel? Thanks, Gaetan On Sat, Sep 1, 2012 at 6:33 PM, Barry Smith wrote: > > On Sep 1, 2012, at 5:16 PM, Gaetan Kenway > wrote: > > > Hi Jed > > > > After probing why the following code snippit isn't run: > > > > if (mg->galerkin == 1) { > > Mat B; > > /* currently only handle case where mat and pmat are the same on > coarser levels */ > > dB = B; > > } > > } > > > > It is because after I use PCMGSetGalerkin(), mg->galerkin has a value > of -1 not 1. From what I can tell the value of PETSC_TRUE is > architecture/implementation dependant. I can't see where/how the value of > mg->galerkin obtains a value of 1 and runs the code above. 
> > > > As for the other issue, after converting my baij matrix to aij, running > MatRART() is ok. > > > > So we just have to figure out why calling > > > > > > call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > sets mg->galerkin to a value of -1 and the code above is expecting a > value of 1. > > This is a mistake with our Fortran interface. Some Fortran compilers > (for what crazy reason I don't know) use -1 as bool for true. Normally this > doesn't matter because we have if (boolvalue) { checks and so negative one > and one work the same way; > > But the code mg->galerkin = (PetscInt)use; in PCMGSetGalekin() > fails because we then check the numerical value of the result, instead of > just if it is zero or nonzero. > > I can change this conversion to mg->galerkin = use ? 1 : 0 so the > the Fortran use of -1 gets converted to one and this should resolve the > problem. > > I have pushed this fix into the petsc-3.3 and petsc-dev repositories. > If you are using 3.3 I've attached a new src/ksp/pc/impls/mg/mg.c that you > can drop into that directory and run make in that directory to update your > library. If you are using petsc-dev then do an hg pull -u and rerun make in > that directory.[see attached file: mg.c] > > Thanks for determining the cause of this rather nasty bug. > > Barry > > > > > > Gaetan > > > > > > > > > > On Sat, Sep 1, 2012 at 5:34 PM, Jed Brown wrote: > > On Sat, Sep 1, 2012 at 4:20 PM, Gaetan Kenway > wrote: > > Sorry about the last email...I hit sent it too early. > > > > The full calling sequence is: > > > > > > call KSPCreate(SUMB_COMM_WORLD, ksp, PETScIerr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > useAD = .False. > > useTranspose = .True. > > usePC = .True. > > call setupStateResidualMatrix(drdwpret,useAD,usePC,useTranspose) > > > > ! dRdwT has already been created > > call KSPSetOperators(ksp, dRdWT, dRdwpreT, & > > DIFFERENT_NONZERO_PATTERN, PETScIerr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > call KSPSetFromOptions(ksp, PETScIerr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > call KSPSetType(ksp, KSPFGMRES, PETScIerr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > call KSPGMRESSetRestart(ksp, adjRestart, PETScIerr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > call KSPGMRESSetCGSRefinementType(ksp, & > > KSP_GMRES_CGS_REFINE_IFNEEDED, PETScIerr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > call kspgetpc(ksp, pc, PETScIerr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > call pcsettype(pc, PCMG, PETScIerr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > call PetscOptionsSetValue('-pc_mg_levels','2',PETScIerr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > ! Set command line options > > call PCSetFromOptions(pc, PETScIerr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > ! Create the restriction operator between finest and one below: > > call createRestrictOperator(RL1, 1) > > > > call PCMGSetRestriction(pc, 1, RL1, PETScierr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > call PCSetup(pc, PETScierr) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > The error occurs in PCSetup() which calls PCSetup_MG() and the issue is > in KSPGetVecs() on line 644 in mg.c. Just to clarify, this isn't a memory > problem/segfault. > > > > The output below is a SEGV. > > > > PETSc returns control cleanly and I my code exits when the error is > detected. 
Therefore a backtrace or valgrind doesn't yield any additional > information. ( I Ran both). > > > > You can still set a breakpoint in PetscError (-start_in_debugger / > -on_error_attach_debugger does this automatically). Then you can examine > the stack. > > > > The terminal output is: > > > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > > [0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > > [0]PETSC ERROR: likely location of problem given in stack below > > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > > ================================================================= > > PETSc or MPI Error. Error Code 73. Detected on Proc 0 > > Error at line: 130 in file: setupPETScKsp.F90 > > ================================================================= > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > > [0]PETSC ERROR: INSTEAD the line number of the start of the > function > > [0]PETSC ERROR: is given. > > [0]PETSC ERROR: [0] MatGetVecs line 8142 > /nfs/mica/home/kenway/Downloads/petsc-3.3/src/mat/interface/matrix.c > > [0]PETSC ERROR: [0] KSPGetVecs line 774 > /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/iterativ.c > > [0]PETSC ERROR: [0] PCSetUp_MG line 508 > /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c > > [0]PETSC ERROR: [0] PCSetUp line 810 > /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/interface/precon.c > > > > I had a look at ex42.c but it uses a DM which I am not doing. > > > > It uses the DM for convenience to build the operators, but it does *not* > tell the KSP about the DM. That is, it does not call KSPSetDM() or > PCSetDM(). So the code in mg.c does not know about the DM. > > > > From the example there is the following: > > > > ierr = PCMGSetLevels(pc,nlevels,PETSC_NULL);CHKERRQ(ierr); > > ierr = PCMGSetType(pc,PC_MG_MULTIPLICATIVE);CHKERRQ(ierr); > > ierr = PCMGSetGalerkin(pc,PETSC_TRUE);CHKERRQ(ierr); > > > > for (k=1; k > ierr = > DMCreateInterpolation(da_list[k-1],da_list[k],&R,PETSC_NULL);CHKERRQ(ierr); > > ierr = PCMGSetInterpolation(pc,k,R);CHKERRQ(ierr); > > ierr = MatDestroy(&R);CHKERRQ(ierr); > > } > > > > Is there anything special about "R" in this case? > > > > Nothing special, it is just an MAIJ matrix that performs the > interpolation. > > > > As I mentioned before, I know what the issue is, the coarse grid > operators are not set at all. From what I can tell, they should be set in > the routine PCSetup_MG in mg.c, but the code is never run unless pc->dm is > True. > > > > What code is not run unless pc->dm? > > > > I guess I'll just create the operators myself. Even if I modify the code > to make it enter the if statement at line 577 > > > > You called PCMGSetGalerkin() so the code below should run, constructing > the coarse levels. You could try setting a breakpoint in there to see why > it's not running. 
> > > > > > if (mg->galerkin == 1) { > > Mat B; > > /* currently only handle case where mat and pmat are the same on > coarser levels */ > > ierr = > KSPGetOperators(mglevels[n-1]->smoothd,&dA,&dB,&uflag);CHKERRQ(ierr); > > if (!pc->setupcalled) { > > for (i=n-2; i>-1; i--) { > > ierr = > MatPtAP(dB,mglevels[i+1]->interpolate,MAT_INITIAL_MATRIX,1.0,&B);CHKERRQ(ierr); > > ierr = > KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); > > if (i != n-2) {ierr = > PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} > > dB = B; > > } > > if (n > 1) {ierr = > PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} > > } else { > > for (i=n-2; i>-1; i--) { > > ierr = > KSPGetOperators(mglevels[i]->smoothd,PETSC_NULL,&B,PETSC_NULL);CHKERRQ(ierr); > > ierr = > MatPtAP(dB,mglevels[i+1]->interpolate,MAT_REUSE_MATRIX,1.0,&B);CHKERRQ(ierr); > > ierr = > KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); > > dB = B; > > } > > } > > > > which seems like where it should be done....the MatPTAP will not run. > > > > What do you mean "will not run"? > > > > I'm using a mpibaij matrix for 'dRdwPreT' and my restriction matrix > 'RL1' is a mpiaij matrix. When I try to run MatRART myself with the code: > > > > call MatRART(dRdwT, RL1, MAT_INITIAL_MATRIX, 1.0, newMat, PETScIERR) > > call EChk(PETScIerr,__FILE__,__LINE__) > > > > I receive error code 56: /* no support for requested operation */ (this > is a clean exit from PETSc) > > > > MatRARt (or MatPtAP, for that matter) is not implemented for BAIJ with > an AIJ projection. The AIJ projection will lose the block structure anyway, > so just use AIJ for both. > > > > > > In conclusion, I have two issues: 1) The coarse grid operators are not > being set automatically in mg.c. 2) Using MatRART() with a mpibaij matrix > and a mpiaij matrix is unsupported. > > > > Is there a PCMG example that doesn't use a DM to generate the > restriction/prolongation/matrix information I can have a look at? > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kenway at utias.utoronto.ca Sat Sep 1 18:31:52 2012 From: kenway at utias.utoronto.ca (Gaetan Kenway) Date: Sat, 1 Sep 2012 19:31:52 -0400 Subject: [petsc-users] MG Preconditioning In-Reply-To: References: <6E8512EF-39E3-47FA-A994-E4D4E47DD36E@mcs.anl.gov> Message-ID: Never mind. I figured out the problem. I was creating the transpose of the restriction operator with MatCreateTranspose() But the PTAP operation requires an actual matrix so you need to run MatTranspose() To create an actual transpose. Slowly making progress. Thanks, Gaetan On Sat, Sep 1, 2012 at 7:02 PM, Gaetan Kenway wrote: > Hi Jed > > That commit fixes the mg->galerkin issue. > > I'm still having issues with the projection/restriction matrices. Are the > projection/restriction matrices supposed to be serial matrices operating on > the unknowns on each processor even when running in parallel? > > Thanks, > > Gaetan > > > On Sat, Sep 1, 2012 at 6:33 PM, Barry Smith wrote: > >> >> On Sep 1, 2012, at 5:16 PM, Gaetan Kenway >> wrote: >> >> > Hi Jed >> > >> > After probing why the following code snippit isn't run: >> > >> > if (mg->galerkin == 1) { >> > Mat B; >> > /* currently only handle case where mat and pmat are the same on >> coarser levels */ >> > dB = B; >> > } >> > } >> > >> > It is because after I use PCMGSetGalerkin(), mg->galerkin has a value >> of -1 not 1. From what I can tell the value of PETSC_TRUE is >> architecture/implementation dependant. 
I can't see where/how the value of >> mg->galerkin obtains a value of 1 and runs the code above. >> > >> > As for the other issue, after converting my baij matrix to aij, running >> MatRART() is ok. >> > >> > So we just have to figure out why calling >> > >> > >> > call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > sets mg->galerkin to a value of -1 and the code above is expecting a >> value of 1. >> >> This is a mistake with our Fortran interface. Some Fortran compilers >> (for what crazy reason I don't know) use -1 as bool for true. Normally this >> doesn't matter because we have if (boolvalue) { checks and so negative one >> and one work the same way; >> >> But the code mg->galerkin = (PetscInt)use; in PCMGSetGalekin() >> fails because we then check the numerical value of the result, instead of >> just if it is zero or nonzero. >> >> I can change this conversion to mg->galerkin = use ? 1 : 0 so >> the the Fortran use of -1 gets converted to one and this should resolve the >> problem. >> >> I have pushed this fix into the petsc-3.3 and petsc-dev repositories. >> If you are using 3.3 I've attached a new src/ksp/pc/impls/mg/mg.c that you >> can drop into that directory and run make in that directory to update your >> library. If you are using petsc-dev then do an hg pull -u and rerun make in >> that directory.[see attached file: mg.c] >> >> Thanks for determining the cause of this rather nasty bug. >> >> Barry >> >> >> > >> > Gaetan >> > >> > >> > >> > >> > On Sat, Sep 1, 2012 at 5:34 PM, Jed Brown wrote: >> > On Sat, Sep 1, 2012 at 4:20 PM, Gaetan Kenway >> wrote: >> > Sorry about the last email...I hit sent it too early. >> > >> > The full calling sequence is: >> > >> > >> > call KSPCreate(SUMB_COMM_WORLD, ksp, PETScIerr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > useAD = .False. >> > useTranspose = .True. >> > usePC = .True. >> > call setupStateResidualMatrix(drdwpret,useAD,usePC,useTranspose) >> > >> > ! dRdwT has already been created >> > call KSPSetOperators(ksp, dRdWT, dRdwpreT, & >> > DIFFERENT_NONZERO_PATTERN, PETScIerr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > call KSPSetFromOptions(ksp, PETScIerr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > call KSPSetType(ksp, KSPFGMRES, PETScIerr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > call KSPGMRESSetRestart(ksp, adjRestart, PETScIerr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > call KSPGMRESSetCGSRefinementType(ksp, & >> > KSP_GMRES_CGS_REFINE_IFNEEDED, PETScIerr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > call kspgetpc(ksp, pc, PETScIerr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > call pcsettype(pc, PCMG, PETScIerr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > call PetscOptionsSetValue('-pc_mg_levels','2',PETScIerr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > ! Set command line options >> > call PCSetFromOptions(pc, PETScIerr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > ! Create the restriction operator between finest and one below: >> > call createRestrictOperator(RL1, 1) >> > >> > call PCMGSetRestriction(pc, 1, RL1, PETScierr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > call PCSetup(pc, PETScierr) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > The error occurs in PCSetup() which calls PCSetup_MG() and the issue >> is in KSPGetVecs() on line 644 in mg.c. 
Just to clarify, this isn't a >> memory problem/segfault. >> > >> > The output below is a SEGV. >> > >> > PETSc returns control cleanly and I my code exits when the error is >> detected. Therefore a backtrace or valgrind doesn't yield any additional >> information. ( I Ran both). >> > >> > You can still set a breakpoint in PetscError (-start_in_debugger / >> -on_error_attach_debugger does this automatically). Then you can examine >> the stack. >> > >> > The terminal output is: >> > >> > [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> > [0]PETSC ERROR: Try option -start_in_debugger or >> -on_error_attach_debugger >> > [0]PETSC ERROR: or see >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory >> corruption errors >> > [0]PETSC ERROR: likely location of problem given in stack below >> > [0]PETSC ERROR: --------------------- Stack Frames >> ------------------------------------ >> > ================================================================= >> > PETSc or MPI Error. Error Code 73. Detected on Proc 0 >> > Error at line: 130 in file: setupPETScKsp.F90 >> > ================================================================= >> > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >> available, >> > [0]PETSC ERROR: INSTEAD the line number of the start of the >> function >> > [0]PETSC ERROR: is given. >> > [0]PETSC ERROR: [0] MatGetVecs line 8142 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/mat/interface/matrix.c >> > [0]PETSC ERROR: [0] KSPGetVecs line 774 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/iterativ.c >> > [0]PETSC ERROR: [0] PCSetUp_MG line 508 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c >> > [0]PETSC ERROR: [0] PCSetUp line 810 >> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/interface/precon.c >> > >> > I had a look at ex42.c but it uses a DM which I am not doing. >> > >> > It uses the DM for convenience to build the operators, but it does >> *not* tell the KSP about the DM. That is, it does not call KSPSetDM() or >> PCSetDM(). So the code in mg.c does not know about the DM. >> > >> > From the example there is the following: >> > >> > ierr = PCMGSetLevels(pc,nlevels,PETSC_NULL);CHKERRQ(ierr); >> > ierr = PCMGSetType(pc,PC_MG_MULTIPLICATIVE);CHKERRQ(ierr); >> > ierr = PCMGSetGalerkin(pc,PETSC_TRUE);CHKERRQ(ierr); >> > >> > for (k=1; k> > ierr = >> DMCreateInterpolation(da_list[k-1],da_list[k],&R,PETSC_NULL);CHKERRQ(ierr); >> > ierr = PCMGSetInterpolation(pc,k,R);CHKERRQ(ierr); >> > ierr = MatDestroy(&R);CHKERRQ(ierr); >> > } >> > >> > Is there anything special about "R" in this case? >> > >> > Nothing special, it is just an MAIJ matrix that performs the >> interpolation. >> > >> > As I mentioned before, I know what the issue is, the coarse grid >> operators are not set at all. From what I can tell, they should be set in >> the routine PCSetup_MG in mg.c, but the code is never run unless pc->dm is >> True. >> > >> > What code is not run unless pc->dm? >> > >> > I guess I'll just create the operators myself. Even if I modify the >> code to make it enter the if statement at line 577 >> > >> > You called PCMGSetGalerkin() so the code below should run, constructing >> the coarse levels. 
You could try setting a breakpoint in there to see why >> it's not running. >> > >> > >> > if (mg->galerkin == 1) { >> > Mat B; >> > /* currently only handle case where mat and pmat are the same on >> coarser levels */ >> > ierr = >> KSPGetOperators(mglevels[n-1]->smoothd,&dA,&dB,&uflag);CHKERRQ(ierr); >> > if (!pc->setupcalled) { >> > for (i=n-2; i>-1; i--) { >> > ierr = >> MatPtAP(dB,mglevels[i+1]->interpolate,MAT_INITIAL_MATRIX,1.0,&B);CHKERRQ(ierr); >> > ierr = >> KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); >> > if (i != n-2) {ierr = >> PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} >> > dB = B; >> > } >> > if (n > 1) {ierr = >> PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} >> > } else { >> > for (i=n-2; i>-1; i--) { >> > ierr = >> KSPGetOperators(mglevels[i]->smoothd,PETSC_NULL,&B,PETSC_NULL);CHKERRQ(ierr); >> > ierr = >> MatPtAP(dB,mglevels[i+1]->interpolate,MAT_REUSE_MATRIX,1.0,&B);CHKERRQ(ierr); >> > ierr = >> KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); >> > dB = B; >> > } >> > } >> > >> > which seems like where it should be done....the MatPTAP will not run. >> > >> > What do you mean "will not run"? >> > >> > I'm using a mpibaij matrix for 'dRdwPreT' and my restriction matrix >> 'RL1' is a mpiaij matrix. When I try to run MatRART myself with the code: >> > >> > call MatRART(dRdwT, RL1, MAT_INITIAL_MATRIX, 1.0, newMat, PETScIERR) >> > call EChk(PETScIerr,__FILE__,__LINE__) >> > >> > I receive error code 56: /* no support for requested operation */ (this >> is a clean exit from PETSc) >> > >> > MatRARt (or MatPtAP, for that matter) is not implemented for BAIJ with >> an AIJ projection. The AIJ projection will lose the block structure anyway, >> so just use AIJ for both. >> > >> > >> > In conclusion, I have two issues: 1) The coarse grid operators are not >> being set automatically in mg.c. 2) Using MatRART() with a mpibaij matrix >> and a mpiaij matrix is unsupported. >> > >> > Is there a PCMG example that doesn't use a DM to generate the >> restriction/prolongation/matrix information I can have a look at? >> > >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Sep 1 19:24:51 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 1 Sep 2012 19:24:51 -0500 Subject: [petsc-users] MG Preconditioning In-Reply-To: References: <6E8512EF-39E3-47FA-A994-E4D4E47DD36E@mcs.anl.gov> Message-ID: On Sat, Sep 1, 2012 at 6:31 PM, Gaetan Kenway wrote: > Never mind. > > I figured out the problem. > > I was creating the transpose of the restriction operator with > > MatCreateTranspose() > > But the PTAP operation requires an actual matrix so you need to run > > MatTranspose() > > To create an actual transpose. > For what it's worth, it's better to assemble P directly than to assemble R and MatTranspose(). > > Slowly making progress. > > Thanks, > > Gaetan > > > > On Sat, Sep 1, 2012 at 7:02 PM, Gaetan Kenway wrote: > >> Hi Jed >> >> That commit fixes the mg->galerkin issue. >> >> I'm still having issues with the projection/restriction matrices. Are the >> projection/restriction matrices supposed to be serial matrices operating on >> the unknowns on each processor even when running in parallel? 
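For reference, the interpolation handed to PCMG in a parallel run is itself a
parallel Mat: its row layout has to match the fine-level vectors and its column
layout the coarse-level vectors, so each rank assembles the rows for the fine
unknowns it owns. Below is a rough C sketch of assembling P directly as an
MPIAIJ matrix, in the spirit of Jed's suggestion above. The injection stencil
(col = i/2) and the preallocation numbers are made up purely for illustration;
only the PETSc calls themselves (MatCreateAIJ, MatSetValues,
PCMGSetInterpolation) are standard petsc-3.3 API.

    static PetscErrorCode CreateInterpolation(MPI_Comm comm,PetscInt nfine_local,
                                              PetscInt ncoarse_local,Mat *P)
    {
      Mat            A;
      PetscInt       rstart,rend,i;
      PetscErrorCode ierr;

      PetscFunctionBegin;
      /* rows follow the fine-level layout, columns the coarse-level layout */
      ierr = MatCreateAIJ(comm,nfine_local,ncoarse_local,PETSC_DETERMINE,PETSC_DETERMINE,
                          2,PETSC_NULL,1,PETSC_NULL,&A);CHKERRQ(ierr);
      ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
      for (i=rstart; i<rend; i++) {
        PetscInt    col = i/2;   /* placeholder coarse index; the real stencil is problem dependent */
        PetscScalar v   = 1.0;
        ierr = MatSetValues(A,1,&i,1,&col,&v,INSERT_VALUES);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      *P = A;
      PetscFunctionReturn(0);
    }

With such a P, PCMGSetInterpolation(pc,1,P) together with
PCMGSetGalerkin(pc,PETSC_TRUE) lets PETSc form the coarse operator by the
triple product, so no restriction matrix needs to be assembled separately.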
>> >> Thanks, >> >> Gaetan >> >> >> On Sat, Sep 1, 2012 at 6:33 PM, Barry Smith wrote: >> >>> >>> On Sep 1, 2012, at 5:16 PM, Gaetan Kenway >>> wrote: >>> >>> > Hi Jed >>> > >>> > After probing why the following code snippit isn't run: >>> > >>> > if (mg->galerkin == 1) { >>> > Mat B; >>> > /* currently only handle case where mat and pmat are the same on >>> coarser levels */ >>> > dB = B; >>> > } >>> > } >>> > >>> > It is because after I use PCMGSetGalerkin(), mg->galerkin has a >>> value of -1 not 1. From what I can tell the value of PETSC_TRUE is >>> architecture/implementation dependant. I can't see where/how the value of >>> mg->galerkin obtains a value of 1 and runs the code above. >>> > >>> > As for the other issue, after converting my baij matrix to aij, >>> running MatRART() is ok. >>> > >>> > So we just have to figure out why calling >>> > >>> > >>> > call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > sets mg->galerkin to a value of -1 and the code above is expecting a >>> value of 1. >>> >>> This is a mistake with our Fortran interface. Some Fortran >>> compilers (for what crazy reason I don't know) use -1 as bool for true. >>> Normally this doesn't matter because we have if (boolvalue) { checks and so >>> negative one and one work the same way; >>> >>> But the code mg->galerkin = (PetscInt)use; in PCMGSetGalekin() >>> fails because we then check the numerical value of the result, instead of >>> just if it is zero or nonzero. >>> >>> I can change this conversion to mg->galerkin = use ? 1 : 0 so >>> the the Fortran use of -1 gets converted to one and this should resolve the >>> problem. >>> >>> I have pushed this fix into the petsc-3.3 and petsc-dev >>> repositories. If you are using 3.3 I've attached a new >>> src/ksp/pc/impls/mg/mg.c that you can drop into that directory and run make >>> in that directory to update your library. If you are using petsc-dev then >>> do an hg pull -u and rerun make in that directory.[see attached file: mg.c] >>> >>> Thanks for determining the cause of this rather nasty bug. >>> >>> Barry >>> >>> >>> > >>> > Gaetan >>> > >>> > >>> > >>> > >>> > On Sat, Sep 1, 2012 at 5:34 PM, Jed Brown >>> wrote: >>> > On Sat, Sep 1, 2012 at 4:20 PM, Gaetan Kenway < >>> kenway at utias.utoronto.ca> wrote: >>> > Sorry about the last email...I hit sent it too early. >>> > >>> > The full calling sequence is: >>> > >>> > >>> > call KSPCreate(SUMB_COMM_WORLD, ksp, PETScIerr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > useAD = .False. >>> > useTranspose = .True. >>> > usePC = .True. >>> > call setupStateResidualMatrix(drdwpret,useAD,usePC,useTranspose) >>> > >>> > ! 
dRdwT has already been created >>> > call KSPSetOperators(ksp, dRdWT, dRdwpreT, & >>> > DIFFERENT_NONZERO_PATTERN, PETScIerr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > call KSPSetFromOptions(ksp, PETScIerr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > call KSPSetType(ksp, KSPFGMRES, PETScIerr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > call KSPGMRESSetRestart(ksp, adjRestart, PETScIerr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > call KSPGMRESSetCGSRefinementType(ksp, & >>> > KSP_GMRES_CGS_REFINE_IFNEEDED, PETScIerr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > call kspgetpc(ksp, pc, PETScIerr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > call pcsettype(pc, PCMG, PETScIerr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > call PetscOptionsSetValue('-pc_mg_levels','2',PETScIerr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > ! Set command line options >>> > call PCSetFromOptions(pc, PETScIerr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > ! Create the restriction operator between finest and one below: >>> > call createRestrictOperator(RL1, 1) >>> > >>> > call PCMGSetRestriction(pc, 1, RL1, PETScierr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > call PCSetup(pc, PETScierr) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > The error occurs in PCSetup() which calls PCSetup_MG() and the issue >>> is in KSPGetVecs() on line 644 in mg.c. Just to clarify, this isn't a >>> memory problem/segfault. >>> > >>> > The output below is a SEGV. >>> > >>> > PETSc returns control cleanly and I my code exits when the error is >>> detected. Therefore a backtrace or valgrind doesn't yield any additional >>> information. ( I Ran both). >>> > >>> > You can still set a breakpoint in PetscError (-start_in_debugger / >>> -on_error_attach_debugger does this automatically). Then you can examine >>> the stack. >>> > >>> > The terminal output is: >>> > >>> > [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>> probably memory access out of range >>> > [0]PETSC ERROR: Try option -start_in_debugger or >>> -on_error_attach_debugger >>> > [0]PETSC ERROR: or see >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >>> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory >>> corruption errors >>> > [0]PETSC ERROR: likely location of problem given in stack below >>> > [0]PETSC ERROR: --------------------- Stack Frames >>> ------------------------------------ >>> > ================================================================= >>> > PETSc or MPI Error. Error Code 73. Detected on Proc 0 >>> > Error at line: 130 in file: setupPETScKsp.F90 >>> > ================================================================= >>> > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >>> available, >>> > [0]PETSC ERROR: INSTEAD the line number of the start of the >>> function >>> > [0]PETSC ERROR: is given. 
>>> > [0]PETSC ERROR: [0] MatGetVecs line 8142 >>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/mat/interface/matrix.c >>> > [0]PETSC ERROR: [0] KSPGetVecs line 774 >>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/iterativ.c >>> > [0]PETSC ERROR: [0] PCSetUp_MG line 508 >>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c >>> > [0]PETSC ERROR: [0] PCSetUp line 810 >>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/interface/precon.c >>> > >>> > I had a look at ex42.c but it uses a DM which I am not doing. >>> > >>> > It uses the DM for convenience to build the operators, but it does >>> *not* tell the KSP about the DM. That is, it does not call KSPSetDM() or >>> PCSetDM(). So the code in mg.c does not know about the DM. >>> > >>> > From the example there is the following: >>> > >>> > ierr = PCMGSetLevels(pc,nlevels,PETSC_NULL);CHKERRQ(ierr); >>> > ierr = PCMGSetType(pc,PC_MG_MULTIPLICATIVE);CHKERRQ(ierr); >>> > ierr = PCMGSetGalerkin(pc,PETSC_TRUE);CHKERRQ(ierr); >>> > >>> > for (k=1; k>> > ierr = >>> DMCreateInterpolation(da_list[k-1],da_list[k],&R,PETSC_NULL);CHKERRQ(ierr); >>> > ierr = PCMGSetInterpolation(pc,k,R);CHKERRQ(ierr); >>> > ierr = MatDestroy(&R);CHKERRQ(ierr); >>> > } >>> > >>> > Is there anything special about "R" in this case? >>> > >>> > Nothing special, it is just an MAIJ matrix that performs the >>> interpolation. >>> > >>> > As I mentioned before, I know what the issue is, the coarse grid >>> operators are not set at all. From what I can tell, they should be set in >>> the routine PCSetup_MG in mg.c, but the code is never run unless pc->dm is >>> True. >>> > >>> > What code is not run unless pc->dm? >>> > >>> > I guess I'll just create the operators myself. Even if I modify the >>> code to make it enter the if statement at line 577 >>> > >>> > You called PCMGSetGalerkin() so the code below should run, >>> constructing the coarse levels. You could try setting a breakpoint in there >>> to see why it's not running. >>> > >>> > >>> > if (mg->galerkin == 1) { >>> > Mat B; >>> > /* currently only handle case where mat and pmat are the same on >>> coarser levels */ >>> > ierr = >>> KSPGetOperators(mglevels[n-1]->smoothd,&dA,&dB,&uflag);CHKERRQ(ierr); >>> > if (!pc->setupcalled) { >>> > for (i=n-2; i>-1; i--) { >>> > ierr = >>> MatPtAP(dB,mglevels[i+1]->interpolate,MAT_INITIAL_MATRIX,1.0,&B);CHKERRQ(ierr); >>> > ierr = >>> KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); >>> > if (i != n-2) {ierr = >>> PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} >>> > dB = B; >>> > } >>> > if (n > 1) {ierr = >>> PetscObjectDereference((PetscObject)dB);CHKERRQ(ierr);} >>> > } else { >>> > for (i=n-2; i>-1; i--) { >>> > ierr = >>> KSPGetOperators(mglevels[i]->smoothd,PETSC_NULL,&B,PETSC_NULL);CHKERRQ(ierr); >>> > ierr = >>> MatPtAP(dB,mglevels[i+1]->interpolate,MAT_REUSE_MATRIX,1.0,&B);CHKERRQ(ierr); >>> > ierr = >>> KSPSetOperators(mglevels[i]->smoothd,B,B,uflag);CHKERRQ(ierr); >>> > dB = B; >>> > } >>> > } >>> > >>> > which seems like where it should be done....the MatPTAP will not run. >>> > >>> > What do you mean "will not run"? >>> > >>> > I'm using a mpibaij matrix for 'dRdwPreT' and my restriction matrix >>> 'RL1' is a mpiaij matrix. 
When I try to run MatRART myself with the code: >>> > >>> > call MatRART(dRdwT, RL1, MAT_INITIAL_MATRIX, 1.0, newMat, PETScIERR) >>> > call EChk(PETScIerr,__FILE__,__LINE__) >>> > >>> > I receive error code 56: /* no support for requested operation */ >>> (this is a clean exit from PETSc) >>> > >>> > MatRARt (or MatPtAP, for that matter) is not implemented for BAIJ with >>> an AIJ projection. The AIJ projection will lose the block structure anyway, >>> so just use AIJ for both. >>> > >>> > >>> > In conclusion, I have two issues: 1) The coarse grid operators are not >>> being set automatically in mg.c. 2) Using MatRART() with a mpibaij matrix >>> and a mpiaij matrix is unsupported. >>> > >>> > Is there a PCMG example that doesn't use a DM to generate the >>> restriction/prolongation/matrix information I can have a look at? >>> > >>> > >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdiso at ustc.edu Sun Sep 2 01:07:53 2012 From: gdiso at ustc.edu (Gong Ding) Date: Sun, 2 Sep 2012 14:07:53 +0800 (CST) Subject: [petsc-users] ILU performance for 1d 2d and 3d Message-ID: <671858.308621346566073462.JavaMail.coremail@mail.ustc.edu> Hi all, In my piratical, I found that ILU preconditioner has different performance (acceleration of GMRES/BCGS) for physical problems (i.e. poisson's equation) with different dimensions. Of course, ILU is exact for 1D problem. And for 2D, ILU usually works very well. However, ILU is less efficient for 3D problems. I guess this can be explained by matrix bandwidth and fill-in, however, is there any theoretical paper on this topic? And what about the optimized k of ILU(k) preconditioner for 3D problem? Gong Ding From hzhang at mcs.anl.gov Sun Sep 2 14:12:40 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Sun, 2 Sep 2012 14:12:40 -0500 Subject: [petsc-users] ILU performance for 1d 2d and 3d In-Reply-To: <671858.308621346566073462.JavaMail.coremail@mail.ustc.edu> References: <671858.308621346566073462.JavaMail.coremail@mail.ustc.edu> Message-ID: Gong Ding : > Hi all, > In my piratical, I found that ILU preconditioner has different performance > (acceleration of GMRES/BCGS) for physical problems (i.e. poisson's > equation) with different dimensions. Of course, ILU is exact for 1D > problem. And for 2D, ILU usually works very well. However, ILU is less > efficient for 3D problems. > > I guess this can be explained by matrix bandwidth and fill-in, however, is > there any theoretical paper on this topic? > And what about the optimized k of ILU(k) preconditioner for 3D problem? > There is no theory about ILU, purely empirical. Increase level k would reduce number of total iterations at cost of execution time, not a good choice in general. For poisson's equation, you may try multigrid preconditoner. Hong > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.witkowski at tu-dresden.de Mon Sep 3 03:15:35 2012 From: thomas.witkowski at tu-dresden.de (Thomas Witkowski) Date: Mon, 03 Sep 2012 10:15:35 +0200 Subject: [petsc-users] Convert mat SEQAIJ to MPIAIJ Message-ID: <50446727.8070104@tu-dresden.de> In my FETI-DP code, each rank creates a SEQAIJ matrix that represents the discretization of the interior domain. Just for debugging, I would like to join these sequential matrices to one global MPIAIJ matrix. This matrix has no off diagonal nnzs and should be stored corresponding to the ranks unknowns, thus, first all rows of the first rank and so on. 
What's the most efficient way to do this? Is it possible to create this parallel matrix just as a view of the sequential ones, so without copying the data? Thanks for any advise. Thomas From hzhang at mcs.anl.gov Mon Sep 3 11:02:06 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Mon, 3 Sep 2012 11:02:06 -0500 Subject: [petsc-users] Convert mat SEQAIJ to MPIAIJ In-Reply-To: <50446727.8070104@tu-dresden.de> References: <50446727.8070104@tu-dresden.de> Message-ID: Thomas : > In my FETI-DP code, each rank creates a SEQAIJ matrix that represents the > discretization of the interior domain. Just for debugging, I would like to > join these sequential matrices to one global MPIAIJ matrix. This matrix has > no off diagonal nnzs and should be stored corresponding to the ranks > unknowns, thus, first all rows of the first rank and so on. What's the most > efficient way to do this? Is it possible to create this parallel matrix > just as a view of the sequential ones, so without copying the data? Thanks > for any advise. http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAIJConcatenateSeqAIJ.html Note: entries in seqaij matrices are copied into a mpiaij matrix without inter-processor communication. Use petsc-3.3 for this function. Hong > > > Thomas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.witkowski at tu-dresden.de Tue Sep 4 02:53:53 2012 From: thomas.witkowski at tu-dresden.de (Thomas Witkowski) Date: Tue, 04 Sep 2012 09:53:53 +0200 Subject: [petsc-users] Convert mat SEQAIJ to MPIAIJ In-Reply-To: References: <50446727.8070104@tu-dresden.de> Message-ID: <5045B391.3080308@tu-dresden.de> Am 03.09.2012 18:02, schrieb Hong Zhang: > > > Thomas : > > In my FETI-DP code, each rank creates a SEQAIJ matrix that > represents the discretization of the interior domain. Just for > debugging, I would like to join these sequential matrices to one > global MPIAIJ matrix. This matrix has no off diagonal nnzs and > should be stored corresponding to the ranks unknowns, thus, first > all rows of the first rank and so on. What's the most efficient > way to do this? Is it possible to create this parallel matrix just > as a view of the sequential ones, so without copying the data? > Thanks for any advise. > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAIJConcatenateSeqAIJ.html > > Note: entries in seqaij matrices are copied into a mpiaij matrix without > inter-processor communication. Use petsc-3.3 for this function. The function does not do what I expect. For example, if we have two mpi task and each contains one local square matrix with n rows, I want to create a global square matrix with 2n rows. This function create a non-square matrix of size 2n x n. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Tue Sep 4 04:08:39 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 04 Sep 2012 11:08:39 +0200 Subject: [petsc-users] Improve in speed/efficiency from partitioning in 1 direction to 3 directions Message-ID: <5045C516.6060002@gmail.com> Hi, My Fortran CFD code is currently partitioned in the z direction. Total grid size is around is 153x248x620. Hence depending on the no. of procs, the z direction 620 is partitioned. The grid size changes but the ratio is around there. Parititoning in 3 directions was initially too complex for me. However, it seems to be much simplified with the use of DM. 
However, there's still a lot of work to be done to make it working. I'm wondering how much improvement in speed/efficiency will I get, if I partition from 1 direction to 3 directions. Is it worth the effort? -- Yours sincerely, TAY wee-beng From aron.ahmadia at kaust.edu.sa Tue Sep 4 04:11:00 2012 From: aron.ahmadia at kaust.edu.sa (Aron Ahmadia) Date: Tue, 4 Sep 2012 10:11:00 +0100 Subject: [petsc-users] Improve in speed/efficiency from partitioning in 1 direction to 3 directions In-Reply-To: <5045C516.6060002@gmail.com> References: <5045C516.6060002@gmail.com> Message-ID: This doesn't strike me as a particularly large problem. I'm not sure it's worth doing unless you are going to be looking at more unknowns in the future. A On Tue, Sep 4, 2012 at 10:08 AM, TAY wee-beng wrote: > Hi, > > My Fortran CFD code is currently partitioned in the z direction. Total grid > size is around is 153x248x620. Hence depending on the no. of procs, the z > direction 620 is partitioned. The grid size changes but the ratio is around > there. > > Parititoning in 3 directions was initially too complex for me. However, it > seems to be much simplified with the use of DM. However, there's still a lot > of work to be done to make it working. > > I'm wondering how much improvement in speed/efficiency will I get, if I > partition from 1 direction to 3 directions. Is it worth the effort? > > -- > Yours sincerely, > > TAY wee-beng > -- ------------------------------ This message and its contents, including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email. From zonexo at gmail.com Tue Sep 4 05:17:13 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 04 Sep 2012 12:17:13 +0200 Subject: [petsc-users] Improve in speed/efficiency from partitioning in 1 direction to 3 directions In-Reply-To: References: <5045C516.6060002@gmail.com> Message-ID: <5045D529.8070703@gmail.com> On 4/9/2012 11:11 AM, Aron Ahmadia wrote: > This doesn't strike me as a particularly large problem. I'm not sure > it's worth doing unless you are going to be looking at more unknowns > in the future. > > A Hi Aron, It will increase to 500x500x1200 or more. In that case, do you think it's worth it? Thanks! > > On Tue, Sep 4, 2012 at 10:08 AM, TAY wee-beng wrote: >> Hi, >> >> My Fortran CFD code is currently partitioned in the z direction. Total grid >> size is around is 153x248x620. Hence depending on the no. of procs, the z >> direction 620 is partitioned. The grid size changes but the ratio is around >> there. >> >> Parititoning in 3 directions was initially too complex for me. However, it >> seems to be much simplified with the use of DM. However, there's still a lot >> of work to be done to make it working. >> >> I'm wondering how much improvement in speed/efficiency will I get, if I >> partition from 1 direction to 3 directions. Is it worth the effort? 
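As an illustration of the DM route mentioned above: with a DMDA the 3-D
processor decomposition can be left entirely to PETSc by passing PETSC_DECIDE
for the process grid. A minimal C sketch follows (petsc-3.3 API; the Fortran
interface mirrors it one-to-one; the 153x248x620 grid is the one quoted above,
everything else is arbitrary).

    #include <petscdmda.h>

    int main(int argc,char **argv)
    {
      DM             da;
      PetscInt       px,py,pz;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);CHKERRQ(ierr);
      /* 153x248x620 global grid, 1 dof per node, stencil width 1;
         PETSC_DECIDE lets PETSc pick the decomposition in all three directions */
      ierr = DMDACreate3d(PETSC_COMM_WORLD,
                          DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,
                          DMDA_STENCIL_STAR,153,248,620,
                          PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,
                          1,1,PETSC_NULL,PETSC_NULL,PETSC_NULL,&da);CHKERRQ(ierr);
      ierr = DMDAGetInfo(da,PETSC_NULL,PETSC_NULL,PETSC_NULL,PETSC_NULL,
                         &px,&py,&pz,PETSC_NULL,PETSC_NULL,PETSC_NULL,PETSC_NULL,
                         PETSC_NULL,PETSC_NULL);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD,"process grid: %D x %D x %D\n",px,py,pz);CHKERRQ(ierr);
      ierr = DMDestroy(&da);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return 0;
    }

DMDAGetInfo reports the process grid PETSc chose, and DMCreateGlobalVector()
and DMCreateMatrix() then return vectors and matrices already distributed
accordingly, which is where most of the bookkeeping savings come from.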
>> >> -- >> Yours sincerely, >> >> TAY wee-beng >> From aron.ahmadia at kaust.edu.sa Tue Sep 4 06:54:11 2012 From: aron.ahmadia at kaust.edu.sa (Aron Ahmadia) Date: Tue, 4 Sep 2012 12:54:11 +0100 Subject: [petsc-users] Improve in speed/efficiency from partitioning in 1 direction to 3 directions In-Reply-To: <5045D529.8070703@gmail.com> References: <5045C516.6060002@gmail.com> <5045D529.8070703@gmail.com> Message-ID: There are a lot of other factors at play here, including how much time you are spending working on the code, how far you're trying to scale the algorithm and how much communication dominates your problem for the architectures and algorithms you are using. Unless you are trying to take this code to thousands of processors I probably wouldn't worry about it. A On Tue, Sep 4, 2012 at 11:17 AM, TAY wee-beng wrote: > On 4/9/2012 11:11 AM, Aron Ahmadia wrote: >> >> This doesn't strike me as a particularly large problem. I'm not sure >> it's worth doing unless you are going to be looking at more unknowns >> in the future. >> >> A > > Hi Aron, > > It will increase to 500x500x1200 or more. In that case, do you think it's > worth it? > > Thanks! > >> >> On Tue, Sep 4, 2012 at 10:08 AM, TAY wee-beng wrote: >>> >>> Hi, >>> >>> My Fortran CFD code is currently partitioned in the z direction. Total >>> grid >>> size is around is 153x248x620. Hence depending on the no. of procs, the z >>> direction 620 is partitioned. The grid size changes but the ratio is >>> around >>> there. >>> >>> Parititoning in 3 directions was initially too complex for me. However, >>> it >>> seems to be much simplified with the use of DM. However, there's still a >>> lot >>> of work to be done to make it working. >>> >>> I'm wondering how much improvement in speed/efficiency will I get, if I >>> partition from 1 direction to 3 directions. Is it worth the effort? >>> >>> -- >>> Yours sincerely, >>> >>> TAY wee-beng >>> > -- ------------------------------ This message and its contents, including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email. From hzhang at mcs.anl.gov Tue Sep 4 09:20:24 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 4 Sep 2012 09:20:24 -0500 Subject: [petsc-users] Convert mat SEQAIJ to MPIAIJ In-Reply-To: <5045B391.3080308@tu-dresden.de> References: <50446727.8070104@tu-dresden.de> <5045B391.3080308@tu-dresden.de> Message-ID: Thomas: Do you mean create a mpiaij matrix of order 2nx2n from two local nxn matrices? You might have to use MatSetValues() to do this. Hong > > Thomas : > >> In my FETI-DP code, each rank creates a SEQAIJ matrix that represents the >> discretization of the interior domain. Just for debugging, I would like to >> join these sequential matrices to one global MPIAIJ matrix. This matrix has >> no off diagonal nnzs and should be stored corresponding to the ranks >> unknowns, thus, first all rows of the first rank and so on. What's the most >> efficient way to do this? Is it possible to create this parallel matrix >> just as a view of the sequential ones, so without copying the data? Thanks >> for any advise. 
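A rough C sketch of the MatSetValues() route Hong suggests above: create an
MPIAIJ matrix whose local row and column sizes match the sequential block on
each rank, then copy that block in with a shift to global numbering. Since
every inserted entry is locally owned, assembly involves no inter-process
communication. The function name is hypothetical; the PETSc calls are standard
petsc-3.3 API.

    static PetscErrorCode ConcatenateDiagonalBlocks(MPI_Comm comm,Mat localA,Mat *globalA)
    {
      Mat               B;
      PetscInt          n,i,j,ncols,rstart,rend,*dnnz,*gcols;
      const PetscInt    *cols;
      const PetscScalar *vals;
      PetscErrorCode    ierr;

      PetscFunctionBegin;
      ierr = MatGetSize(localA,&n,PETSC_NULL);CHKERRQ(ierr);      /* local block is n x n */
      ierr = PetscMalloc(n*sizeof(PetscInt),&dnnz);CHKERRQ(ierr);
      for (i=0; i<n; i++) {                                       /* preallocate from the local rows */
        ierr = MatGetRow(localA,i,&ncols,PETSC_NULL,PETSC_NULL);CHKERRQ(ierr);
        dnnz[i] = ncols;
        ierr = MatRestoreRow(localA,i,&ncols,PETSC_NULL,PETSC_NULL);CHKERRQ(ierr);
      }
      /* no off-diagonal entries: the global matrix is block diagonal by construction */
      ierr = MatCreateAIJ(comm,n,n,PETSC_DETERMINE,PETSC_DETERMINE,0,dnnz,0,PETSC_NULL,&B);CHKERRQ(ierr);
      ierr = MatGetOwnershipRange(B,&rstart,&rend);CHKERRQ(ierr);
      ierr = PetscMalloc(n*sizeof(PetscInt),&gcols);CHKERRQ(ierr);
      for (i=0; i<n; i++) {
        PetscInt row = rstart + i;
        ierr = MatGetRow(localA,i,&ncols,&cols,&vals);CHKERRQ(ierr);
        for (j=0; j<ncols; j++) gcols[j] = rstart + cols[j];      /* shift to global numbering */
        ierr = MatSetValues(B,1,&row,ncols,gcols,vals,INSERT_VALUES);CHKERRQ(ierr);
        ierr = MatRestoreRow(localA,i,&ncols,&cols,&vals);CHKERRQ(ierr);
      }
      ierr = PetscFree(gcols);CHKERRQ(ierr);
      ierr = PetscFree(dnnz);CHKERRQ(ierr);
      ierr = MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      *globalA = B;
      PetscFunctionReturn(0);
    }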
> > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAIJConcatenateSeqAIJ.html > > Note: entries in seqaij matrices are copied into a mpiaij matrix without > inter-processor communication. Use petsc-3.3 for this function. > > > The function does not do what I expect. For example, if we have two mpi > task and each contains one local square matrix with n rows, I want to > create a global square matrix with 2n rows. This function create a > non-square matrix of size 2n x n. > > Thomas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.witkowski at tu-dresden.de Tue Sep 4 09:42:43 2012 From: thomas.witkowski at tu-dresden.de (Thomas Witkowski) Date: Tue, 04 Sep 2012 16:42:43 +0200 Subject: [petsc-users] Convert mat SEQAIJ to MPIAIJ In-Reply-To: References: <50446727.8070104@tu-dresden.de> <5045B391.3080308@tu-dresden.de> Message-ID: <50461363.2080701@tu-dresden.de> Am 04.09.2012 16:20, schrieb Hong Zhang: > Thomas: > Do you mean create a mpiaij matrix of order 2nx2n from two local nxn > matrices? Yes. > You might have to use MatSetValues() to do this. I think, it will be faster and more efficient to create a MatShell to wrap around the local matrices. If anybody has some other ideas, please let me know. Thomas > > Hong > >> >> Thomas : >> >> In my FETI-DP code, each rank creates a SEQAIJ matrix that >> represents the discretization of the interior domain. Just >> for debugging, I would like to join these sequential matrices >> to one global MPIAIJ matrix. This matrix has no off diagonal >> nnzs and should be stored corresponding to the ranks >> unknowns, thus, first all rows of the first rank and so on. >> What's the most efficient way to do this? Is it possible to >> create this parallel matrix just as a view of the sequential >> ones, so without copying the data? Thanks for any advise. >> >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAIJConcatenateSeqAIJ.html >> >> Note: entries in seqaij matrices are copied into a mpiaij matrix >> without >> inter-processor communication. Use petsc-3.3 for this function. > > The function does not do what I expect. For example, if we have > two mpi task and each contains one local square matrix with n > rows, I want to create a global square matrix with 2n rows. This > function create a non-square matrix of size 2n x n. > > Thomas > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Sep 4 09:46:41 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 4 Sep 2012 09:46:41 -0500 Subject: [petsc-users] Convert mat SEQAIJ to MPIAIJ In-Reply-To: <50461363.2080701@tu-dresden.de> References: <50446727.8070104@tu-dresden.de> <5045B391.3080308@tu-dresden.de> <50461363.2080701@tu-dresden.de> Message-ID: On Tue, Sep 4, 2012 at 9:42 AM, Thomas Witkowski < thomas.witkowski at tu-dresden.de> wrote: > Am 04.09.2012 16:20, schrieb Hong Zhang: > > Thomas: > Do you mean create a mpiaij matrix of order 2nx2n from two local nxn > matrices? > > Yes. > > You might have to use MatSetValues() to do this. > > I think, it will be faster and more efficient to create a MatShell to wrap > around the local matrices. If anybody has some other ideas, please let me > know. > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATNEST.html Matt > Thomas > > > > Hong > >> >> Thomas : >> >>> In my FETI-DP code, each rank creates a SEQAIJ matrix that represents >>> the discretization of the interior domain. 
Just for debugging, I would like >>> to join these sequential matrices to one global MPIAIJ matrix. This matrix >>> has no off diagonal nnzs and should be stored corresponding to the ranks >>> unknowns, thus, first all rows of the first rank and so on. What's the most >>> efficient way to do this? Is it possible to create this parallel matrix >>> just as a view of the sequential ones, so without copying the data? Thanks >>> for any advise. >> >> >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAIJConcatenateSeqAIJ.html >> >> Note: entries in seqaij matrices are copied into a mpiaij matrix without >> inter-processor communication. Use petsc-3.3 for this function. >> >> >> The function does not do what I expect. For example, if we have two mpi >> task and each contains one local square matrix with n rows, I want to >> create a global square matrix with 2n rows. This function create a >> non-square matrix of size 2n x n. >> >> Thomas >> > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.witkowski at tu-dresden.de Tue Sep 4 09:52:31 2012 From: thomas.witkowski at tu-dresden.de (Thomas Witkowski) Date: Tue, 04 Sep 2012 16:52:31 +0200 Subject: [petsc-users] Convert mat SEQAIJ to MPIAIJ In-Reply-To: References: <50446727.8070104@tu-dresden.de> <5045B391.3080308@tu-dresden.de> <50461363.2080701@tu-dresden.de> Message-ID: <504615AF.2000005@tu-dresden.de> > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATNEST.html > As I wrote in my initial question, each rank contains one and only one seqaij matrix, which all should be joined to one global matrix such that each local matrix is the corresponding diagonal block of the mpiaij matrix. I think, this does not work with nested matrices? Thomas > >> Hong >> >>> >>> Thomas : >>> >>> In my FETI-DP code, each rank creates a SEQAIJ matrix >>> that represents the discretization of the interior >>> domain. Just for debugging, I would like to join these >>> sequential matrices to one global MPIAIJ matrix. This >>> matrix has no off diagonal nnzs and should be stored >>> corresponding to the ranks unknowns, thus, first all >>> rows of the first rank and so on. What's the most >>> efficient way to do this? Is it possible to create this >>> parallel matrix just as a view of the sequential ones, >>> so without copying the data? Thanks for any advise. >>> >>> >>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAIJConcatenateSeqAIJ.html >>> >>> Note: entries in seqaij matrices are copied into a mpiaij >>> matrix without >>> inter-processor communication. Use petsc-3.3 for this function. >> >> The function does not do what I expect. For example, if we >> have two mpi task and each contains one local square matrix >> with n rows, I want to create a global square matrix with 2n >> rows. This function create a non-square matrix of size 2n x n. >> >> Thomas >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Tue Sep 4 10:20:43 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 4 Sep 2012 10:20:43 -0500 Subject: [petsc-users] Convert mat SEQAIJ to MPIAIJ In-Reply-To: <504615AF.2000005@tu-dresden.de> References: <50446727.8070104@tu-dresden.de> <5045B391.3080308@tu-dresden.de> <50461363.2080701@tu-dresden.de> <504615AF.2000005@tu-dresden.de> Message-ID: On Tue, Sep 4, 2012 at 9:52 AM, Thomas Witkowski < thomas.witkowski at tu-dresden.de> wrote: > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATNEST.html > > As I wrote in my initial question, each rank contains one and only one > seqaij matrix, which all should be joined to one global matrix such that > each local matrix is the corresponding diagonal block of the mpiaij matrix. > I think, this does not work with nested matrices? > Why does this not work? I really think you are making this harder than it has to be. Matt > Thomas > > > >> >> Hong >> >>> >>> Thomas : >>> >>>> In my FETI-DP code, each rank creates a SEQAIJ matrix that represents >>>> the discretization of the interior domain. Just for debugging, I would like >>>> to join these sequential matrices to one global MPIAIJ matrix. This matrix >>>> has no off diagonal nnzs and should be stored corresponding to the ranks >>>> unknowns, thus, first all rows of the first rank and so on. What's the most >>>> efficient way to do this? Is it possible to create this parallel matrix >>>> just as a view of the sequential ones, so without copying the data? Thanks >>>> for any advise. >>> >>> >>> >>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAIJConcatenateSeqAIJ.html >>> >>> Note: entries in seqaij matrices are copied into a mpiaij matrix >>> without >>> inter-processor communication. Use petsc-3.3 for this function. >>> >>> >>> The function does not do what I expect. For example, if we have two >>> mpi task and each contains one local square matrix with n rows, I want to >>> create a global square matrix with 2n rows. This function create a >>> non-square matrix of size 2n x n. >>> >>> Thomas >>> >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.witkowski at tu-dresden.de Tue Sep 4 10:28:57 2012 From: thomas.witkowski at tu-dresden.de (Thomas Witkowski) Date: Tue, 04 Sep 2012 17:28:57 +0200 Subject: [petsc-users] Convert mat SEQAIJ to MPIAIJ In-Reply-To: References: <50446727.8070104@tu-dresden.de> <5045B391.3080308@tu-dresden.de> <50461363.2080701@tu-dresden.de> <504615AF.2000005@tu-dresden.de> Message-ID: <50461E39.7080706@tu-dresden.de> Am 04.09.2012 17:20, schrieb Matthew Knepley: > On Tue, Sep 4, 2012 at 9:52 AM, Thomas Witkowski > > wrote: > >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATNEST.html >> > As I wrote in my initial question, each rank contains one and only > one seqaij matrix, which all should be joined to one global matrix > such that each local matrix is the corresponding diagonal block of > the mpiaij matrix. I think, this does not work with nested matrices? > > > Why does this not work? I really think you are making this harder than > it has to be. > Mh, maybe I have an incomplete view of the possibilities how to use nested matrices. 
To become more specific: In the case of two mpi tasks, each containing one seqaij matrix, how to call MatCreateNest? Is this correct: Mat A; MatCreaeteNest(PETSC_COMM_WORLD, 2, PETSC_NULL, 2, PETSC_NULL, V ,&A); and V is defined on rank 0 as Mat V[2] = {seqMat, PETSC_NULL} ; and and rank 1 as Mat V[2] = {PETSC_NULL, seqMat}; Thomas > Matt > > Thomas > >> >>> Hong >>> >>>> >>>> Thomas : >>>> >>>> In my FETI-DP code, each rank creates a SEQAIJ >>>> matrix that represents the discretization of the >>>> interior domain. Just for debugging, I would like >>>> to join these sequential matrices to one global >>>> MPIAIJ matrix. This matrix has no off diagonal nnzs >>>> and should be stored corresponding to the ranks >>>> unknowns, thus, first all rows of the first rank >>>> and so on. What's the most efficient way to do >>>> this? Is it possible to create this parallel matrix >>>> just as a view of the sequential ones, so without >>>> copying the data? Thanks for any advise. >>>> >>>> >>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAIJConcatenateSeqAIJ.html >>>> >>>> Note: entries in seqaij matrices are copied into a >>>> mpiaij matrix without >>>> inter-processor communication. Use petsc-3.3 for this >>>> function. >>> >>> The function does not do what I expect. For example, if >>> we have two mpi task and each contains one local square >>> matrix with n rows, I want to create a global square >>> matrix with 2n rows. This function create a non-square >>> matrix of size 2n x n. >>> >>> Thomas >>> >>> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Sep 4 10:36:43 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 4 Sep 2012 10:36:43 -0500 Subject: [petsc-users] Convert mat SEQAIJ to MPIAIJ In-Reply-To: <50461E39.7080706@tu-dresden.de> References: <50446727.8070104@tu-dresden.de> <5045B391.3080308@tu-dresden.de> <50461363.2080701@tu-dresden.de> <504615AF.2000005@tu-dresden.de> <50461E39.7080706@tu-dresden.de> Message-ID: On Tue, Sep 4, 2012 at 10:28 AM, Thomas Witkowski < thomas.witkowski at tu-dresden.de> wrote: > Am 04.09.2012 17:20, schrieb Matthew Knepley: > > On Tue, Sep 4, 2012 at 9:52 AM, Thomas Witkowski < > thomas.witkowski at tu-dresden.de> wrote: > >> >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATNEST.html >> >> As I wrote in my initial question, each rank contains one and only one >> seqaij matrix, which all should be joined to one global matrix such that >> each local matrix is the corresponding diagonal block of the mpiaij matrix. >> I think, this does not work with nested matrices? >> > > Why does this not work? I really think you are making this harder than > it has to be. > > Mh, maybe I have an incomplete view of the possibilities how to use > nested matrices. > > To become more specific: In the case of two mpi tasks, each containing one > seqaij matrix, how to call MatCreateNest? Is this correct: > > Mat A; > MatCreaeteNest(PETSC_COMM_WORLD, 2, PETSC_NULL, 2, PETSC_NULL, V ,&A); > ^^^ This should be 1. 
Matt and V is defined on rank 0 as > > Mat V[2] = {seqMat, PETSC_NULL} ; > > and and rank 1 as > > Mat V[2] = {PETSC_NULL, seqMat}; > > > Thomas > > Matt > > >> Thomas >> >> >> >>> >>> Hong >>> >>>> >>>> Thomas : >>>> >>>>> In my FETI-DP code, each rank creates a SEQAIJ matrix that represents >>>>> the discretization of the interior domain. Just for debugging, I would like >>>>> to join these sequential matrices to one global MPIAIJ matrix. This matrix >>>>> has no off diagonal nnzs and should be stored corresponding to the ranks >>>>> unknowns, thus, first all rows of the first rank and so on. What's the most >>>>> efficient way to do this? Is it possible to create this parallel matrix >>>>> just as a view of the sequential ones, so without copying the data? Thanks >>>>> for any advise. >>>> >>>> >>>> >>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAIJConcatenateSeqAIJ.html >>>> >>>> Note: entries in seqaij matrices are copied into a mpiaij matrix >>>> without >>>> inter-processor communication. Use petsc-3.3 for this function. >>>> >>>> >>>> The function does not do what I expect. For example, if we have two >>>> mpi task and each contains one local square matrix with n rows, I want to >>>> create a global square matrix with 2n rows. This function create a >>>> non-square matrix of size 2n x n. >>>> >>>> Thomas >>>> >>> >>> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.witkowski at tu-dresden.de Tue Sep 4 11:23:55 2012 From: thomas.witkowski at tu-dresden.de (Thomas Witkowski) Date: Tue, 04 Sep 2012 18:23:55 +0200 Subject: [petsc-users] Convert mat SEQAIJ to MPIAIJ In-Reply-To: References: <50446727.8070104@tu-dresden.de> <5045B391.3080308@tu-dresden.de> <50461363.2080701@tu-dresden.de> <504615AF.2000005@tu-dresden.de> <50461E39.7080706@tu-dresden.de> Message-ID: <50462B1B.5010308@tu-dresden.de> Mat seqMat; MatCreateSeqAIJ(PETSC_COMM_SELF, 10, 10, 0, PETSC_NULL, &seqMat); Mat nestMat; MatCreateNest(PETSC_COMM_WORLD, 1, PETSC_NULL, 1, PETSC_NULL, &seqMat, &nestMat); Results in the following error message: [0]PETSC ERROR: PetscSplitOwnership() line 93 in /home/thomas/software/petsc-3.3-p0/src/sys/utils/psplit.c Sum of local lengths 20 does not equal global length 10, my local length 10 likely a call to VecSetSizes() or MatSetSizes() is wrong. See http://www.mcs.anl.gov/petsc/documentation/faq.html#split [1]PETSC ERROR: PetscSplitOwnership() line 93 in /home/thomas/software/petsc-3.3-p0/src/sys/utils/psplit.c Sum of local lengths 20 does not equal global length 10, my local length 10 likely a call to VecSetSizes() or MatSetSizes() is wrong. 
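The "Sum of local lengths 20 does not equal global length 10" message above is
consistent with the 1x1 nest taking its global size from the sequential block
(10 rows) while each of the two ranks contributes 10 local rows. For the stated
purpose -- debugging mat-vec products with a block-diagonal operator -- the
MatShell route Thomas mentions earlier might look roughly like the C sketch
below. All names are hypothetical; the sequential work vectors simply wrap the
local parts of the parallel vectors, so each rank applies only its own block.

    typedef struct {
      Mat Aloc;        /* this rank's sequential diagonal block */
      Vec xloc,yloc;   /* sequential work vectors of the local length */
    } BlockDiagCtx;

    static PetscErrorCode BlockDiagMult(Mat A,Vec x,Vec y)
    {
      BlockDiagCtx      *ctx;
      const PetscScalar *xa;
      PetscScalar       *ya;
      PetscErrorCode    ierr;

      PetscFunctionBegin;
      ierr = MatShellGetContext(A,(void**)&ctx);CHKERRQ(ierr);
      ierr = VecGetArrayRead(x,&xa);CHKERRQ(ierr);
      ierr = VecGetArray(y,&ya);CHKERRQ(ierr);
      ierr = VecPlaceArray(ctx->xloc,xa);CHKERRQ(ierr);
      ierr = VecPlaceArray(ctx->yloc,ya);CHKERRQ(ierr);
      ierr = MatMult(ctx->Aloc,ctx->xloc,ctx->yloc);CHKERRQ(ierr);  /* purely local action */
      ierr = VecResetArray(ctx->xloc);CHKERRQ(ierr);
      ierr = VecResetArray(ctx->yloc);CHKERRQ(ierr);
      ierr = VecRestoreArrayRead(x,&xa);CHKERRQ(ierr);
      ierr = VecRestoreArray(y,&ya);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

    /* setup fragment, assuming seqMat is the n x n local block and ctx outlives the shell */
    ierr = VecCreateSeqWithArray(PETSC_COMM_SELF,1,n,PETSC_NULL,&ctx->xloc);CHKERRQ(ierr);
    ierr = VecCreateSeqWithArray(PETSC_COMM_SELF,1,n,PETSC_NULL,&ctx->yloc);CHKERRQ(ierr);
    ctx->Aloc = seqMat;
    ierr = MatCreateShell(PETSC_COMM_WORLD,n,n,PETSC_DETERMINE,PETSC_DETERMINE,ctx,&S);CHKERRQ(ierr);
    ierr = MatShellSetOperation(S,MATOP_MULT,(void(*)(void))BlockDiagMult);CHKERRQ(ierr);

Such a shell only provides the action of the operator; anything that needs the
actual entries (factorizations, viewing values) still requires copying the
blocks into an assembled MPIAIJ as sketched earlier in the thread.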
Thomas Am 04.09.2012 17:36, schrieb Matthew Knepley: > On Tue, Sep 4, 2012 at 10:28 AM, Thomas Witkowski > > wrote: > > Am 04.09.2012 17:20, schrieb Matthew Knepley: >> On Tue, Sep 4, 2012 at 9:52 AM, Thomas Witkowski >> > > wrote: >> >>> >>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATNEST.html >>> >> As I wrote in my initial question, each rank contains one and >> only one seqaij matrix, which all should be joined to one >> global matrix such that each local matrix is the >> corresponding diagonal block of the mpiaij matrix. I think, >> this does not work with nested matrices? >> >> >> Why does this not work? I really think you are making this harder >> than it has to be. >> > Mh, maybe I have an incomplete view of the possibilities how to > use nested matrices. > > To become more specific: In the case of two mpi tasks, each > containing one seqaij matrix, how to call MatCreateNest? Is this > correct: > > Mat A; > MatCreaeteNest(PETSC_COMM_WORLD, 2, PETSC_NULL, 2, PETSC_NULL, V ,&A); > > ^^^ This should be 1. > > Matt > > and V is defined on rank 0 as > > Mat V[2] = {seqMat, PETSC_NULL} ; > > and and rank 1 as > > Mat V[2] = {PETSC_NULL, seqMat}; > > > Thomas >> Matt >> >> Thomas >> >>> >>>> Hong >>>> >>>>> >>>>> Thomas : >>>>> >>>>> In my FETI-DP code, each rank creates a SEQAIJ >>>>> matrix that represents the discretization of >>>>> the interior domain. Just for debugging, I >>>>> would like to join these sequential matrices >>>>> to one global MPIAIJ matrix. This matrix has >>>>> no off diagonal nnzs and should be stored >>>>> corresponding to the ranks unknowns, thus, >>>>> first all rows of the first rank and so on. >>>>> What's the most efficient way to do this? Is >>>>> it possible to create this parallel matrix >>>>> just as a view of the sequential ones, so >>>>> without copying the data? Thanks for any advise. >>>>> >>>>> >>>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAIJConcatenateSeqAIJ.html >>>>> >>>>> Note: entries in seqaij matrices are copied into a >>>>> mpiaij matrix without >>>>> inter-processor communication. Use petsc-3.3 for >>>>> this function. >>>> >>>> The function does not do what I expect. For >>>> example, if we have two mpi task and each contains >>>> one local square matrix with n rows, I want to >>>> create a global square matrix with 2n rows. This >>>> function create a non-square matrix of size 2n x n. >>>> >>>> Thomas >>>> >>>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From tisaac at ices.utexas.edu Tue Sep 4 17:17:00 2012 From: tisaac at ices.utexas.edu (Tobin Isaac) Date: Tue, 4 Sep 2012 17:17:00 -0500 Subject: [petsc-users] SEGV using asm + icc + empty processors Message-ID: <20120904221700.GA28320@ices.utexas.edu> I've set up PCMG using PCML with repartitioning, which gives some processors empty partitions on all by the finest levels. As smoothers I want to use block incomplete factorizations with one block per processor. 
My command line looks like this: -info -pc_ml_PrintLevel 10 -pc_ml_maxCoarseSize 128 -pc_ml_repartition -pc_ml_repartitionType Zoltan -pc_ml_Reusable -pc_ml_KeepAggInfo -pc_mg_cycle_type v -pc_mg_smoothup 1 -pc_mg_smoothdown 1 -mg_coarse_pc_type redundant -mg_coarse_redundant_pc_type cholesky -mg_levels_pc_type asm -on_error_attach_debugger -mg_levels_1_sub_pc_type icc I get a SEGV from the empty processors: [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [1]PETSC ERROR: likely location of problem given in stack below [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [1] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374780 [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [1]PETSC ERROR: INSTEAD the line number of the start of the function [1]PETSC ERROR: is given. [1]PETSC ERROR: [1] MatCholeskyFactorNumeric_SeqAIJ line 2094 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/mat/impls/aij/seq/aijfact.c [1]PETSC ERROR: [1] MatCholeskyFactorNumeric line 3019 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/mat/interface/matrix.c [1]PETSC ERROR: [1] PCSetup_ICC line 13 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/ksp/pc/impls/factor/icc/icc.c [1]PETSC ERROR: [1] PCSetUp line 810 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/ksp/pc/interface/precon.c [1]PETSC ERROR: [1] KSPSetUp line 182 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/ksp/ksp/interface/itfunc.c [1]PETSC ERROR: [1] PCSetUpOnBlocks_ASM line 416 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/ksp/pc/impls/asm/asm.c [1]PETSC ERROR: [1] PCSetUpOnBlocks line 861 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/ksp/pc/interface/precon.c [1]PETSC ERROR: [1] KSPSetUpOnBlocks line 151 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/ksp/ksp/interface/itfunc.c [1]PETSC ERROR: [1] KSPSolve line 351 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/ksp/ksp/interface/itfunc.c [1]PETSC ERROR: [1] PCMGMCycle_Private line 17 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/ksp/pc/impls/mg/mg.c [1]PETSC ERROR: [1] PCMGMCycle_Private line 17 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/ksp/pc/impls/mg/mg.c [1]PETSC ERROR: [1] PCApply_MG line 311 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/ksp/pc/impls/mg/mg.c [1]PETSC ERROR: [1] PCApply line 373 /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/ksp/pc/interface/precon.c [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file The debugger pulled up line 2129: #9 0x00007f72b65cb992 in MatCholeskyFactorNumeric_SeqAIJ (B=0x2f367b0, A=0x2798410, info=0x2796678) at /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/build/dev/src/mat/impls/aij/seq/aijfact.c:2129 2129 il[0] = 0; (gdb) list 2124 2125 do { 2126 sctx.newshift = PETSC_FALSE; 2127 2128 for (i=0; i From zonexo at gmail.com Wed Sep 5 01:55:19 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 05 Sep 2012 08:55:19 +0200 Subject: 
[petsc-users] Improve in speed/efficiency from partitioning in 1 direction to 3 directions In-Reply-To: References: <5045C516.6060002@gmail.com> <5045D529.8070703@gmail.com> Message-ID: <5046F757.1070708@gmail.com> On 4/9/2012 1:54 PM, Aron Ahmadia wrote: > There are a lot of other factors at play here, including how much time > you are spending working on the code, how far you're trying to scale > the algorithm and how much communication dominates your problem for > the architectures and algorithms you are using. Unless you are trying > to take this code to thousands of processors I probably wouldn't worry > about it. > > A Ok thanks for the suggestions Aron. I'll look at others parts for optimization. > > On Tue, Sep 4, 2012 at 11:17 AM, TAY wee-beng wrote: >> On 4/9/2012 11:11 AM, Aron Ahmadia wrote: >>> This doesn't strike me as a particularly large problem. I'm not sure >>> it's worth doing unless you are going to be looking at more unknowns >>> in the future. >>> >>> A >> Hi Aron, >> >> It will increase to 500x500x1200 or more. In that case, do you think it's >> worth it? >> >> Thanks! >> >>> On Tue, Sep 4, 2012 at 10:08 AM, TAY wee-beng wrote: >>>> Hi, >>>> >>>> My Fortran CFD code is currently partitioned in the z direction. Total >>>> grid >>>> size is around is 153x248x620. Hence depending on the no. of procs, the z >>>> direction 620 is partitioned. The grid size changes but the ratio is >>>> around >>>> there. >>>> >>>> Parititoning in 3 directions was initially too complex for me. However, >>>> it >>>> seems to be much simplified with the use of DM. However, there's still a >>>> lot >>>> of work to be done to make it working. >>>> >>>> I'm wondering how much improvement in speed/efficiency will I get, if I >>>> partition from 1 direction to 3 directions. Is it worth the effort? >>>> >>>> -- >>>> Yours sincerely, >>>> >>>> TAY wee-beng >>>> From mrosso at uci.edu Wed Sep 5 12:07:45 2012 From: mrosso at uci.edu (Michele Rosso) Date: Wed, 05 Sep 2012 10:07:45 -0700 Subject: [petsc-users] Default preconditioner Message-ID: <504786E1.7070007@uci.edu> Hi, if I do not specify any preconditioner, is one used by default? If so, which one? I am using the DMDA 3d context and petsc3.3-p2. Thank you, Michele -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Sep 5 12:09:34 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 5 Sep 2012 12:09:34 -0500 Subject: [petsc-users] Default preconditioner In-Reply-To: <504786E1.7070007@uci.edu> References: <504786E1.7070007@uci.edu> Message-ID: On Wed, Sep 5, 2012 at 12:07 PM, Michele Rosso wrote: > Hi, > > if I do not specify any preconditioner, is one used by default? > If so, which one? > I am using the DMDA 3d context and petsc3.3-p2. > You can see what you are using with -ksp_view. The default is ILU(0). Matt > Thank you, > Michele > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Wed Sep 5 12:24:34 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Wed, 5 Sep 2012 11:24:34 -0600 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian Message-ID: Dear All, I am trying to use the option '-snes_type test' to test my coded Jacobian. I tested with different snes options and it gives me different answers. 
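(For reference, -snes_type test compares the Jacobian routine registered with SNESSetJacobian() against one computed by finite differences; a minimal sketch of that registration, where FormFunction, FormJacobian, r, J, x and user stand in for the application's own routines and objects:)

    /* the application's residual and hand-coded Jacobian, with petsc-3.3 signatures */
    extern PetscErrorCode FormFunction(SNES,Vec,Vec,void*);
    extern PetscErrorCode FormJacobian(SNES,Vec,Mat*,Mat*,MatStructure*,void*);

    SNES snes;
    SNESCreate(PETSC_COMM_WORLD,&snes);
    SNESSetFunction(snes,r,FormFunction,&user);
    SNESSetJacobian(snes,J,J,FormJacobian,&user);  /* this is the routine -snes_type test checks */
    SNESSetFromOptions(snes);                      /* -snes_type test and -snes_test_display are read here */
    SNESSolve(snes,PETSC_NULL,x);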
I wonder if someone could give me a hint what is wrong with my settings. The command line looks like this: ./my-code-opt -i test.i -snes_type test -snes_test_display 1), when using 'petsc_option = -snes' in my input file, it says the Finite difference Jacobian is very different than the Hand-coded Jacobian 2), when using 'petsc_option = -snes_fd' in my input file, it says the Finite difference Jacobian is idential to the Hand-coded Jacobian 3), when using 'petsc_option = -snes_mf_operator', it gives error messages like: "Invalid argument! Cannot test with alternative preconditioner!" Thanks in advance. Ling From tisaac at ices.utexas.edu Wed Sep 5 12:25:24 2012 From: tisaac at ices.utexas.edu (Tobin Isaac) Date: Wed, 5 Sep 2012 12:25:24 -0500 Subject: [petsc-users] sor vs. asm + sor Message-ID: <20120905172524.GA10427@ices.utexas.edu> What's the difference between "-pc_type sor -pc_sor_local_symmetric" and "-pc_type asm -sub_pc_type sor -sub_pc_sor_local_symmetric"? Specifically, this converges in 30 iterations: ./ex49 -mx 100 -my 100 -elas_ksp_view -elas_ksp_monitor -elas_ksp_type cg -elas_pc_type gamg -elas_pc_gamg_verbose 10 -elas_pc_gamg_threshold 0. -elas_mg_coarse_pc_type cholesky -elas_pc_mg_smoothup 1 -elas_pc_mg_smoothdown 1 -elas_mg_levels_ksp_type richardson -elas_mg_levels_pc_type asm -elas_ml_levels_1_sub_pc_type sor While this iterates forever: ./ex49 -mx 100 -my 100 -elas_ksp_view -elas_ksp_monitor -elas_ksp_type cg -elas_pc_type gamg -elas_pc_gamg_verbose 10 -elas_pc_gamg_threshold 0. -elas_mg_coarse_pc_type cholesky -elas_pc_mg_smoothup 1 -elas_pc_mg_smoothdown 1 -elas_mg_levels_ksp_type richardson -elas_mg_levels_pc_type sor Thanks, Toby -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From knepley at gmail.com Wed Sep 5 12:35:20 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 5 Sep 2012 12:35:20 -0500 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: On Wed, Sep 5, 2012 at 12:24 PM, Zou (Non-US), Ling wrote: > Dear All, > > I am trying to use the option '-snes_type test' to test my coded > Jacobian. I tested with different snes options and it gives me > different answers. I wonder if someone could give me a hint what is > wrong with my settings. > > The command line looks like this: > ./my-code-opt -i test.i -snes_type test -snes_test_display > > 1), when using 'petsc_option = -snes' in my input file, it says the > Finite difference Jacobian is very different than the Hand-coded > Jacobian > This means your hand-coded routine is likely wrong. > 2), when using 'petsc_option = -snes_fd' in my input file, it says the > Finite difference Jacobian is idential to the Hand-coded Jacobian > snes_fd replaces your hand-coded routine with our FD routine, so it is of course the same as our FD routine. > 3), when using 'petsc_option = -snes_mf_operator', it gives error messages > like: > "Invalid argument! Cannot test with alternative preconditioner!" > This is inappropriate for testing. Matt > Thanks in advance. > > Ling > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Wed Sep 5 12:36:53 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 5 Sep 2012 12:36:53 -0500 Subject: [petsc-users] sor vs. asm + sor In-Reply-To: <20120905172524.GA10427@ices.utexas.edu> References: <20120905172524.GA10427@ices.utexas.edu> Message-ID: On Wed, Sep 5, 2012 at 12:25 PM, Tobin Isaac wrote: > > What's the difference between "-pc_type sor -pc_sor_local_symmetric" > and "-pc_type asm -sub_pc_type sor -sub_pc_sor_local_symmetric"? > The ASM version sticks a Krylov iteration in these by default. Matt > Specifically, this converges in 30 iterations: > > ./ex49 -mx 100 -my 100 -elas_ksp_view -elas_ksp_monitor -elas_ksp_type > cg -elas_pc_type gamg -elas_pc_gamg_verbose 10 -elas_pc_gamg_threshold > 0. -elas_mg_coarse_pc_type cholesky -elas_pc_mg_smoothup 1 > -elas_pc_mg_smoothdown 1 -elas_mg_levels_ksp_type richardson > -elas_mg_levels_pc_type asm -elas_ml_levels_1_sub_pc_type sor > > While this iterates forever: > > ./ex49 -mx 100 -my 100 -elas_ksp_view -elas_ksp_monitor -elas_ksp_type > cg -elas_pc_type gamg -elas_pc_gamg_verbose 10 -elas_pc_gamg_threshold > 0. -elas_mg_coarse_pc_type cholesky -elas_pc_mg_smoothup 1 > -elas_pc_mg_smoothdown 1 -elas_mg_levels_ksp_type richardson > -elas_mg_levels_pc_type sor > > Thanks, > Toby > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > > iEYEARECAAYFAlBHiwQACgkQk/TrNolnueXLKACeOtF4ahbmU+moglWAkBn01fCm > qxEAnjJvvg48mza8TKqgMRDQf6uEtoq+ > =i0vn > -----END PGP SIGNATURE----- > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Wed Sep 5 12:42:17 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Wed, 5 Sep 2012 11:42:17 -0600 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: Thanks Matt. This is really helpful. Best, Ling On 9/5/12, Matthew Knepley wrote: > On Wed, Sep 5, 2012 at 12:24 PM, Zou (Non-US), Ling > wrote: > >> Dear All, >> >> I am trying to use the option '-snes_type test' to test my coded >> Jacobian. I tested with different snes options and it gives me >> different answers. I wonder if someone could give me a hint what is >> wrong with my settings. >> >> The command line looks like this: >> ./my-code-opt -i test.i -snes_type test -snes_test_display >> >> 1), when using 'petsc_option = -snes' in my input file, it says the >> Finite difference Jacobian is very different than the Hand-coded >> Jacobian >> > > This means your hand-coded routine is likely wrong. > > >> 2), when using 'petsc_option = -snes_fd' in my input file, it says the >> Finite difference Jacobian is idential to the Hand-coded Jacobian >> > > snes_fd replaces your hand-coded routine with our FD routine, so it > is of course the same as our FD routine. > > >> 3), when using 'petsc_option = -snes_mf_operator', it gives error >> messages >> like: >> "Invalid argument! Cannot test with alternative preconditioner!" >> > > This is inappropriate for testing. > > Matt > > >> Thanks in advance. >> >> Ling >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > From tisaac at ices.utexas.edu Wed Sep 5 14:04:47 2012 From: tisaac at ices.utexas.edu (Tobin Isaac) Date: Wed, 5 Sep 2012 14:04:47 -0500 Subject: [petsc-users] sor vs. asm + sor Message-ID: <20120905190447.GA12621@ices.utexas.edu> > > > > What's the difference between "-pc_type sor -pc_sor_local_symmetric" > > and "-pc_type asm -sub_pc_type sor -sub_pc_sor_local_symmetric"? > > > > The ASM version sticks a Krylov iteration in these by default. > > Matt > > In this case the inner ksp is preonly so I think they ought to be the same, but I figured out that the issue is a bug in MatSOR_SeqAIJ_Inode, which I'll report to petsc-maint. Thanks, Toby > > Specifically, this converges in 30 iterations: > > > > ./ex49 -mx 100 -my 100 -elas_ksp_view -elas_ksp_monitor -elas_ksp_type > > cg -elas_pc_type gamg -elas_pc_gamg_verbose 10 -elas_pc_gamg_threshold > > 0. -elas_mg_coarse_pc_type cholesky -elas_pc_mg_smoothup 1 > > -elas_pc_mg_smoothdown 1 -elas_mg_levels_ksp_type richardson > > -elas_mg_levels_pc_type asm -elas_ml_levels_1_sub_pc_type sor > > > > While this iterates forever: > > > > ./ex49 -mx 100 -my 100 -elas_ksp_view -elas_ksp_monitor -elas_ksp_type > > cg -elas_pc_type gamg -elas_pc_gamg_verbose 10 -elas_pc_gamg_threshold > > 0. -elas_mg_coarse_pc_type cholesky -elas_pc_mg_smoothup 1 > > -elas_pc_mg_smoothdown 1 -elas_mg_levels_ksp_type richardson > > -elas_mg_levels_pc_type sor -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From gpau at lbl.gov Wed Sep 5 18:27:01 2012 From: gpau at lbl.gov (George Pau) Date: Wed, 5 Sep 2012 16:27:01 -0700 Subject: [petsc-users] performance of hmpi Message-ID: Hi, I am comparing between mpiexec -n 2 myexec -pc_type bjacobi -ksp_type preonly and mpiexec -n 1 myexec -pc_type hmpi -ksp_type preonly -hmpi_pc_type bjacobi -hmpi_ksp_type preonly -hmpi_spawn_size 2 My matrix size is 10500x10500 and it is a non-symmetric and not necessarily positive definite all the time. I am not seeing any improvement in performance in hmpi case going from 1 process to 2 processes (in fact there is a slight increase). With the bjacobi I do see some improvement. Is this expected due to perhaps communication costs? Under what situation will hmpi be useful? Should I be using hmpi only on much larger problems and with more processes? Thanks, George -- George Pau Earth Sciences Division Lawrence Berkeley National Laboratory One Cyclotron, MS 74-120 Berkeley, CA 94720 (510) 486-7196 gpau at lbl.gov http://esd.lbl.gov/about/staff/georgepau/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Sep 5 20:26:48 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 5 Sep 2012 20:26:48 -0500 Subject: [petsc-users] performance of hmpi In-Reply-To: References: Message-ID: <43A6D5C2-8EA9-4139-944B-86A132D54EC5@mcs.anl.gov> On Sep 5, 2012, at 6:27 PM, George Pau wrote: > Hi, > > I am comparing between > > mpiexec -n 2 myexec -pc_type bjacobi -ksp_type preonly This "solver" won't work. It does only one iteration of bjacobi and so won't solve the system. You should use it with -ksp_type gmres (for example) > > and > > mpiexec -n 1 myexec -pc_type hmpi -ksp_type preonly -hmpi_pc_type bjacobi -hmpi_ksp_type preonly -hmpi_spawn_size 2 > > My matrix size is 10500x10500 and it is a non-symmetric and not necessarily positive definite all the time. 
I am not seeing any improvement in performance in hmpi case going from 1 process to 2 processes (in fact there is a slight increase). With the bjacobi I do see some improvement. Is this expected due to perhaps communication costs? Under what situation will hmpi be useful? Should I be using hmpi only on much larger problems and with more processes? This is a small problem to run on two processes. You need larger problems to see a good improvement and it depends a lot on the machine as well. http://www.mcs.anl.gov/petsc/documentation/faq.html#computers Since the solver used in your two examples above (if you switch to gmres) is the same and they both run on two processes the solution time should be pretty similar. But the second case has an additional cost of reorganizing the sparse matrix from one process to two so will always be a bit slower. The hmpi is intended for when one's original code is either not parallel (or uses OpenMP) but allows the PETSc solver part of the code to be parallel (with MPI) so you can't really compare the two cases above. You should compare the -hmpi_spawn_size 2 case with the one MPI process case (and then go up to 3 and 4 processes with the spawn). You should also use -log_summary to get timing information for each part of the solution process so you can see where the time is being spent. Barry > > Thanks, > George > > -- > George Pau > Earth Sciences Division > Lawrence Berkeley National Laboratory > One Cyclotron, MS 74-120 > Berkeley, CA 94720 > > (510) 486-7196 > gpau at lbl.gov > http://esd.lbl.gov/about/staff/georgepau/ > From jedbrown at mcs.anl.gov Thu Sep 6 00:28:19 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 5 Sep 2012 22:28:19 -0700 Subject: [petsc-users] Improve in speed/efficiency from partitioning in 1 direction to 3 directions In-Reply-To: <5046F757.1070708@gmail.com> References: <5045C516.6060002@gmail.com> <5045D529.8070703@gmail.com> <5046F757.1070708@gmail.com> Message-ID: The preconditioner also matters. If you are using a one-level preconditioner, the 3D partition can cut the iteration count by a factor of approximately the cube root of the number of processes relative to the 1D partition. There are many factors that can limit the performance of your code. For performance questions, always send the output of -log_summary. On Tue, Sep 4, 2012 at 11:55 PM, TAY wee-beng wrote: > On 4/9/2012 1:54 PM, Aron Ahmadia wrote: > >> There are a lot of other factors at play here, including how much time >> you are spending working on the code, how far you're trying to scale >> the algorithm and how much communication dominates your problem for >> the architectures and algorithms you are using. Unless you are trying >> to take this code to thousands of processors I probably wouldn't worry >> about it. >> >> A >> > > Ok thanks for the suggestions Aron. I'll look at others parts for > optimization. > > >> On Tue, Sep 4, 2012 at 11:17 AM, TAY wee-beng wrote: >> >>> On 4/9/2012 11:11 AM, Aron Ahmadia wrote: >>> >>>> This doesn't strike me as a particularly large problem. I'm not sure >>>> it's worth doing unless you are going to be looking at more unknowns >>>> in the future. >>>> >>>> A >>>> >>> Hi Aron, >>> >>> It will increase to 500x500x1200 or more. In that case, do you think it's >>> worth it? >>> >>> Thanks! >>> >>> On Tue, Sep 4, 2012 at 10:08 AM, TAY wee-beng wrote: >>>> >>>>> Hi, >>>>> >>>>> My Fortran CFD code is currently partitioned in the z direction. Total >>>>> grid >>>>> size is around is 153x248x620. Hence depending on the no. 
of procs, >>>>> the z >>>>> direction 620 is partitioned. The grid size changes but the ratio is >>>>> around >>>>> there. >>>>> >>>>> Parititoning in 3 directions was initially too complex for me. However, >>>>> it >>>>> seems to be much simplified with the use of DM. However, there's still >>>>> a >>>>> lot >>>>> of work to be done to make it working. >>>>> >>>>> I'm wondering how much improvement in speed/efficiency will I get, if I >>>>> partition from 1 direction to 3 directions. Is it worth the effort? >>>>> >>>>> -- >>>>> Yours sincerely, >>>>> >>>>> TAY wee-beng >>>>> >>>>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Sep 6 01:06:12 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 5 Sep 2012 23:06:12 -0700 Subject: [petsc-users] Convert mat SEQAIJ to MPIAIJ In-Reply-To: <50462B1B.5010308@tu-dresden.de> References: <50446727.8070104@tu-dresden.de> <5045B391.3080308@tu-dresden.de> <50461363.2080701@tu-dresden.de> <504615AF.2000005@tu-dresden.de> <50461E39.7080706@tu-dresden.de> <50462B1B.5010308@tu-dresden.de> Message-ID: Matt is giving you some misguided advice. All the submatrices in a MatNest have to share the same communicator. This restriction could be lifted in the future, but that's how it is now. Why do you want to concatenate these sequential blocks? Using MatSetValues() is the standard answer and what I would recommend. You may also be able to create a new MPIAIJ matrix with the correct local sizes, then MatMPIAIJGetSeqAIJ(Ampi,&Aseq,&Bseq,&colmap); MatCopy(myblock,Aseq,DIFFERENT_NONZERO_PATTERN); you'll have to do a dummy assembly if you go this route. On Tue, Sep 4, 2012 at 9:23 AM, Thomas Witkowski < thomas.witkowski at tu-dresden.de> wrote: > Mat seqMat; > MatCreateSeqAIJ(PETSC_COMM_SELF, 10, 10, 0, PETSC_NULL, &seqMat); > > Mat nestMat; > MatCreateNest(PETSC_COMM_WORLD, 1, PETSC_NULL, 1, PETSC_NULL, &seqMat, > &nestMat); > > Results in the following error message: > > [0]PETSC ERROR: PetscSplitOwnership() line 93 in > /home/thomas/software/petsc-3.3-p0/src/sys/utils/psplit.c Sum of local > lengths 20 does not equal global length 10, my local length 10 > likely a call to VecSetSizes() or MatSetSizes() is wrong. > See http://www.mcs.anl.gov/petsc/documentation/faq.html#split > [1]PETSC ERROR: PetscSplitOwnership() line 93 in > /home/thomas/software/petsc-3.3-p0/src/sys/utils/psplit.c Sum of local > lengths 20 does not equal global length 10, my local length 10 > likely a call to VecSetSizes() or MatSetSizes() is wrong. > > > Thomas > > > Am 04.09.2012 17:36, schrieb Matthew Knepley: > > On Tue, Sep 4, 2012 at 10:28 AM, Thomas Witkowski < > thomas.witkowski at tu-dresden.de> wrote: > >> Am 04.09.2012 17:20, schrieb Matthew Knepley: >> >> On Tue, Sep 4, 2012 at 9:52 AM, Thomas Witkowski < >> thomas.witkowski at tu-dresden.de> wrote: >> >>> >>> >>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATNEST.html >>> >>> As I wrote in my initial question, each rank contains one and only >>> one seqaij matrix, which all should be joined to one global matrix such >>> that each local matrix is the corresponding diagonal block of the mpiaij >>> matrix. I think, this does not work with nested matrices? >>> >> >> Why does this not work? I really think you are making this harder than >> it has to be. >> >> Mh, maybe I have an incomplete view of the possibilities how to use >> nested matrices. 
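(A minimal sketch of the MatSetValues() route recommended above, in which each rank copies its local square SeqAIJ block, called seqMat here, into the matching diagonal block of a global MPIAIJ matrix; preallocation is taken from the local row lengths, and error checking is omitted for brevity. This is illustrative, not the poster's code.)

    Mat               gmat;
    PetscInt          n,rstart,i,j,ncols,grow,gcol,*d_nnz;
    const PetscInt    *cols;
    const PetscScalar *vals;

    MatGetSize(seqMat,&n,PETSC_NULL);
    /* preallocate the diagonal block from the local row lengths */
    PetscMalloc(n*sizeof(PetscInt),&d_nnz);
    for (i = 0; i < n; i++) {
      MatGetRow(seqMat,i,&ncols,PETSC_NULL,PETSC_NULL);
      d_nnz[i] = ncols;
      MatRestoreRow(seqMat,i,&ncols,PETSC_NULL,PETSC_NULL);
    }
    MatCreateAIJ(PETSC_COMM_WORLD,n,n,PETSC_DETERMINE,PETSC_DETERMINE,0,d_nnz,0,PETSC_NULL,&gmat);
    PetscFree(d_nnz);
    MatGetOwnershipRange(gmat,&rstart,PETSC_NULL);
    for (i = 0; i < n; i++) {
      grow = rstart + i;
      MatGetRow(seqMat,i,&ncols,&cols,&vals);
      for (j = 0; j < ncols; j++) {
        gcol = rstart + cols[j];  /* shift the columns as well, so the block lands on the diagonal */
        MatSetValues(gmat,1,&grow,1,&gcol,&vals[j],INSERT_VALUES);
      }
      MatRestoreRow(seqMat,i,&ncols,&cols,&vals);
    }
    MatAssemblyBegin(gmat,MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(gmat,MAT_FINAL_ASSEMBLY);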
>> >> To become more specific: In the case of two mpi tasks, each containing >> one seqaij matrix, how to call MatCreateNest? Is this correct: >> >> Mat A; >> MatCreaeteNest(PETSC_COMM_WORLD, 2, PETSC_NULL, 2, PETSC_NULL, V ,&A); >> > ^^^ This > should be 1. > > Matt > > and V is defined on rank 0 as >> >> Mat V[2] = {seqMat, PETSC_NULL} ; >> >> and and rank 1 as >> >> Mat V[2] = {PETSC_NULL, seqMat}; >> >> >> Thomas >> >> Matt >> >> >>> Thomas >>> >>> >>> >>>> >>>> Hong >>>> >>>>> >>>>> Thomas : >>>>> >>>>>> In my FETI-DP code, each rank creates a SEQAIJ matrix that represents >>>>>> the discretization of the interior domain. Just for debugging, I would like >>>>>> to join these sequential matrices to one global MPIAIJ matrix. This matrix >>>>>> has no off diagonal nnzs and should be stored corresponding to the ranks >>>>>> unknowns, thus, first all rows of the first rank and so on. What's the most >>>>>> efficient way to do this? Is it possible to create this parallel matrix >>>>>> just as a view of the sequential ones, so without copying the data? Thanks >>>>>> for any advise. >>>>> >>>>> >>>>> >>>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAIJConcatenateSeqAIJ.html >>>>> >>>>> Note: entries in seqaij matrices are copied into a mpiaij matrix >>>>> without >>>>> inter-processor communication. Use petsc-3.3 for this function. >>>>> >>>>> >>>>> The function does not do what I expect. For example, if we have two >>>>> mpi task and each contains one local square matrix with n rows, I want to >>>>> create a global square matrix with 2n rows. This function create a >>>>> non-square matrix of size 2n x n. >>>>> >>>>> Thomas >>>>> >>>> >>>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 0sabio00 at gmail.com Thu Sep 6 02:23:08 2012 From: 0sabio00 at gmail.com (0sabio00 at gmail.com) Date: Thu, 6 Sep 2012 02:23:08 -0500 Subject: [petsc-users] missing libpetsc.a Message-ID: I think i'm having a trouble linking to the petsc library. gcc main.cpp -I /path to /petsc-3.3-p3/include -I /path to/petsc-3.3-p3/arch-linux2-c-debug/include -L/path to /petsc-3.3-p3/arch-linux2-c-debug/lib/ -lpetsc-o test I get an error => DMGetMatrix not declared in this scope. and I realized that I can't find libpetsc.a or libpetsc.so anywhere. Where is the petsc library exactly? if I find the location of the petsc library and link it would it fix the problem? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From mirzadeh at gmail.com Thu Sep 6 04:00:23 2012 From: mirzadeh at gmail.com (Mohammad Mirzadeh) Date: Thu, 6 Sep 2012 02:00:23 -0700 Subject: [petsc-users] missing libpetsc.a In-Reply-To: References: Message-ID: You need to link to all of the libraries. At the petsc top directory issue: make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-c-debug getlinklibs That'll give you all the libraries that need to be linked to the executable On Thu, Sep 6, 2012 at 12:23 AM, <0sabio00 at gmail.com> wrote: > I think i'm having a trouble linking to the petsc library. 
> > gcc main.cpp -I /path to /petsc-3.3-p3/include -I /path > to/petsc-3.3-p3/arch-linux2-c-debug/include > -L/path to /petsc-3.3-p3/arch-linux2-c-debug/lib/ -lpetsc-o test > > I get an error => DMGetMatrix not declared in this scope. > > and I realized that I can't find libpetsc.a or libpetsc.so anywhere. > Where is the petsc library exactly? if I find the location of the petsc > library and link it would it fix the problem? > > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Sep 6 06:28:57 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 6 Sep 2012 04:28:57 -0700 Subject: [petsc-users] missing libpetsc.a In-Reply-To: References: Message-ID: The routine is spelled DMCreateMatrix() On Sep 6, 2012 12:23 AM, <0sabio00 at gmail.com> wrote: > I think i'm having a trouble linking to the petsc library. > > gcc main.cpp -I /path to /petsc-3.3-p3/include -I /path > to/petsc-3.3-p3/arch-linux2-c-debug/include > -L/path to /petsc-3.3-p3/arch-linux2-c-debug/lib/ -lpetsc-o test > > I get an error => DMGetMatrix not declared in this scope. > > and I realized that I can't find libpetsc.a or libpetsc.so anywhere. > Where is the petsc library exactly? if I find the location of the petsc > library and link it would it fix the problem? > > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 0sabio00 at gmail.com Thu Sep 6 10:29:06 2012 From: 0sabio00 at gmail.com (0sabio00 at gmail.com) Date: Thu, 6 Sep 2012 10:29:06 -0500 Subject: [petsc-users] missing libpetsc.a In-Reply-To: References: Message-ID: I tried make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-c-debug getlinklibs and it printed out -Wl, -rpath, /PETSC_DIR/PETSC_ARCH/lib -L//PETSC_DIR/PETSC_ARCH/lib -lpetsc -lpthread -Wl -rpath, /PETSC_DIR/PETSC_ARCH/lib -lflapack -lfblas -lm -L/usr/lib/gcc/i486-linux-gnu/4.4.5 -L/usr/lib/i486-linux-gru -lmpichf90 -lgfortran -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl I gor rid of all ',' and used the flag above and it had error with -Wl and -rpath flags gcc: unrecognized option -rpath cclplus: error : unrecognized command line option "-Wl" so i got rid of those two flags and compiled and I still get the same error with DMGetMatrix not being declared. I checked the location of all the libraries listed above. (with find -name lib'blahblah'.*) at "/" But There was no library for petsc "libpetsc.*" anywhere. Am I missing something?? Thank you On Thu, Sep 6, 2012 at 6:28 AM, Jed Brown wrote: > The routine is spelled DMCreateMatrix() > On Sep 6, 2012 12:23 AM, <0sabio00 at gmail.com> wrote: > >> I think i'm having a trouble linking to the petsc library. >> >> gcc main.cpp -I /path to /petsc-3.3-p3/include -I /path >> to/petsc-3.3-p3/arch-linux2-c-debug/include >> -L/path to /petsc-3.3-p3/arch-linux2-c-debug/lib/ -lpetsc-o test >> >> I get an error => DMGetMatrix not declared in this scope. >> >> and I realized that I can't find libpetsc.a or libpetsc.so anywhere. >> Where is the petsc library exactly? if I find the location of the petsc >> library and link it would it fix the problem? >> >> Thanks >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 0sabio00 at gmail.com Thu Sep 6 10:33:45 2012 From: 0sabio00 at gmail.com (0sabio00 at gmail.com) Date: Thu, 6 Sep 2012 10:33:45 -0500 Subject: [petsc-users] missing libpetsc.a In-Reply-To: References: Message-ID: With new flag and DMGetMatrix --> DMCreateMatrix. 
I get an error /usr/bin/ld : cannot find -lpetsc collect2: ld returned 1 exit status On Thu, Sep 6, 2012 at 10:29 AM, <0sabio00 at gmail.com> wrote: > I tried > > make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-c-debug getlinklibs > > and it printed out > > -Wl, -rpath, /PETSC_DIR/PETSC_ARCH/lib -L//PETSC_DIR/PETSC_ARCH/lib > -lpetsc -lpthread -Wl -rpath, /PETSC_DIR/PETSC_ARCH/lib -lflapack -lfblas > -lm -L/usr/lib/gcc/i486-linux-gnu/4.4.5 -L/usr/lib/i486-linux-gru > -lmpichf90 -lgfortran -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lpthread > -lgcc_s -ldl > > I gor rid of all ',' and used the flag above and it had error with -Wl and > -rpath flags > > gcc: unrecognized option -rpath > cclplus: error : unrecognized command line option "-Wl" > > so i got rid of those two flags and compiled and I still get the same > error with DMGetMatrix not being declared. > > I checked the location of all the libraries listed above. (with find -name > lib'blahblah'.*) at "/" > > But There was no library for petsc "libpetsc.*" anywhere. > > Am I missing something?? > > Thank you > > > > On Thu, Sep 6, 2012 at 6:28 AM, Jed Brown wrote: > >> The routine is spelled DMCreateMatrix() >> On Sep 6, 2012 12:23 AM, <0sabio00 at gmail.com> wrote: >> >>> I think i'm having a trouble linking to the petsc library. >>> >>> gcc main.cpp -I /path to /petsc-3.3-p3/include -I /path >>> to/petsc-3.3-p3/arch-linux2-c-debug/include >>> -L/path to /petsc-3.3-p3/arch-linux2-c-debug/lib/ -lpetsc-o test >>> >>> I get an error => DMGetMatrix not declared in this scope. >>> >>> and I realized that I can't find libpetsc.a or libpetsc.so anywhere. >>> Where is the petsc library exactly? if I find the location of the petsc >>> library and link it would it fix the problem? >>> >>> Thanks >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Sep 6 12:22:56 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 6 Sep 2012 12:22:56 -0500 Subject: [petsc-users] missing libpetsc.a In-Reply-To: References: Message-ID: On Thu, Sep 6, 2012 at 10:29 AM, <0sabio00 at gmail.com> wrote: > I tried > > make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-c-debug getlinklibs > > and it printed out > > -Wl, -rpath, /PETSC_DIR/PETSC_ARCH/lib -L//PETSC_DIR/PETSC_ARCH/lib > -lpetsc -lpthread -Wl -rpath, /PETSC_DIR/PETSC_ARCH/lib -lflapack -lfblas > -lm -L/usr/lib/gcc/i486-linux-gnu/4.4.5 -L/usr/lib/i486-linux-gru > -lmpichf90 -lgfortran -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lpthread > -lgcc_s -ldl > > I gor rid of all ',' and used the flag above and it had error with -Wl and > -rpath flags > Why did you do that? These commas are meaningful. See the gcc manual. Matt gcc: unrecognized option -rpath > cclplus: error : unrecognized command line option "-Wl" > > so i got rid of those two flags and compiled and I still get the same > error with DMGetMatrix not being declared. > > I checked the location of all the libraries listed above. (with find -name > lib'blahblah'.*) at "/" > > But There was no library for petsc "libpetsc.*" anywhere. > > Am I missing something?? > > Thank you > > > > On Thu, Sep 6, 2012 at 6:28 AM, Jed Brown wrote: > >> The routine is spelled DMCreateMatrix() >> On Sep 6, 2012 12:23 AM, <0sabio00 at gmail.com> wrote: >> >>> I think i'm having a trouble linking to the petsc library. 
>>> >>> gcc main.cpp -I /path to /petsc-3.3-p3/include -I /path >>> to/petsc-3.3-p3/arch-linux2-c-debug/include >>> -L/path to /petsc-3.3-p3/arch-linux2-c-debug/lib/ -lpetsc-o test >>> >>> I get an error => DMGetMatrix not declared in this scope. >>> >>> and I realized that I can't find libpetsc.a or libpetsc.so anywhere. >>> Where is the petsc library exactly? if I find the location of the petsc >>> library and link it would it fix the problem? >>> >>> Thanks >>> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Sep 6 12:25:09 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 6 Sep 2012 12:25:09 -0500 (CDT) Subject: [petsc-users] missing libpetsc.a In-Reply-To: References: Message-ID: Also first confirm 'make test' works for this build of petsc. Satish On Thu, 6 Sep 2012, Matthew Knepley wrote: > On Thu, Sep 6, 2012 at 10:29 AM, <0sabio00 at gmail.com> wrote: > > > I tried > > > > make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-c-debug getlinklibs > > > > and it printed out > > > > -Wl, -rpath, /PETSC_DIR/PETSC_ARCH/lib -L//PETSC_DIR/PETSC_ARCH/lib > > -lpetsc -lpthread -Wl -rpath, /PETSC_DIR/PETSC_ARCH/lib -lflapack -lfblas > > -lm -L/usr/lib/gcc/i486-linux-gnu/4.4.5 -L/usr/lib/i486-linux-gru > > -lmpichf90 -lgfortran -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lpthread > > -lgcc_s -ldl > > > > I gor rid of all ',' and used the flag above and it had error with -Wl and > > -rpath flags > > > > Why did you do that? These commas are meaningful. See the gcc manual. > > Matt > > gcc: unrecognized option -rpath > > cclplus: error : unrecognized command line option "-Wl" > > > > so i got rid of those two flags and compiled and I still get the same > > error with DMGetMatrix not being declared. > > > > I checked the location of all the libraries listed above. (with find -name > > lib'blahblah'.*) at "/" > > > > But There was no library for petsc "libpetsc.*" anywhere. > > > > Am I missing something?? > > > > Thank you > > > > > > > > On Thu, Sep 6, 2012 at 6:28 AM, Jed Brown wrote: > > > >> The routine is spelled DMCreateMatrix() > >> On Sep 6, 2012 12:23 AM, <0sabio00 at gmail.com> wrote: > >> > >>> I think i'm having a trouble linking to the petsc library. > >>> > >>> gcc main.cpp -I /path to /petsc-3.3-p3/include -I /path > >>> to/petsc-3.3-p3/arch-linux2-c-debug/include > >>> -L/path to /petsc-3.3-p3/arch-linux2-c-debug/lib/ -lpetsc-o test > >>> > >>> I get an error => DMGetMatrix not declared in this scope. > >>> > >>> and I realized that I can't find libpetsc.a or libpetsc.so anywhere. > >>> Where is the petsc library exactly? if I find the location of the petsc > >>> library and link it would it fix the problem? > >>> > >>> Thanks > >>> > >> > > > > > From caplanr at predsci.com Thu Sep 6 13:12:22 2012 From: caplanr at predsci.com (Ronald M. Caplan) Date: Thu, 6 Sep 2012 11:12:22 -0700 Subject: [petsc-users] AO versus DM for 3D CG solve Message-ID: Hi, I am trying to implement petsc into an existing (very large) FORTRAN code which splits a 3D grid in each dimension (so for a 100x100x100 problem on 4 cores, each core gets a 25x25x25 part of the grid). The default partitioning for petsc just assigns contiguous rows of the coefficient matrix to each core which is hampering performance. 
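(For illustration, a grid-aligned layout like the one just described, a 100x100x100 grid split into 25x25x25 blocks on a 4x4x4 process grid of 64 ranks, can be expressed directly with a DMDA; a minimal sketch in C, with one unknown per grid point, a star stencil of width 1, and the lx/ly/lz ownership arrays giving each process 25 points in every direction. The numbers match the example above; this is not the poster's Fortran code.)

    DM       da;
    Vec      x;
    Mat      A;
    PetscInt lx[4] = {25,25,25,25},ly[4] = {25,25,25,25},lz[4] = {25,25,25,25};

    DMDACreate3d(PETSC_COMM_WORLD,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,DMDA_BOUNDARY_NONE,
                 DMDA_STENCIL_STAR,100,100,100,4,4,4,1,1,lx,ly,lz,&da);
    /* vectors and matrices created from the DMDA inherit this 3D decomposition */
    DMCreateGlobalVector(da,&x);
    DMCreateMatrix(da,MATAIJ,&A);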
>From looking around I have found that there are AO routines and DM routines in petsc. What is the difference between using either of them for my problem? Should I be using DM to partition the petsc matrices/vectors so that they match the grid points on each node, or do I use AO routines to do that? Or do AO routines just map the default petsc partitioning indices to my problem's indices in which case I want to use DM? Thanks Ron Caplan -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Sep 6 13:32:35 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 6 Sep 2012 13:32:35 -0500 Subject: [petsc-users] AO versus DM for 3D CG solve In-Reply-To: References: Message-ID: On Thu, Sep 6, 2012 at 1:12 PM, Ronald M. Caplan wrote: > Hi, > > I am trying to implement petsc into an existing (very large) FORTRAN code > which splits a 3D grid in each dimension (so for a 100x100x100 problem on 4 > cores, each core gets a 25x25x25 part of the grid). > > The default partitioning for petsc just assigns contiguous rows of the > coefficient matrix to each core which is hampering performance. > > From looking around I have found that there are AO routines and DM > routines in petsc. > > What is the difference between using either of them for my problem? > AO is just a global renumbering. DMDA is actually a structured grid. > Should I be using DM to partition the petsc matrices/vectors so that they > match the grid points on each node, or do I use AO routines to do that? > Or do AO routines just map the default petsc partitioning indices to my > problem's indices in which case I want to use DM? > If you have a colocated discretization on the structured grid, meaning your unknowns are only on vertices (or only on cells), use the DMDA. It will make your life easier, and also allows you to match your by-hand grid partitioning if you want. Matt > > Thanks > > Ron Caplan > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From five9a2 at gmail.com Thu Sep 6 13:49:45 2012 From: five9a2 at gmail.com (Jed Brown) Date: Thu, 6 Sep 2012 11:49:45 -0700 Subject: [petsc-users] AO versus DM for 3D CG solve In-Reply-To: References: Message-ID: On Sep 6, 2012 1:12 PM, "Ronald M. Caplan" wrote: > > Hi, > > I am trying to implement petsc into an existing (very large) FORTRAN code which splits a 3D grid in each dimension (so for a 100x100x100 problem on 4 cores, each core gets a 25x25x25 part of the grid). Do you mean 4^3 = 64 cores? > > The default partitioning for petsc just assigns contiguous rows of the coefficient matrix to each core which is hampering performance. > > From looking around I have found that there are AO routines and DM routines in petsc. > > What is the difference between using either of them for my problem? > Should I be using DM to partition the petsc matrices/vectors so that they match the grid points on each node, or do I use AO routines to do that? > Or do AO routines just map the default petsc partitioning indices to my problem's indices in which case I want to use DM? > > Thanks > > Ron Caplan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhenglun.wei at gmail.com Thu Sep 6 17:38:09 2012 From: zhenglun.wei at gmail.com (Zhenglun (Alan) Wei) Date: Thu, 06 Sep 2012 17:38:09 -0500 Subject: [petsc-users] MPI Derived Data Type and Non Blocking MPI Send/Recieve Message-ID: <504925D1.4080005@gmail.com> Dear All, I hope you're having a nice day. I met a memory problem for MPI data communication. I guess here is a good place to ask this question since you guys are experts and may experienced the same problem before. I used the MPI derived data type (MPI_Type_contiguous, MPI_Type_vector and MPI_Type_indexed) to communicate data for a simulation of 3D problem. The communication is fine, as I checked every single data it sent and received. However, the problem is that the memory keeps increasing while communication. Therefore, I tested each of these three types. MPI_Type_contiguous does not have any problem; while MPI_Type_vector and MPI_Type_indexed have problem of memory accumulation. I tried to use MPI_Type_free, but it does not help. Have anyone experienced this problem before? Would this be related to the non-blocking MPI communication (MPI_Isend and MPI_Irecv). I have to use this non-blocking communication since the blocking communication is extremely slow when it has a lot of data involved in the communication. Is there any alternative way in PETSc that could do the similar work of MPI derived types? thanks, Alan From jedbrown at mcs.anl.gov Thu Sep 6 17:44:11 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 6 Sep 2012 15:44:11 -0700 Subject: [petsc-users] MPI Derived Data Type and Non Blocking MPI Send/Recieve In-Reply-To: <504925D1.4080005@gmail.com> References: <504925D1.4080005@gmail.com> Message-ID: Are you familiar with VecScatter? On Sep 6, 2012 5:38 PM, "Zhenglun (Alan) Wei" wrote: > Dear All, > I hope you're having a nice day. > I met a memory problem for MPI data communication. I guess here is a > good place to ask this question since you guys are experts and may > experienced the same problem before. > I used the MPI derived data type (MPI_Type_contiguous, > MPI_Type_vector and MPI_Type_indexed) to communicate data for a simulation > of 3D problem. The communication is fine, as I checked every single data it > sent and received. However, the problem is that the memory keeps increasing > while communication. Therefore, I tested each of these three types. > MPI_Type_contiguous does not have any problem; while MPI_Type_vector and > MPI_Type_indexed have problem of memory accumulation. I tried to use > MPI_Type_free, but it does not help. Have anyone experienced this problem > before? > Would this be related to the non-blocking MPI communication > (MPI_Isend and MPI_Irecv). I have to use this non-blocking communication > since the blocking communication is extremely slow when it has a lot of > data involved in the communication. > Is there any alternative way in PETSc that could do the similar work > of MPI derived types? > > thanks, > Alan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhenglun.wei at gmail.com Thu Sep 6 17:48:25 2012 From: zhenglun.wei at gmail.com (Zhenglun (Alan) Wei) Date: Thu, 06 Sep 2012 17:48:25 -0500 Subject: [petsc-users] MPI Derived Data Type and Non Blocking MPI Send/Recieve In-Reply-To: References: <504925D1.4080005@gmail.com> Message-ID: <50492839.9050702@gmail.com> Dear Dr. Brown, I'm not quite familiar with VecScatter. 
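(For reference, a minimal, self-contained example of what a VecScatter-driven exchange looks like; the vector sizes and index choices below are made up purely for illustration and are not taken from the poster's code.)

    #include <petscvec.h>

    int main(int argc,char **argv)
    {
      Vec         gvec,lvec;
      IS          from,to;
      VecScatter  scat;
      PetscInt    ghosts[2];
      PetscMPIInt rank,size;

      PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);
      MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
      MPI_Comm_size(PETSC_COMM_WORLD,&size);

      /* a distributed vector with 4 entries per process */
      VecCreateMPI(PETSC_COMM_WORLD,4,PETSC_DETERMINE,&gvec);
      VecSet(gvec,(PetscScalar)rank);

      /* each rank gathers its own first entry and the first entry of the next
         rank (wrapping around); these stand in for ghost values */
      ghosts[0] = 4*rank;
      ghosts[1] = 4*((rank+1)%size);
      VecCreateSeq(PETSC_COMM_SELF,2,&lvec);
      ISCreateGeneral(PETSC_COMM_SELF,2,ghosts,PETSC_COPY_VALUES,&from);
      ISCreateStride(PETSC_COMM_SELF,2,0,1,&to);
      VecScatterCreate(gvec,from,lvec,to,&scat);

      /* all message passing (non-blocking underneath) happens here; the same
         scatter can be reused every time step */
      VecScatterBegin(scat,gvec,lvec,INSERT_VALUES,SCATTER_FORWARD);
      VecScatterEnd(scat,gvec,lvec,INSERT_VALUES,SCATTER_FORWARD);

      VecScatterDestroy(&scat);
      ISDestroy(&from);
      ISDestroy(&to);
      VecDestroy(&gvec);
      VecDestroy(&lvec);
      PetscFinalize();
      return 0;
    }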
I just read its explanation; it seems requires that my data is stored as a form of vectors (is it the vector in PETSc?). However, my data are stored as arrays in C program. Is that any problem in MPI or it is likely a problem of my code? thanks, Alan On 9/6/2012 5:44 PM, Jed Brown wrote: > > Are you familiar with VecScatter? > > On Sep 6, 2012 5:38 PM, "Zhenglun (Alan) Wei" > wrote: > > Dear All, > I hope you're having a nice day. > I met a memory problem for MPI data communication. I guess > here is a good place to ask this question since you guys are > experts and may experienced the same problem before. > I used the MPI derived data type (MPI_Type_contiguous, > MPI_Type_vector and MPI_Type_indexed) to communicate data for a > simulation of 3D problem. The communication is fine, as I checked > every single data it sent and received. However, the problem is > that the memory keeps increasing while communication. Therefore, I > tested each of these three types. MPI_Type_contiguous does not > have any problem; while MPI_Type_vector and MPI_Type_indexed have > problem of memory accumulation. I tried to use MPI_Type_free, but > it does not help. Have anyone experienced this problem before? > Would this be related to the non-blocking MPI communication > (MPI_Isend and MPI_Irecv). I have to use this non-blocking > communication since the blocking communication is extremely slow > when it has a lot of data involved in the communication. > Is there any alternative way in PETSc that could do the > similar work of MPI derived types? > > thanks, > Alan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Sep 6 17:51:44 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 6 Sep 2012 17:51:44 -0500 Subject: [petsc-users] MPI Derived Data Type and Non Blocking MPI Send/Recieve In-Reply-To: <504925D1.4080005@gmail.com> References: <504925D1.4080005@gmail.com> Message-ID: <49FB1C87-0E19-4CD6-AF2E-E43633FA430D@mcs.anl.gov> First I would try another MPI implementation. Do you get the exact same problem with MPICH and OpenMPI? Then likely it is an issue with your code, if only one has problems then it is an MPI implementation issue. Barry On Sep 6, 2012, at 5:38 PM, "Zhenglun (Alan) Wei" wrote: > Dear All, > I hope you're having a nice day. > I met a memory problem for MPI data communication. I guess here is a good place to ask this question since you guys are experts and may experienced the same problem before. > I used the MPI derived data type (MPI_Type_contiguous, MPI_Type_vector and MPI_Type_indexed) to communicate data for a simulation of 3D problem. The communication is fine, as I checked every single data it sent and received. However, the problem is that the memory keeps increasing while communication. Therefore, I tested each of these three types. MPI_Type_contiguous does not have any problem; while MPI_Type_vector and MPI_Type_indexed have problem of memory accumulation. I tried to use MPI_Type_free, but it does not help. Have anyone experienced this problem before? > Would this be related to the non-blocking MPI communication (MPI_Isend and MPI_Irecv). I have to use this non-blocking communication since the blocking communication is extremely slow when it has a lot of data involved in the communication. > Is there any alternative way in PETSc that could do the similar work of MPI derived types? 
> > thanks, > Alan From zhenglun.wei at gmail.com Thu Sep 6 17:53:25 2012 From: zhenglun.wei at gmail.com (Zhenglun (Alan) Wei) Date: Thu, 06 Sep 2012 17:53:25 -0500 Subject: [petsc-users] MPI Derived Data Type and Non Blocking MPI Send/Recieve In-Reply-To: <49FB1C87-0E19-4CD6-AF2E-E43633FA430D@mcs.anl.gov> References: <504925D1.4080005@gmail.com> <49FB1C87-0E19-4CD6-AF2E-E43633FA430D@mcs.anl.gov> Message-ID: <50492965.2090302@gmail.com> Dear Dr. Smith, What I used is the MPICH. I will try OpenMPI to see if there is any problem. Thank you so much for the advice. cheers, Alan On 9/6/2012 5:51 PM, Barry Smith wrote: > First I would try another MPI implementation. Do you get the exact same problem with MPICH and OpenMPI? Then likely it is an issue with your code, if only one has problems then it is an MPI implementation issue. > > Barry > > > On Sep 6, 2012, at 5:38 PM, "Zhenglun (Alan) Wei" wrote: > >> Dear All, >> I hope you're having a nice day. >> I met a memory problem for MPI data communication. I guess here is a good place to ask this question since you guys are experts and may experienced the same problem before. >> I used the MPI derived data type (MPI_Type_contiguous, MPI_Type_vector and MPI_Type_indexed) to communicate data for a simulation of 3D problem. The communication is fine, as I checked every single data it sent and received. However, the problem is that the memory keeps increasing while communication. Therefore, I tested each of these three types. MPI_Type_contiguous does not have any problem; while MPI_Type_vector and MPI_Type_indexed have problem of memory accumulation. I tried to use MPI_Type_free, but it does not help. Have anyone experienced this problem before? >> Would this be related to the non-blocking MPI communication (MPI_Isend and MPI_Irecv). I have to use this non-blocking communication since the blocking communication is extremely slow when it has a lot of data involved in the communication. >> Is there any alternative way in PETSc that could do the similar work of MPI derived types? >> >> thanks, >> Alan From jedbrown at mcs.anl.gov Thu Sep 6 17:56:33 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 6 Sep 2012 15:56:33 -0700 Subject: [petsc-users] MPI Derived Data Type and Non Blocking MPI Send/Recieve In-Reply-To: <50492839.9050702@gmail.com> References: <504925D1.4080005@gmail.com> <50492839.9050702@gmail.com> Message-ID: Numeric data that the solver sees should be stored in Vecs. You can put other scalars in Vecs if you like. On Sep 6, 2012 5:48 PM, "Zhenglun (Alan) Wei" wrote: > Dear Dr. Brown, > I'm not quite familiar with VecScatter. I just read its explanation; > it seems requires that my data is stored as a form of vectors (is it the > vector in PETSc?). However, my data are stored as arrays in C program. > Is that any problem in MPI or it is likely a problem of my code? > > thanks, > Alan > On 9/6/2012 5:44 PM, Jed Brown wrote: > > Are you familiar with VecScatter? > On Sep 6, 2012 5:38 PM, "Zhenglun (Alan) Wei" > wrote: > >> Dear All, >> I hope you're having a nice day. >> I met a memory problem for MPI data communication. I guess here is a >> good place to ask this question since you guys are experts and may >> experienced the same problem before. >> I used the MPI derived data type (MPI_Type_contiguous, >> MPI_Type_vector and MPI_Type_indexed) to communicate data for a simulation >> of 3D problem. The communication is fine, as I checked every single data it >> sent and received. 
However, the problem is that the memory keeps increasing >> while communication. Therefore, I tested each of these three types. >> MPI_Type_contiguous does not have any problem; while MPI_Type_vector and >> MPI_Type_indexed have problem of memory accumulation. I tried to use >> MPI_Type_free, but it does not help. Have anyone experienced this problem >> before? >> Would this be related to the non-blocking MPI communication >> (MPI_Isend and MPI_Irecv). I have to use this non-blocking communication >> since the blocking communication is extremely slow when it has a lot of >> data involved in the communication. >> Is there any alternative way in PETSc that could do the similar work >> of MPI derived types? >> >> thanks, >> Alan >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 0sabio00 at gmail.com Thu Sep 6 18:22:24 2012 From: 0sabio00 at gmail.com (0sabio00 at gmail.com) Date: Thu, 6 Sep 2012 18:22:24 -0500 Subject: [petsc-users] missing libpetsc.a In-Reply-To: References: Message-ID: I tried with given flag and it still gave me an error ( cannot find -lpetsc ) When I did "make test" I got an error /usr/bin/ld : cannot find -lpetsc Thank you On Thu, Sep 6, 2012 at 12:25 PM, Satish Balay wrote: > Also first confirm 'make test' works for this build of petsc. > > Satish > > On Thu, 6 Sep 2012, Matthew Knepley wrote: > > > On Thu, Sep 6, 2012 at 10:29 AM, <0sabio00 at gmail.com> wrote: > > > > > I tried > > > > > > make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-c-debug getlinklibs > > > > > > and it printed out > > > > > > -Wl, -rpath, /PETSC_DIR/PETSC_ARCH/lib -L//PETSC_DIR/PETSC_ARCH/lib > > > -lpetsc -lpthread -Wl -rpath, /PETSC_DIR/PETSC_ARCH/lib -lflapack > -lfblas > > > -lm -L/usr/lib/gcc/i486-linux-gnu/4.4.5 -L/usr/lib/i486-linux-gru > > > -lmpichf90 -lgfortran -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lpthread > > > -lgcc_s -ldl > > > > > > I gor rid of all ',' and used the flag above and it had error with -Wl > and > > > -rpath flags > > > > > > > Why did you do that? These commas are meaningful. See the gcc manual. > > > > Matt > > > > gcc: unrecognized option -rpath > > > cclplus: error : unrecognized command line option "-Wl" > > > > > > so i got rid of those two flags and compiled and I still get the same > > > error with DMGetMatrix not being declared. > > > > > > I checked the location of all the libraries listed above. (with find > -name > > > lib'blahblah'.*) at "/" > > > > > > But There was no library for petsc "libpetsc.*" anywhere. > > > > > > Am I missing something?? > > > > > > Thank you > > > > > > > > > > > > On Thu, Sep 6, 2012 at 6:28 AM, Jed Brown > wrote: > > > > > >> The routine is spelled DMCreateMatrix() > > >> On Sep 6, 2012 12:23 AM, <0sabio00 at gmail.com> wrote: > > >> > > >>> I think i'm having a trouble linking to the petsc library. > > >>> > > >>> gcc main.cpp -I /path to /petsc-3.3-p3/include -I /path > > >>> to/petsc-3.3-p3/arch-linux2-c-debug/include > > >>> -L/path to /petsc-3.3-p3/arch-linux2-c-debug/lib/ -lpetsc-o test > > >>> > > >>> I get an error => DMGetMatrix not declared in this scope. > > >>> > > >>> and I realized that I can't find libpetsc.a or libpetsc.so anywhere. > > >>> Where is the petsc library exactly? if I find the location of the > petsc > > >>> library and link it would it fix the problem? > > >>> > > >>> Thanks > > >>> > > >> > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Thu Sep 6 19:27:21 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 6 Sep 2012 19:27:21 -0500 (CDT) Subject: [petsc-users] missing libpetsc.a In-Reply-To: References: Message-ID: Then you should first install PETSc and make sure the tests work. http://www.mcs.anl.gov/petsc/documentation/installation.html Satish On Thu, 6 Sep 2012, 0sabio00 at gmail.com wrote: > When I did "make test" I got an error > /usr/bin/ld : cannot find -lpetsc From caplanr at predsci.com Fri Sep 7 11:32:59 2012 From: caplanr at predsci.com (Ronald M. Caplan) Date: Fri, 7 Sep 2012 09:32:59 -0700 Subject: [petsc-users] AO versus DM for 3D CG solve In-Reply-To: References: Message-ID: Yes sorry I mean 64 cores. On Thu, Sep 6, 2012 at 11:49 AM, Jed Brown wrote: > > On Sep 6, 2012 1:12 PM, "Ronald M. Caplan" wrote: > > > > Hi, > > > > I am trying to implement petsc into an existing (very large) FORTRAN > code which splits a 3D grid in each dimension (so for a 100x100x100 problem > on 4 cores, each core gets a 25x25x25 part of the grid). > > Do you mean 4^3 = 64 cores? > > > > > The default partitioning for petsc just assigns contiguous rows of the > coefficient matrix to each core which is hampering performance. > > > > From looking around I have found that there are AO routines and DM > routines in petsc. > > > > What is the difference between using either of them for my problem? > > Should I be using DM to partition the petsc matrices/vectors so that > they match the grid points on each node, or do I use AO routines to do that? > > Or do AO routines just map the default petsc partitioning indices to my > problem's indices in which case I want to use DM? > > > > Thanks > > > > Ron Caplan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhenglun.wei at gmail.com Fri Sep 7 16:50:38 2012 From: zhenglun.wei at gmail.com (Zhenglun (Alan) Wei) Date: Fri, 07 Sep 2012 16:50:38 -0500 Subject: [petsc-users] MPI Derived Data Type and Non Blocking MPI Send/Recieve In-Reply-To: References: <504925D1.4080005@gmail.com> <50492839.9050702@gmail.com> Message-ID: <504A6C2E.7070107@gmail.com> Dear folks, I did more tests, since I want to make sure where I'm wrong. As Dr. Smith suggested, I tested my code using OpenMPI and MPICH. Both of them have the memory accumulation problem. Therefore, I suppose there is a bug in my code. I went into the code, and changed the non-blocking MPI communication to blocking one. The memory accumulation problem is just gone by itself. However, I have to change it back since the blocking MPI communication does not allow me to do massive data communication. Now, I'm searching for related topics on non-blocking MPI communication. Here I cut off those unrelated part of my code and attach the communication part here. Could anyone help me to briefly check if there is any obvious mistake I made in the program? After unzip the file, './AlanRun' will execute the program. I really appreciate your help :) Alan On 9/6/2012 5:56 PM, Jed Brown wrote: > > Numeric data that the solver sees should be stored in Vecs. You can > put other scalars in Vecs if you like. > > On Sep 6, 2012 5:48 PM, "Zhenglun (Alan) Wei" > wrote: > > Dear Dr. Brown, > I'm not quite familiar with VecScatter. I just read its > explanation; it seems requires that my data is stored as a form of > vectors (is it the vector in PETSc?). However, my data are stored > as arrays in C program. > Is that any problem in MPI or it is likely a problem of my code? 
> > thanks, > Alan > On 9/6/2012 5:44 PM, Jed Brown wrote: >> >> Are you familiar with VecScatter? >> >> On Sep 6, 2012 5:38 PM, "Zhenglun (Alan) Wei" >> > wrote: >> >> Dear All, >> I hope you're having a nice day. >> I met a memory problem for MPI data communication. I >> guess here is a good place to ask this question since you >> guys are experts and may experienced the same problem before. >> I used the MPI derived data type (MPI_Type_contiguous, >> MPI_Type_vector and MPI_Type_indexed) to communicate data for >> a simulation of 3D problem. The communication is fine, as I >> checked every single data it sent and received. However, the >> problem is that the memory keeps increasing while >> communication. Therefore, I tested each of these three types. >> MPI_Type_contiguous does not have any problem; while >> MPI_Type_vector and MPI_Type_indexed have problem of memory >> accumulation. I tried to use MPI_Type_free, but it does not >> help. Have anyone experienced this problem before? >> Would this be related to the non-blocking MPI >> communication (MPI_Isend and MPI_Irecv). I have to use this >> non-blocking communication since the blocking communication >> is extremely slow when it has a lot of data involved in the >> communication. >> Is there any alternative way in PETSc that could do the >> similar work of MPI derived types? >> >> thanks, >> Alan >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: V1.13_CommTEST.zip Type: application/x-zip-compressed Size: 6930 bytes Desc: not available URL: From fd.kong at siat.ac.cn Fri Sep 7 18:41:46 2012 From: fd.kong at siat.ac.cn (=?ISO-8859-1?B?ZmRrb25n?=) Date: Sat, 8 Sep 2012 07:41:46 +0800 Subject: [petsc-users] configure error with netcdf Message-ID: Hi all, There are anyone who know how to install netcdf with cross compile? I configured petsc with netcdf, but got error below: =============================================================================== Configuring PETSc to compile on your system =============================================================================== =============================================================================== ***** WARNING: MPI_DIR found in enviornment variables - ignoring ****** =============================================================================== =============================================================================== WARNING! Compiling PETSc with no debugging, this should only be done for timing and production runs. 
All development should be done when configured using --with-debugging=1 =============================================================================== =============================================================================== Configuring NetCDF; this may take several minutes =============================================================================== ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Error running make on NetCDF: Could not execute "cd /projects/fako9399/petsc-3.3-p3/externalpackages/netcdf-4.1.1 && AR="/usr/bin/ar" ARFLAGS="cr" CC="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicc" CFLAGS=" -fPIC -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." CXX="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicxx" CXXFLAGS=" -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O -fPIC -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." FC="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpif90" FCFLAGS=" -fPIC -Wno-unused-variable -O -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." F90="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpif90" ./configure --prefix=/projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt --libdir=/projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib --disable-dap --enable-shared": configure: netCDF 4.1.1 checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes configure: checking user options checking whether CXX is set to ''... no checking whether FC is set to ''... no checking whether F90 is set to ''... no checking whether a NCIO_MINBLOCKSIZE was specified... 256 checking whether udunits is to be built... no checking if fsync support is enabled... yes checking whether extra valgrind tests should be run... no checking whether libcf is to be built... no checking whether reading of HDF4 SD files is to be enabled... no checking whether to fetch some sample HDF4 files from Unidata ftp site to test HDF4 reading (requires wget)... no checking whether parallel I/O for classic and 64-bit offset files using parallel-netcdf is to be enabled... no checking whether a location for the parallel-netcdf library was specified... no checking whether new netCDF-4 C++ API is to be built... no checking whether extra example tests should be run... no checking whether parallel IO tests should be run... no checking whether a location for the HDF5 library was specified... checking whether a location for the ZLIB library was specified... checking whether a location for the SZLIB library was specified... checking whether a location for the HDF4 library was specified... checking whether a default chunk size in bytes was specified... 4194304 checking whether a maximum per-variable cache size for HDF5 was specified... 
67108864 checking whether a number of chunks for the default per-variable cache was specified... 10 checking whether a default file cache size for HDF5 was specified... 4194304 checking whether a default file cache maximum number of elements for HDF5 was specified... 1009 checking whether a default cache preemption for HDF5 was specified... 0.75 checking whether netCDF-4 logging is enabled... no checking whether a path for curl-config was specified... no checking whether a location for curl installation was specified... no configure: checking whether a location for curl-config is in PATH... yes checking whether DAP client is to be built... no checking whether dap remote testing should be enabled (default on)... no checking whether the time-consuming dap tests should be enabled (default off)... no checking whether a location for liboc was specified... no checking whether netCDF extra tests should be run (developers only)... no checking whether Fortran compiler(s) should be tested during configure... yes checking whether FFIO will be used... no checking whether to skip C++, F77, or F90 APIs if compiler is broken... yes checking whether only the C library is desired... no checking whether examples should be built... yes checking whether F77 API is desired... yes checking whether any Fortran API is desired... yes checking whether F90 API is desired... yes checking whether fortran type sizes should be checked... yes checking whether C API is desired... yes checking where to get netCDF C-only library for separate fortran libraries... checking whether CXX API is desired... yes checking whether v2 netCDF API should be built... yes checking whether the ncgen/ncdump should be built... yes checking whether large file (> 2GB) tests should be run... no checking whether benchmaks should be run (experimental)... no checking whether extreme numbers should be used in tests... yes checking where to put large temp files if large file tests are run... . checking whether a win32 DLL is desired... no checking whether separate fortran libs are desired... yes configure: finding C compiler checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... configure: error: in `/projects/fako9399/petsc-3.3-p3/externalpackages/netcdf-4.1.1': configure: error: cannot run C compiled programs. If you meant to cross compile, use `--host'. See `config.log' for more details. ******************************************************************************* The configure script is: ./configure --with-clanguage=cxx --with-shared-libraries=1 --with-dynamic-loading=1 --download-f-blas-lapack=1 --with-mpi-dir=$MPI_DIR --known-mpi-shared-libraries=0 --with-batch=1 --download-parmetis=1 --download-metis=1 --with-64-bit-indices=1 --download-netcdf=1 --download-exodusii=1 --with-debugging=no --download-ptscotch=1 I also attached the file configure.log ------------------ Fande Kong ShenZhen Institutes of Advanced Technology Chinese Academy of Sciences -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: configure.zip Type: application/octet-stream Size: 144771 bytes Desc: not available URL: From knepley at gmail.com Fri Sep 7 19:27:39 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 7 Sep 2012 19:27:39 -0500 Subject: [petsc-users] configure error with netcdf In-Reply-To: References: Message-ID: On Fri, Sep 7, 2012 at 6:41 PM, fdkong wrote: > Hi all, > > There are anyone who know how to install netcdf with cross compile? I > configured petsc with netcdf, but got error below: > Send externalpackages./netcdf*/config.log Thanks, Matt > > =============================================================================== > Configuring PETSc to compile on your system > > > =============================================================================== > =============================================================================== > > ***** WARNING: MPI_DIR > found in enviornment variables - ignoring ****** > > > =============================================================================== > > > =============================================================================== > > WARNING! > Compiling PETSc with no debugging, this should > > only be done for timing and > production runs. All development should > > be done when configured using --with-debugging=1 > > > =============================================================================== > > > =============================================================================== > > Configuring NetCDF; this > may take several minutes > > > =============================================================================== > > > > > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > > ------------------------------------------------------------------------------- > Error running make on NetCDF: Could not execute "cd > /projects/fako9399/petsc-3.3-p3/externalpackages/netcdf-4.1.1 && > AR="/usr/bin/ar" ARFLAGS="cr" > CC="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicc" CFLAGS=" > -fPIC -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O > -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > CXX="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicxx" > CXXFLAGS=" -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O > -fPIC -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > FC="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpif90" FCFLAGS=" > -fPIC -Wno-unused-variable -O > -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > F90="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpif90" > ./configure --prefix=/projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt > --libdir=/projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib > --disable-dap --enable-shared": > configure: netCDF 4.1.1 > checking build system type... x86_64-unknown-linux-gnu > checking host system type... x86_64-unknown-linux-gnu > checking for a BSD-compatible install... /usr/bin/install -c > checking whether build environment is sane... yes > checking for a thread-safe mkdir -p... /bin/mkdir -p > checking for gawk... gawk > checking whether make sets $(MAKE)... yes > configure: checking user options > checking whether CXX is set to ''... no > checking whether FC is set to ''... 
> [...]
yes > checking for C compiler default output file name... a.out > checking for suffix of executables... > checking whether we are cross compiling... configure: error: in > `/projects/fako9399/petsc-3.3-p3/externalpackages/netcdf-4.1.1': > configure: error: cannot run C compiled programs. > If you meant to cross compile, use `--host'. > See `config.log' for more details. > > ******************************************************************************* > > The configure script is: > > ./configure --with-clanguage=cxx --with-shared-libraries=1 > --with-dynamic-loading=1 --download-f-blas-lapack=1 --with-mpi-dir=$MPI_DIR > --known-mpi-shared-libraries=0 --with-batch=1 --download-parmetis=1 > --download-metis=1 --with-64-bit-indices=1 --download-netcdf=1 > --download-exodusii=1 --with-debugging=no --download-ptscotch=1 > > I also attached the file configure.log > > ------------------ > Fande Kong > ShenZhen Institutes of Advanced Technology > Chinese Academy of Sciences > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Sep 7 21:20:26 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 7 Sep 2012 21:20:26 -0500 Subject: [petsc-users] MPI Derived Data Type and Non Blocking MPI Send/Recieve In-Reply-To: <504A6C2E.7070107@gmail.com> References: <504925D1.4080005@gmail.com> <50492839.9050702@gmail.com> <504A6C2E.7070107@gmail.com> Message-ID: if(!localBC.tBC) { MPI_Isend(&_TestV[0][_Index.tcEnd-1][0], 1, columntype, NbrRank.t, SendTag.T1st, PETSC_COMM_WORLD, &request); MPI_Isend(&_TestV[0][_Index.tcEnd][0], 1, columntype, NbrRank.t, SendTag.T2nd, PETSC_COMM_WORLD, &request); } if(!localBC.bBC) { MPI_Isend(&_TestV[0][_Index.bcStr+1][0], 1, columntype, NbrRank.b, SendTag.B1st, PETSC_COMM_WORLD, &request); MPI_Isend(&_TestV[0][_Index.bcStr][0], 1, columntype, NbrRank.b, SendTag.B2nd, PETSC_COMM_WORLD, &request); } MPI_Barrier(PETSC_COMM_WORLD); printf("Rank = %d finished sending!!!\n", rank); if(!localBC.tBC) { MPI_Irecv(&_TestV[0][_Index.tbStr+1][0], 1, columntype, NbrRank.t, RecvTag.T2nd, PETSC_COMM_WORLD, &request); MPI_Irecv(&_TestV[0][_Index.tbStr][0], 1, columntype, NbrRank.t, RecvTag.T1st, PETSC_COMM_WORLD, &request); } if(!localBC.bBC) { MPI_Irecv(&_TestV[0][_Index.bbEnd-1][0], 1, columntype, NbrRank.b, RecvTag.B2nd, PETSC_COMM_WORLD, &request); MPI_Irecv(&_TestV[0][_Index.bbEnd][0], 1, columntype, NbrRank.b, RecvTag.B1st, PETSC_COMM_WORLD, &request); } MPI_Wait(&request, &status); You are creating far more requests than you are waiting on. You need to keep track of *every* request and eventually wait on all of them. It is generally better for performance to post the receives first, then post the sends, then MPI_Waitall() on all the requests. On Fri, Sep 7, 2012 at 4:50 PM, Zhenglun (Alan) Wei wrote: > Dear folks, > I did more tests, since I want to make sure where I'm wrong. > As Dr. Smith suggested, I tested my code using OpenMPI and MPICH. > Both of them have the memory accumulation problem. Therefore, I suppose > there is a bug in my code. I went into the code, and changed the > non-blocking MPI communication to blocking one. The memory accumulation > problem is just gone by itself. However, I have to change it back since the > blocking MPI communication does not allow me to do massive data > communication. 
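To make the request bookkeeping concrete, here is a minimal sketch of the pattern described above (receives posted first, one request slot per call, a single MPI_Waitall at the end). The function and variable names -- exchange_ghosts, nbr_lo, nbr_hi, the buffers -- are illustrative placeholders and do not come from Alan's code; it assumes an already-committed derived type columntype.

    #include <mpi.h>

    /* Halo-exchange sketch: every MPI_Isend/MPI_Irecv gets its own slot in
     * reqs[], receives are posted before the matching sends, and a single
     * MPI_Waitall() completes all of them before the buffers are touched
     * again.  Passing MPI_PROC_NULL as a neighbor rank turns the calls into
     * no-ops, which is one way to handle physical boundaries. */
    static void exchange_ghosts(double *recv_lo, double *recv_hi,
                                double *send_lo, double *send_hi,
                                int nbr_lo, int nbr_hi,
                                MPI_Datatype columntype, MPI_Comm comm)
    {
      MPI_Request reqs[4];
      int         nreq = 0;

      /* Post all receives first ... */
      MPI_Irecv(recv_lo, 1, columntype, nbr_lo, 0, comm, &reqs[nreq++]);
      MPI_Irecv(recv_hi, 1, columntype, nbr_hi, 1, comm, &reqs[nreq++]);
      /* ... then the matching sends ... */
      MPI_Isend(send_hi, 1, columntype, nbr_hi, 0, comm, &reqs[nreq++]);
      MPI_Isend(send_lo, 1, columntype, nbr_lo, 1, comm, &reqs[nreq++]);
      /* ... and wait on every request that was posted. */
      MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);
    }

If the derived type is rebuilt every time step (MPI_Type_vector followed by MPI_Type_commit), calling MPI_Type_free on it after the exchange completes keeps committed types from piling up, which is one plausible source of the memory growth reported earlier in this thread.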
Now, I'm searching for related topics on non-blocking MPI > communication. > Here I cut off those unrelated part of my code and attach the > communication part here. Could anyone help me to briefly check if there is > any obvious mistake I made in the program? After unzip the file, > './AlanRun' will execute the program. > > I really appreciate your help :) > Alan > > > > On 9/6/2012 5:56 PM, Jed Brown wrote: > > Numeric data that the solver sees should be stored in Vecs. You can put > other scalars in Vecs if you like. > On Sep 6, 2012 5:48 PM, "Zhenglun (Alan) Wei" > wrote: > >> Dear Dr. Brown, >> I'm not quite familiar with VecScatter. I just read its explanation; >> it seems requires that my data is stored as a form of vectors (is it the >> vector in PETSc?). However, my data are stored as arrays in C program. >> Is that any problem in MPI or it is likely a problem of my code? >> >> thanks, >> Alan >> On 9/6/2012 5:44 PM, Jed Brown wrote: >> >> Are you familiar with VecScatter? >> On Sep 6, 2012 5:38 PM, "Zhenglun (Alan) Wei" >> wrote: >> >>> Dear All, >>> I hope you're having a nice day. >>> I met a memory problem for MPI data communication. I guess here is >>> a good place to ask this question since you guys are experts and may >>> experienced the same problem before. >>> I used the MPI derived data type (MPI_Type_contiguous, >>> MPI_Type_vector and MPI_Type_indexed) to communicate data for a simulation >>> of 3D problem. The communication is fine, as I checked every single data it >>> sent and received. However, the problem is that the memory keeps increasing >>> while communication. Therefore, I tested each of these three types. >>> MPI_Type_contiguous does not have any problem; while MPI_Type_vector and >>> MPI_Type_indexed have problem of memory accumulation. I tried to use >>> MPI_Type_free, but it does not help. Have anyone experienced this problem >>> before? >>> Would this be related to the non-blocking MPI communication >>> (MPI_Isend and MPI_Irecv). I have to use this non-blocking communication >>> since the blocking communication is extremely slow when it has a lot of >>> data involved in the communication. >>> Is there any alternative way in PETSc that could do the similar >>> work of MPI derived types? >>> >>> thanks, >>> Alan >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Sep 7 21:25:59 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 7 Sep 2012 21:25:59 -0500 Subject: [petsc-users] AO versus DM for 3D CG solve In-Reply-To: References: Message-ID: On Fri, Sep 7, 2012 at 9:32 AM, Ronald M. Caplan wrote: > Yes sorry I mean 64 cores. Okay. As Matt says, DMDA is generally what you want for structured grids. AO is just renumbering and best avoided unless you really need it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fd.kong at siat.ac.cn Fri Sep 7 23:28:14 2012 From: fd.kong at siat.ac.cn (=?ISO-8859-1?B?ZmRrb25n?=) Date: Sat, 8 Sep 2012 12:28:14 +0800 Subject: [petsc-users] configure error with netcdf Message-ID: >> Hi all, >> >> There are anyone who know how to install netcdf with cross compile? I >> configured petsc with netcdf, but got error below: >> >Send externalpackages./netcdf*/config.log > Thanks, > Matt I attached the file config.log. 
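Returning briefly to the AO-versus-DM question earlier in this digest: a minimal sketch of the DMDA route Jed recommends is shown below. It is written against the petsc-3.3-era C interface (equivalent Fortran calls exist as well), uses the 100x100x100 grid from Ron's description, and omits error checking (CHKERRQ) for brevity; it is an illustration, not a drop-in replacement for the existing code.

    /* Sketch: let PETSc split a 100x100x100 structured grid into one box per
     * rank (e.g. 25x25x25 each when a 4x4x4 process grid is used on 64 cores)
     * and manage ghost updates, so no AO renumbering is needed. */
    #include <petscdmda.h>

    int main(int argc, char **argv)
    {
      DM  da;
      Vec gvec, lvec;

      PetscInitialize(&argc, &argv, NULL, NULL);

      DMDACreate3d(PETSC_COMM_WORLD,
                   DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,
                   DMDA_STENCIL_STAR,
                   100, 100, 100,                            /* global grid   */
                   PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, /* process grid  */
                   1,                                        /* dof per node  */
                   1,                                        /* stencil width */
                   NULL, NULL, NULL, &da);

      DMCreateGlobalVector(da, &gvec);  /* box-partitioned solver data */
      DMCreateLocalVector(da, &lvec);   /* same data plus ghost points */
      DMGlobalToLocalBegin(da, gvec, INSERT_VALUES, lvec);
      DMGlobalToLocalEnd(da, gvec, INSERT_VALUES, lvec);

      /* A matrix from DMCreateMatrix(da, MATAIJ, &A) inherits the same
       * box partitioning, matching the grid decomposition. */

      VecDestroy(&lvec);
      VecDestroy(&gvec);
      DMDestroy(&da);
      PetscFinalize();
      return 0;
    }

Each rank can query its owned index range with DMDAGetCorners(), which is how the application's existing per-core 25x25x25 loops would map onto the PETSc decomposition.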
>> [...]
> >See `config.log' for more details. >> >> ******************************************************************************* >> > >The configure script is: >> > > ./configure --with-clanguage=cxx --with-shared-libraries=1 >> --with-dynamic-loading=1 --download-f-blas-lapack=1 --with-mpi-dir=$MPI_DIR >> --known-mpi-shared-libraries=0 --with-batch=1 --download-parmetis=1 > >--download-metis=1 --with-64-bit-indices=1 --download-netcdf=1 > >--download-exodusii=1 --with-debugging=no --download-ptscotch=1 >> >> I also attached the file configure.log >> > >------------------ > >Fande Kong >> ShenZhen Institutes of Advanced Technology > >Chinese Academy of Sciences >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: config.zip Type: application/octet-stream Size: 5196 bytes Desc: not available URL: From knepley at gmail.com Fri Sep 7 23:30:42 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 7 Sep 2012 23:30:42 -0500 Subject: [petsc-users] configure error with netcdf In-Reply-To: References: Message-ID: On Fri, Sep 7, 2012 at 11:28 PM, fdkong wrote: > >> Hi all, > >> > >> There are anyone who know how to install netcdf with cross compile? I > >> configured petsc with netcdf, but got error below: > >> > > >Send externalpackages./netcdf*/config.log > > > Thanks, > > > Matt > > I attached the file config.log. > Your MPI shared libraries are not in your LD_LIBRARY_PATH configure:5252: checking whether we are cross compiling configure:5260: /curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicc -o conftest -fPIC -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I. conftest.c >&5 configure:5264: $? = 0 configure:5271: ./conftest ./conftest: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory configure:5275: $? = 127 Matt > >> > > >> >===========================================================================>==== > >> Configuring PETSc to compile on your system > >> > >> > > >> >===========================================================================>==== > > > >===========================================================================>==== > >> > >> ***** WARNING: > MPI_DIR > >> found in enviornment variables - ignoring ****** > >>> > >> > >> > >===========================================================================>==== > >> > >> > >> > =============================================================================== > >> > >> WARNING! > >> Compiling PETSc with no debugging, this should > >> > >> only be done for timing > and > >> production runs. 
> [...]
> >> > > >> ******************************************************************************* > >> > > >The configure script is: > >> > > > ./configure --with-clanguage=cxx --with-shared-libraries=1 > >> --with-dynamic-loading=1 --download-f-blas-lapack=1 > --with-mpi-dir=$MPI_DIR > >> --known-mpi-shared-libraries=0 --with-batch=1 --download-parmetis=1 > > >--download-metis=1 --with-64-bit-indices=1 --download-netcdf=1 > > >--download-exodusii=1 --with-debugging=no --download-ptscotch=1 > >> > >> I also attached the file configure.log > >> > > >------------------ > > >Fande Kong > >> ShenZhen Institutes of Advanced Technology > > >Chinese Academy of Sciences > >> > >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From fd.kong at siat.ac.cn Sat Sep 8 10:35:29 2012 From: fd.kong at siat.ac.cn (=?ISO-8859-1?B?ZmRrb25n?=) Date: Sat, 8 Sep 2012 23:35:29 +0800 Subject: [petsc-users] configure error with netcdf Message-ID: Thank you Matt. I added MPI shared libraries to my LD_LIBRARY_PATH. The error disappeared. But other errors happened. Please find the attached files! ------------------ Original ------------------ From: "knepley"; Date: Sat, Sep 8, 2012 12:30 PM To: "fdkong"; Cc: "petsc-users"; Subject: Re: configure error with netcdf On Fri, Sep 7, 2012 at 11:28 PM, fdkong wrote: >> Hi all, >> >> There are anyone who know how to install netcdf with cross compile? I >> configured petsc with netcdf, but got error below: >> >Send externalpackages./netcdf*/config.log > Thanks, > Matt I attached the file config.log. Your MPI shared libraries are not in your LD_LIBRARY_PATH configure:5252: checking whether we are cross compiling configure:5260: /curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicc -o conftest -fPIC -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I. conftest.c >&5 configure:5264: $? = 0 configure:5271: ./conftest ./conftest: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory configure:5275: $? = 127 Matt >> >> >===========================================================================>==== >> Configuring PETSc to compile on your system >> >> >> >===========================================================================>==== > >===========================================================================>==== >> >> ***** WARNING: MPI_DIR >> found in enviornment variables - ignoring ****** >>> >> >> >===========================================================================>==== >> >> >> =============================================================================== >> >> WARNING! >> Compiling PETSc with no debugging, this should >> >> only be done for timing and >> production runs. 
>> [...]
>> >> ******************************************************************************* >> > >The configure script is: >> > > ./configure --with-clanguage=cxx --with-shared-libraries=1 >> --with-dynamic-loading=1 --download-f-blas-lapack=1 --with-mpi-dir=$MPI_DIR >> --known-mpi-shared-libraries=0 --with-batch=1 --download-parmetis=1 > >--download-metis=1 --with-64-bit-indices=1 --download-netcdf=1 > >--download-exodusii=1 --with-debugging=no --download-ptscotch=1 >> >> I also attached the file configure.log >> > >------------------ > >Fande Kong >> ShenZhen Institutes of Advanced Technology > >Chinese Academy of Sciences >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: config.zip Type: application/octet-stream Size: 16287 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.zip Type: application/octet-stream Size: 159428 bytes Desc: not available URL: From zhenglun.wei at gmail.com Sat Sep 8 12:59:50 2012 From: zhenglun.wei at gmail.com (Alan Wei) Date: Sat, 8 Sep 2012 12:59:50 -0500 Subject: [petsc-users] MPI Derived Data Type and Non Blocking MPI Send/Recieve In-Reply-To: References: <504925D1.4080005@gmail.com> <50492839.9050702@gmail.com> <504A6C2E.7070107@gmail.com> Message-ID: Thank you soooooo much, Dr. Brown. It fixed my problem. I really appreciate your time and help. :) thank you again, Alan On Fri, Sep 7, 2012 at 9:20 PM, Jed Brown wrote: > if(!localBC.tBC) { > MPI_Isend(&_TestV[0][_Index.tcEnd-1][0], 1, columntype, NbrRank.t, > SendTag.T1st, PETSC_COMM_WORLD, &request); > MPI_Isend(&_TestV[0][_Index.tcEnd][0], 1, columntype, NbrRank.t, > SendTag.T2nd, PETSC_COMM_WORLD, &request); > } > > if(!localBC.bBC) { > MPI_Isend(&_TestV[0][_Index.bcStr+1][0], 1, columntype, NbrRank.b, > SendTag.B1st, PETSC_COMM_WORLD, &request); > MPI_Isend(&_TestV[0][_Index.bcStr][0], 1, columntype, NbrRank.b, > SendTag.B2nd, PETSC_COMM_WORLD, &request); > } > > MPI_Barrier(PETSC_COMM_WORLD); > printf("Rank = %d finished sending!!!\n", rank); > > if(!localBC.tBC) { > MPI_Irecv(&_TestV[0][_Index.tbStr+1][0], 1, columntype, NbrRank.t, > RecvTag.T2nd, PETSC_COMM_WORLD, &request); > MPI_Irecv(&_TestV[0][_Index.tbStr][0], 1, columntype, NbrRank.t, > RecvTag.T1st, PETSC_COMM_WORLD, &request); > } > > if(!localBC.bBC) { > MPI_Irecv(&_TestV[0][_Index.bbEnd-1][0], 1, columntype, NbrRank.b, > RecvTag.B2nd, PETSC_COMM_WORLD, &request); > MPI_Irecv(&_TestV[0][_Index.bbEnd][0], 1, columntype, NbrRank.b, > RecvTag.B1st, PETSC_COMM_WORLD, &request); > } > > MPI_Wait(&request, &status); > > > You are creating far more requests than you are waiting on. You need to > keep track of *every* request and eventually wait on all of them. > > It is generally better for performance to post the receives first, then > post the sends, then MPI_Waitall() on all the requests. > > On Fri, Sep 7, 2012 at 4:50 PM, Zhenglun (Alan) Wei < > zhenglun.wei at gmail.com> wrote: > >> Dear folks, >> I did more tests, since I want to make sure where I'm wrong. >> As Dr. Smith suggested, I tested my code using OpenMPI and MPICH. >> Both of them have the memory accumulation problem. Therefore, I suppose >> there is a bug in my code. 
I went into the code, and changed the >> non-blocking MPI communication to blocking one. The memory accumulation >> problem is just gone by itself. However, I have to change it back since the >> blocking MPI communication does not allow me to do massive data >> communication. Now, I'm searching for related topics on non-blocking MPI >> communication. >> Here I cut off those unrelated part of my code and attach the >> communication part here. Could anyone help me to briefly check if there is >> any obvious mistake I made in the program? After unzip the file, >> './AlanRun' will execute the program. >> >> I really appreciate your help :) >> Alan >> >> >> >> On 9/6/2012 5:56 PM, Jed Brown wrote: >> >> Numeric data that the solver sees should be stored in Vecs. You can put >> other scalars in Vecs if you like. >> On Sep 6, 2012 5:48 PM, "Zhenglun (Alan) Wei" >> wrote: >> >>> Dear Dr. Brown, >>> I'm not quite familiar with VecScatter. I just read its >>> explanation; it seems requires that my data is stored as a form of vectors >>> (is it the vector in PETSc?). However, my data are stored as arrays in C >>> program. >>> Is that any problem in MPI or it is likely a problem of my code? >>> >>> thanks, >>> Alan >>> On 9/6/2012 5:44 PM, Jed Brown wrote: >>> >>> Are you familiar with VecScatter? >>> On Sep 6, 2012 5:38 PM, "Zhenglun (Alan) Wei" >>> wrote: >>> >>>> Dear All, >>>> I hope you're having a nice day. >>>> I met a memory problem for MPI data communication. I guess here is >>>> a good place to ask this question since you guys are experts and may >>>> experienced the same problem before. >>>> I used the MPI derived data type (MPI_Type_contiguous, >>>> MPI_Type_vector and MPI_Type_indexed) to communicate data for a simulation >>>> of 3D problem. The communication is fine, as I checked every single data it >>>> sent and received. However, the problem is that the memory keeps increasing >>>> while communication. Therefore, I tested each of these three types. >>>> MPI_Type_contiguous does not have any problem; while MPI_Type_vector and >>>> MPI_Type_indexed have problem of memory accumulation. I tried to use >>>> MPI_Type_free, but it does not help. Have anyone experienced this problem >>>> before? >>>> Would this be related to the non-blocking MPI communication >>>> (MPI_Isend and MPI_Irecv). I have to use this non-blocking communication >>>> since the blocking communication is extremely slow when it has a lot of >>>> data involved in the communication. >>>> Is there any alternative way in PETSc that could do the similar >>>> work of MPI derived types? >>>> >>>> thanks, >>>> Alan >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Sep 8 13:58:39 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 8 Sep 2012 13:58:39 -0500 Subject: [petsc-users] configure error with netcdf In-Reply-To: References: Message-ID: <15A5E49D-DC6A-4D55-8A41-5BF2A771020F@mcs.anl.gov> This is likely the same flex problem you say with ptscotch: ncgen.l:142: warning: passing argument 1 of ?ncgerror? discards qualifiers from pointer target type ncgentab.o: In function `ignore': ncgentab.c:(.text+0x1e2): undefined reference to `yyunput' collect2: ld returned 1 exit status make[2]: *** [ncgen3] Error 1 make[1]: *** [all-recursive] Error 1 make: *** [all] Error 2 netcdf.py would have to be modified in the same way as ptscotch.py was modified to do the right thing with flex Satish, do you know how to do it? Cause I sure don't. 
Barry On Sep 8, 2012, at 10:35 AM, fdkong wrote: > Thank you Matt. I added MPI shared libraries to my LD_LIBRARY_PATH. The error disappeared. But other errors happened. Please find the attached files! > > > > ------------------ Original ------------------ > From: "knepley"; > Date: Sat, Sep 8, 2012 12:30 PM > To: "fdkong"; > Cc: "petsc-users"; > Subject: Re: configure error with netcdf > > On Fri, Sep 7, 2012 at 11:28 PM, fdkong wrote: > >> Hi all, > >> > >> There are anyone who know how to install netcdf with cross compile? I > >> configured petsc with netcdf, but got error below: > >> > > >Send externalpackages./netcdf*/config.log > > > Thanks, > > > Matt > > I attached the file config.log. > > Your MPI shared libraries are not in your LD_LIBRARY_PATH > > configure:5252: checking whether we are cross compiling > configure:5260: /curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicc -o conftest -fPIC -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I. conftest.c >&5 > configure:5264: $? = 0 > configure:5271: ./conftest > ./conftest: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory > configure:5275: $? = 127 > > Matt > >> > >> >===========================================================================>==== > >> Configuring PETSc to compile on your system > >> > >> > >> >===========================================================================>==== > > >===========================================================================>==== > >> > >> ***** WARNING: MPI_DIR > >> found in enviornment variables - ignoring ****** > >>> > >> > >> >===========================================================================>==== > >> > >> > >> =============================================================================== > >> > >> WARNING! > >> Compiling PETSc with no debugging, this should > >> > >> only be done for timing and > >> production runs. All development should > >> > >> be done when configured using --with-debugging=1 > > > > > > =============================================================================== > >> > >> > >> =============================================================================== > >> > >> Configuring NetCDF; this > >> may take several minutes > >> > >> > >> =============================================================================== > >> > >> > >> > >> > >> > >> ******************************************************************************* > >> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > >> details): > >> > >> ------------------------------------------------------------------------------- > >> Error running make on NetCDF: Could not execute "cd > >> /projects/fako9399/petsc-3.3-p3/externalpackages/netcdf-4.1.1 && > >> AR="/usr/bin/ar" ARFLAGS="cr" > >> CC="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicc" CFLAGS=" > >> -fPIC -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > >> CXX="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicxx" > >> CXXFLAGS=" -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O > >> -fPIC -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." 
> >> FC="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpif90" FCFLAGS=" > >> -fPIC -Wno-unused-variable -O > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > >> F90="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpif90" > >> ./configure --prefix=/projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt > >> --libdir=/projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib > >> --disable-dap --enable-shared": > >> configure: netCDF 4.1.1 > >> checking build system type... x86_64-unknown-linux-gnu > >> checking host system type... x86_64-unknown-linux-gnu > >> checking for a BSD-compatible install... /usr/bin/install -c > >> checking whether build environment is sane... yes > >> checking for a thread-safe mkdir -p... /bin/mkdir -p > >> checking for gawk... gawk > >> checking whether make sets $(MAKE)... yes > >> configure: checking user options > > >checking whether CXX is set to ''... no > >> checking whether FC is set to ''... no > >> checking whether F90 is set to ''... no > >> checking whether a NCIO_MINBLOCKSIZE was specified... 256 > >> checking whether udunits is to be built... no > > >checking if fsync support is enabled... yes > >> checking whether extra valgrind tests should be run... no > >> checking whether libcf is to be built... no > >> checking whether reading of HDF4 SD files is to be enabled... no > >> checking whether to fetch some sample HDF4 files from Unidata ftp site to > > >test HDF4 reading (requires wget)... no > >> checking whether parallel I/O for classic and 64-bit offset files using > >> parallel-netcdf is to be enabled... no > > >checking whether a location for the parallel-netcdf library was > >> specified... no > >> checking whether new netCDF-4 C++ API is to be built... no > > >checking whether extra example tests should be run... no > > >checking whether parallel IO tests should be run... no > > >checking whether a location for the HDF5 library was specified... > >> checking whether a location for the ZLIB library was specified... > >> checking whether a location for the SZLIB library was specified... > >> checking whether a location for the HDF4 library was specified... > >> checking whether a default chunk size in bytes was specified... 4194304 > > >checking whether a maximum per-variable cache size for HDF5 was > > >specified... 67108864 > > >checking whether a number of chunks for the default per-variable cache was > >> specified... 10 > >> checking whether a default file cache size for HDF5 was specified... > > >4194304 > > >checking whether a default file cache maximum number of elements for HDF5 > > >was specified... 1009 > > >checking whether a default cache preemption for HDF5 was specified... 0.75 > > >checking whether netCDF-4 logging is enabled... no > > >checking whether a path for curl-config was specified... no > > >checking whether a location for curl installation was specified... no > > >configure: checking whether a location for curl-config is in PATH... yes > > >checking whether DAP client is to be built... no > > >checking whether dap remote testing should be enabled (default on)... no > > >checking whether the time-consuming dap tests should be enabled (default > > >off)... no > > >checking whether a location for liboc was specified... no > >> checking whether netCDF extra tests should be run (developers only)... no > > >checking whether Fortran compiler(s) should be tested during configure... 
> > >yes > > >checking whether FFIO will be used... no > > >checking whether to skip C++, F77, or F90 APIs if compiler is broken... yes > > >checking whether only the C library is desired... no > >> checking whether examples should be built... yes > >> checking whether F77 API is desired... yes > > >checking whether any Fortran API is desired... yes > > >checking whether F90 API is desired... yes > > >checking whether fortran type sizes should be checked... yes > >> checking whether C API is desired... yes > >> checking where to get netCDF C-only library for separate fortran > >> libraries... > > >checking whether CXX API is desired... yes > > >checking whether v2 netCDF API should be built... yes > > >checking whether the ncgen/ncdump should be built... yes > >> checking whether large file (> 2GB) tests should be run... no > > >checking whether benchmaks should be run (experimental)... no > >> checking whether extreme numbers should be used in tests... yes > > >checking where to put large temp files if large file tests are run... . > > >checking whether a win32 DLL is desired... no > > >checking whether separate fortran libs are desired... yes > >> configure: finding C compiler > > >checking whether the C compiler works... yes > >> checking for C compiler default output file name... a.out > > >checking for suffix of executables... > > >checking whether we are cross compiling... configure: error: in > >> `/projects/fako9399/petsc-3.3-p3/externalpackages/netcdf-4.1.1': > > >configure: error: cannot run C compiled programs. > > >If you meant to cross compile, use `--host'. > > >See `config.log' for more details. > >> > >> ******************************************************************************* > >> > > >The configure script is: > >> > > > ./configure --with-clanguage=cxx --with-shared-libraries=1 > >> --with-dynamic-loading=1 --download-f-blas-lapack=1 --with-mpi-dir=$MPI_DIR > >> --known-mpi-shared-libraries=0 --with-batch=1 --download-parmetis=1 > > >--download-metis=1 --with-64-bit-indices=1 --download-netcdf=1 > > >--download-exodusii=1 --with-debugging=no --download-ptscotch=1 > >> > >> I also attached the file configure.log > >> > > >------------------ > > >Fande Kong > >> ShenZhen Institutes of Advanced Technology > > >Chinese Academy of Sciences > >> > >> > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > From pflath at ices.utexas.edu Sat Sep 8 15:19:59 2012 From: pflath at ices.utexas.edu (Pearl Flath) Date: Sat, 8 Sep 2012 15:19:59 -0500 Subject: [petsc-users] Cholesky matrix factorization Message-ID: Hi all, I have a matrix M, for which I set up a Cholesky decomposition M = L L^T using MUMPS and PETSc. I then obtained the factored matrix with PCFactorGetMatrix(pc,&factor); However, I do not want to solve M*x = b, which seems to be what MatSolve does. How do I tell it to solve Lx = b instead? Thanks, Pearl Flath -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Sep 8 15:36:14 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 8 Sep 2012 15:36:14 -0500 Subject: [petsc-users] Cholesky matrix factorization In-Reply-To: References: Message-ID: On Sat, Sep 8, 2012 at 3:19 PM, Pearl Flath wrote: > Hi all, > > I have a matrix M, for which I set up a Cholesky decomposition M = L L^T > using MUMPS and PETSc. 
I then obtained the factored matrix with > > PCFactorGetMatrix(pc,&factor); > > However, I do not want to solve M*x = b, which seems to be what MatSolve > does. How do I tell it to solve Lx = b instead? > Looks like you want MatForwardSolve(). Note that there is also PCApplySymmetric{Left,Right}. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Sep 8 17:30:55 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 8 Sep 2012 17:30:55 -0500 Subject: [petsc-users] Cholesky matrix factorization In-Reply-To: References: Message-ID: On Sep 8, 2012, at 3:36 PM, Jed Brown wrote: > On Sat, Sep 8, 2012 at 3:19 PM, Pearl Flath wrote: > Hi all, > > I have a matrix M, for which I set up a Cholesky decomposition M = L L^T using MUMPS and PETSc. I then obtained the factored matrix with > > PCFactorGetMatrix(pc,&factor); > > However, I do not want to solve M*x = b, which seems to be what MatSolve does. How do I tell it to solve Lx = b instead? > > Looks like you want MatForwardSolve(). We don't have an interface for just the forward solve for the MUMPS from PETSc. If MUMPS provides separate routines for forward and backward solve (check the mumps documentation) let us know and we'll add the forward and backward solve for our MUMPS interface. Barry > > Note that there is also PCApplySymmetric{Left,Right}. From u.tabak at tudelft.nl Sat Sep 8 18:33:27 2012 From: u.tabak at tudelft.nl (Umut Tabak) Date: Sun, 09 Sep 2012 01:33:27 +0200 Subject: [petsc-users] Cholesky matrix factorization In-Reply-To: References: Message-ID: <504BD5C7.9050603@tudelft.nl> On 09/09/2012 12:30 AM, Barry Smith wrote: > We don't have an interface for just the forward solve for the MUMPS from PETSc. If MUMPS provides separate routines for forward and backward solve (check the mumps documentation) > let us know and we'll add the forward and backward solve for our MUMPS interface. As far as I know, MUMPS does not store the factors transparently so that any user can do forward backward substitution with them. You can also check MUMPS list, there were similar questions like this before BR, Umut From bsmith at mcs.anl.gov Sat Sep 8 18:40:16 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 8 Sep 2012 18:40:16 -0500 Subject: [petsc-users] Cholesky matrix factorization In-Reply-To: <504BD5C7.9050603@tudelft.nl> References: <504BD5C7.9050603@tudelft.nl> Message-ID: <77B56E3F-1BEA-4BC4-BA4C-99FC2F8D7E2E@mcs.anl.gov> On Sep 8, 2012, at 6:33 PM, Umut Tabak wrote: > On 09/09/2012 12:30 AM, Barry Smith wrote: >> We don't have an interface for just the forward solve for the MUMPS from PETSc. If MUMPS provides separate routines for forward and backward solve (check the mumps documentation) >> let us know and we'll add the forward and backward solve for our MUMPS interface. > As far as I know, MUMPS does not store the factors transparently so that any user can do forward backward substitution with them. > You can also check MUMPS list, there were similar questions like this before > BR, > Umut Thanks Perhaps SuperLU_Dist or PasTix has this support. If someone tells us the calling sequences to do the two parts separately we'll add support for it. Note you can do this for sequential PETSc factorizations. 
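Following up on the note above that the split triangular sweeps are available for PETSc's own sequential factorizations, here is a minimal sketch assuming a sequential symmetric matrix factored with the built-in MATSOLVERPETSC Cholesky instead of MUMPS. The function name ApplyForwardSolve, the vector names and the choice of natural ordering are illustrative assumptions, not anything from the thread, and the scaling convention for Cholesky factors (PETSc stores them as U^T D U, so the two sweeps include the diagonal scaling rather than a bare L) is worth checking on the MatForwardSolve manual page.

#include <petscmat.h>

/* Minimal sketch (names hypothetical): factor a sequential symmetric
 * matrix M with PETSc's built-in Cholesky and apply only the forward
 * triangular sweep.  MUMPS does not expose its factors this way, so
 * MATSOLVERPETSC is used; natural ordering is chosen because the split
 * solves are not implemented for every factor type and ordering. */
PetscErrorCode ApplyForwardSolve(Mat M, Vec b, Vec y)
{
  Mat            F;
  IS             rperm, cperm;
  MatFactorInfo  info;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatGetOrdering(M, MATORDERINGNATURAL, &rperm, &cperm);CHKERRQ(ierr);
  ierr = MatFactorInfoInitialize(&info);CHKERRQ(ierr);
  ierr = MatGetFactor(M, MATSOLVERPETSC, MAT_FACTOR_CHOLESKY, &F);CHKERRQ(ierr);
  ierr = MatCholeskyFactorSymbolic(F, M, rperm, &info);CHKERRQ(ierr);
  ierr = MatCholeskyFactorNumeric(F, M, &info);CHKERRQ(ierr);

  /* forward sweep only; MatBackwardSolve(F,y,x) would complete M x = b */
  ierr = MatForwardSolve(F, b, y);CHKERRQ(ierr);

  ierr = ISDestroy(&rperm);CHKERRQ(ierr);
  ierr = ISDestroy(&cperm);CHKERRQ(ierr);
  ierr = MatDestroy(&F);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With the PCFactorGetMatrix() route from the original question, the same kind of factored Mat can be obtained from a PCCHOLESKY preconditioner whose solver package is set to "petsc" (for example with -pc_factor_mat_solver_package petsc), after which MatForwardSolve()/MatBackwardSolve() apply to it in the same way.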
Barry From balay at mcs.anl.gov Sun Sep 9 00:31:00 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 9 Sep 2012 00:31:00 -0500 (CDT) Subject: [petsc-users] configure error with netcdf In-Reply-To: <15A5E49D-DC6A-4D55-8A41-5BF2A771020F@mcs.anl.gov> References: <15A5E49D-DC6A-4D55-8A41-5BF2A771020F@mcs.anl.gov> Message-ID: > ncgentab.c:(.text+0x1e2): undefined reference to `yyunput' I don't know whats going on here. I don't see lex/flex invoked. Perhaps you can try the attached patch and see if it works. Its attempting to use the latest netcdf tarball. cd config/BuidSystem patch -Np1 < netcdf.patch Matt - If the changes look ok - I can push to petsc-dev - update to 4.2.1.1 tarball - remove configEnv. all configure options should be passed via command-line. - also remove make.inc. Its never used. - Add in --disable-hdf4 --disable-netcdf-4 to get rid of hdf5 errors - libnetcdf_c++.a not created by the new version? remove it from liblist exodusii.py also has some issues. Its makefiles are relying on PETSC_DIR/PETSC_ARCH env variable to be set by user - for obtaining ntcdf location, install-dir for 'make install' etc.. Satish On Sat, 8 Sep 2012, Barry Smith wrote: > > This is likely the same flex problem you say with ptscotch: > > ncgen.l:142: warning: passing argument 1 of ?ncgerror? discards qualifiers from pointer target type > ncgentab.o: In function `ignore': > ncgentab.c:(.text+0x1e2): undefined reference to `yyunput' > collect2: ld returned 1 exit status > make[2]: *** [ncgen3] Error 1 > make[1]: *** [all-recursive] Error 1 > make: *** [all] Error 2 > > > netcdf.py would have to be modified in the same way as ptscotch.py was modified to do the right thing with flex > > Satish, do you know how to do it? Cause I sure don't. > > Barry > > > > On Sep 8, 2012, at 10:35 AM, fdkong wrote: > > > Thank you Matt. I added MPI shared libraries to my LD_LIBRARY_PATH. The error disappeared. But other errors happened. Please find the attached files! > > > > > > > > ------------------ Original ------------------ > > From: "knepley"; > > Date: Sat, Sep 8, 2012 12:30 PM > > To: "fdkong"; > > Cc: "petsc-users"; > > Subject: Re: configure error with netcdf > > > > On Fri, Sep 7, 2012 at 11:28 PM, fdkong wrote: > > >> Hi all, > > >> > > >> There are anyone who know how to install netcdf with cross compile? I > > >> configured petsc with netcdf, but got error below: > > >> > > > > >Send externalpackages./netcdf*/config.log > > > > > Thanks, > > > > > Matt > > > > I attached the file config.log. > > > > Your MPI shared libraries are not in your LD_LIBRARY_PATH > > > > configure:5252: checking whether we are cross compiling > > configure:5260: /curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicc -o conftest -fPIC -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I. conftest.c >&5 > > configure:5264: $? = 0 > > configure:5271: ./conftest > > ./conftest: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory > > configure:5275: $? 
= 127 > > > > Matt > > >> > > >> >===========================================================================>==== > > >> Configuring PETSc to compile on your system > > >> > > >> > > >> >===========================================================================>==== > > > >===========================================================================>==== > > >> > > >> ***** WARNING: MPI_DIR > > >> found in enviornment variables - ignoring ****** > > >>> > > >> > > >> >===========================================================================>==== > > >> > > >> > > >> =============================================================================== > > >> > > >> WARNING! > > >> Compiling PETSc with no debugging, this should > > >> > > >> only be done for timing and > > >> production runs. All development should > > >> > > >> be done when configured using --with-debugging=1 > > > > > > > > > =============================================================================== > > >> > > >> > > >> =============================================================================== > > >> > > >> Configuring NetCDF; this > > >> may take several minutes > > >> > > >> > > >> =============================================================================== > > >> > > >> > > >> > > >> > > >> > > >> ******************************************************************************* > > >> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > >> details): > > >> > > >> ------------------------------------------------------------------------------- > > >> Error running make on NetCDF: Could not execute "cd > > >> /projects/fako9399/petsc-3.3-p3/externalpackages/netcdf-4.1.1 && > > >> AR="/usr/bin/ar" ARFLAGS="cr" > > >> CC="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicc" CFLAGS=" > > >> -fPIC -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > > >> CXX="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicxx" > > >> CXXFLAGS=" -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O > > >> -fPIC -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > > >> FC="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpif90" FCFLAGS=" > > >> -fPIC -Wno-unused-variable -O > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > > >> F90="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpif90" > > >> ./configure --prefix=/projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt > > >> --libdir=/projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib > > >> --disable-dap --enable-shared": > > >> configure: netCDF 4.1.1 > > >> checking build system type... x86_64-unknown-linux-gnu > > >> checking host system type... x86_64-unknown-linux-gnu > > >> checking for a BSD-compatible install... /usr/bin/install -c > > >> checking whether build environment is sane... yes > > >> checking for a thread-safe mkdir -p... /bin/mkdir -p > > >> checking for gawk... gawk > > >> checking whether make sets $(MAKE)... yes > > >> configure: checking user options > > > >checking whether CXX is set to ''... no > > >> checking whether FC is set to ''... no > > >> checking whether F90 is set to ''... no > > >> checking whether a NCIO_MINBLOCKSIZE was specified... 
256 > > >> checking whether udunits is to be built... no > > > >checking if fsync support is enabled... yes > > >> checking whether extra valgrind tests should be run... no > > >> checking whether libcf is to be built... no > > >> checking whether reading of HDF4 SD files is to be enabled... no > > >> checking whether to fetch some sample HDF4 files from Unidata ftp site to > > > >test HDF4 reading (requires wget)... no > > >> checking whether parallel I/O for classic and 64-bit offset files using > > >> parallel-netcdf is to be enabled... no > > > >checking whether a location for the parallel-netcdf library was > > >> specified... no > > >> checking whether new netCDF-4 C++ API is to be built... no > > > >checking whether extra example tests should be run... no > > > >checking whether parallel IO tests should be run... no > > > >checking whether a location for the HDF5 library was specified... > > >> checking whether a location for the ZLIB library was specified... > > >> checking whether a location for the SZLIB library was specified... > > >> checking whether a location for the HDF4 library was specified... > > >> checking whether a default chunk size in bytes was specified... 4194304 > > > >checking whether a maximum per-variable cache size for HDF5 was > > > >specified... 67108864 > > > >checking whether a number of chunks for the default per-variable cache was > > >> specified... 10 > > >> checking whether a default file cache size for HDF5 was specified... > > > >4194304 > > > >checking whether a default file cache maximum number of elements for HDF5 > > > >was specified... 1009 > > > >checking whether a default cache preemption for HDF5 was specified... 0.75 > > > >checking whether netCDF-4 logging is enabled... no > > > >checking whether a path for curl-config was specified... no > > > >checking whether a location for curl installation was specified... no > > > >configure: checking whether a location for curl-config is in PATH... yes > > > >checking whether DAP client is to be built... no > > > >checking whether dap remote testing should be enabled (default on)... no > > > >checking whether the time-consuming dap tests should be enabled (default > > > >off)... no > > > >checking whether a location for liboc was specified... no > > >> checking whether netCDF extra tests should be run (developers only)... no > > > >checking whether Fortran compiler(s) should be tested during configure... > > > >yes > > > >checking whether FFIO will be used... no > > > >checking whether to skip C++, F77, or F90 APIs if compiler is broken... yes > > > >checking whether only the C library is desired... no > > >> checking whether examples should be built... yes > > >> checking whether F77 API is desired... yes > > > >checking whether any Fortran API is desired... yes > > > >checking whether F90 API is desired... yes > > > >checking whether fortran type sizes should be checked... yes > > >> checking whether C API is desired... yes > > >> checking where to get netCDF C-only library for separate fortran > > >> libraries... > > > >checking whether CXX API is desired... yes > > > >checking whether v2 netCDF API should be built... yes > > > >checking whether the ncgen/ncdump should be built... yes > > >> checking whether large file (> 2GB) tests should be run... no > > > >checking whether benchmaks should be run (experimental)... no > > >> checking whether extreme numbers should be used in tests... yes > > > >checking where to put large temp files if large file tests are run... . 
> > > >checking whether a win32 DLL is desired... no > > > >checking whether separate fortran libs are desired... yes > > >> configure: finding C compiler > > > >checking whether the C compiler works... yes > > >> checking for C compiler default output file name... a.out > > > >checking for suffix of executables... > > > >checking whether we are cross compiling... configure: error: in > > >> `/projects/fako9399/petsc-3.3-p3/externalpackages/netcdf-4.1.1': > > > >configure: error: cannot run C compiled programs. > > > >If you meant to cross compile, use `--host'. > > > >See `config.log' for more details. > > >> > > >> ******************************************************************************* > > >> > > > >The configure script is: > > >> > > > > ./configure --with-clanguage=cxx --with-shared-libraries=1 > > >> --with-dynamic-loading=1 --download-f-blas-lapack=1 --with-mpi-dir=$MPI_DIR > > >> --known-mpi-shared-libraries=0 --with-batch=1 --download-parmetis=1 > > > >--download-metis=1 --with-64-bit-indices=1 --download-netcdf=1 > > > >--download-exodusii=1 --with-debugging=no --download-ptscotch=1 > > >> > > >> I also attached the file configure.log > > >> > > > >------------------ > > > >Fande Kong > > >> ShenZhen Institutes of Advanced Technology > > > >Chinese Academy of Sciences > > >> > > >> > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > -------------- next part -------------- diff --git a/config/packages/netcdf.py b/config/packages/netcdf.py --- a/config/packages/netcdf.py +++ b/config/packages/netcdf.py @@ -8,10 +8,10 @@ config.package.GNUPackage.__init__(self, framework) self.downloadpath = 'http://www.unidata.ucar.edu/downloads/netcdf/ftp/' self.downloadext = 'tar.gz' - self.downloadversion = '4.1.1' + self.downloadversion = '4.2.1.1' self.functions = ['nccreate'] self.includes = ['netcdf.h'] - self.liblist = [['libnetcdf_c++.a','libnetcdf.a']] + self.liblist = [['libnetcdf.a']] self.cxx = 1 return @@ -20,45 +20,36 @@ self.mpi = framework.require('config.packages.MPI', self) self.hdf5 = framework.require('config.packages.hdf5', self) self.odeps = [self.mpi, self.hdf5] + self.deps = [self.mpi] return def Install(self): import os, sys - makeinc = os.path.join(self.packageDir, 'make.inc') - installmakeinc = os.path.join(self.confDir, 'NetCDF') - configEnv = [] configOpts = [] # Unused flags: F90, CPPFLAGS, LIBS, FLIBS - g = open(makeinc, 'w') - g.write('AR = '+self.setCompilers.AR+'\n') - g.write('ARFLAGS = '+self.setCompilers.AR_FLAGS+'\n') - configEnv.append('AR="'+self.setCompilers.AR+'"') - configEnv.append('ARFLAGS="'+self.setCompilers.AR_FLAGS+'"') + configOpts.append('AR="'+self.setCompilers.AR+'"') + configOpts.append('ARFLAGS="'+self.setCompilers.AR_FLAGS+'"') - g.write('NETCDF_ROOT = '+self.packageDir+'\n') - g.write('PREFIX = '+self.installDir+'\n') configOpts.append('--prefix='+self.installDir) configOpts.append('--libdir='+os.path.join(self.installDir,self.libdir)) configOpts.append('--disable-dap') + configOpts.append('--disable-hdf4') + configOpts.append('--disable-netcdf-4') self.setCompilers.pushLanguage('C') cflags = self.setCompilers.getCompilerFlags().replace('-Wall','').replace('-Wshadow','') cflags += ' ' + self.headers.toString(self.mpi.include)+' '+self.headers.toString('.') - g.write('CC = '+self.setCompilers.getCompiler()+'\n') - g.write('CFLAGS = '+cflags+'\n') - 
configEnv.append('CC="'+self.setCompilers.getCompiler()+'"') - configEnv.append('CFLAGS="'+cflags+'"') + configOpts.append('CC="'+self.setCompilers.getCompiler()+'"') + configOpts.append('CFLAGS="'+cflags+'"') self.setCompilers.popLanguage() if hasattr(self.setCompilers, 'CXX'): self.setCompilers.pushLanguage('Cxx') cxxflags = self.setCompilers.getCompilerFlags().replace('-Wall','').replace('-Wshadow','') cxxflags += ' ' + self.headers.toString(self.mpi.include)+' '+self.headers.toString('.') - g.write('CXX = '+self.setCompilers.getCompiler()+'\n') - g.write('CXXFLAGS = '+cflags+'\n') - configEnv.append('CXX="'+self.setCompilers.getCompiler()+'"') - configEnv.append('CXXFLAGS="'+cxxflags+'"') + configOpts.append('CXX="'+self.setCompilers.getCompiler()+'"') + configOpts.append('CXXFLAGS="'+cxxflags+'"') self.setCompilers.popLanguage() else: configOpts.append('--disable-cxx') @@ -67,12 +58,10 @@ self.setCompilers.pushLanguage('FC') fcflags = self.setCompilers.getCompilerFlags().replace('-Wall','').replace('-Wshadow','') fcflags += ' ' + self.headers.toString(self.mpi.include)+' '+self.headers.toString('.') - g.write('FC = '+self.setCompilers.getCompiler()+'\n') - g.write('FCFLAGS = '+fcflags+'\n') - configEnv.append('FC="'+self.setCompilers.getCompiler()+'"') - configEnv.append('FCFLAGS="'+fcflags+'"') + configOpts.append('FC="'+self.setCompilers.getCompiler()+'"') + configOpts.append('FCFLAGS="'+fcflags+'"') if self.compilers.fortranIsF90: - configEnv.append('F90="'+self.setCompilers.getCompiler()+'"') + configOpts.append('F90="'+self.setCompilers.getCompiler()+'"') else: configOpts.append('--disable-f90') self.setCompilers.popLanguage() @@ -81,15 +70,19 @@ if self.setCompilers.sharedLibraries: configOpts.append('--enable-shared') - g.close() - if self.installNeeded('make.inc'): # Now compile & install + args = ' '.join(configOpts) + fd = file(os.path.join(self.packageDir,'netcdf'), 'w') + fd.write(args) + fd.close() + + if self.installNeeded('netcdf'): try: self.logPrintBox('Configuring NetCDF; this may take several minutes') - output,err,ret = self.executeShellCommand('cd '+self.packageDir+' && '+' '.join(configEnv)+' ./configure '+' '.join(configOpts), timeout=2500, log = self.framework.log) + output,err,ret = self.executeShellCommand('cd '+self.packageDir+' && ./configure '+args, timeout=2500, log = self.framework.log) self.logPrintBox('Compiling & installing NetCDF; this may take several minutes') output,err,ret = self.executeShellCommand('cd '+self.packageDir+' && make clean && make && make install && make clean', timeout=2500, log = self.framework.log) except RuntimeError, e: raise RuntimeError('Error running make on NetCDF: '+str(e)) - self.postInstall(output+err,'make.inc') + self.postInstall(output+err,'netcdf') return self.installDir From knepley at gmail.com Sun Sep 9 07:55:45 2012 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 9 Sep 2012 07:55:45 -0500 Subject: [petsc-users] configure error with netcdf In-Reply-To: References: <15A5E49D-DC6A-4D55-8A41-5BF2A771020F@mcs.anl.gov> Message-ID: On Sun, Sep 9, 2012 at 12:31 AM, Satish Balay wrote: > > ncgentab.c:(.text+0x1e2): undefined reference to `yyunput' > > I don't know whats going on here. I don't see lex/flex invoked. > lex is invoked right above. Barry did not paste it. > Perhaps you can try the attached patch and see if it works. Its > attempting to use the latest netcdf tarball. 
> > cd config/BuidSystem > patch -Np1 < netcdf.patch > > Matt - If the changes look ok - I can push to petsc-dev > > - update to 4.2.1.1 tarball > - remove configEnv. all configure options should be passed via > command-line. > - also remove make.inc. Its never used. > - Add in --disable-hdf4 --disable-netcdf-4 to get rid of hdf5 errors > - libnetcdf_c++.a not created by the new version? remove it from liblist > > exodusii.py also has some issues. Its makefiles are relying on > PETSC_DIR/PETSC_ARCH env variable to be set by user - for obtaining > ntcdf location, install-dir for 'make install' etc.. > Will look. Matt > Satish > > On Sat, 8 Sep 2012, Barry Smith wrote: > > > > > This is likely the same flex problem you say with ptscotch: > > > > ncgen.l:142: warning: passing argument 1 of ?ncgerror? discards > qualifiers from pointer target type > > ncgentab.o: In function `ignore': > > ncgentab.c:(.text+0x1e2): undefined reference to `yyunput' > > collect2: ld returned 1 exit status > > make[2]: *** [ncgen3] Error 1 > > make[1]: *** [all-recursive] Error 1 > > make: *** [all] Error 2 > > > > > > netcdf.py would have to be modified in the same way as ptscotch.py was > modified to do the right thing with flex > > > > Satish, do you know how to do it? Cause I sure don't. > > > > Barry > > > > > > > > On Sep 8, 2012, at 10:35 AM, fdkong wrote: > > > > > Thank you Matt. I added MPI shared libraries to my LD_LIBRARY_PATH. > The error disappeared. But other errors happened. Please find the attached > files! > > > > > > > > > > > > ------------------ Original ------------------ > > > From: "knepley"; > > > Date: Sat, Sep 8, 2012 12:30 PM > > > To: "fdkong"; > > > Cc: "petsc-users"; > > > Subject: Re: configure error with netcdf > > > > > > On Fri, Sep 7, 2012 at 11:28 PM, fdkong wrote: > > > >> Hi all, > > > >> > > > >> There are anyone who know how to install netcdf with cross compile? > I > > > >> configured petsc with netcdf, but got error below: > > > >> > > > > > > >Send externalpackages./netcdf*/config.log > > > > > > > Thanks, > > > > > > > Matt > > > > > > I attached the file config.log. > > > > > > Your MPI shared libraries are not in your LD_LIBRARY_PATH > > > > > > configure:5252: checking whether we are cross compiling > > > configure:5260: > /curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicc -o conftest > -fPIC -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O > -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I. conftest.c > >&5 > > > configure:5264: $? = 0 > > > configure:5271: ./conftest > > > ./conftest: error while loading shared libraries: libmpi.so.0: cannot > open shared object file: No such file or directory > > > configure:5275: $? = 127 > > > > > > Matt > > > >> > > > >> > >===========================================================================>==== > > > >> Configuring PETSc to compile on your system > > > >> > > > >> > > > >> > >===========================================================================>==== > > > > > >===========================================================================>==== > > > >> > > > >> ***** WARNING: MPI_DIR > > > >> found in enviornment variables - ignoring ****** > > > >>> > > > >> > > > >> > >===========================================================================>==== > > > >> > > > >> > > > >> > =============================================================================== > > > >> > > > >> WARNING! 
> > > >> Compiling PETSc with no debugging, this should > > > >> > > > >> only be done for timing and > > > >> production runs. All development should > > > >> > > > >> be done when configured using --with-debugging=1 > > > > > > > > > > > > > =============================================================================== > > > >> > > > >> > > > >> > =============================================================================== > > > >> > > > >> Configuring NetCDF; this > > > >> may take several minutes > > > >> > > > >> > > > >> > =============================================================================== > > > >> > > > >> > > > >> > > > >> > > > >> > > > >> > ******************************************************************************* > > > >> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > > >> details): > > > >> > > > >> > ------------------------------------------------------------------------------- > > > >> Error running make on NetCDF: Could not execute "cd > > > >> /projects/fako9399/petsc-3.3-p3/externalpackages/netcdf-4.1.1 && > > > >> AR="/usr/bin/ar" ARFLAGS="cr" > > > >> CC="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicc" > CFLAGS=" > > > >> -fPIC -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O > > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > > > >> CXX="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicxx" > > > >> CXXFLAGS=" -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -O > > > >> -fPIC -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > > > >> FC="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpif90" > FCFLAGS=" > > > >> -fPIC -Wno-unused-variable -O > > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > > > >> F90="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpif90" > > > >> ./configure > --prefix=/projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt > > > >> --libdir=/projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib > > > >> --disable-dap --enable-shared": > > > >> configure: netCDF 4.1.1 > > > >> checking build system type... x86_64-unknown-linux-gnu > > > >> checking host system type... x86_64-unknown-linux-gnu > > > >> checking for a BSD-compatible install... /usr/bin/install -c > > > >> checking whether build environment is sane... yes > > > >> checking for a thread-safe mkdir -p... /bin/mkdir -p > > > >> checking for gawk... gawk > > > >> checking whether make sets $(MAKE)... yes > > > >> configure: checking user options > > > > >checking whether CXX is set to ''... no > > > >> checking whether FC is set to ''... no > > > >> checking whether F90 is set to ''... no > > > >> checking whether a NCIO_MINBLOCKSIZE was specified... 256 > > > >> checking whether udunits is to be built... no > > > > >checking if fsync support is enabled... yes > > > >> checking whether extra valgrind tests should be run... no > > > >> checking whether libcf is to be built... no > > > >> checking whether reading of HDF4 SD files is to be enabled... no > > > >> checking whether to fetch some sample HDF4 files from Unidata ftp > site to > > > > >test HDF4 reading (requires wget)... no > > > >> checking whether parallel I/O for classic and 64-bit offset files > using > > > >> parallel-netcdf is to be enabled... 
no > > > > >checking whether a location for the parallel-netcdf library was > > > >> specified... no > > > >> checking whether new netCDF-4 C++ API is to be built... no > > > > >checking whether extra example tests should be run... no > > > > >checking whether parallel IO tests should be run... no > > > > >checking whether a location for the HDF5 library was specified... > > > >> checking whether a location for the ZLIB library was specified... > > > >> checking whether a location for the SZLIB library was specified... > > > >> checking whether a location for the HDF4 library was specified... > > > >> checking whether a default chunk size in bytes was specified... > 4194304 > > > > >checking whether a maximum per-variable cache size for HDF5 was > > > > >specified... 67108864 > > > > >checking whether a number of chunks for the default per-variable > cache was > > > >> specified... 10 > > > >> checking whether a default file cache size for HDF5 was specified... > > > > >4194304 > > > > >checking whether a default file cache maximum number of elements > for HDF5 > > > > >was specified... 1009 > > > > >checking whether a default cache preemption for HDF5 was > specified... 0.75 > > > > >checking whether netCDF-4 logging is enabled... no > > > > >checking whether a path for curl-config was specified... no > > > > >checking whether a location for curl installation was specified... > no > > > > >configure: checking whether a location for curl-config is in > PATH... yes > > > > >checking whether DAP client is to be built... no > > > > >checking whether dap remote testing should be enabled (default > on)... no > > > > >checking whether the time-consuming dap tests should be enabled > (default > > > > >off)... no > > > > >checking whether a location for liboc was specified... no > > > >> checking whether netCDF extra tests should be run (developers > only)... no > > > > >checking whether Fortran compiler(s) should be tested during > configure... > > > > >yes > > > > >checking whether FFIO will be used... no > > > > >checking whether to skip C++, F77, or F90 APIs if compiler is > broken... yes > > > > >checking whether only the C library is desired... no > > > >> checking whether examples should be built... yes > > > >> checking whether F77 API is desired... yes > > > > >checking whether any Fortran API is desired... yes > > > > >checking whether F90 API is desired... yes > > > > >checking whether fortran type sizes should be checked... yes > > > >> checking whether C API is desired... yes > > > >> checking where to get netCDF C-only library for separate fortran > > > >> libraries... > > > > >checking whether CXX API is desired... yes > > > > >checking whether v2 netCDF API should be built... yes > > > > >checking whether the ncgen/ncdump should be built... yes > > > >> checking whether large file (> 2GB) tests should be run... no > > > > >checking whether benchmaks should be run (experimental)... no > > > >> checking whether extreme numbers should be used in tests... yes > > > > >checking where to put large temp files if large file tests are > run... . > > > > >checking whether a win32 DLL is desired... no > > > > >checking whether separate fortran libs are desired... yes > > > >> configure: finding C compiler > > > > >checking whether the C compiler works... yes > > > >> checking for C compiler default output file name... a.out > > > > >checking for suffix of executables... > > > > >checking whether we are cross compiling... 
configure: error: in > > > >> `/projects/fako9399/petsc-3.3-p3/externalpackages/netcdf-4.1.1': > > > > >configure: error: cannot run C compiled programs. > > > > >If you meant to cross compile, use `--host'. > > > > >See `config.log' for more details. > > > >> > > > >> > ******************************************************************************* > > > >> > > > > >The configure script is: > > > >> > > > > > ./configure --with-clanguage=cxx --with-shared-libraries=1 > > > >> --with-dynamic-loading=1 --download-f-blas-lapack=1 > --with-mpi-dir=$MPI_DIR > > > >> --known-mpi-shared-libraries=0 --with-batch=1 --download-parmetis=1 > > > > >--download-metis=1 --with-64-bit-indices=1 --download-netcdf=1 > > > > >--download-exodusii=1 --with-debugging=no --download-ptscotch=1 > > > >> > > > >> I also attached the file configure.log > > > >> > > > > >------------------ > > > > >Fande Kong > > > >> ShenZhen Institutes of Advanced Technology > > > > >Chinese Academy of Sciences > > > >> > > > >> > > > > > > > > > > > > > > > -- > > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > > -- Norbert Wiener > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Sep 9 10:02:23 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 9 Sep 2012 10:02:23 -0500 Subject: [petsc-users] configure error with netcdf In-Reply-To: References: <15A5E49D-DC6A-4D55-8A41-5BF2A771020F@mcs.anl.gov> Message-ID: <7EC1F88D-080F-48E6-BBEE-A4613D78DDB6@mcs.anl.gov> On Sep 9, 2012, at 7:55 AM, Matthew Knepley wrote: > On Sun, Sep 9, 2012 at 12:31 AM, Satish Balay wrote: > > ncgentab.c:(.text+0x1e2): undefined reference to `yyunput' Satish, Weird undefined symbols like yyunput are always a result of flex issues. Could you please make the same changes to netcdf.py that you made to the other package that used flex (which resolved the problem there). Barry > > I don't know whats going on here. I don't see lex/flex invoked. > > lex is invoked right above. Barry did not paste it. > > Perhaps you can try the attached patch and see if it works. Its > attempting to use the latest netcdf tarball. > > cd config/BuidSystem > patch -Np1 < netcdf.patch > > Matt - If the changes look ok - I can push to petsc-dev > > - update to 4.2.1.1 tarball > - remove configEnv. all configure options should be passed via command-line. > - also remove make.inc. Its never used. > - Add in --disable-hdf4 --disable-netcdf-4 to get rid of hdf5 errors > - libnetcdf_c++.a not created by the new version? remove it from liblist > > exodusii.py also has some issues. Its makefiles are relying on > PETSC_DIR/PETSC_ARCH env variable to be set by user - for obtaining > ntcdf location, install-dir for 'make install' etc.. > > Will look. > > Matt > > Satish > > On Sat, 8 Sep 2012, Barry Smith wrote: > > > > > This is likely the same flex problem you say with ptscotch: > > > > ncgen.l:142: warning: passing argument 1 of ?ncgerror? 
discards qualifiers from pointer target type > > ncgentab.o: In function `ignore': > > ncgentab.c:(.text+0x1e2): undefined reference to `yyunput' > > collect2: ld returned 1 exit status > > make[2]: *** [ncgen3] Error 1 > > make[1]: *** [all-recursive] Error 1 > > make: *** [all] Error 2 > > > > > > netcdf.py would have to be modified in the same way as ptscotch.py was modified to do the right thing with flex > > > > Satish, do you know how to do it? Cause I sure don't. > > > > Barry > > > > > > > > On Sep 8, 2012, at 10:35 AM, fdkong wrote: > > > > > Thank you Matt. I added MPI shared libraries to my LD_LIBRARY_PATH. The error disappeared. But other errors happened. Please find the attached files! > > > > > > > > > > > > ------------------ Original ------------------ > > > From: "knepley"; > > > Date: Sat, Sep 8, 2012 12:30 PM > > > To: "fdkong"; > > > Cc: "petsc-users"; > > > Subject: Re: configure error with netcdf > > > > > > On Fri, Sep 7, 2012 at 11:28 PM, fdkong wrote: > > > >> Hi all, > > > >> > > > >> There are anyone who know how to install netcdf with cross compile? I > > > >> configured petsc with netcdf, but got error below: > > > >> > > > > > > >Send externalpackages./netcdf*/config.log > > > > > > > Thanks, > > > > > > > Matt > > > > > > I attached the file config.log. > > > > > > Your MPI shared libraries are not in your LD_LIBRARY_PATH > > > > > > configure:5252: checking whether we are cross compiling > > > configure:5260: /curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicc -o conftest -fPIC -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I. conftest.c >&5 > > > configure:5264: $? = 0 > > > configure:5271: ./conftest > > > ./conftest: error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory > > > configure:5275: $? = 127 > > > > > > Matt > > > >> > > > >> >===========================================================================>==== > > > >> Configuring PETSc to compile on your system > > > >> > > > >> > > > >> >===========================================================================>==== > > > > >===========================================================================>==== > > > >> > > > >> ***** WARNING: MPI_DIR > > > >> found in enviornment variables - ignoring ****** > > > >>> > > > >> > > > >> >===========================================================================>==== > > > >> > > > >> > > > >> =============================================================================== > > > >> > > > >> WARNING! > > > >> Compiling PETSc with no debugging, this should > > > >> > > > >> only be done for timing and > > > >> production runs. 
All development should > > > >> > > > >> be done when configured using --with-debugging=1 > > > > > > > > > > > > =============================================================================== > > > >> > > > >> > > > >> =============================================================================== > > > >> > > > >> Configuring NetCDF; this > > > >> may take several minutes > > > >> > > > >> > > > >> =============================================================================== > > > >> > > > >> > > > >> > > > >> > > > >> > > > >> ******************************************************************************* > > > >> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > > >> details): > > > >> > > > >> ------------------------------------------------------------------------------- > > > >> Error running make on NetCDF: Could not execute "cd > > > >> /projects/fako9399/petsc-3.3-p3/externalpackages/netcdf-4.1.1 && > > > >> AR="/usr/bin/ar" ARFLAGS="cr" > > > >> CC="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicc" CFLAGS=" > > > >> -fPIC -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O > > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > > > >> CXX="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpicxx" > > > >> CXXFLAGS=" -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O > > > >> -fPIC -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > > > >> FC="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpif90" FCFLAGS=" > > > >> -fPIC -Wno-unused-variable -O > > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include > > > >> -I/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/include -I." > > > >> F90="/curc/tools/free/redhat_5_x86_64/openmpi-1.4.3_ib/bin/mpif90" > > > >> ./configure --prefix=/projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt > > > >> --libdir=/projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib > > > >> --disable-dap --enable-shared": > > > >> configure: netCDF 4.1.1 > > > >> checking build system type... x86_64-unknown-linux-gnu > > > >> checking host system type... x86_64-unknown-linux-gnu > > > >> checking for a BSD-compatible install... /usr/bin/install -c > > > >> checking whether build environment is sane... yes > > > >> checking for a thread-safe mkdir -p... /bin/mkdir -p > > > >> checking for gawk... gawk > > > >> checking whether make sets $(MAKE)... yes > > > >> configure: checking user options > > > > >checking whether CXX is set to ''... no > > > >> checking whether FC is set to ''... no > > > >> checking whether F90 is set to ''... no > > > >> checking whether a NCIO_MINBLOCKSIZE was specified... 256 > > > >> checking whether udunits is to be built... no > > > > >checking if fsync support is enabled... yes > > > >> checking whether extra valgrind tests should be run... no > > > >> checking whether libcf is to be built... no > > > >> checking whether reading of HDF4 SD files is to be enabled... no > > > >> checking whether to fetch some sample HDF4 files from Unidata ftp site to > > > > >test HDF4 reading (requires wget)... no > > > >> checking whether parallel I/O for classic and 64-bit offset files using > > > >> parallel-netcdf is to be enabled... no > > > > >checking whether a location for the parallel-netcdf library was > > > >> specified... no > > > >> checking whether new netCDF-4 C++ API is to be built... 
no > > > > >checking whether extra example tests should be run... no > > > > >checking whether parallel IO tests should be run... no > > > > >checking whether a location for the HDF5 library was specified... > > > >> checking whether a location for the ZLIB library was specified... > > > >> checking whether a location for the SZLIB library was specified... > > > >> checking whether a location for the HDF4 library was specified... > > > >> checking whether a default chunk size in bytes was specified... 4194304 > > > > >checking whether a maximum per-variable cache size for HDF5 was > > > > >specified... 67108864 > > > > >checking whether a number of chunks for the default per-variable cache was > > > >> specified... 10 > > > >> checking whether a default file cache size for HDF5 was specified... > > > > >4194304 > > > > >checking whether a default file cache maximum number of elements for HDF5 > > > > >was specified... 1009 > > > > >checking whether a default cache preemption for HDF5 was specified... 0.75 > > > > >checking whether netCDF-4 logging is enabled... no > > > > >checking whether a path for curl-config was specified... no > > > > >checking whether a location for curl installation was specified... no > > > > >configure: checking whether a location for curl-config is in PATH... yes > > > > >checking whether DAP client is to be built... no > > > > >checking whether dap remote testing should be enabled (default on)... no > > > > >checking whether the time-consuming dap tests should be enabled (default > > > > >off)... no > > > > >checking whether a location for liboc was specified... no > > > >> checking whether netCDF extra tests should be run (developers only)... no > > > > >checking whether Fortran compiler(s) should be tested during configure... > > > > >yes > > > > >checking whether FFIO will be used... no > > > > >checking whether to skip C++, F77, or F90 APIs if compiler is broken... yes > > > > >checking whether only the C library is desired... no > > > >> checking whether examples should be built... yes > > > >> checking whether F77 API is desired... yes > > > > >checking whether any Fortran API is desired... yes > > > > >checking whether F90 API is desired... yes > > > > >checking whether fortran type sizes should be checked... yes > > > >> checking whether C API is desired... yes > > > >> checking where to get netCDF C-only library for separate fortran > > > >> libraries... > > > > >checking whether CXX API is desired... yes > > > > >checking whether v2 netCDF API should be built... yes > > > > >checking whether the ncgen/ncdump should be built... yes > > > >> checking whether large file (> 2GB) tests should be run... no > > > > >checking whether benchmaks should be run (experimental)... no > > > >> checking whether extreme numbers should be used in tests... yes > > > > >checking where to put large temp files if large file tests are run... . > > > > >checking whether a win32 DLL is desired... no > > > > >checking whether separate fortran libs are desired... yes > > > >> configure: finding C compiler > > > > >checking whether the C compiler works... yes > > > >> checking for C compiler default output file name... a.out > > > > >checking for suffix of executables... > > > > >checking whether we are cross compiling... configure: error: in > > > >> `/projects/fako9399/petsc-3.3-p3/externalpackages/netcdf-4.1.1': > > > > >configure: error: cannot run C compiled programs. > > > > >If you meant to cross compile, use `--host'. > > > > >See `config.log' for more details. 
> > > >> > > > >> ******************************************************************************* > > > >> > > > > >The configure script is: > > > >> > > > > > ./configure --with-clanguage=cxx --with-shared-libraries=1 > > > >> --with-dynamic-loading=1 --download-f-blas-lapack=1 --with-mpi-dir=$MPI_DIR > > > >> --known-mpi-shared-libraries=0 --with-batch=1 --download-parmetis=1 > > > > >--download-metis=1 --with-64-bit-indices=1 --download-netcdf=1 > > > > >--download-exodusii=1 --with-debugging=no --download-ptscotch=1 > > > >> > > > >> I also attached the file configure.log > > > >> > > > > >------------------ > > > > >Fande Kong > > > >> ShenZhen Institutes of Advanced Technology > > > > >Chinese Academy of Sciences > > > >> > > > >> > > > > > > > > > > > > > > > -- > > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > > -- Norbert Wiener > > > > > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From balay at mcs.anl.gov Sun Sep 9 12:02:03 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 9 Sep 2012 12:02:03 -0500 (CDT) Subject: [petsc-users] configure error with netcdf In-Reply-To: <7EC1F88D-080F-48E6-BBEE-A4613D78DDB6@mcs.anl.gov> References: <15A5E49D-DC6A-4D55-8A41-5BF2A771020F@mcs.anl.gov> <7EC1F88D-080F-48E6-BBEE-A4613D78DDB6@mcs.anl.gov> Message-ID: On Sun, 9 Sep 2012, Barry Smith wrote: > > On Sep 9, 2012, at 7:55 AM, Matthew Knepley wrote: > > > On Sun, Sep 9, 2012 at 12:31 AM, Satish Balay wrote: > > > ncgentab.c:(.text+0x1e2): undefined reference to `yyunput' > > Satish, > > Weird undefined symbols like yyunput are always a result of flex issues. > > Could you please make the same changes to netcdf.py that you made to the other package that used flex (which resolved the problem there). In this case netcdf has its own configure - and does its own search for flex etc. >>> checking for flex... flex checking lex output file root... lex.yy checking lex library... -lfl <<< So the tools appear to be found. And I don't see them being used further down [in configure.log]. So its not clear if 'missing flex' is the cause of this error. And after my patch [with netcdf upgrade] - I have a successful build of netcdf on rhel5 - even without flex/bison. However flex is needed for ptscotch anyway [so one would get an error from ptscotch if flex is missing] Satish From bsmith at mcs.anl.gov Sun Sep 9 13:04:39 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 9 Sep 2012 13:04:39 -0500 Subject: [petsc-users] configure error with netcdf In-Reply-To: References: <15A5E49D-DC6A-4D55-8A41-5BF2A771020F@mcs.anl.gov> <7EC1F88D-080F-48E6-BBEE-A4613D78DDB6@mcs.anl.gov> Message-ID: On Sep 9, 2012, at 12:02 PM, Satish Balay wrote: > On Sun, 9 Sep 2012, Barry Smith wrote: > >> >> On Sep 9, 2012, at 7:55 AM, Matthew Knepley wrote: >> >>> On Sun, Sep 9, 2012 at 12:31 AM, Satish Balay wrote: >>>> ncgentab.c:(.text+0x1e2): undefined reference to `yyunput' >> >> Satish, >> >> Weird undefined symbols like yyunput are always a result of flex issues. >> >> Could you please make the same changes to netcdf.py that you made to the other package that used flex (which resolved the problem there). > > In this case netcdf has its own configure - and does its own search for flex etc. > >>>> > checking for flex... 
flex > checking lex output file root... lex.yy > checking lex library... -lfl > <<< > > So the tools appear to be found. And I don't see them being used > further down [in configure.log]. Agreed. It seems some of the processing is lost; perhaps netcdf build tools do not properly display everything they are doing Note the use of .l files and the symbol yyunput which is very specific to flex. > So its not clear if 'missing flex' > is the cause of this error. Yes, it may not be a "missing" flex that is causing the problem but the problem is related flex, bison and all that crap. Unfortunately if we cannot reproduce on a machine we have access to there is no way we can debug it. Barry > > > And after my patch [with netcdf upgrade] - I have a successful build > of netcdf on rhel5 - even without flex/bison. > > However flex is needed for ptscotch anyway [so one would get an error > from ptscotch if flex is missing] > > Satish From fd.kong at siat.ac.cn Mon Sep 10 22:05:31 2012 From: fd.kong at siat.ac.cn (=?ISO-8859-1?B?ZmRrb25n?=) Date: Tue, 11 Sep 2012 11:05:31 +0800 Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3 with 64 integers Message-ID: Hi all, The function PetscSFReduceBegin runs well on MPICH, but does not work on openmpi-1.4.3, with 64 integers. Anyone knows why? Maybe this link could help us guess why? http://www.open-mpi.org/community/lists/devel/2005/11/0517.php I attached the configure.log and make.log files. ------------------ Fande Kong ShenZhen Institutes of Advanced Technology Chinese Academy of Sciences -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.zip Type: application/octet-stream Size: 198777 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.zip Type: application/octet-stream Size: 6363 bytes Desc: not available URL: From knepley at gmail.com Mon Sep 10 22:12:07 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 10 Sep 2012 22:12:07 -0500 Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3 with 64 integers In-Reply-To: References: Message-ID: On Mon, Sep 10, 2012 at 10:05 PM, fdkong wrote: > Hi all, > > The function PetscSFReduceBegin runs well on MPICH, but does not work > on openmpi-1.4.3, with 64 integers. Anyone knows why? > 1) What error are you seeing? There are no errors in the build. 2) Please do not send logs to petsc-users, send them to petsc-maint at mcs.anl.gov Matt > Maybe this link could help us guess why? > http://www.open-mpi.org/community/lists/devel/2005/11/0517.php > > I attached the configure.log and make.log files. > ------------------ > Fande Kong > ShenZhen Institutes of Advanced Technology > Chinese Academy of Sciences > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Mon Sep 10 22:54:22 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 10 Sep 2012 22:54:22 -0500 Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3 with 64 integers In-Reply-To: References: Message-ID: On Mon, Sep 10, 2012 at 10:47 PM, fdkong wrote: > > >> Hi all, > >> > >> The function PetscSFReduceBegin runs well on MPICH, but does not work > >> on openmpi-1.4.3, with 64 integers. Anyone knows why? > >> > > >1) What error are you seeing? There are no errors in the build. > > Yes, There are no errors in the build and configure. But when I ran my > code involved the function PetscSFReduceBegin on supercomputer, I got the > error below: > Can you run src/sys/sf/examples/tutorials/ex1? There are several tests in the makefile there. I suspect that your graph is not correctly specified. Matt > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and > run > [0]PETSC ERROR: to get more information on the crash. > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 > CDT 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./linearElasticity on a arch-linu named node0353 by > fako9399 Mon Sep 10 16:50:42 2012 > [0]PETSC ERROR: Libraries linked from > /projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib > [0]PETSC ERROR: Configure run at Mon Sep 10 13:58:46 2012 > [0]PETSC ERROR: Configure options --known-level1-dcache-size=32768 > --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 > --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 > --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 > --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 > --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 > --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-clanguage=cxx > --with-shared-libraries=1 --with-dynamic-loading=1 > --download-f-blas-lapack=1 --with-batch=1 --known-mpi-shared-libraries=0 > --with-mpi-shared=1 --download-parmetis=1 --download-metis=1 > --with-64-bit-indices=1 > --with-netcdf-dir=/projects/fako9399/petsc-3.3-p3/externalpackage/netcdf-4.1.3install > --download-exodusii=1 --with-debugging=no --download-ptscotch=1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD > with errorcode 59. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > -------------------------------------------------------------------------- > mpirun has exited due to process rank 0 with PID 1517 on > node node0353 exiting without calling "finalize". This may > have caused other processes in the application to be > terminated by signals sent by mpirun (as reported here). > -------------------------------------------------------------------------- > > I had done some debugs, and then found the error came from the > function PetscSFReduceBegin. > > >2) Please do not send logs to petsc-users, send them to > >petsc-maint at mcs.anl.gov > > Ok, Thanks. > > > Matt > > > >> Maybe this link could help us guess why? > >> http://www.open-mpi.org/community/lists/devel/2005/11/0517.php > >> > >> I attached the configure.log and make.log files. > >> ------------------ > >> Fande Kong > >> ShenZhen Institutes of Advanced Technology > >> Chinese Academy of Sciences > >> > >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fd.kong at siat.ac.cn Mon Sep 10 23:00:41 2012 From: fd.kong at siat.ac.cn (=?ISO-8859-1?B?ZmRrb25n?=) Date: Tue, 11 Sep 2012 12:00:41 +0800 Subject: [petsc-users] configure error with netcdf Message-ID: >On Sep 9, 2012, at 12:02 PM, Satish Balay wrote: >> On Sun, 9 Sep 2012, Barry Smith wrote: > > >> > >> >On Sep 9, 2012, at 7:55 AM, Matthew Knepley wrote: >>> >>> >On Sun, Sep 9, 2012 at 12:31 AM, Satish Balay wrote: >>>>> ncgentab.c:(.text+0x1e2): undefined reference to `yyunput' >> > >> > Satish, >>> >> > Weird undefined symbols like yyunput are always a result of flex issues. >> > >> > Could you please make the same changes to netcdf.py that you made to the other package that used flex (which resolved the problem there). >> > >In this case netcdf has its own configure - and does its own search for flex etc. > > >>>> > > >checking for flex... flex > >checking lex output file root... lex.yy > >checking lex library... -lfl > ><<< > > >> So the tools appear to be found. And I don't see them being used > >further down [in configure.log]. > Agreed. It seems some of the processing is lost; perhaps netcdf build tools do not properly display everything >they are doing > Note the use of .l files and the symbol yyunput which is very specific to flex. > > > So its not clear if 'missing flex' > >is the cause of this error. > Yes, it may not be a "missing" flex that is causing the problem but the problem is related flex, bison and all that >crap. Unfortunately if we cannot reproduce on a machine we have access to there is no way we can debug it. I am not sure if the error is the result of the "missing" flex, but when I used netcdf-4.1.3, instead of netcdf-4.1.1, the petsc worked well. Thus, Maybe there is a bug in the netcdf itself. > Barry > > > > >And after my patch [with netcdf upgrade] - I have a successful build > >of netcdf on rhel5 - even without flex/bison. > > > >However flex is needed for ptscotch anyway [so one would get an error > >from ptscotch if flex is missing] >> > >Satish ------------------ Fande Kong ShenZhen Institutes of Advanced Technology Chinese Academy of Sciences -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Sep 11 00:33:16 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 11 Sep 2012 00:33:16 -0500 Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3 with64 integers In-Reply-To: References: Message-ID: On Tue, Sep 11, 2012 at 12:05 AM, fdkong wrote: > Hi Matt, > > I tested src/sys/sf/examples/tutorials/ex1 on OpenMPI and MPICH > seperately respectively. I found the error come from the > function PetscSFReduceBegin called by PetscSFCreateInverseSF. I used the > script below: > Thanks for testing this. I will run it myself and track down the bug. 
Matt > mpirun -n 2 ./ex1 -test_invert > > (1) On OpenMPI, got the result below: > > Star Forest Object: 2 MPI processes > type not yet set > synchronization=FENCE sort=rank-order > [0] Number of roots=3, leaves=2, remote ranks=1 > [0] 0 <- (1,1) > [0] 1 <- (1,0) > [1] Number of roots=2, leaves=3, remote ranks=1 > [1] 0 <- (0,1) > [1] 1 <- (0,0) > [1] 2 <- (0,2) > ## Multi-SF > Star Forest Object: 2 MPI processes > type not yet set > synchronization=FENCE sort=rank-order > [0] Number of roots=3, leaves=2, remote ranks=1 > [0] 0 <- (1,1) > [0] 1 <- (1,0) > [1] Number of roots=2, leaves=3, remote ranks=1 > [1] 0 <- (0,2) > [1] 1 <- (0,0) > [1] 2 <- (0,2) > ## Inverse of Multi-SF > Star Forest Object: 2 MPI processes > type not yet set > synchronization=FENCE sort=rank-order > [0] Number of roots=2, leaves=0, remote ranks=0 > [1] Number of roots=3, leaves=0, remote ranks=0 > > (2) On MPICH, got the result below: > > Star Forest Object: 2 MPI processes > type not yet set > synchronization=FENCE sort=rank-order > [0] Number of roots=3, leaves=2, remote ranks=1 > [0] 0 <- (1,1) > [0] 1 <- (1,0) > [1] Number of roots=2, leaves=3, remote ranks=1 > [1] 0 <- (0,1) > [1] 1 <- (0,0) > [1] 2 <- (0,2) > ## Multi-SF > Star Forest Object: 2 MPI processes > type not yet set > synchronization=FENCE sort=rank-order > [0] Number of roots=3, leaves=2, remote ranks=1 > [0] 0 <- (1,1) > [0] 1 <- (1,0) > [1] Number of roots=2, leaves=3, remote ranks=1 > [1] 0 <- (0,1) > [1] 1 <- (0,0) > [1] 2 <- (0,2) > ## Inverse of Multi-SF > Star Forest Object: 2 MPI processes > type not yet set > synchronization=FENCE sort=rank-order > [0] Number of roots=2, leaves=3, remote ranks=1 > [0] 0 <- (1,1) > [0] 1 <- (1,0) > [0] 2 <- (1,2) > [1] Number of roots=3, leaves=2, remote ranks=1 > [1] 0 <- (0,1) > [1] 1 <- (0,0) > > From two above results, you could found that the inverse of Multi-SF is > incorrect on OpenMPI. Could you please take some debugs on OpenMPI (1.4.3) > with 64-bit integers? > > In my code, I call DMComplexDistribute that calls PetscSFCreateInverseSF > that calls PetscSFReduceBegin. I had taken a lot of debugs, and found the > error come from the PetscSFReduceBegin. > > On Mon, Sep 10, 2012 at 10:47 PM, fdkong wrote: > ** > >> >> >> Hi all, >> >> >> >> The function PetscSFReduceBegin runs well on MPICH, but does not work >> >> on openmpi-1.4.3, with 64 integers. Anyone knows why? >> >> >> >> >1) What error are you seeing? There are no errors in the build. >> >> Yes, There are no errors in the build and configure. But when I ran my >> code involved the function PetscSFReduceBegin on supercomputer, I got the >> error below: >> > > Can you run src/sys/sf/examples/tutorials/ex1? There are several tests in > the makefile there. I suspect > that your graph is not correctly specified. > > Matt > >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory >> corruption errors >> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, >> and run >> [0]PETSC ERROR: to get more information on the crash. 
>> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: Signal received! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 >> CDT 2012 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: ./linearElasticity on a arch-linu named node0353 by >> fako9399 Mon Sep 10 16:50:42 2012 >> [0]PETSC ERROR: Libraries linked from >> /projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib >> [0]PETSC ERROR: Configure run at Mon Sep 10 13:58:46 2012 >> [0]PETSC ERROR: Configure options --known-level1-dcache-size=32768 >> --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 >> --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 >> --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 >> --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 >> --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 >> --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-clanguage=cxx >> --with-shared-libraries=1 --with-dynamic-loading=1 >> --download-f-blas-lapack=1 --with-batch=1 --known-mpi-shared-libraries=0 >> --with-mpi-shared=1 --download-parmetis=1 --download-metis=1 >> --with-64-bit-indices=1 >> --with-netcdf-dir=/projects/fako9399/petsc-3.3-p3/externalpackage/netcdf-4.1.3install >> --download-exodusii=1 --with-debugging=no --download-ptscotch=1 >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: User provided function() line 0 in unknown directory >> unknown file >> -------------------------------------------------------------------------- >> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD >> with errorcode 59. >> >> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. >> You may or may not see output from other processes, depending on >> exactly when Open MPI kills them. >> -------------------------------------------------------------------------- >> -------------------------------------------------------------------------- >> mpirun has exited due to process rank 0 with PID 1517 on >> node node0353 exiting without calling "finalize". This may >> have caused other processes in the application to be >> terminated by signals sent by mpirun (as reported here). >> -------------------------------------------------------------------------- >> >> I had done some debugs, and then found the error came from the function >> PetscSFReduceBegin. >> >> >2) Please do not send logs to petsc-users, send them to >> >petsc-maint at mcs.anl.gov >> >> Ok, Thanks. >> >> > Matt >> >> >> >> Maybe this link could help us guess why? >> >> http://www.open-mpi.org/community/lists/devel/2005/11/0517.php >> >> >> >> I attached the configure.log and make.log files. >> >> ------------------ >> >> Fande Kong >> >> ShenZhen Institutes of Advanced Technology >> >> Chinese Academy of Sciences >> >> >> >> >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > ** > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From fd.kong at foxmail.com Mon Sep 10 22:47:33 2012 From: fd.kong at foxmail.com (=?ISO-8859-1?B?ZmRrb25n?=) Date: Tue, 11 Sep 2012 11:47:33 +0800 Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3 with 64 integers Message-ID: >> Hi all, >> >> The function PetscSFReduceBegin runs well on MPICH, but does not work >> on openmpi-1.4.3, with 64 integers. Anyone knows why? >> >1) What error are you seeing? There are no errors in the build. Yes, There are no errors in the build and configure. But when I ran my code involved the function PetscSFReduceBegin on supercomputer, I got the error below: [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [0]PETSC ERROR: to get more information on the crash. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./linearElasticity on a arch-linu named node0353 by fako9399 Mon Sep 10 16:50:42 2012 [0]PETSC ERROR: Libraries linked from /projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib [0]PETSC ERROR: Configure run at Mon Sep 10 13:58:46 2012 [0]PETSC ERROR: Configure options --known-level1-dcache-size=32768 --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-clanguage=cxx --with-shared-libraries=1 --with-dynamic-loading=1 --download-f-blas-lapack=1 --with-batch=1 --known-mpi-shared-libraries=0 --with-mpi-shared=1 --download-parmetis=1 --download-metis=1 --with-64-bit-indices=1 --with-netcdf-dir=/projects/fako9399/petsc-3.3-p3/externalpackage/netcdf-4.1.3install --download-exodusii=1 --with-debugging=no --download-ptscotch=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 59. 
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- -------------------------------------------------------------------------- mpirun has exited due to process rank 0 with PID 1517 on node node0353 exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here). -------------------------------------------------------------------------- I had done some debugs, and then found the error came from the function PetscSFReduceBegin. >2) Please do not send logs to petsc-users, send them to >petsc-maint at mcs.anl.gov Ok, Thanks. > Matt >> Maybe this link could help us guess why? >> http://www.open-mpi.org/community/lists/devel/2005/11/0517.php >> >> I attached the configure.log and make.log files. >> ------------------ >> Fande Kong >> ShenZhen Institutes of Advanced Technology >> Chinese Academy of Sciences >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fd.kong at foxmail.com Tue Sep 11 00:05:30 2012 From: fd.kong at foxmail.com (=?ISO-8859-1?B?ZmRrb25n?=) Date: Tue, 11 Sep 2012 13:05:30 +0800 Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3 with64 integers Message-ID: Hi Matt, I tested src/sys/sf/examples/tutorials/ex1 on OpenMPI and MPICH seperately respectively. I found the error come from the function PetscSFReduceBegin called by PetscSFCreateInverseSF. I used the script below: mpirun -n 2 ./ex1 -test_invert (1) On OpenMPI, got the result below: Star Forest Object: 2 MPI processes type not yet set synchronization=FENCE sort=rank-order [0] Number of roots=3, leaves=2, remote ranks=1 [0] 0 <- (1,1) [0] 1 <- (1,0) [1] Number of roots=2, leaves=3, remote ranks=1 [1] 0 <- (0,1) [1] 1 <- (0,0) [1] 2 <- (0,2) ## Multi-SF Star Forest Object: 2 MPI processes type not yet set synchronization=FENCE sort=rank-order [0] Number of roots=3, leaves=2, remote ranks=1 [0] 0 <- (1,1) [0] 1 <- (1,0) [1] Number of roots=2, leaves=3, remote ranks=1 [1] 0 <- (0,2) [1] 1 <- (0,0) [1] 2 <- (0,2) ## Inverse of Multi-SF Star Forest Object: 2 MPI processes type not yet set synchronization=FENCE sort=rank-order [0] Number of roots=2, leaves=0, remote ranks=0 [1] Number of roots=3, leaves=0, remote ranks=0 (2) On MPICH, got the result below: Star Forest Object: 2 MPI processes type not yet set synchronization=FENCE sort=rank-order [0] Number of roots=3, leaves=2, remote ranks=1 [0] 0 <- (1,1) [0] 1 <- (1,0) [1] Number of roots=2, leaves=3, remote ranks=1 [1] 0 <- (0,1) [1] 1 <- (0,0) [1] 2 <- (0,2) ## Multi-SF Star Forest Object: 2 MPI processes type not yet set synchronization=FENCE sort=rank-order [0] Number of roots=3, leaves=2, remote ranks=1 [0] 0 <- (1,1) [0] 1 <- (1,0) [1] Number of roots=2, leaves=3, remote ranks=1 [1] 0 <- (0,1) [1] 1 <- (0,0) [1] 2 <- (0,2) ## Inverse of Multi-SF Star Forest Object: 2 MPI processes type not yet set synchronization=FENCE sort=rank-order [0] Number of roots=2, leaves=3, remote ranks=1 [0] 0 <- (1,1) [0] 1 <- (1,0) [0] 2 <- (1,2) [1] Number of roots=3, leaves=2, remote ranks=1 [1] 0 <- (0,1) [1] 1 <- (0,0) From two above results, you could found that the inverse of Multi-SF is incorrect on OpenMPI. Could you please take some debugs on OpenMPI (1.4.3) with 64-bit integers? 
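For anyone who wants to poke at this outside of ex1, below is a stripped-down leaf-to-root reduce written against the petsc-3.3 PetscSF interface as I understand it. The two-rank graph, the MPI_SUM reduction, and the data values are invented purely for illustration (this is not the graph ex1 or DMComplexDistribute builds), so treat it as a sketch of the failing call path rather than an exact reproducer; run it with mpirun -n 2.

#include <petscsf.h>

int main(int argc,char **argv)
{
  PetscSF        sf;
  PetscSFNode   *remote;
  PetscInt      *leafdata,*rootdata,i,nroots = 2,nleaves = 2;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,(char*)0,(char*)0);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
  /* Each of the two ranks owns 2 roots and has 2 leaves that point at the
     other rank's roots; this graph is made up, it is not the ex1 graph. */
  ierr = PetscMalloc(nleaves*sizeof(PetscSFNode),&remote);CHKERRQ(ierr);
  for (i=0; i<nleaves; i++) {
    remote[i].rank  = 1 - rank;   /* the other process */
    remote[i].index = i;          /* its i-th root     */
  }
  ierr = PetscSFCreate(PETSC_COMM_WORLD,&sf);CHKERRQ(ierr);
  ierr = PetscSFSetGraph(sf,nroots,nleaves,PETSC_NULL,PETSC_COPY_VALUES,remote,PETSC_COPY_VALUES);CHKERRQ(ierr);
  ierr = PetscFree(remote);CHKERRQ(ierr);

  ierr = PetscMalloc(nleaves*sizeof(PetscInt),&leafdata);CHKERRQ(ierr);
  ierr = PetscMalloc(nroots*sizeof(PetscInt),&rootdata);CHKERRQ(ierr);
  for (i=0; i<nleaves; i++) leafdata[i] = 10*rank + i;
  for (i=0; i<nroots; i++)  rootdata[i] = 0;

  /* Leaf-to-root reduction: with --with-64-bit-indices, MPIU_INT is a
     64-bit MPI datatype, which is the case that seems to go wrong here. */
  ierr = PetscSFReduceBegin(sf,MPIU_INT,leafdata,rootdata,MPI_SUM);CHKERRQ(ierr);
  ierr = PetscSFReduceEnd(sf,MPIU_INT,leafdata,rootdata,MPI_SUM);CHKERRQ(ierr);
  ierr = PetscIntView(nroots,rootdata,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);

  ierr = PetscFree(leafdata);CHKERRQ(ierr);
  ierr = PetscFree(rootdata);CHKERRQ(ierr);
  ierr = PetscSFDestroy(&sf);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}

With two ranks the roots should end up holding the other rank's leaf values (0 and 1 on rank 1, 10 and 11 on rank 0); a build that drops or garbles these values would be exercising the same communication path as ex1.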
In my code, I call DMComplexDistribute that calls PetscSFCreateInverseSF that calls PetscSFReduceBegin. I had taken a lot of debugs, and found the error come from the PetscSFReduceBegin. On Mon, Sep 10, 2012 at 10:47 PM, fdkong wrote: >> Hi all, >> >> The function PetscSFReduceBegin runs well on MPICH, but does not work >> on openmpi-1.4.3, with 64 integers. Anyone knows why? >> >1) What error are you seeing? There are no errors in the build. Yes, There are no errors in the build and configure. But when I ran my code involved the function PetscSFReduceBegin on supercomputer, I got the error below: Can you run src/sys/sf/examples/tutorials/ex1? There are several tests in the makefile there. I suspect that your graph is not correctly specified. Matt [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [0]PETSC ERROR: to get more information on the crash. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./linearElasticity on a arch-linu named node0353 by fako9399 Mon Sep 10 16:50:42 2012 [0]PETSC ERROR: Libraries linked from /projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib [0]PETSC ERROR: Configure run at Mon Sep 10 13:58:46 2012 [0]PETSC ERROR: Configure options --known-level1-dcache-size=32768 --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-clanguage=cxx --with-shared-libraries=1 --with-dynamic-loading=1 --download-f-blas-lapack=1 --with-batch=1 --known-mpi-shared-libraries=0 --with-mpi-shared=1 --download-parmetis=1 --download-metis=1 --with-64-bit-indices=1 --with-netcdf-dir=/projects/fako9399/petsc-3.3-p3/externalpackage/netcdf-4.1.3install --download-exodusii=1 --with-debugging=no --download-ptscotch=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. 
-------------------------------------------------------------------------- -------------------------------------------------------------------------- mpirun has exited due to process rank 0 with PID 1517 on node node0353 exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here). -------------------------------------------------------------------------- I had done some debugs, and then found the error came from the function PetscSFReduceBegin. >2) Please do not send logs to petsc-users, send them to >petsc-maint at mcs.anl.gov Ok, Thanks. > Matt >> Maybe this link could help us guess why? >> http://www.open-mpi.org/community/lists/devel/2005/11/0517.php >> >> I attached the configure.log and make.log files. >> ------------------ >> Fande Kong >> ShenZhen Institutes of Advanced Technology >> Chinese Academy of Sciences >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fd.kong at foxmail.com Tue Sep 11 11:44:43 2012 From: fd.kong at foxmail.com (=?ISO-8859-1?B?ZmRrb25n?=) Date: Wed, 12 Sep 2012 00:44:43 +0800 Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3with64 integers Message-ID: Hi Matt, Thanks. I guess there are two reasons: (1) The MPI function MPI_Accumulate with operation MPI_REPLACE is not supported in the implementation of OpenMPI 1.4.3 or other OpenMPI versions (a reduced sketch of this pattern follows below). (2) The MPI function does not accept the datatype MPIU_2INT when we use 64-bit integers. But when we run on MPICH, it works well! ------------------ Fande Kong ShenZhen Institutes of Advanced Technology Chinese Academy of Sciences ------------------ Original ------------------ From: "knepley"; Date: Tue, Sep 11, 2012 01:33 PM To: "fdkong"; Cc: "petsc-users"; Subject: Re: PetscSFReduceBegin does not work correctly on openmpi-1.4.3with64 integers On Tue, Sep 11, 2012 at 12:05 AM, fdkong wrote: Hi Matt, I tested src/sys/sf/examples/tutorials/ex1 on OpenMPI and MPICH seperately respectively. I found the error come from the function PetscSFReduceBegin called by PetscSFCreateInverseSF. I used the script below: Thanks for testing this. I will run it myself and track down the bug.
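A reduced, PETSc-independent test of the suspected failure mode in (1), that is MPI_Accumulate with MPI_REPLACE and a derived two-integer datatype over an MPI window, might look like the sketch below. The contiguous pair of 64-bit integers stands in for what I understand MPIU_2INT to become with --with-64-bit-indices; the window layout, values, and target ranks are made up for illustration, so this is only in the spirit of a reduced test case, not code taken from PetscSF. Run with mpirun -n 2.

#include <mpi.h>
#include <stdio.h>

int main(int argc,char **argv)
{
  int          rank,size;
  long long    win_buf[2]  = {-1,-1}; /* window memory: one two-integer element */
  long long    send_buf[2];
  MPI_Datatype pair;                  /* contiguous pair of 64-bit integers     */
  MPI_Win      win;

  MPI_Init(&argc,&argv);
  MPI_Comm_rank(MPI_COMM_WORLD,&rank);
  MPI_Comm_size(MPI_COMM_WORLD,&size);

  MPI_Type_contiguous(2,MPI_LONG_LONG_INT,&pair);
  MPI_Type_commit(&pair);

  send_buf[0] = rank;        /* e.g. a "remote rank"  */
  send_buf[1] = 100 + rank;  /* e.g. a "remote index" */

  MPI_Win_create(win_buf,2*sizeof(long long),sizeof(long long),MPI_INFO_NULL,MPI_COMM_WORLD,&win);
  MPI_Win_fence(0,win);
  /* Replace the pair stored in the other rank's window */
  MPI_Accumulate(send_buf,1,pair,(rank+1)%size,0,1,pair,MPI_REPLACE,win);
  MPI_Win_fence(0,win);

  printf("[%d] window now holds (%lld,%lld)\n",rank,win_buf[0],win_buf[1]);

  MPI_Win_free(&win);
  MPI_Type_free(&pair);
  MPI_Finalize();
  return 0;
}

If this prints the wrong pair, or crashes, under Open MPI 1.4.3 but behaves under MPICH, that would support the one-sided/datatype explanation rather than a problem in the SF graph itself.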
Matt mpirun -n 2 ./ex1 -test_invert (1) On OpenMPI, got the result below: Star Forest Object: 2 MPI processes type not yet set synchronization=FENCE sort=rank-order [0] Number of roots=3, leaves=2, remote ranks=1 [0] 0 <- (1,1) [0] 1 <- (1,0) [1] Number of roots=2, leaves=3, remote ranks=1 [1] 0 <- (0,1) [1] 1 <- (0,0) [1] 2 <- (0,2) ## Multi-SF Star Forest Object: 2 MPI processes type not yet set synchronization=FENCE sort=rank-order [0] Number of roots=3, leaves=2, remote ranks=1 [0] 0 <- (1,1) [0] 1 <- (1,0) [1] Number of roots=2, leaves=3, remote ranks=1 [1] 0 <- (0,2) [1] 1 <- (0,0) [1] 2 <- (0,2) ## Inverse of Multi-SF Star Forest Object: 2 MPI processes type not yet set synchronization=FENCE sort=rank-order [0] Number of roots=2, leaves=0, remote ranks=0 [1] Number of roots=3, leaves=0, remote ranks=0 (2) On MPICH, got the result below: Star Forest Object: 2 MPI processes type not yet set synchronization=FENCE sort=rank-order [0] Number of roots=3, leaves=2, remote ranks=1 [0] 0 <- (1,1) [0] 1 <- (1,0) [1] Number of roots=2, leaves=3, remote ranks=1 [1] 0 <- (0,1) [1] 1 <- (0,0) [1] 2 <- (0,2) ## Multi-SF Star Forest Object: 2 MPI processes type not yet set synchronization=FENCE sort=rank-order [0] Number of roots=3, leaves=2, remote ranks=1 [0] 0 <- (1,1) [0] 1 <- (1,0) [1] Number of roots=2, leaves=3, remote ranks=1 [1] 0 <- (0,1) [1] 1 <- (0,0) [1] 2 <- (0,2) ## Inverse of Multi-SF Star Forest Object: 2 MPI processes type not yet set synchronization=FENCE sort=rank-order [0] Number of roots=2, leaves=3, remote ranks=1 [0] 0 <- (1,1) [0] 1 <- (1,0) [0] 2 <- (1,2) [1] Number of roots=3, leaves=2, remote ranks=1 [1] 0 <- (0,1) [1] 1 <- (0,0) From two above results, you could found that the inverse of Multi-SF is incorrect on OpenMPI. Could you please take some debugs on OpenMPI (1.4.3) with 64-bit integers? In my code, I call DMComplexDistribute that calls PetscSFCreateInverseSF that calls PetscSFReduceBegin. I had taken a lot of debugs, and found the error come from the PetscSFReduceBegin. On Mon, Sep 10, 2012 at 10:47 PM, fdkong wrote: >> Hi all, >> >> The function PetscSFReduceBegin runs well on MPICH, but does not work >> on openmpi-1.4.3, with 64 integers. Anyone knows why? >> >1) What error are you seeing? There are no errors in the build. Yes, There are no errors in the build and configure. But when I ran my code involved the function PetscSFReduceBegin on supercomputer, I got the error below: Can you run src/sys/sf/examples/tutorials/ex1? There are several tests in the makefile there. I suspect that your graph is not correctly specified. Matt [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [0]PETSC ERROR: to get more information on the crash. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./linearElasticity on a arch-linu named node0353 by fako9399 Mon Sep 10 16:50:42 2012 [0]PETSC ERROR: Libraries linked from /projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib [0]PETSC ERROR: Configure run at Mon Sep 10 13:58:46 2012 [0]PETSC ERROR: Configure options --known-level1-dcache-size=32768 --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-clanguage=cxx --with-shared-libraries=1 --with-dynamic-loading=1 --download-f-blas-lapack=1 --with-batch=1 --known-mpi-shared-libraries=0 --with-mpi-shared=1 --download-parmetis=1 --download-metis=1 --with-64-bit-indices=1 --with-netcdf-dir=/projects/fako9399/petsc-3.3-p3/externalpackage/netcdf-4.1.3install --download-exodusii=1 --with-debugging=no --download-ptscotch=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- -------------------------------------------------------------------------- mpirun has exited due to process rank 0 with PID 1517 on node node0353 exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here). -------------------------------------------------------------------------- I had done some debugs, and then found the error came from the function PetscSFReduceBegin. >2) Please do not send logs to petsc-users, send them to >petsc-maint at mcs.anl.gov Ok, Thanks. > Matt >> Maybe this link could help us guess why? >> http://www.open-mpi.org/community/lists/devel/2005/11/0517.php >> >> I attached the configure.log and make.log files. >> ------------------ >> Fande Kong >> ShenZhen Institutes of Advanced Technology >> Chinese Academy of Sciences >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Tue Sep 11 12:12:23 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 11 Sep 2012 13:12:23 -0400 Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3with64 integers In-Reply-To: References: Message-ID: Open MPI one-sided operations with datatypes still have known bugs. They have had bug report s with reduced test cases for several years now. They need to fix those bugs. Please let them know that you are also waiting... To work around that, and for other reasons, I will write a new SF implementation using point-to-point. On Sep 11, 2012 12:44 PM, "fdkong" wrote: > Hi Matt, > > Thanks. I guess there are two reasons: > > (1) The MPI function MPI_Accumulate with operation MPI_RELACE is not > supported in the implementation of OpenMPI 1.4.3. or other OpenMPI versions. > > (2) The MPI function dose not accept the datatype MPIU_2INT, when we use > 64-bit integers. But when we run on MPICH, it works well! > > ------------------ > Fande Kong > ShenZhen Institutes of Advanced Technology > Chinese Academy of Sciences > > ** > > > ------------------ Original ------------------ > *From: * "knepley"; > *Date: * Tue, Sep 11, 2012 01:33 PM > *To: * "fdkong"; ** > *Cc: * "petsc-users"; ** > *Subject: * Re: PetscSFReduceBegin does not work correctly on > openmpi-1.4.3with64 integers > > On Tue, Sep 11, 2012 at 12:05 AM, fdkong wrote: > >> Hi Matt, >> >> I tested src/sys/sf/examples/tutorials/ex1 on OpenMPI and MPICH >> seperately respectively. I found the error come from the function >> PetscSFReduceBegin called by PetscSFCreateInverseSF. I used the script >> below: >> > > Thanks for testing this. I will run it myself and track down the bug. > > Matt > >> mpirun -n 2 ./ex1 -test_invert >> >> (1) On OpenMPI, got the result below: >> >> Star Forest Object: 2 MPI processes >> type not yet set >> synchronization=FENCE sort=rank-order >> [0] Number of roots=3, leaves=2, remote ranks=1 >> [0] 0 <- (1,1) >> [0] 1 <- (1,0) >> [1] Number of roots=2, leaves=3, remote ranks=1 >> [1] 0 <- (0,1) >> [1] 1 <- (0,0) >> [1] 2 <- (0,2) >> ## Multi-SF >> Star Forest Object: 2 MPI processes >> type not yet set >> synchronization=FENCE sort=rank-order >> [0] Number of roots=3, leaves=2, remote ranks=1 >> [0] 0 <- (1,1) >> [0] 1 <- (1,0) >> [1] Number of roots=2, leaves=3, remote ranks=1 >> [1] 0 <- (0,2) >> [1] 1 <- (0,0) >> [1] 2 <- (0,2) >> ## Inverse of Multi-SF >> Star Forest Object: 2 MPI processes >> type not yet set >> synchronization=FENCE sort=rank-order >> [0] Number of roots=2, leaves=0, remote ranks=0 >> [1] Number of roots=3, leaves=0, remote ranks=0 >> >> (2) On MPICH, got the result below: >> >> Star Forest Object: 2 MPI processes >> type not yet set >> synchronization=FENCE sort=rank-order >> [0] Number of roots=3, leaves=2, remote ranks=1 >> [0] 0 <- (1,1) >> [0] 1 <- (1,0) >> [1] Number of roots=2, leaves=3, remote ranks=1 >> [1] 0 <- (0,1) >> [1] 1 <- (0,0) >> [1] 2 <- (0,2) >> ## Multi-SF >> Star Forest Object: 2 MPI processes >> type not yet set >> synchronization=FENCE sort=rank-order >> [0] Number of roots=3, leaves=2, remote ranks=1 >> [0] 0 <- (1,1) >> [0] 1 <- (1,0) >> [1] Number of roots=2, leaves=3, remote ranks=1 >> [1] 0 <- (0,1) >> [1] 1 <- (0,0) >> [1] 2 <- (0,2) >> ## Inverse of Multi-SF >> Star Forest Object: 2 MPI processes >> type not yet set >> synchronization=FENCE sort=rank-order >> [0] Number of roots=2, leaves=3, remote ranks=1 >> [0] 0 <- (1,1) >> [0] 1 <- (1,0) >> [0] 2 <- (1,2) >> [1] Number of roots=3, leaves=2, 
remote ranks=1 >> [1] 0 <- (0,1) >> [1] 1 <- (0,0) >> >> From two above results, you could found that the inverse of Multi-SF is >> incorrect on OpenMPI. Could you please take some debugs on OpenMPI (1.4.3) >> with 64-bit integers? >> >> In my code, I call DMComplexDistribute that calls PetscSFCreateInverseSF >> that calls PetscSFReduceBegin. I had taken a lot of debugs, and found the >> error come from the PetscSFReduceBegin. >> >> On Mon, Sep 10, 2012 at 10:47 PM, fdkong wrote: >> ** >> >>> >>> >> Hi all, >>> >> >>> >> The function PetscSFReduceBegin runs well on MPICH, but does not work >>> >> on openmpi-1.4.3, with 64 integers. Anyone knows why? >>> >> >>> >>> >1) What error are you seeing? There are no errors in the build. >>> >>> Yes, There are no errors in the build and configure. But when I ran my >>> code involved the function PetscSFReduceBegin on supercomputer, I got the >>> error below: >>> >> >> Can you run src/sys/sf/examples/tutorials/ex1? There are several tests in >> the makefile there. I suspect >> that your graph is not correctly specified. >> >> Matt >> >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>> probably memory access out of range >>> [0]PETSC ERROR: Try option -start_in_debugger or >>> -on_error_attach_debugger >>> [0]PETSC ERROR: or see >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try >>> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory >>> corruption errors >>> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, >>> and run >>> [0]PETSC ERROR: to get more information on the crash. >>> [0]PETSC ERROR: --------------------- Error Message >>> ------------------------------------ >>> [0]PETSC ERROR: Signal received! >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 >>> 11:26:24 CDT 2012 >>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: ./linearElasticity on a arch-linu named node0353 by >>> fako9399 Mon Sep 10 16:50:42 2012 >>> [0]PETSC ERROR: Libraries linked from >>> /projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib >>> [0]PETSC ERROR: Configure run at Mon Sep 10 13:58:46 2012 >>> [0]PETSC ERROR: Configure options --known-level1-dcache-size=32768 >>> --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 >>> --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 >>> --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 >>> --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 >>> --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 >>> --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-clanguage=cxx >>> --with-shared-libraries=1 --with-dynamic-loading=1 >>> --download-f-blas-lapack=1 --with-batch=1 --known-mpi-shared-libraries=0 >>> --with-mpi-shared=1 --download-parmetis=1 --download-metis=1 >>> --with-64-bit-indices=1 >>> --with-netcdf-dir=/projects/fako9399/petsc-3.3-p3/externalpackage/netcdf-4.1.3install >>> --download-exodusii=1 --with-debugging=no --download-ptscotch=1 >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: User provided function() line 0 in unknown directory >>> unknown file >>> >>> -------------------------------------------------------------------------- >>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD >>> with errorcode 59. >>> >>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. >>> You may or may not see output from other processes, depending on >>> exactly when Open MPI kills them. >>> >>> -------------------------------------------------------------------------- >>> >>> -------------------------------------------------------------------------- >>> mpirun has exited due to process rank 0 with PID 1517 on >>> node node0353 exiting without calling "finalize". This may >>> have caused other processes in the application to be >>> terminated by signals sent by mpirun (as reported here). >>> >>> -------------------------------------------------------------------------- >>> >>> I had done some debugs, and then found the error came from the function >>> PetscSFReduceBegin. >>> >>> >2) Please do not send logs to petsc-users, send them to >>> >petsc-maint at mcs.anl.gov >>> >>> Ok, Thanks. >>> >>> > Matt >>> >>> >>> >> Maybe this link could help us guess why? >>> >> http://www.open-mpi.org/community/lists/devel/2005/11/0517.php >>> >> >>> >> I attached the configure.log and make.log files. >>> >> ------------------ >>> >> Fande Kong >>> >> ShenZhen Institutes of Advanced Technology >>> >> Chinese Academy of Sciences >>> >> >>> >> >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> ** >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > ** > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Tue Sep 11 13:02:31 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 11 Sep 2012 13:02:31 -0500 (CDT) Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3with64 integers In-Reply-To: References: Message-ID: What about using latest openmpi-1.6 series? Satish On Tue, 11 Sep 2012, Jed Brown wrote: > Open MPI one-sided operations with datatypes still have known bugs. They > have had bug report s with reduced test cases for several years now. They > need to fix those bugs. Please let them know that you are also waiting... > > To work around that, and for other reasons, I will write a new SF > implementation using point-to-point. > On Sep 11, 2012 12:44 PM, "fdkong" wrote: > > > Hi Matt, > > > > Thanks. I guess there are two reasons: > > > > (1) The MPI function MPI_Accumulate with operation MPI_RELACE is not > > supported in the implementation of OpenMPI 1.4.3. or other OpenMPI versions. > > > > (2) The MPI function dose not accept the datatype MPIU_2INT, when we use > > 64-bit integers. But when we run on MPICH, it works well! > > > > ------------------ > > Fande Kong > > ShenZhen Institutes of Advanced Technology > > Chinese Academy of Sciences > > > > ** > > > > > > ------------------ Original ------------------ > > *From: * "knepley"; > > *Date: * Tue, Sep 11, 2012 01:33 PM > > *To: * "fdkong"; ** > > *Cc: * "petsc-users"; ** > > *Subject: * Re: PetscSFReduceBegin does not work correctly on > > openmpi-1.4.3with64 integers > > > > On Tue, Sep 11, 2012 at 12:05 AM, fdkong wrote: > > > >> Hi Matt, > >> > >> I tested src/sys/sf/examples/tutorials/ex1 on OpenMPI and MPICH > >> seperately respectively. I found the error come from the function > >> PetscSFReduceBegin called by PetscSFCreateInverseSF. I used the script > >> below: > >> > > > > Thanks for testing this. I will run it myself and track down the bug. 
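For readers unfamiliar with the MPI feature being discussed: "MPI_RELACE" in the quoted guess is presumably MPI_REPLACE, the predefined operation used with one-sided MPI_Accumulate. The following is a minimal, self-contained sketch of that call pattern (plain MPI, illustrative only, not PETSc code and not Fande's code) that can be used to probe an MPI build directly:

/* ----- illustrative sketch (not from the thread) ----- */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int     rank, size, i;
  long    winbuf[64], val;
  MPI_Win win;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);      /* assumes size <= 64 */
  for (i = 0; i < 64; i++) winbuf[i] = -1;

  MPI_Win_create(winbuf, 64*sizeof(long), sizeof(long),
                 MPI_INFO_NULL, MPI_COMM_WORLD, &win);

  MPI_Win_fence(0, win);
  val = 100 + rank;
  /* each rank overwrites entry 'rank' of rank 0's exposed buffer */
  MPI_Accumulate(&val, 1, MPI_LONG, 0, (MPI_Aint)rank, 1, MPI_LONG,
                 MPI_REPLACE, win);
  MPI_Win_fence(0, win);

  if (!rank) for (i = 0; i < size; i++) printf("winbuf[%d] = %ld\n", i, winbuf[i]);
  MPI_Win_free(&win);
  MPI_Finalize();
  return 0;
}
/* ----- end sketch ----- */

If a small program like this misbehaves under a given Open MPI build but not under MPICH, that points at the MPI library rather than at PetscSF.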
> > > > Matt > > > >> mpirun -n 2 ./ex1 -test_invert > >> > >> (1) On OpenMPI, got the result below: > >> > >> Star Forest Object: 2 MPI processes > >> type not yet set > >> synchronization=FENCE sort=rank-order > >> [0] Number of roots=3, leaves=2, remote ranks=1 > >> [0] 0 <- (1,1) > >> [0] 1 <- (1,0) > >> [1] Number of roots=2, leaves=3, remote ranks=1 > >> [1] 0 <- (0,1) > >> [1] 1 <- (0,0) > >> [1] 2 <- (0,2) > >> ## Multi-SF > >> Star Forest Object: 2 MPI processes > >> type not yet set > >> synchronization=FENCE sort=rank-order > >> [0] Number of roots=3, leaves=2, remote ranks=1 > >> [0] 0 <- (1,1) > >> [0] 1 <- (1,0) > >> [1] Number of roots=2, leaves=3, remote ranks=1 > >> [1] 0 <- (0,2) > >> [1] 1 <- (0,0) > >> [1] 2 <- (0,2) > >> ## Inverse of Multi-SF > >> Star Forest Object: 2 MPI processes > >> type not yet set > >> synchronization=FENCE sort=rank-order > >> [0] Number of roots=2, leaves=0, remote ranks=0 > >> [1] Number of roots=3, leaves=0, remote ranks=0 > >> > >> (2) On MPICH, got the result below: > >> > >> Star Forest Object: 2 MPI processes > >> type not yet set > >> synchronization=FENCE sort=rank-order > >> [0] Number of roots=3, leaves=2, remote ranks=1 > >> [0] 0 <- (1,1) > >> [0] 1 <- (1,0) > >> [1] Number of roots=2, leaves=3, remote ranks=1 > >> [1] 0 <- (0,1) > >> [1] 1 <- (0,0) > >> [1] 2 <- (0,2) > >> ## Multi-SF > >> Star Forest Object: 2 MPI processes > >> type not yet set > >> synchronization=FENCE sort=rank-order > >> [0] Number of roots=3, leaves=2, remote ranks=1 > >> [0] 0 <- (1,1) > >> [0] 1 <- (1,0) > >> [1] Number of roots=2, leaves=3, remote ranks=1 > >> [1] 0 <- (0,1) > >> [1] 1 <- (0,0) > >> [1] 2 <- (0,2) > >> ## Inverse of Multi-SF > >> Star Forest Object: 2 MPI processes > >> type not yet set > >> synchronization=FENCE sort=rank-order > >> [0] Number of roots=2, leaves=3, remote ranks=1 > >> [0] 0 <- (1,1) > >> [0] 1 <- (1,0) > >> [0] 2 <- (1,2) > >> [1] Number of roots=3, leaves=2, remote ranks=1 > >> [1] 0 <- (0,1) > >> [1] 1 <- (0,0) > >> > >> From two above results, you could found that the inverse of Multi-SF is > >> incorrect on OpenMPI. Could you please take some debugs on OpenMPI (1.4.3) > >> with 64-bit integers? > >> > >> In my code, I call DMComplexDistribute that calls PetscSFCreateInverseSF > >> that calls PetscSFReduceBegin. I had taken a lot of debugs, and found the > >> error come from the PetscSFReduceBegin. > >> > >> On Mon, Sep 10, 2012 at 10:47 PM, fdkong wrote: > >> ** > >> > >>> > >>> >> Hi all, > >>> >> > >>> >> The function PetscSFReduceBegin runs well on MPICH, but does not work > >>> >> on openmpi-1.4.3, with 64 integers. Anyone knows why? > >>> >> > >>> > >>> >1) What error are you seeing? There are no errors in the build. > >>> > >>> Yes, There are no errors in the build and configure. But when I ran my > >>> code involved the function PetscSFReduceBegin on supercomputer, I got the > >>> error below: > >>> > >> > >> Can you run src/sys/sf/examples/tutorials/ex1? There are several tests in > >> the makefile there. I suspect > >> that your graph is not correctly specified. 
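For reference, the graph of a star forest like the ones printed in this output is declared with PetscSFSetGraph: each process states how many roots it owns and, for each of its leaves, the (rank, root index) pair it points to. Below is a minimal two-rank sketch in the 3.3-era API; the layout and values are illustrative, not the graph from ex1 or from Fande's DMComplex run.

/* ----- illustrative sketch (not from the thread) ----- */
#include <petscsf.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  PetscMPIInt    rank;
  PetscSF        sf;
  PetscSFNode    *remote;
  PetscInt       nroots, nleaves, i, *rootdata, *leafdata;

  ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);

  /* rank 0 owns two roots; rank 1 has two leaves pointing at them */
  nroots  = rank ? 0 : 2;
  nleaves = rank ? 2 : 0;
  ierr = PetscMalloc(nleaves*sizeof(PetscSFNode), &remote);CHKERRQ(ierr);
  for (i = 0; i < nleaves; i++) { remote[i].rank = 0; remote[i].index = i; }

  ierr = PetscSFCreate(PETSC_COMM_WORLD, &sf);CHKERRQ(ierr);
  ierr = PetscSFSetGraph(sf, nroots, nleaves, PETSC_NULL, PETSC_COPY_VALUES,
                         remote, PETSC_OWN_POINTER);CHKERRQ(ierr);
  ierr = PetscSFView(sf, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);

  /* sum the leaf values into the owning roots */
  ierr = PetscMalloc2(nroots, PetscInt, &rootdata, nleaves, PetscInt, &leafdata);CHKERRQ(ierr);
  for (i = 0; i < nroots; i++)  rootdata[i] = 0;
  for (i = 0; i < nleaves; i++) leafdata[i] = 10 + rank;
  ierr = PetscSFReduceBegin(sf, MPIU_INT, leafdata, rootdata, MPI_SUM);CHKERRQ(ierr);
  ierr = PetscSFReduceEnd(sf, MPIU_INT, leafdata, rootdata, MPI_SUM);CHKERRQ(ierr);

  ierr = PetscFree2(rootdata, leafdata);CHKERRQ(ierr);
  ierr = PetscSFDestroy(&sf);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}
/* ----- end sketch ----- */

Since PetscSFCreateInverseSF ultimately goes through PetscSFReduceBegin (as traced above), a sketch like this that gives different root values under Open MPI 1.4.3 and MPICH would again point at the MPI one-sided layer rather than at the graph specification.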
> >> > >> Matt > >> > >>> [0]PETSC ERROR: > >>> ------------------------------------------------------------------------ > >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > >>> probably memory access out of range > >>> [0]PETSC ERROR: Try option -start_in_debugger or > >>> -on_error_attach_debugger > >>> [0]PETSC ERROR: or see > >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try > >>> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > >>> corruption errors > >>> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, > >>> and run > >>> [0]PETSC ERROR: to get more information on the crash. > >>> [0]PETSC ERROR: --------------------- Error Message > >>> ------------------------------------ > >>> [0]PETSC ERROR: Signal received! > >>> [0]PETSC ERROR: > >>> ------------------------------------------------------------------------ > >>> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 > >>> 11:26:24 CDT 2012 > >>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. > >>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > >>> [0]PETSC ERROR: See docs/index.html for manual pages. > >>> [0]PETSC ERROR: > >>> ------------------------------------------------------------------------ > >>> [0]PETSC ERROR: ./linearElasticity on a arch-linu named node0353 by > >>> fako9399 Mon Sep 10 16:50:42 2012 > >>> [0]PETSC ERROR: Libraries linked from > >>> /projects/fako9399/petsc-3.3-p3/arch-linux264-cxx-opt/lib > >>> [0]PETSC ERROR: Configure run at Mon Sep 10 13:58:46 2012 > >>> [0]PETSC ERROR: Configure options --known-level1-dcache-size=32768 > >>> --known-level1-dcache-linesize=32 --known-level1-dcache-assoc=0 > >>> --known-memcmp-ok=1 --known-sizeof-char=1 --known-sizeof-void-p=8 > >>> --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 > >>> --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 > >>> --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=8 > >>> --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --with-clanguage=cxx > >>> --with-shared-libraries=1 --with-dynamic-loading=1 > >>> --download-f-blas-lapack=1 --with-batch=1 --known-mpi-shared-libraries=0 > >>> --with-mpi-shared=1 --download-parmetis=1 --download-metis=1 > >>> --with-64-bit-indices=1 > >>> --with-netcdf-dir=/projects/fako9399/petsc-3.3-p3/externalpackage/netcdf-4.1.3install > >>> --download-exodusii=1 --with-debugging=no --download-ptscotch=1 > >>> [0]PETSC ERROR: > >>> ------------------------------------------------------------------------ > >>> [0]PETSC ERROR: User provided function() line 0 in unknown directory > >>> unknown file > >>> > >>> -------------------------------------------------------------------------- > >>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD > >>> with errorcode 59. > >>> > >>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > >>> You may or may not see output from other processes, depending on > >>> exactly when Open MPI kills them. > >>> > >>> -------------------------------------------------------------------------- > >>> > >>> -------------------------------------------------------------------------- > >>> mpirun has exited due to process rank 0 with PID 1517 on > >>> node node0353 exiting without calling "finalize". This may > >>> have caused other processes in the application to be > >>> terminated by signals sent by mpirun (as reported here). 
> >>> > >>> -------------------------------------------------------------------------- > >>> > >>> I had done some debugs, and then found the error came from the function > >>> PetscSFReduceBegin. > >>> > >>> >2) Please do not send logs to petsc-users, send them to > >>> >petsc-maint at mcs.anl.gov > >>> > >>> Ok, Thanks. > >>> > >>> > Matt > >>> > >>> > >>> >> Maybe this link could help us guess why? > >>> >> http://www.open-mpi.org/community/lists/devel/2005/11/0517.php > >>> >> > >>> >> I attached the configure.log and make.log files. > >>> >> ------------------ > >>> >> Fande Kong > >>> >> ShenZhen Institutes of Advanced Technology > >>> >> Chinese Academy of Sciences > >>> >> > >>> >> > >>> > >> > >> > >> > >> -- > >> What most experimenters take for granted before they begin their > >> experiments is infinitely more interesting than any results to which their > >> experiments lead. > >> -- Norbert Wiener > >> ** > >> > > > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which their > > experiments lead. > > -- Norbert Wiener > > ** > > > From fd.kong at siat.ac.cn Tue Sep 11 15:35:41 2012 From: fd.kong at siat.ac.cn (=?ISO-8859-1?B?ZmRrb25n?=) Date: Wed, 12 Sep 2012 04:35:41 +0800 Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3with64 integers Message-ID: >Open MPI one-sided operations with datatypes still have known bugs. They >have had bug report s with reduced test cases for several years now. They >need to fix those bugs. Please let them know that you are also waiting... >To work around that, and for other reasons, I will write a new SF >implementation using point-to-point. How long will it take for you to rewrite SF implementation using p2p? But now, I want to complete a project that needs my current code. Could you please tell how to work around the issue related with the function PetscSFReduceBegin? Or could you please first modify the PetscSFReduceBegin? And there are other problems, when you write new SF. Maybe the DMComplex and PetscSection will be changed. Because both objects use the SF for communication. >>On Sep 11, 2012 12:44 PM, "fdkong" wrote: > > >Hi Matt, >> > >Thanks. I guess there are two reasons: >> > >(1) The MPI function MPI_Accumulate with operation MPI_RELACE is not > >supported in the implementation of OpenMPI 1.4.3. or other OpenMPI versions. >> > > (2) The MPI function dose not accept the datatype MPIU_2INT, when we use > >64-bit integers. But when we run on MPICH, it works well! >> > >------------------ >> Fande Kong > >ShenZhen Institutes of Advanced Technology >> Chinese Academy of Sciences >> >> ** >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Sep 11 16:06:59 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 11 Sep 2012 16:06:59 -0500 Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3with64 integers In-Reply-To: References: Message-ID: On Tue, Sep 11, 2012 at 3:35 PM, fdkong wrote: > >Open MPI one-sided operations with datatypes still have known bugs. They > >have had bug report s with reduced test cases for several years now. They > >need to fix those bugs. Please let them know that you are also waiting... > > >To work around that, and for other reasons, I will write a new SF > >implementation using point-to-point. > > How long will it take for you to rewrite SF implementation using p2p? 
But > now, I want to complete a project that needs my current code. Could you > please tell how to work around the issue related with the > function PetscSFReduceBegin? Or could you please first modify the > PetscSFReduceBegin? > > And there are other problems, when you write new SF. Maybe the DMComplex > and PetscSection will be changed. Because both objects use the SF for > communication. > The easiest solution is to install MPICH. Matt > >>On Sep 11, 2012 12:44 PM, "fdkong" wrote: > > > > >Hi Matt, > >> > > >Thanks. I guess there are two reasons: > >> > > >(1) The MPI function MPI_Accumulate with operation MPI_RELACE is not > > >supported in the implementation of OpenMPI 1.4.3. or other OpenMPI > versions. > >> > > > (2) The MPI function dose not accept the datatype MPIU_2INT, when we > use > > >64-bit integers. But when we run on MPICH, it works well! > >> > > >------------------ > >> Fande Kong > > >ShenZhen Institutes of Advanced Technology > >> Chinese Academy of Sciences > >> > >> ** > >> > >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Sep 11 16:17:10 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 11 Sep 2012 16:17:10 -0500 (CDT) Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3with64 integers In-Reply-To: References: Message-ID: On Tue, 11 Sep 2012, Matthew Knepley wrote: > On Tue, Sep 11, 2012 at 3:35 PM, fdkong wrote: > > How long will it take for you to rewrite SF implementation using p2p? But > > now, I want to complete a project that needs my current code. Could you > > please tell how to work around the issue related with the > > function PetscSFReduceBegin? Or could you please first modify the > > PetscSFReduceBegin? > > > > And there are other problems, when you write new SF. Maybe the DMComplex > > and PetscSection will be changed. Because both objects use the SF for > > communication. > > > > The easiest solution is to install MPICH. or mvapich - if you need infiniband? Satish From coco at dmi.unict.it Tue Sep 11 17:11:27 2012 From: coco at dmi.unict.it (coco at dmi.unict.it) Date: Wed, 12 Sep 2012 00:11:27 +0200 Subject: [petsc-users] interpolation function in multigrid Message-ID: <20120912001127.Horde.JXnXJuph4B9QT7cP2_VW0tA@mbox.dmi.unict.it> Dear all, I am using the multigrid as preconditioner, and I got in a trouble with the interpolation function. In practice, I defined a user interpolation function: PetscErrorCode UserMultAddInterpolation(Mat mat,Vec x,Vec v2, Vec y); and I noticed that the vectors v2 and y are zero in input, while I am expected that they are the solution computed in the smooth down process at the same level. In detail, given the following source code of the petsc file mg.c : ierr = KSPSolve(mglevels->smoothd,mglevels->b,mglevels->x);CHKERRQ(ierr); /* pre-smooth */ [...] while (cycles--) { ierr = PCMGMCycle_Private(pc,mglevelsin-1,reason);CHKERRQ(ierr); } [...] ierr = MatInterpolateAdd(mglevels->interpolate,mgc->x,mglevels->x,mglevels->x);CHKERRQ(ierr); I observed that mglevels->x is a vector which contains the correct values after the KSPSolve calling for the pre-smooth step, but it is a zero vector when given in input to the MatInterpolateAdd routine. 
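For context on how a routine with that calling sequence usually enters the picture: a user interpolation is typically handed to PCMG as a shell matrix whose MATOP_MULT_ADD entry is the user routine, and it is that operator which mg.c later applies through MatInterpolateAdd. A minimal sketch under that assumption follows; the helper name, level index, sizes nf/nc and ctx are all illustrative, not Armando's actual code.

/* ----- illustrative sketch (not from the thread) ----- */
#include <petscksp.h>   /* on some versions the PCMG prototypes are in petscpcmg.h */

extern PetscErrorCode UserMultInterpolation(Mat,Vec,Vec);        /* y = P x       */
extern PetscErrorCode UserMultAddInterpolation(Mat,Vec,Vec,Vec); /* y = P x + v2  */

/* attach a matrix-free interpolation to level 'l' of a PCMG preconditioner;
   nf/nc are the global fine/coarse sizes for that level pair */
PetscErrorCode SetShellInterpolation(PC pc, PetscInt l, PetscInt nf, PetscInt nc, void *ctx)
{
  PetscErrorCode ierr;
  Mat            P;

  PetscFunctionBegin;
  ierr = MatCreateShell(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nf, nc, ctx, &P);CHKERRQ(ierr);
  ierr = MatShellSetOperation(P, MATOP_MULT,     (void (*)(void))UserMultInterpolation);CHKERRQ(ierr);
  ierr = MatShellSetOperation(P, MATOP_MULT_ADD, (void (*)(void))UserMultAddInterpolation);CHKERRQ(ierr);
  ierr = PCMGSetInterpolation(pc, l, P);CHKERRQ(ierr);  /* PCMG keeps its own reference */
  ierr = MatDestroy(&P);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
/* ----- end sketch ----- */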
I would like to debug the mg.c code, for instance accessing the single values of the vector mglevels->x and figuring out where they are zeroed out between the KSPSolve and MatInterpolateAdd callings. Would you have some suggestion for this debugging? Thank you for the collaboration. Best regards, Armando From jedbrown at mcs.anl.gov Tue Sep 11 17:28:42 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 11 Sep 2012 18:28:42 -0400 Subject: [petsc-users] PetscSFReduceBegin does not work correctly on openmpi-1.4.3with64 integers In-Reply-To: References: Message-ID: Yes, use mpich or mvapich. I plan to do it sometime this fall, but it is not my highest priority right now. On Sep 11, 2012 4:35 PM, "fdkong" wrote: > >Open MPI one-sided operations with datatypes still have known bugs. They > >have had bug report s with reduced test cases for several years now. They > >need to fix those bugs. Please let them know that you are also waiting... > > >To work around that, and for other reasons, I will write a new SF > >implementation using point-to-point. > > How long will it take for you to rewrite SF implementation using p2p? But > now, I want to complete a project that needs my current code. Could you > please tell how to work around the issue related with the > function PetscSFReduceBegin? Or could you please first modify the > PetscSFReduceBegin? > > And there are other problems, when you write new SF. Maybe the DMComplex > and PetscSection will be changed. Because both objects use the SF for > communication. > > >>On Sep 11, 2012 12:44 PM, "fdkong" wrote: > > > > >Hi Matt, > >> > > >Thanks. I guess there are two reasons: > >> > > >(1) The MPI function MPI_Accumulate with operation MPI_RELACE is not > > >supported in the implementation of OpenMPI 1.4.3. or other OpenMPI > versions. > >> > > > (2) The MPI function dose not accept the datatype MPIU_2INT, when we > use > > >64-bit integers. But when we run on MPICH, it works well! > >> > > >------------------ > >> Fande Kong > > >ShenZhen Institutes of Advanced Technology > >> Chinese Academy of Sciences > >> > >> ** > >> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Sep 11 18:43:50 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 11 Sep 2012 18:43:50 -0500 Subject: [petsc-users] interpolation function in multigrid In-Reply-To: <20120912001127.Horde.JXnXJuph4B9QT7cP2_VW0tA@mbox.dmi.unict.it> References: <20120912001127.Horde.JXnXJuph4B9QT7cP2_VW0tA@mbox.dmi.unict.it> Message-ID: On Sep 11, 2012, at 5:11 PM, coco at dmi.unict.it wrote: > Dear all, > > I am using the multigrid as preconditioner, and I got in a trouble with the interpolation function. In practice, I defined a user interpolation function: > > PetscErrorCode UserMultAddInterpolation(Mat mat,Vec x,Vec v2, Vec y); > > and I noticed that the vectors v2 and y are zero in input, while I am expected that they are the solution computed in the smooth down process at the same level. In detail, given the following source code of the petsc file mg.c : > > ierr = KSPSolve(mglevels->smoothd,mglevels->b,mglevels->x);CHKERRQ(ierr); /* pre-smooth */ > [...] > while (cycles--) { > ierr = PCMGMCycle_Private(pc,mglevelsin-1,reason);CHKERRQ(ierr); > } > [...] 
> ierr = MatInterpolateAdd(mglevels->interpolate,mgc->x,mglevels->x,mglevels->x);CHKERRQ(ierr); > > I observed that mglevels->x is a vector which contains the correct values after the KSPSolve calling for the pre-smooth step, but it is a zero vector when given in input to the MatInterpolateAdd routine. > I would like to debug the mg.c code, for instance accessing the single values of the vector mglevels->x and figuring out where they are zeroed out between the KSPSolve and MatInterpolateAdd callings. Would you have some suggestion for this debugging? You can call VecView(mglevels->x,0) directly in the debugger (for a small problem) I would start by simply calling it right after the KSPSolve() then right before the MatInterpolateAdd() if the second is zero but the first ok then run with two levels and break in the inner PCMGMCycle_Private() and print the same thing there (remember the value of mglevels changes. Good luck, Barry > > Thank you for the collaboration. > Best regards, > Armando > From ckhroulev at alaska.edu Wed Sep 12 12:13:52 2012 From: ckhroulev at alaska.edu (Constantine Khroulev) Date: Wed, 12 Sep 2012 09:13:52 -0800 Subject: [petsc-users] Setting up a SNES problem using DMDASetLocalFunction/DMDASetLocalJacobian in PETSc 3.3 Message-ID: Hi, While updating our code to work with PETSc 3.3 I noticed that (as far as I can tell) there is no SNESDAFormFunction/SNESDAComputeJacobian equivalent in 3.3. I came up with a workaround (see below), but I am not sure if this is legal. (See the "SNESSetFunction(snes, F, PETSC_NULL, PETSC_NULL)", for example.) What would you recommend? /* begin code snippet */ { ierr = DMDACreate2d(..., &da); CHKERRQ(ierr); ierr = DMCreateGlobalVector(da, &F); CHKERRQ(ierr); ierr = DMCreateMatrix(da, "baij", &J); CHKERRQ(ierr); ierr = SNESCreate(com, &snes);CHKERRQ(ierr); ierr = DMDASetLocalFunction(da,(DMDALocalFunction1)LocalFunction);CHKERRQ(ierr); ierr = DMDASetLocalJacobian(da,(DMDALocalFunction1)LocalJacobian);CHKERRQ(ierr); #if I_HAVE_PETSC32==1 ierr = SNESSetFunction(snes, F, SNESDAFormFunction, &ctx); CHKERRQ(ierr); ierr = SNESSetJacobian(snes, J, J, SNESDAComputeJacobian, &ctx); CHKERRQ(ierr); #else /* PETSc 3.3 */ ierr = SNESSetFunction(snes, F, PETSC_NULL, PETSC_NULL); CHKERRQ(ierr); ierr = SNESSetJacobian(snes, J, J, PETSC_NULL, PETSC_NULL); CHKERRQ(ierr); ierr = DMSetApplicationContext(da, &ctx); CHKERRQ(ierr); #endif ierr = SNESSetDM(snes, da); CHKERRQ(ierr); ierr = SNESSetFromOptions(snes);CHKERRQ(ierr); } /* end code snippet */ -- Constantine From knepley at gmail.com Wed Sep 12 12:17:49 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 12 Sep 2012 12:17:49 -0500 Subject: [petsc-users] Setting up a SNES problem using DMDASetLocalFunction/DMDASetLocalJacobian in PETSc 3.3 In-Reply-To: References: Message-ID: On Wed, Sep 12, 2012 at 12:13 PM, Constantine Khroulev wrote: > Hi, > > While updating our code to work with PETSc 3.3 I noticed that (as far > as I can tell) there is no SNESDAFormFunction/SNESDAComputeJacobian > equivalent in 3.3. > > I came up with a workaround (see below), but I am not sure if this is > legal. (See the "SNESSetFunction(snes, F, PETSC_NULL, PETSC_NULL)", > for example.) > > What would you recommend? 
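The answer that follows amounts to dropping the placeholder SNESSetFunction/SNESSetJacobian calls entirely once the DM carries the local callbacks. A condensed sketch of the 3.3-only path is shown here, assuming the same com, da, snes, ctx, LocalFunction and LocalJacobian as in the snippet being quoted; the DMDACreate2d arguments are illustrative.

/* ----- illustrative sketch (PETSc 3.3 path only) ----- */
ierr = DMDACreate2d(com, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE, DMDA_STENCIL_BOX,
                    -5, -5, PETSC_DECIDE, PETSC_DECIDE, 1, 1,
                    PETSC_NULL, PETSC_NULL, &da); CHKERRQ(ierr);
ierr = DMDASetLocalFunction(da,(DMDALocalFunction1)LocalFunction);CHKERRQ(ierr);
ierr = DMDASetLocalJacobian(da,(DMDALocalFunction1)LocalJacobian);CHKERRQ(ierr);
ierr = DMSetApplicationContext(da, &ctx); CHKERRQ(ierr);

ierr = DMCreateGlobalVector(da, &x); CHKERRQ(ierr);
ierr = SNESCreate(com, &snes); CHKERRQ(ierr);
ierr = SNESSetDM(snes, da); CHKERRQ(ierr);   /* SNES creates F and J and picks up the
                                                local callbacks from the DM itself */
ierr = SNESSetFromOptions(snes); CHKERRQ(ierr);
ierr = SNESSolve(snes, PETSC_NULL, x); CHKERRQ(ierr);
/* ----- end sketch ----- */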
>
> /* begin code snippet */
> {
> ierr = DMDACreate2d(..., &da); CHKERRQ(ierr);
>
> ierr = DMCreateGlobalVector(da, &F); CHKERRQ(ierr);
> ierr = DMCreateMatrix(da, "baij", &J); CHKERRQ(ierr);
>
> ierr = SNESCreate(com, &snes);CHKERRQ(ierr);
>
> ierr = DMDASetLocalFunction(da,(DMDALocalFunction1)LocalFunction);CHKERRQ(ierr);
> ierr = DMDASetLocalJacobian(da,(DMDALocalFunction1)LocalJacobian);CHKERRQ(ierr);
>
> #if I_HAVE_PETSC32==1
> ierr = SNESSetFunction(snes, F, SNESDAFormFunction, &ctx); CHKERRQ(ierr);
> ierr = SNESSetJacobian(snes, J, J, SNESDAComputeJacobian, &ctx); CHKERRQ(ierr);
> #else /* PETSc 3.3 */
> ierr = SNESSetFunction(snes, F, PETSC_NULL, PETSC_NULL); CHKERRQ(ierr);
> ierr = SNESSetJacobian(snes, J, J, PETSC_NULL, PETSC_NULL); CHKERRQ(ierr);

You do not need these two. If the DM is set, SNES will take the assembly
function from the DM. Refer to SNES ex5.

   Thanks,

      Matt

> ierr = DMSetApplicationContext(da, &ctx); CHKERRQ(ierr);
> #endif
>
> ierr = SNESSetDM(snes, da); CHKERRQ(ierr);
>
> ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);
> }
> /* end code snippet */
>
> --
> Constantine

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Lukasz.Kaczmarczyk at glasgow.ac.uk  Thu Sep 13 14:40:48 2012
From: Lukasz.Kaczmarczyk at glasgow.ac.uk (Lukasz Kaczmarczyk)
Date: Thu, 13 Sep 2012 20:40:48 +0100
Subject: [petsc-users] MUMPS memory allocation problem
Message-ID: <1678C689-9EE3-4AEF-AD0F-6FD21FFEAA7B@glasgow.ac.uk>

Hello,

I solve a strongly nonlinear problem using SNES. When I use MUMPS with PETSc, from time to time I get the following error,

Fortran runtime error: Attempt to DEALLOCATE unallocated 'perm'
--------------------------------------------------------------------------
mpirun has exited due to process rank 3 with PID 25487 on
node rdb-srv1 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).

Other solvers (e.g. SuperLU_DIST) work without problems. Is this a known issue with MUMPS, and have similar problems been reported in the past?

Lukasz

From hzhang at mcs.anl.gov  Thu Sep 13 15:23:03 2012
From: hzhang at mcs.anl.gov (Hong Zhang)
Date: Thu, 13 Sep 2012 15:23:03 -0500
Subject: [petsc-users] MUMPS memory allocation problem
In-Reply-To: <1678C689-9EE3-4AEF-AD0F-6FD21FFEAA7B@glasgow.ac.uk>
References: <1678C689-9EE3-4AEF-AD0F-6FD21FFEAA7B@glasgow.ac.uk>
Message-ID: 

Lukasz :

> Hello,
>
> I solve a strongly nonlinear problem using SNES. When I use MUMPS with
> PETSc, from time to time I get the following error,
>
> Fortran runtime error: Attempt to DEALLOCATE unallocated 'perm'
>
I've never seen a report from our users about this error.
Can you provide more info about it, e.g., run with a debugger and find the
error stack showing which routine and line number the error occurs in?
Which versions of PETSc and MUMPS do you use?
Does it work on petsc examples, e.g.
petsc-3.3/src/ksp/ksp/examples/tutorials>mpiexec -n 4 ./ex2 -pc_type lu -pc_factor_mat_solver_package mumps
Norm of error 2.71948e-15 iterations 1

Hong

> --------------------------------------------------------------------------
> mpirun has exited due to process rank 3 with PID 25487 on
> node rdb-srv1 exiting without calling "finalize". 
This may > have caused other processes in the application to be > terminated by signals sent by mpirun (as reported here). > > Other solvers (f.e. SuperLU Dist) working without problems. Is this know > issue with MUMPS and similar problem where reported in past? > > Lukasz -------------- next part -------------- An HTML attachment was scrubbed... URL: From Lukasz.Kaczmarczyk at glasgow.ac.uk Thu Sep 13 16:24:05 2012 From: Lukasz.Kaczmarczyk at glasgow.ac.uk (Lukasz Kaczmarczyk) Date: Thu, 13 Sep 2012 22:24:05 +0100 Subject: [petsc-users] MUMPS memory allocation problem In-Reply-To: References: <1678C689-9EE3-4AEF-AD0F-6FD21FFEAA7B@glasgow.ac.uk> Message-ID: <494429C2-A2B5-44EA-B2D4-9306C447A1B1@glasgow.ac.uk> Hello, I use petsc-3.1-p3 and MUMPS_4.9.2. I hope that this will help, it is tricky to track the error, usually is for relatively large problems, I have to check if it happens always at the same time, or is random error. CYCLE 19 ref_mesh 0 Vol = 1.6749e+08 ave_Vol = 4.2698e+03 min_quali = 1.7609e-01 Nodes Ordering Elemnets Ordering arc length lambda 4.4581e-01 arc length lambda value 5.6751e+02 0 SNES Function norm 7.929158005697e+01 0 SNES (MY) norms Fint 6.7783e+01, arc length res_lambda 0.0000e+00 lambda 4.4581e-01 0 KSP (MY) F 9.0999e-01 Res 1.0000e+00 1 KSP (MY) F 3.8659e-15 Res 4.4892e-15 PC ArcLength norm dF_dLambda 5.8658e+03 norm dD_dF_dLambda 3.29e+01 b_dD_dF_dLambda 0.00e+00 0 KSP (MY) norms F 6.7783e+01 Res 7.9292e+01 res arc length lambda 0.0000e+00 dlambda 0.0000e+00 1 KSP (MY) norms F 9.1576e-14 Res 9.9498e-14 res arc length lambda 0.0000e+00 dlambda 0.0000e+00 1 SNES Function norm 2.355295496832e-02 1 SNES (MY) norms Fint 2.0118e-02, arc length res_lambda 0.0000e+00 lambda 4.4581e-01 At line 593 of file dmumps_part3.F Fortran runtime error: Attempt to DEALLOCATE unallocated 'perm' -------------------------------------------------------------------------- mpirun has exited due to process rank 3 with PID 25487 on node rdb-srv1 exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here). On 13 Sep 2012, at 21:23, Hong Zhang wrote: > Lukasz : > Hello, > > I solve a strongly nonlinear problem using SNES. When I use MUMS with petsc form time to time I get following error, > > Fortran runtime error: Attempt to DEALLOCATE unallocated 'perm' > > I've never seen report from our users about this error. > Can you provide more info about it, e.g., run with a debugger and find > error stack showing which routine and the line number that the error occurs? > Which version of petsc and mumps do you use? > Does it work on petsc examples, e.g. > petsc-3.3/src/ksp/ksp/examples/tutorials>mpiexec -n 4 ./ex2 -pc_type lu -pc_factor_mat_solver_package mumps > Norm of error 2.71948e-15 iterations 1 > > Hong > -------------------------------------------------------------------------- > mpirun has exited due to process rank 3 with PID 25487 on > node rdb-srv1 exiting without calling "finalize". This may > have caused other processes in the application to be > terminated by signals sent by mpirun (as reported here). > > Other solvers (f.e. SuperLU Dist) working without problems. Is this know issue with MUMPS and similar problem where reported in past? 
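The ex2 command line Hong quotes above has a source-code equivalent, which can make it easier to flip between direct-solver packages while hunting a bug like this one. A hedged sketch, assuming a KSP named ksp whose operators are already set (names are illustrative, not taken from Lukasz's code):

/* ----- illustrative sketch (not from the thread) ----- */
PC pc;
ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);               /* pure direct solve */
ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
ierr = PCFactorSetMatSolverPackage(pc, MATSOLVERMUMPS);CHKERRQ(ierr);
/* or: ierr = PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST);CHKERRQ(ierr); */
ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);   /* -pc_factor_mat_solver_package can
                                                  still override this at run time */
ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
/* ----- end sketch ----- */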
> > Lukasz > From jedbrown at mcs.anl.gov Thu Sep 13 16:27:07 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 13 Sep 2012 16:27:07 -0500 Subject: [petsc-users] MUMPS memory allocation problem In-Reply-To: <494429C2-A2B5-44EA-B2D4-9306C447A1B1@glasgow.ac.uk> References: <1678C689-9EE3-4AEF-AD0F-6FD21FFEAA7B@glasgow.ac.uk> <494429C2-A2B5-44EA-B2D4-9306C447A1B1@glasgow.ac.uk> Message-ID: No point wasting time debugging such archaeological relics. There have been numerous improvements and bug fixes to both packages since then. You should upgrade to petsc-3.3 which also has the latest MUMPS. On Thu, Sep 13, 2012 at 4:24 PM, Lukasz Kaczmarczyk < Lukasz.Kaczmarczyk at glasgow.ac.uk> wrote: > Hello, > > I use petsc-3.1-p3 and MUMPS_4.9.2. I hope that this will help, it is > tricky to track the error, usually is for relatively large problems, I > have to check if it happens always at the same time, or is random error. > > CYCLE 19 ref_mesh 0 > Vol = 1.6749e+08 ave_Vol = 4.2698e+03 min_quali = 1.7609e-01 > Nodes Ordering > Elemnets Ordering > > arc length lambda 4.4581e-01 > > arc length lambda value 5.6751e+02 > > 0 SNES Function norm 7.929158005697e+01 > 0 SNES (MY) norms Fint 6.7783e+01, arc length res_lambda 0.0000e+00 > lambda 4.4581e-01 > 0 KSP (MY) F 9.0999e-01 Res 1.0000e+00 > 1 KSP (MY) F 3.8659e-15 Res 4.4892e-15 > PC ArcLength norm dF_dLambda 5.8658e+03 norm dD_dF_dLambda 3.29e+01 > b_dD_dF_dLambda 0.00e+00 > 0 KSP (MY) norms F 6.7783e+01 Res 7.9292e+01 res arc length lambda > 0.0000e+00 dlambda 0.0000e+00 > 1 KSP (MY) norms F 9.1576e-14 Res 9.9498e-14 res arc length lambda > 0.0000e+00 dlambda 0.0000e+00 > 1 SNES Function norm 2.355295496832e-02 > 1 SNES (MY) norms Fint 2.0118e-02, arc length res_lambda 0.0000e+00 > lambda 4.4581e-01 > At line 593 of file dmumps_part3.F > Fortran runtime error: Attempt to DEALLOCATE unallocated 'perm' > -------------------------------------------------------------------------- > mpirun has exited due to process rank 3 with PID 25487 on > node rdb-srv1 exiting without calling "finalize". This may > have caused other processes in the application to be > terminated by signals sent by mpirun (as reported here). > > > On 13 Sep 2012, at 21:23, Hong Zhang wrote: > > > Lukasz : > > Hello, > > > > I solve a strongly nonlinear problem using SNES. When I use MUMS with > petsc form time to time I get following error, > > > > Fortran runtime error: Attempt to DEALLOCATE unallocated 'perm' > > > > I've never seen report from our users about this error. > > Can you provide more info about it, e.g., run with a debugger and find > > error stack showing which routine and the line number that the error > occurs? > > Which version of petsc and mumps do you use? > > Does it work on petsc examples, e.g. > > petsc-3.3/src/ksp/ksp/examples/tutorials>mpiexec -n 4 ./ex2 -pc_type lu > -pc_factor_mat_solver_package mumps > > Norm of error 2.71948e-15 iterations 1 > > > > Hong > > > -------------------------------------------------------------------------- > > mpirun has exited due to process rank 3 with PID 25487 on > > node rdb-srv1 exiting without calling "finalize". This may > > have caused other processes in the application to be > > terminated by signals sent by mpirun (as reported here). > > > > Other solvers (f.e. SuperLU Dist) working without problems. Is this know > issue with MUMPS and similar problem where reported in past? > > > > Lukasz > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Lukasz.Kaczmarczyk at glasgow.ac.uk Thu Sep 13 16:44:14 2012 From: Lukasz.Kaczmarczyk at glasgow.ac.uk (Lukasz Kaczmarczyk) Date: Thu, 13 Sep 2012 22:44:14 +0100 Subject: [petsc-users] MUMPS memory allocation problem In-Reply-To: References: <1678C689-9EE3-4AEF-AD0F-6FD21FFEAA7B@glasgow.ac.uk> <494429C2-A2B5-44EA-B2D4-9306C447A1B1@glasgow.ac.uk> Message-ID: <6995003F-6D49-4225-B61C-83B5849F05AB@glasgow.ac.uk> Thanks, you right. On 13 Sep 2012, at 22:27, Jed Brown wrote: > No point wasting time debugging such archaeological relics. There have been numerous improvements and bug fixes to both packages since then. You should upgrade to petsc-3.3 which also has the latest MUMPS. > > On Thu, Sep 13, 2012 at 4:24 PM, Lukasz Kaczmarczyk wrote: > Hello, > > I use petsc-3.1-p3 and MUMPS_4.9.2. I hope that this will help, it is tricky to track the error, usually is for relatively large problems, I have to check if it happens always at the same time, or is random error. > > CYCLE 19 ref_mesh 0 > Vol = 1.6749e+08 ave_Vol = 4.2698e+03 min_quali = 1.7609e-01 > Nodes Ordering > Elemnets Ordering > > arc length lambda 4.4581e-01 > > arc length lambda value 5.6751e+02 > > 0 SNES Function norm 7.929158005697e+01 > 0 SNES (MY) norms Fint 6.7783e+01, arc length res_lambda 0.0000e+00 lambda 4.4581e-01 > 0 KSP (MY) F 9.0999e-01 Res 1.0000e+00 > 1 KSP (MY) F 3.8659e-15 Res 4.4892e-15 > PC ArcLength norm dF_dLambda 5.8658e+03 norm dD_dF_dLambda 3.29e+01 b_dD_dF_dLambda 0.00e+00 > 0 KSP (MY) norms F 6.7783e+01 Res 7.9292e+01 res arc length lambda 0.0000e+00 dlambda 0.0000e+00 > 1 KSP (MY) norms F 9.1576e-14 Res 9.9498e-14 res arc length lambda 0.0000e+00 dlambda 0.0000e+00 > 1 SNES Function norm 2.355295496832e-02 > 1 SNES (MY) norms Fint 2.0118e-02, arc length res_lambda 0.0000e+00 lambda 4.4581e-01 > At line 593 of file dmumps_part3.F > Fortran runtime error: Attempt to DEALLOCATE unallocated 'perm' > -------------------------------------------------------------------------- > mpirun has exited due to process rank 3 with PID 25487 on > node rdb-srv1 exiting without calling "finalize". This may > have caused other processes in the application to be > terminated by signals sent by mpirun (as reported here). > > > On 13 Sep 2012, at 21:23, Hong Zhang wrote: > > > Lukasz : > > Hello, > > > > I solve a strongly nonlinear problem using SNES. When I use MUMS with petsc form time to time I get following error, > > > > Fortran runtime error: Attempt to DEALLOCATE unallocated 'perm' > > > > I've never seen report from our users about this error. > > Can you provide more info about it, e.g., run with a debugger and find > > error stack showing which routine and the line number that the error occurs? > > Which version of petsc and mumps do you use? > > Does it work on petsc examples, e.g. > > petsc-3.3/src/ksp/ksp/examples/tutorials>mpiexec -n 4 ./ex2 -pc_type lu -pc_factor_mat_solver_package mumps > > Norm of error 2.71948e-15 iterations 1 > > > > Hong > > -------------------------------------------------------------------------- > > mpirun has exited due to process rank 3 with PID 25487 on > > node rdb-srv1 exiting without calling "finalize". This may > > have caused other processes in the application to be > > terminated by signals sent by mpirun (as reported here). > > > > Other solvers (f.e. SuperLU Dist) working without problems. Is this know issue with MUMPS and similar problem where reported in past? 
> > > > Lukasz > > > > From zonexo at gmail.com Fri Sep 14 03:47:27 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Fri, 14 Sep 2012 10:47:27 +0200 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 Message-ID: <5052EF1F.9090004@gmail.com> Hi, I need to solve the Poisson eqn on my win7 machine. I'm currently using BCGS without preconditioner. I can't use HYPRE since I'm using Fortran and win7. It's rather slow. Is there a recommended solver and preconditioner to solve the Poisson eqn to get me started? Thanks! -- Yours sincerely, TAY wee-beng From bsmith at mcs.anl.gov Fri Sep 14 07:46:32 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 14 Sep 2012 07:46:32 -0500 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: <5052EF1F.9090004@gmail.com> References: <5052EF1F.9090004@gmail.com> Message-ID: <1CBE1073-4895-4E98-8940-6B03D4885F8E@mcs.anl.gov> On Sep 14, 2012, at 3:47 AM, TAY wee-beng wrote: > Hi, > > I need to solve the Poisson eqn on my win7 machine. I'm currently using BCGS without preconditioner. I can't use HYPRE since I'm using Fortran and win7. It's rather slow. > > Is there a recommended solver and preconditioner to solve the Poisson eqn to get me started? What kind of grid is it? PETSc 3.3 and petsc-dev now has its own algebraic multigrid preconditioner called PCGAMG, you should try that. Barry > > Thanks! > > -- > Yours sincerely, > > TAY wee-beng > From jedbrown at mcs.anl.gov Fri Sep 14 07:47:04 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 14 Sep 2012 07:47:04 -0500 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: <5052EF1F.9090004@gmail.com> References: <5052EF1F.9090004@gmail.com> Message-ID: On Fri, Sep 14, 2012 at 3:47 AM, TAY wee-beng wrote: > Hi, > > I need to solve the Poisson eqn on my win7 machine. I'm currently using > BCGS without preconditioner. I can't use HYPRE since I'm using Fortran and > win7. It's rather slow. > > Is there a recommended solver and preconditioner to solve the Poisson eqn > to get me started? > What discretization? Run with -pc_type gamg to start. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Fri Sep 14 09:38:50 2012 From: mark.adams at columbia.edu (Mark F. Adams) Date: Fri, 14 Sep 2012 10:38:50 -0400 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: References: <5052EF1F.9090004@gmail.com> Message-ID: <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> On Sep 14, 2012, at 8:47 AM, Jed Brown wrote: > On Fri, Sep 14, 2012 at 3:47 AM, TAY wee-beng wrote: > Hi, > > I need to solve the Poisson eqn on my win7 machine. I'm currently using BCGS without preconditioner. I can't use HYPRE since I'm using Fortran and win7. It's rather slow. > > Is there a recommended solver and preconditioner to solve the Poisson eqn to get me started? > > What discretization? > > Run with -pc_type gamg to start. and -pc_gamg_agg_nsmooths 1 (I should make this the default) Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From coco at dmi.unict.it Fri Sep 14 10:18:14 2012 From: coco at dmi.unict.it (coco at dmi.unict.it) Date: Fri, 14 Sep 2012 17:18:14 +0200 Subject: [petsc-users] petsc-users Digest, Vol 45, Issue 33 In-Reply-To: References: Message-ID: <20120914171814.Horde.ugICAuph4B9QU0q28MwmB-A@mbox.dmi.unict.it> Thank you! 
I solved my problems using your suggestion and I found the bug in my code. Best regards, Armando > Message: 5 > Date: Tue, 11 Sep 2012 18:43:50 -0500 > From: Barry Smith > To: PETSc users list > Subject: Re: [petsc-users] interpolation function in multigrid > Message-ID: > Content-Type: text/plain; charset=iso-8859-1 > > > On Sep 11, 2012, at 5:11 PM, coco at dmi.unict.it wrote: > >> Dear all, >> >> I am using the multigrid as preconditioner, and I got in a trouble >> with the interpolation function. In practice, I defined a user >> interpolation function: >> >> PetscErrorCode UserMultAddInterpolation(Mat mat,Vec x,Vec v2, Vec y); >> >> and I noticed that the vectors v2 and y are zero in input, while I >> am expected that they are the solution computed in the smooth down >> process at the same level. In detail, given the following source >> code of the petsc file mg.c : >> >> ierr = >> KSPSolve(mglevels->smoothd,mglevels->b,mglevels->x);CHKERRQ(ierr); >> /* pre-smooth */ >> [...] >> while (cycles--) { >> ierr = PCMGMCycle_Private(pc,mglevelsin-1,reason);CHKERRQ(ierr); >> } >> [...] >> ierr = >> MatInterpolateAdd(mglevels->interpolate,mgc->x,mglevels->x,mglevels->x);CHKERRQ(ierr); >> >> I observed that mglevels->x is a vector which contains the correct >> values after the KSPSolve calling for the pre-smooth step, but it >> is a zero vector when given in input to the MatInterpolateAdd >> routine. >> I would like to debug the mg.c code, for instance accessing the >> single values of the vector mglevels->x and figuring out where they >> are zeroed out between the KSPSolve and MatInterpolateAdd callings. >> Would you have some suggestion for this debugging? > > You can call VecView(mglevels->x,0) directly in the debugger (for > a small problem) I would start by simply calling it right after the > KSPSolve() then right before the MatInterpolateAdd() if the second > is zero but the first ok then run with two levels and break in the > inner PCMGMCycle_Private() and print the same thing there (remember > the value of mglevels changes. > > Good luck, > > Barry > >> >> Thank you for the collaboration. >> Best regards, >> Armando >> > > > > ------------------------------ > > _______________________________________________ > petsc-users mailing list > petsc-users at mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > End of petsc-users Digest, Vol 45, Issue 33 > ******************************************* From rlmackie862 at gmail.com Fri Sep 14 11:28:59 2012 From: rlmackie862 at gmail.com (Randall Mackie) Date: Fri, 14 Sep 2012 09:28:59 -0700 Subject: [petsc-users] Problem with -pc_type gamg Message-ID: For quite some time I've been solving my problems using BCGS with ASM and that works quite well. I was curious to try gamg, but when I try, I get error messages about a new nonzero causing a malloc (see error message below). What is strange is that in my code, I specifically turn this off with: call MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE,ierr) Is there some way to turn this off globally, or is this error potentially caused by some other issue? Thanks, Randy [0]PCSetData_AGG bs=3 MM=1286334 [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Argument out of range! [0]PETSC ERROR: New nonzero at (0,1) caused a malloc! 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Configure run at Fri Aug 31 12:55:11 2012 [0]PETSC ERROR: Configure options --with-scalar-type=complex --with-debugging=1 --with-fortran=1 --with-fortran-kernels=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatSetValues_SeqAIJ() line 346 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/impls/aij/seq/aij.c [0]PETSC ERROR: MatSetValues() line 1025 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/interface/matrix.c [0]PETSC ERROR: PCGAMGCreateGraph() line 68 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/tools.c [0]PETSC ERROR: PCGAMGgraph_AGG() line 977 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/agg.c [0]PETSC ERROR: PCSetUp_GAMG() line 656 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/gamg.c [0]PETSC ERROR: PCSetUp() line 832 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 278 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: KSPSolve() line 402 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Fri Sep 14 12:41:07 2012 From: mark.adams at columbia.edu (Mark F. Adams) Date: Fri, 14 Sep 2012 13:41:07 -0400 Subject: [petsc-users] Problem with -pc_type gamg In-Reply-To: References: Message-ID: <418C5B02-3BE5-4AE3-B794-41DD7DBBB440@columbia.edu> Randy, PETSc does not allow new non-zero entries any more so it requires me to get a an upper bound on non-zeros per row. This can be a little tricky to do perfectly. It looks to me like you have have set block size in your matrix > 1. Do you have a BC on one degree of freedom (the first one) on the first vertex, such that the first row of your matrix has just a diagonal entry, and other degrees of freedom on this first vertex do not have a BC? I should do a better job at doing this and can fix this in petsc-dev but you are using 3.3 so a fix, that might be easy for you to do, is to add or keep zeros in these matrix entries that are deleted by BCs. This will let me get an accurate count of the non-zeros in the row of the graph of your mesh, which is what I am constructing in this code. Mark On Sep 14, 2012, at 12:28 PM, Randall Mackie wrote: > For quite some time I've been solving my problems using BCGS with ASM and that works quite well. > I was curious to try gamg, but when I try, I get error messages about > a new nonzero causing a malloc (see error message below). What is strange is that in my code, I specifically > turn this off with: > > call MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE,ierr) > > Is there some way to turn this off globally, or is this error potentially caused by some other issue? > > Thanks, Randy > > [0]PCSetData_AGG bs=3 MM=1286334 > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Argument out of range! > [0]PETSC ERROR: New nonzero at (0,1) caused a malloc! 
> [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Configure run at Fri Aug 31 12:55:11 2012 > [0]PETSC ERROR: Configure options --with-scalar-type=complex --with-debugging=1 --with-fortran=1 --with-fortran-kernels=1 > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: MatSetValues_SeqAIJ() line 346 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/impls/aij/seq/aij.c > [0]PETSC ERROR: MatSetValues() line 1025 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/interface/matrix.c > [0]PETSC ERROR: PCGAMGCreateGraph() line 68 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/tools.c > [0]PETSC ERROR: PCGAMGgraph_AGG() line 977 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/agg.c > [0]PETSC ERROR: PCSetUp_GAMG() line 656 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/gamg.c > [0]PETSC ERROR: PCSetUp() line 832 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 278 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: KSPSolve() line 402 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c > From rlmackie862 at gmail.com Fri Sep 14 12:58:54 2012 From: rlmackie862 at gmail.com (Randall Mackie) Date: Fri, 14 Sep 2012 10:58:54 -0700 Subject: [petsc-users] Problem with -pc_type gamg In-Reply-To: <418C5B02-3BE5-4AE3-B794-41DD7DBBB440@columbia.edu> References: <418C5B02-3BE5-4AE3-B794-41DD7DBBB440@columbia.edu> Message-ID: <16E864F0-6EB6-4187-8204-C42ACC83B524@gmail.com> Hi Mark, Yes, the answer to your question is that the first entry in the matrix has a 1 on the diagonal due to boundary conditions. I will try your suggestion and see if it works, or if you improve on this, I can always try petsc-dev. Thanks, Randy On Sep 14, 2012, at 10:41 AM, Mark F. Adams wrote: > Randy, > > PETSc does not allow new non-zero entries any more so it requires me to get a an upper bound on non-zeros per row. This can be a little tricky to do perfectly. > > It looks to me like you have have set block size in your matrix > 1. Do you have a BC on one degree of freedom (the first one) on the first vertex, such that the first row of your matrix has just a diagonal entry, and other degrees of freedom on this first vertex do not have a BC? > > I should do a better job at doing this and can fix this in petsc-dev but you are using 3.3 so a fix, that might be easy for you to do, is to add or keep zeros in these matrix entries that are deleted by BCs. This will let me get an accurate count of the non-zeros in the row of the graph of your mesh, which is what I am constructing in this code. > > Mark > > On Sep 14, 2012, at 12:28 PM, Randall Mackie wrote: > >> For quite some time I've been solving my problems using BCGS with ASM and that works quite well. >> I was curious to try gamg, but when I try, I get error messages about >> a new nonzero causing a malloc (see error message below). 
What is strange is that in my code, I specifically >> turn this off with: >> >> call MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE,ierr) >> >> Is there some way to turn this off globally, or is this error potentially caused by some other issue? >> >> Thanks, Randy >> >> [0]PCSetData_AGG bs=3 MM=1286334 >> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >> [0]PETSC ERROR: Argument out of range! >> [0]PETSC ERROR: New nonzero at (0,1) caused a malloc! >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: Configure run at Fri Aug 31 12:55:11 2012 >> [0]PETSC ERROR: Configure options --with-scalar-type=complex --with-debugging=1 --with-fortran=1 --with-fortran-kernels=1 >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: MatSetValues_SeqAIJ() line 346 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/impls/aij/seq/aij.c >> [0]PETSC ERROR: MatSetValues() line 1025 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/interface/matrix.c >> [0]PETSC ERROR: PCGAMGCreateGraph() line 68 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/tools.c >> [0]PETSC ERROR: PCGAMGgraph_AGG() line 977 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/agg.c >> [0]PETSC ERROR: PCSetUp_GAMG() line 656 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/gamg.c >> [0]PETSC ERROR: PCSetUp() line 832 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: KSPSetUp() line 278 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: KSPSolve() line 402 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c >> > From mark.adams at columbia.edu Fri Sep 14 13:07:04 2012 From: mark.adams at columbia.edu (Mark F. Adams) Date: Fri, 14 Sep 2012 14:07:04 -0400 Subject: [petsc-users] Problem with -pc_type gamg In-Reply-To: <16E864F0-6EB6-4187-8204-C42ACC83B524@gmail.com> References: <418C5B02-3BE5-4AE3-B794-41DD7DBBB440@columbia.edu> <16E864F0-6EB6-4187-8204-C42ACC83B524@gmail.com> Message-ID: <55A171C2-5721-4788-BFF2-3582D162A517@columbia.edu> I just pushed a fix in petsc-dev that should fix the problem. Mark On Sep 14, 2012, at 1:58 PM, Randall Mackie wrote: > Hi Mark, > > Yes, the answer to your question is that the first entry in the matrix has a 1 on the diagonal due to boundary conditions. > > I will try your suggestion and see if it works, or if you improve on this, I can always try petsc-dev. > > Thanks, Randy > > > On Sep 14, 2012, at 10:41 AM, Mark F. Adams wrote: > >> Randy, >> >> PETSc does not allow new non-zero entries any more so it requires me to get a an upper bound on non-zeros per row. This can be a little tricky to do perfectly. >> >> It looks to me like you have have set block size in your matrix > 1. Do you have a BC on one degree of freedom (the first one) on the first vertex, such that the first row of your matrix has just a diagonal entry, and other degrees of freedom on this first vertex do not have a BC? 
>> >> I should do a better job at doing this and can fix this in petsc-dev but you are using 3.3 so a fix, that might be easy for you to do, is to add or keep zeros in these matrix entries that are deleted by BCs. This will let me get an accurate count of the non-zeros in the row of the graph of your mesh, which is what I am constructing in this code. >> >> Mark >> >> On Sep 14, 2012, at 12:28 PM, Randall Mackie wrote: >> >>> For quite some time I've been solving my problems using BCGS with ASM and that works quite well. >>> I was curious to try gamg, but when I try, I get error messages about >>> a new nonzero causing a malloc (see error message below). What is strange is that in my code, I specifically >>> turn this off with: >>> >>> call MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE,ierr) >>> >>> Is there some way to turn this off globally, or is this error potentially caused by some other issue? >>> >>> Thanks, Randy >>> >>> [0]PCSetData_AGG bs=3 MM=1286334 >>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >>> [0]PETSC ERROR: Argument out of range! >>> [0]PETSC ERROR: New nonzero at (0,1) caused a malloc! >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 >>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [0]PETSC ERROR: See docs/index.html for manual pages. >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Configure run at Fri Aug 31 12:55:11 2012 >>> [0]PETSC ERROR: Configure options --with-scalar-type=complex --with-debugging=1 --with-fortran=1 --with-fortran-kernels=1 >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: MatSetValues_SeqAIJ() line 346 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/impls/aij/seq/aij.c >>> [0]PETSC ERROR: MatSetValues() line 1025 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/interface/matrix.c >>> [0]PETSC ERROR: PCGAMGCreateGraph() line 68 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/tools.c >>> [0]PETSC ERROR: PCGAMGgraph_AGG() line 977 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/agg.c >>> [0]PETSC ERROR: PCSetUp_GAMG() line 656 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/gamg.c >>> [0]PETSC ERROR: PCSetUp() line 832 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/interface/precon.c >>> [0]PETSC ERROR: KSPSetUp() line 278 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: KSPSolve() line 402 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c >>> >> > > From rlmackie862 at gmail.com Fri Sep 14 13:07:34 2012 From: rlmackie862 at gmail.com (Randall Mackie) Date: Fri, 14 Sep 2012 11:07:34 -0700 Subject: [petsc-users] Problem with -pc_type gamg In-Reply-To: <55A171C2-5721-4788-BFF2-3582D162A517@columbia.edu> References: <418C5B02-3BE5-4AE3-B794-41DD7DBBB440@columbia.edu> <16E864F0-6EB6-4187-8204-C42ACC83B524@gmail.com> <55A171C2-5721-4788-BFF2-3582D162A517@columbia.edu> Message-ID: Thanks, I will try and let you know. Randy On Sep 14, 2012, at 11:07 AM, "Mark F. Adams" wrote: > I just pushed a fix in petsc-dev that should fix the problem. 
> > Mark > > On Sep 14, 2012, at 1:58 PM, Randall Mackie wrote: > >> Hi Mark, >> >> Yes, the answer to your question is that the first entry in the matrix has a 1 on the diagonal due to boundary conditions. >> >> I will try your suggestion and see if it works, or if you improve on this, I can always try petsc-dev. >> >> Thanks, Randy >> >> >> On Sep 14, 2012, at 10:41 AM, Mark F. Adams wrote: >> >>> Randy, >>> >>> PETSc does not allow new non-zero entries any more so it requires me to get a an upper bound on non-zeros per row. This can be a little tricky to do perfectly. >>> >>> It looks to me like you have have set block size in your matrix > 1. Do you have a BC on one degree of freedom (the first one) on the first vertex, such that the first row of your matrix has just a diagonal entry, and other degrees of freedom on this first vertex do not have a BC? >>> >>> I should do a better job at doing this and can fix this in petsc-dev but you are using 3.3 so a fix, that might be easy for you to do, is to add or keep zeros in these matrix entries that are deleted by BCs. This will let me get an accurate count of the non-zeros in the row of the graph of your mesh, which is what I am constructing in this code. >>> >>> Mark >>> >>> On Sep 14, 2012, at 12:28 PM, Randall Mackie wrote: >>> >>>> For quite some time I've been solving my problems using BCGS with ASM and that works quite well. >>>> I was curious to try gamg, but when I try, I get error messages about >>>> a new nonzero causing a malloc (see error message below). What is strange is that in my code, I specifically >>>> turn this off with: >>>> >>>> call MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE,ierr) >>>> >>>> Is there some way to turn this off globally, or is this error potentially caused by some other issue? >>>> >>>> Thanks, Randy >>>> >>>> [0]PCSetData_AGG bs=3 MM=1286334 >>>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >>>> [0]PETSC ERROR: Argument out of range! >>>> [0]PETSC ERROR: New nonzero at (0,1) caused a malloc! >>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 >>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: Configure run at Fri Aug 31 12:55:11 2012 >>>> [0]PETSC ERROR: Configure options --with-scalar-type=complex --with-debugging=1 --with-fortran=1 --with-fortran-kernels=1 >>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: MatSetValues_SeqAIJ() line 346 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/impls/aij/seq/aij.c >>>> [0]PETSC ERROR: MatSetValues() line 1025 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/interface/matrix.c >>>> [0]PETSC ERROR: PCGAMGCreateGraph() line 68 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/tools.c >>>> [0]PETSC ERROR: PCGAMGgraph_AGG() line 977 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/agg.c >>>> [0]PETSC ERROR: PCSetUp_GAMG() line 656 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/gamg.c >>>> [0]PETSC ERROR: PCSetUp() line 832 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/interface/precon.c >>>> [0]PETSC ERROR: KSPSetUp() line 278 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c >>>> [0]PETSC ERROR: KSPSolve() line 402 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c >>>> >>> >> >> >
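A minimal C sketch of the workaround Mark describes above (keep the entries that the boundary conditions would otherwise delete, but with value zero, so every row of the matrix graph still carries its full stencil) is given below. It assumes an AIJ matrix A that is still being assembled; the helper name KeepZerosInBCRow and its arguments are purely illustrative and are not part of Randy's code, only MatSetValues() is an actual PETSc call.

#include <petscmat.h>

/* Illustrative helper: keep the off-diagonal couplings of a Dirichlet
   (boundary-condition) row in the nonzero pattern, but with value 0.0,
   and put 1.0 on the diagonal.  "cols" lists the stencil neighbors of
   "row" and must be supplied by the application. */
PetscErrorCode KeepZerosInBCRow(Mat A, PetscInt row, PetscInt ncols, const PetscInt cols[])
{
  PetscErrorCode ierr;
  PetscScalar    one = 1.0, *zeros;
  PetscInt       i;

  ierr = PetscMalloc(ncols*sizeof(PetscScalar), &zeros);CHKERRQ(ierr);
  for (i = 0; i < ncols; i++) zeros[i] = 0.0;
  /* explicit zeros: the entries stay in the matrix graph */
  ierr = MatSetValues(A, 1, &row, ncols, cols, zeros, INSERT_VALUES);CHKERRQ(ierr);
  /* the usual 1 on the diagonal for the eliminated unknown */
  ierr = MatSetValues(A, 1, &row, 1, &row, &one, INSERT_VALUES);CHKERRQ(ierr);
  ierr = PetscFree(zeros);CHKERRQ(ierr);
  return 0;
}

If the boundary rows are instead eliminated after assembly with MatZeroRows(), setting MatSetOption(A, MAT_KEEP_NONZERO_PATTERN, PETSC_TRUE) beforehand keeps the zeroed entries in the nonzero structure rather than removing them, which should give GAMG the same complete row counts.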
>>> >>> Mark >>> >>> On Sep 14, 2012, at 12:28 PM, Randall Mackie wrote: >>> >>>> For quite some time I've been solving my problems using BCGS with ASM and that works quite well. >>>> I was curious to try gamg, but when I try, I get error messages about >>>> a new nonzero causing a malloc (see error message below). What is strange is that in my code, I specifically >>>> turn this off with: >>>> >>>> call MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE,ierr) >>>> >>>> Is there some way to turn this off globally, or is this error potentially caused by some other issue? >>>> >>>> Thanks, Randy >>>> >>>> [0]PCSetData_AGG bs=3 MM=1286334 >>>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >>>> [0]PETSC ERROR: Argument out of range! >>>> [0]PETSC ERROR: New nonzero at (0,1) caused a malloc! >>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 >>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>> [0]PETSC ERROR: See docs/index.html for manual pages. >>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: Configure run at Fri Aug 31 12:55:11 2012 >>>> [0]PETSC ERROR: Configure options --with-scalar-type=complex --with-debugging=1 --with-fortran=1 --with-fortran-kernels=1 >>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: MatSetValues_SeqAIJ() line 346 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/impls/aij/seq/aij.c >>>> [0]PETSC ERROR: MatSetValues() line 1025 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/interface/matrix.c >>>> [0]PETSC ERROR: PCGAMGCreateGraph() line 68 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/tools.c >>>> [0]PETSC ERROR: PCGAMGgraph_AGG() line 977 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/agg.c >>>> [0]PETSC ERROR: PCSetUp_GAMG() line 656 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/gamg.c >>>> [0]PETSC ERROR: PCSetUp() line 832 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/interface/precon.c >>>> [0]PETSC ERROR: KSPSetUp() line 278 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c >>>> [0]PETSC ERROR: KSPSolve() line 402 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c >>>> >>> >> >> > From rlmackie862 at gmail.com Fri Sep 14 17:11:00 2012 From: rlmackie862 at gmail.com (Randall Mackie) Date: Fri, 14 Sep 2012 15:11:00 -0700 Subject: [petsc-users] Problem with -pc_type gamg In-Reply-To: <55A171C2-5721-4788-BFF2-3582D162A517@columbia.edu> References: <418C5B02-3BE5-4AE3-B794-41DD7DBBB440@columbia.edu> <16E864F0-6EB6-4187-8204-C42ACC83B524@gmail.com> <55A171C2-5721-4788-BFF2-3582D162A517@columbia.edu> Message-ID: <7F28E5D0-7DCC-46A2-A96C-DCB84B011249@gmail.com> If you would be interested, I could dump the matrix and send it to you to see if you can figure out a fix. I have no idea if GAMG would even be a good preconditioner (this is a ill-conditioned EM problem), but I have reasons to believe that MG in general, if done right, would work. I was hoping to test this with the GAMG preconditioner, without having to do too much work on interpolation operators, etc. Randy On Sep 14, 2012, at 11:07 AM, Mark F. Adams wrote: > I just pushed a fix in petsc-dev that should fix the problem. 
> > Mark > > On Sep 14, 2012, at 1:58 PM, Randall Mackie wrote: > >> Hi Mark, >> >> Yes, the answer to your question is that the first entry in the matrix has a 1 on the diagonal due to boundary conditions. >> >> I will try your suggestion and see if it works, or if you improve on this, I can always try petsc-dev. >> >> Thanks, Randy >> >> >> On Sep 14, 2012, at 10:41 AM, Mark F. Adams wrote: >> >>> Randy, >>> >>> PETSc does not allow new non-zero entries any more so it requires me to get a an upper bound on non-zeros per row. This can be a little tricky to do perfectly. >>> >>> It looks to me like you have have set block size in your matrix > 1. Do you have a BC on one degree of freedom (the first one) on the first vertex, such that the first row of your matrix has just a diagonal entry, and other degrees of freedom on this first vertex do not have a BC? >>> >>> I should do a better job at doing this and can fix this in petsc-dev but you are using 3.3 so a fix, that might be easy for you to do, is to add or keep zeros in these matrix entries that are deleted by BCs. This will let me get an accurate count of the non-zeros in the row of the graph of your mesh, which is what I am constructing in this code. >>> >>> Mark >>> >>> On Sep 14, 2012, at 12:28 PM, Randall Mackie wrote: >>> >>>> For quite some time I've been solving my problems using BCGS with ASM and that works quite well. >>>> I was curious to try gamg, but when I try, I get error messages about >>>> a new nonzero causing a malloc (see error message below). What is strange is that in my code, I specifically >>>> turn this off with: >>>> >>>> call MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE,ierr) >>>> >>>> Is there some way to turn this off globally, or is this error potentially caused by some other issue? >>>> >>>> Thanks, Randy >>>> >>>> [0]PCSetData_AGG bs=3 MM=1286334 >>>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >>>> [0]PETSC ERROR: Argument out of range! >>>> [0]PETSC ERROR: New nonzero at (0,1) caused a malloc! >>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 >>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: Configure run at Fri Aug 31 12:55:11 2012 >>>> [0]PETSC ERROR: Configure options --with-scalar-type=complex --with-debugging=1 --with-fortran=1 --with-fortran-kernels=1 >>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: MatSetValues_SeqAIJ() line 346 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/impls/aij/seq/aij.c >>>> [0]PETSC ERROR: MatSetValues() line 1025 in /home/MackieR/PETSc/petsc-3.3-p3/src/mat/interface/matrix.c >>>> [0]PETSC ERROR: PCGAMGCreateGraph() line 68 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/tools.c >>>> [0]PETSC ERROR: PCGAMGgraph_AGG() line 977 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/agg.c >>>> [0]PETSC ERROR: PCSetUp_GAMG() line 656 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/impls/gamg/gamg.c >>>> [0]PETSC ERROR: PCSetUp() line 832 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/pc/interface/precon.c >>>> [0]PETSC ERROR: KSPSetUp() line 278 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c >>>> [0]PETSC ERROR: KSPSolve() line 402 in /home/MackieR/PETSc/petsc-3.3-p3/src/ksp/ksp/interface/itfunc.c >>>> >>> >> >> > From jedbrown at mcs.anl.gov Fri Sep 14 17:18:11 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 14 Sep 2012 17:18:11 -0500 Subject: [petsc-users] Problem with -pc_type gamg In-Reply-To: <7F28E5D0-7DCC-46A2-A96C-DCB84B011249@gmail.com> References: <418C5B02-3BE5-4AE3-B794-41DD7DBBB440@columbia.edu> <16E864F0-6EB6-4187-8204-C42ACC83B524@gmail.com> <55A171C2-5721-4788-BFF2-3582D162A517@columbia.edu> <7F28E5D0-7DCC-46A2-A96C-DCB84B011249@gmail.com> Message-ID: On Fri, Sep 14, 2012 at 5:11 PM, Randall Mackie wrote: > If you would be interested, I could dump the matrix and send it to you to > see if you can figure out a fix. > I have no idea if GAMG would even be a good preconditioner (this is a > ill-conditioned EM > This is very important information. What specific EM system are you solving? Is the shift, positive, negative or complex? What discretization. What scale do you need to solve and how performance-sensitive is the application. These problems can be huge rabbit holes for multilevel methods, depending on the parameter range and necessary scale. Efficient solvers will require extra work since black-box methods cannot cheaply determine things like the large curl-curl null space. PETSc folks: We should make an example using auxiliary space preconditioning so that we can have an FAQ on this. > problem), but > I have reasons to believe that MG in general, if done right, would work. I > was hoping to test this > with the GAMG preconditioner, without having to do too much work on > interpolation operators, etc. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhenglun.wei at gmail.com Fri Sep 14 17:40:47 2012 From: zhenglun.wei at gmail.com (Zhenglun (Alan) Wei) Date: Fri, 14 Sep 2012 17:40:47 -0500 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> Message-ID: <5053B26F.9050609@gmail.com> Dear folks, I did some test with -pc_type gamg with /src/ksp/ksp/example/tutorial/ex45.c. 
It is not as good as default -pc_type when my mesh (Cartisian) is 100*50*50; while it is a little bit better than the default one when the mesh is 200*100*100. Therefore, I guess this type of pc is good for larger problem. Is that ture? or is there any rule of thumb for this type of preconditioner? BTW, I tested it with 8 processes. thanks, Alan On 9/14/2012 9:38 AM, Mark F. Adams wrote: > > On Sep 14, 2012, at 8:47 AM, Jed Brown > wrote: > >> On Fri, Sep 14, 2012 at 3:47 AM, TAY wee-beng > > wrote: >> >> Hi, >> >> I need to solve the Poisson eqn on my win7 machine. I'm currently >> using BCGS without preconditioner. I can't use HYPRE since I'm >> using Fortran and win7. It's rather slow. >> >> Is there a recommended solver and preconditioner to solve the >> Poisson eqn to get me started? >> >> >> What discretization? >> >> Run with -pc_type gamg to start. > > and > > -pc_gamg_agg_nsmooths 1 > > (I should make this the default) > > Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Sep 14 17:49:51 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 14 Sep 2012 17:49:51 -0500 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: <5053B26F.9050609@gmail.com> References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> <5053B26F.9050609@gmail.com> Message-ID: On Fri, Sep 14, 2012 at 5:40 PM, Zhenglun (Alan) Wei wrote: > Dear folks, > I did some test with -pc_type gamg with > /src/ksp/ksp/example/tutorial/ex45.c. It is not as good as default -pc_type > when my mesh (Cartisian) is 100*50*50; while it is a little bit better than > the default one when the mesh is 200*100*100. Therefore, I guess this type > of pc is good for larger problem. Is that ture? or is there any rule of > thumb for this type of preconditioner? BTW, I tested it with 8 processes. > When asking questions about convergence, always always ALWAYS send the output of -ksp_monitor -ksp_view. If you don't, we are just guessing blindly. Matt > thanks, > Alan > On 9/14/2012 9:38 AM, Mark F. Adams wrote: > > > On Sep 14, 2012, at 8:47 AM, Jed Brown wrote: > > On Fri, Sep 14, 2012 at 3:47 AM, TAY wee-beng wrote: > >> Hi, >> >> I need to solve the Poisson eqn on my win7 machine. I'm currently using >> BCGS without preconditioner. I can't use HYPRE since I'm using Fortran and >> win7. It's rather slow. >> >> Is there a recommended solver and preconditioner to solve the Poisson eqn >> to get me started? >> > > What discretization? > > Run with -pc_type gamg to start. > > > and > > -pc_gamg_agg_nsmooths 1 > > (I should make this the default) > > Mark > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Fri Sep 14 17:51:08 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 14 Sep 2012 17:51:08 -0500 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> <5053B26F.9050609@gmail.com> Message-ID: On Fri, Sep 14, 2012 at 5:49 PM, Matthew Knepley wrote: > On Fri, Sep 14, 2012 at 5:40 PM, Zhenglun (Alan) Wei < > zhenglun.wei at gmail.com> wrote: > >> Dear folks, >> I did some test with -pc_type gamg with >> /src/ksp/ksp/example/tutorial/ex45.c. It is not as good as default -pc_type >> when my mesh (Cartisian) is 100*50*50; while it is a little bit better than >> the default one when the mesh is 200*100*100. Therefore, I guess this type >> of pc is good for larger problem. Is that ture? or is there any rule of >> thumb for this type of preconditioner? BTW, I tested it with 8 processes. >> > > When asking questions about convergence, always always ALWAYS send the > output of -ksp_monitor -ksp_view. If > you don't, we are just guessing blindly. > And -log_summary because this is about performance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhenglun.wei at gmail.com Fri Sep 14 18:08:37 2012 From: zhenglun.wei at gmail.com (Zhenglun (Alan) Wei) Date: Fri, 14 Sep 2012 18:08:37 -0500 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> <5053B26F.9050609@gmail.com> Message-ID: <5053B8F5.2050901@gmail.com> I'm sorry about that. I attached the output files here with ' -ksp_monitor -ksp_view -log_summary'. They are named after the grid size and pc-type. cheers, Alan On 9/14/2012 5:51 PM, Jed Brown wrote: > On Fri, Sep 14, 2012 at 5:49 PM, Matthew Knepley > wrote: > > On Fri, Sep 14, 2012 at 5:40 PM, Zhenglun (Alan) Wei > > wrote: > > Dear folks, > I did some test with -pc_type gamg with > /src/ksp/ksp/example/tutorial/ex45.c. It is not as good as > default -pc_type when my mesh (Cartisian) is 100*50*50; while > it is a little bit better than the default one when the mesh > is 200*100*100. Therefore, I guess this type of pc is good for > larger problem. Is that ture? or is there any rule of thumb > for this type of preconditioner? BTW, I tested it with 8 > processes. > > > When asking questions about convergence, always always ALWAYS send > the output of -ksp_monitor -ksp_view. If > you don't, we are just guessing blindly. > > > And -log_summary because this is about performance. -------------- next part -------------- An HTML attachment was scrubbed... 
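To reproduce runs like the ones attached below, the options recommended in this thread can be combined on one command line. The sketch assumes the example is already built and that mpiexec is the MPI launcher; Alan's attached option tables show -ksp_monitor -ksp_rtol 1.0e-7 -ksp_view -log_summary with and without -pc_type gamg, while -pc_gamg_agg_nsmooths 1 is Mark's additional suggestion and does not appear in the attached logs.

mpiexec -n 8 ./ex45 -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_rtol 1.0e-7 -ksp_monitor -ksp_view -log_summary

Dropping the two gamg options gives the default (block Jacobi with ILU) comparison run; keeping the monitoring flags in both cases makes the iteration counts and timings directly comparable.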
URL: -------------- next part -------------- 0 KSP Residual norm 1.669256249193e+02 1 KSP Residual norm 3.874064408589e+01 2 KSP Residual norm 1.954600014030e+01 3 KSP Residual norm 1.244345922145e+01 4 KSP Residual norm 9.064420170785e+00 5 KSP Residual norm 7.275278824753e+00 6 KSP Residual norm 5.782414175300e+00 7 KSP Residual norm 4.677087789418e+00 8 KSP Residual norm 3.946201128884e+00 9 KSP Residual norm 3.420632944675e+00 10 KSP Residual norm 2.955422198070e+00 11 KSP Residual norm 2.592490394060e+00 12 KSP Residual norm 2.303387891861e+00 13 KSP Residual norm 2.056577525302e+00 14 KSP Residual norm 1.857163034085e+00 15 KSP Residual norm 1.677130693211e+00 16 KSP Residual norm 1.512895894610e+00 17 KSP Residual norm 1.372371861084e+00 18 KSP Residual norm 1.253935781302e+00 19 KSP Residual norm 1.147442107353e+00 20 KSP Residual norm 1.053519715486e+00 21 KSP Residual norm 9.698850093905e-01 22 KSP Residual norm 8.845629782375e-01 23 KSP Residual norm 7.865775890900e-01 24 KSP Residual norm 6.898777348204e-01 25 KSP Residual norm 6.049168916000e-01 26 KSP Residual norm 5.207655781898e-01 27 KSP Residual norm 4.358566752368e-01 28 KSP Residual norm 3.606652037110e-01 29 KSP Residual norm 2.945720874157e-01 30 KSP Residual norm 2.381008300123e-01 31 KSP Residual norm 2.101595975863e-01 32 KSP Residual norm 1.766392142763e-01 33 KSP Residual norm 1.458305208202e-01 34 KSP Residual norm 1.202168443895e-01 35 KSP Residual norm 9.934133007087e-02 36 KSP Residual norm 8.352384804046e-02 37 KSP Residual norm 7.134843832394e-02 38 KSP Residual norm 6.342135745158e-02 39 KSP Residual norm 5.838796270013e-02 40 KSP Residual norm 5.467571802684e-02 41 KSP Residual norm 5.125401049798e-02 42 KSP Residual norm 4.794972060697e-02 43 KSP Residual norm 4.492615630663e-02 44 KSP Residual norm 4.196741113595e-02 45 KSP Residual norm 3.892472635334e-02 46 KSP Residual norm 3.550920516488e-02 47 KSP Residual norm 3.195558023701e-02 48 KSP Residual norm 2.868405521348e-02 49 KSP Residual norm 2.587274813660e-02 50 KSP Residual norm 2.328392008646e-02 51 KSP Residual norm 2.107487668110e-02 52 KSP Residual norm 1.893796101150e-02 53 KSP Residual norm 1.648168199594e-02 54 KSP Residual norm 1.390814960805e-02 55 KSP Residual norm 1.135250892417e-02 56 KSP Residual norm 8.795176079893e-03 57 KSP Residual norm 6.603350000225e-03 58 KSP Residual norm 4.793743880387e-03 59 KSP Residual norm 3.160719306137e-03 60 KSP Residual norm 1.977784164249e-03 61 KSP Residual norm 1.468666200316e-03 62 KSP Residual norm 1.083389354485e-03 63 KSP Residual norm 8.520500282120e-04 64 KSP Residual norm 6.518964823622e-04 65 KSP Residual norm 5.138109780444e-04 66 KSP Residual norm 4.115277543760e-04 67 KSP Residual norm 3.361506034186e-04 68 KSP Residual norm 2.797128704246e-04 69 KSP Residual norm 2.415674178545e-04 70 KSP Residual norm 2.159180377331e-04 71 KSP Residual norm 1.977197186285e-04 72 KSP Residual norm 1.827136280528e-04 73 KSP Residual norm 1.669270522643e-04 74 KSP Residual norm 1.506437271409e-04 75 KSP Residual norm 1.353521734114e-04 76 KSP Residual norm 1.204344753199e-04 77 KSP Residual norm 1.070648089746e-04 78 KSP Residual norm 9.624021696680e-05 79 KSP Residual norm 8.762931970435e-05 80 KSP Residual norm 8.027844190242e-05 81 KSP Residual norm 7.405766359992e-05 82 KSP Residual norm 6.789476644149e-05 83 KSP Residual norm 6.150052082511e-05 84 KSP Residual norm 5.461716716910e-05 85 KSP Residual norm 4.773931323050e-05 86 KSP Residual norm 4.134556977071e-05 87 KSP Residual norm 3.578449180759e-05 
88 KSP Residual norm 3.150018194966e-05 89 KSP Residual norm 2.810040239809e-05 90 KSP Residual norm 2.557532547531e-05 91 KSP Residual norm 2.381861052813e-05 92 KSP Residual norm 2.205833402284e-05 93 KSP Residual norm 2.030591797591e-05 94 KSP Residual norm 1.832395951111e-05 95 KSP Residual norm 1.628084367638e-05 KSP Object: 8 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000 tolerances: relative=1e-07, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: 8 MPI processes type: bjacobi block Jacobi: number of blocks = 8 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (sub_) 1 MPI processes type: ilu ILU: out-of-place factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 using diagonal shift to prevent zero pivot matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=31250, cols=31250 package used to perform factorization: petsc total: nonzeros=212500, allocated nonzeros=212500 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=31250, cols=31250 total: nonzeros=212500, allocated nonzeros=212500 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=250000, cols=250000 total: nonzeros=1725000, allocated nonzeros=1725000 total number of mallocs used during MatSetValues calls =0 Residual norm 4.4807e-07 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./ex45 on a arch-linux2-c-debug named compute-5-2.local with 8 processors, by zlwei Fri Sep 14 18:03:35 2012 Using Petsc Development HG revision: 98bf11863c3be31b7c2af504314a500bc64d88c9 HG Date: Wed Aug 29 13:51:08 2012 -0500 Max Max/Min Avg Total Time (sec): 3.476e+00 1.00009 3.476e+00 Objects: 7.400e+01 1.00000 7.400e+01 Flops: 2.712e+08 1.00003 2.712e+08 2.170e+09 Flops/sec: 7.803e+07 1.00009 7.802e+07 6.242e+08 Memory: 1.770e+07 1.00000 1.416e+08 MPI Messages: 3.160e+02 1.01935 3.108e+02 2.486e+03 MPI Message Lengths: 2.500e+06 1.00001 8.045e+03 2.000e+07 MPI Reductions: 2.078e+03 1.00096 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 3.4761e+00 100.0% 2.1698e+09 100.0% 2.486e+03 100.0% 8.045e+03 100.0% 2.075e+03 99.9% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ ########################################################## # # # WARNING!!! # # # # This code was compiled with a debugging option, # # To get timing results run ./configure # # using --with-debugging=no, the performance will # # be generally two or three times faster. 
# # # ########################################################## Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage KSPGMRESOrthog 95 1.0 1.0751e+00 1.0 1.76e+08 1.0 0.0e+00 0.0e+00 1.5e+03 30 65 0 0 72 30 65 0 0 73 1312 KSPSetUp 2 1.0 7.6380e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+01 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 1 1.0 3.4126e+00 1.0 2.71e+08 1.0 2.4e+03 8.2e+03 2.0e+03 98100 97 98 98 98100 97 98 98 635 VecMDot 95 1.0 4.9396e-01 1.1 8.81e+07 1.0 0.0e+00 0.0e+00 9.5e+01 14 32 0 0 5 14 32 0 0 5 1427 VecNorm 100 1.0 3.0509e-02 1.1 6.25e+06 1.0 0.0e+00 0.0e+00 1.0e+02 1 2 0 0 5 1 2 0 0 5 1639 VecScale 99 1.0 1.4340e-02 1.3 3.09e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1726 VecCopy 4 1.0 2.5320e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 105 1.0 2.6484e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecAXPY 8 1.0 2.3842e-03 1.0 5.00e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1678 VecMAXPY 99 1.0 4.3568e-01 1.0 9.41e+07 1.0 0.0e+00 0.0e+00 0.0e+00 12 35 0 0 0 12 35 0 0 0 1727 VecScatterBegin 99 1.0 1.7329e-02 1.4 0.00e+00 0.0 2.4e+03 8.3e+03 0.0e+00 0 0 96 99 0 0 0 96 99 0 0 VecScatterEnd 99 1.0 1.8952e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 99 1.0 4.7056e-02 1.1 9.28e+06 1.0 0.0e+00 0.0e+00 9.9e+01 1 3 0 0 5 1 3 0 0 5 1578 MatMult 99 1.0 5.4279e-01 1.0 3.96e+07 1.0 2.4e+03 8.3e+03 0.0e+00 15 15 96 99 0 15 15 96 99 0 584 MatSolve 99 1.0 3.6140e-01 1.0 3.90e+07 1.0 0.0e+00 0.0e+00 0.0e+00 10 14 0 0 0 10 14 0 0 0 863 MatLUFactorNum 1 1.0 1.4360e-02 1.0 6.12e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 339 MatILUFactorSym 1 1.0 1.3215e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyBegin 2 1.0 5.4438e-03 3.9 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 2 1.0 1.1969e-02 1.0 0.00e+00 0.0 4.8e+01 2.1e+03 2.3e+01 0 0 2 1 1 0 0 2 1 1 0 MatGetRowIJ 1 1.0 7.1526e-06 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 1 1.0 4.9279e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatView 3 3.0 5.5695e-04 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCSetUp 2 1.0 3.4658e-02 1.0 6.12e+05 1.0 0.0e+00 0.0e+00 8.0e+00 1 0 0 0 0 1 0 0 0 0 140 PCSetUpOnBlocks 1 1.0 3.3920e-02 1.0 6.12e+05 1.0 0.0e+00 0.0e+00 4.0e+00 1 0 0 0 0 1 0 0 0 0 143 PCApply 99 1.0 5.7720e-01 1.0 3.90e+07 1.0 0.0e+00 0.0e+00 2.0e+02 16 14 0 0 10 16 14 0 0 10 540 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. 
--- Event Stage 0: Main Stage Container 1 1 548 0 Krylov Solver 2 2 19360 0 Vector 43 43 9339072 0 Vector Scatter 3 3 3108 0 Matrix 4 4 6660212 0 Distributed Mesh 2 2 285240 0 Bipartite Graph 4 4 2736 0 Index Set 10 10 282424 0 IS L to G Mapping 1 1 138468 0 Preconditioner 2 2 1784 0 Viewer 2 1 712 0 ======================================================================================================================== Average time to get PetscTime(): 6.91414e-07 Average time for MPI_Barrier(): 0.000108814 Average time for zero size MPI_Send(): 2.01166e-05 #PETSc Option Table entries: -ksp_monitor -ksp_rtol 1.0e-7 -ksp_view -log_summary #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure run at: Wed Aug 29 14:54:25 2012 Configure options: --prefix=/work/zlwei/PETSc --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich ----------------------------------------- Libraries compiled on Wed Aug 29 14:54:25 2012 on firefox.bioinfo.ittc.ku.edu Machine characteristics: Linux-2.6.18-92.1.13.el5-x86_64-with-redhat-5.2-Final Using PETSc directory: /nfs/work/zlwei/PETSc/petsc-dev Using PETSc arch: arch-linux2-c-debug ----------------------------------------- Using C compiler: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline -O0 ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpif90 -Wall -Wno-unused-variable -g ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/include -I/nfs/work/zlwei/PETSc/petsc-dev/include -I/nfs/work/zlwei/PETSc/petsc-dev/include -I/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/include ----------------------------------------- Using C linker: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpicc Using Fortran linker: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpif90 Using libraries: -Wl,-rpath,/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -L/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -L/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -lflapack -lfblas -lm -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpichf90 -lgfortran -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lgcc_s -ldl ----------------------------------------- -------------- next part -------------- 0 KSP Residual norm 1.875294504732e+02 1 KSP Residual norm 6.461816762057e+01 2 KSP Residual norm 3.470907566660e+01 3 KSP Residual norm 2.099089429528e+01 4 KSP Residual norm 1.437522110067e+01 5 KSP Residual norm 9.245678477105e+00 6 KSP Residual norm 5.901095748255e+00 7 KSP Residual norm 3.667568893250e+00 8 KSP Residual norm 2.100454200874e+00 9 KSP Residual norm 1.151109746641e+00 10 KSP Residual norm 6.512533958321e-01 11 KSP Residual norm 3.268299134386e-01 12 KSP Residual norm 1.338085587322e-01 13 KSP Residual norm 6.206661527722e-02 14 KSP Residual norm 3.045503185174e-02 15 KSP Residual norm 1.336637007228e-02 16 KSP Residual norm 5.597304881397e-03 17 KSP Residual norm 2.926115919013e-03 18 KSP Residual norm 1.931646953591e-03 19 KSP Residual norm 1.181176745071e-03 20 KSP Residual norm 6.854711612750e-04 21 KSP Residual norm 3.242234399228e-04 22 KSP 
Residual norm 1.402658814864e-04 23 KSP Residual norm 6.074816600231e-05 24 KSP Residual norm 3.055482416759e-05 25 KSP Residual norm 1.566477423228e-05 KSP Object: 8 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000 tolerances: relative=1e-07, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: 8 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=5 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 8 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 8 MPI processes type: bjacobi block Jacobi: number of blocks = 8 Local solve info for each block is in the following KSP and PC objects: [0] number of local blocks = 1, first local block number = 0 KSP Object: KSP Object: (mg_coarse_sub_) KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly [0] local block number 0 (mg_coarse_sub_) 1 MPI processes type: preonly 1 MPI processes maximum iterations=10000, initial guess is zero KSP Object: (mg_coarse_sub_) 1 MPI processes KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero maximum iterations=10000, initial guess is zero type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) type: preonly maximum iterations=10000, initial guess is zero maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: 1 MPI processes type: lu LU: out-of-place factorization tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 0 Factored matrix follows: (mg_coarse_sub_) 1 MPI processes type: lu 1 MPI processes type: lu LU: out-of-place factorization PC Object: (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 0 tolerance for zero pivot 2.22045e-14 matrix ordering: nd LU: out-of-place 
factorization tolerance for zero pivot 2.22045e-14 tolerance for zero pivot 2.22045e-14 matrix ordering: nd Factored matrix follows: factor fill ratio given 5, needed 0 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=0, cols=0 matrix ordering: nd factor fill ratio given 5, needed 0 Factored matrix follows: factor fill ratio given 5, needed 0 Factored matrix follows: factor fill ratio given 5, needed 0 Factored matrix follows: Matrix Object: Matrix Object: 1 MPI processes type: seqaij Matrix Object: 1 MPI processes package used to perform factorization: petsc Matrix Object: 1 MPI processes Matrix Object: 1 MPI processes 1 MPI processes type: seqaij rows=0, cols=0 rows=0, cols=0 type: seqaij rows=0, cols=0 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 type: seqaij rows=0, cols=0 type: seqaij rows=0, cols=0 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 not using I-node routines package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 total number of mallocs used during MatSetValues calls =0 total number of mallocs used during MatSetValues calls =0 linear system matrix = precond matrix: package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines not using I-node routines Matrix Object: 1 MPI processes not using I-node routines type: seqaij not using I-node routines not using I-node routines linear system matrix = precond matrix: linear system matrix = precond matrix: rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 linear system matrix = precond matrix: Matrix Object: Matrix Object: 1 MPI processes Matrix Object: 1 MPI processes total number of mallocs used during MatSetValues calls =0 linear system matrix = precond matrix: Matrix Object: 1 MPI processes linear system matrix = precond matrix: Matrix Object: 1 MPI processes 1 MPI processes type: seqaij rows=0, cols=0 type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 not using I-node routines type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 not using I-node routines total number of mallocs used during MatSetValues calls =0 not using I-node routines total number of mallocs used during MatSetValues calls =0 not using I-node routines total number of mallocs used during MatSetValues calls =0 not using I-node routines total number of mallocs used during MatSetValues calls =0 not using I-node routines KSP Object: KSP Object: (mg_coarse_sub_) 1 MPI processes (mg_coarse_sub_) type: preonly maximum iterations=10000, initial guess is zero 1 MPI processes tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test type: preonly PC Object: maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for 
convergence test PC Object: (mg_coarse_sub_) (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 2.23871 Factored matrix follows: 1 MPI processes Matrix Object: 1 MPI processes type: seqaij rows=179, cols=179 package used to perform factorization: petsc type: lu total: nonzeros=3817, allocated nonzeros=3817 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: LU: out-of-place factorization Matrix Object: 1 MPI processes type: seqaij rows=179, cols=179 total: nonzeros=1705, allocated nonzeros=1705 total number of mallocs used during MatSetValues calls =0 not using I-node routines - - - - - - - - - - - - - - - - - - tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 0 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=0, cols=0 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 not using I-node routines [1] number of local blocks = 1, first local block number = 1 [1] local block number 0 - - - - - - - - - - - - - - - - - - [2] number of local blocks = 1, first local block number = 2 [2] local block number 0 - - - - - - - - - - - - - - - - - - [3] number of local blocks = 1, first local block number = 3 [3] local block number 0 - - - - - - - - - - - - - - - - - - [4] number of local blocks = 1, first local block number = 4 [4] local block number 0 - - - - - - - - - - - - - - - - - - [5] number of local blocks = 1, first local block number = 5 [5] local block number 0 - - - - - - - - - - - - - - - - - - [6] number of local blocks = 1, first local block number = 6 [6] local block number 0 - - - - - - - - - - - - - - - - - - [7] number of local blocks = 1, first local block number = 7 [7] local block number 0 - - - - - - - - - - - - - - - - - - linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=179, cols=179 total: nonzeros=1705, allocated nonzeros=1705 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 8 MPI processes type: chebyshev Chebyshev: eigenvalue estimates: min = 0.0707274, max = 1.48527 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 8 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=958, cols=958 total: nonzeros=7836, allocated nonzeros=7836 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (mg_levels_2_) 8 MPI processes type: chebyshev Chebyshev: eigenvalue estimates: min = 0.0708876, max = 1.48864 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for 
convergence test PC Object: (mg_levels_2_) 8 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=4973, cols=4973 total: nonzeros=43735, allocated nonzeros=43735 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (mg_levels_3_) 8 MPI processes type: chebyshev Chebyshev: eigenvalue estimates: min = 0.0762465, max = 1.60118 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_3_) 8 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=33833, cols=33833 total: nonzeros=355743, allocated nonzeros=355743 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (mg_levels_4_) 8 MPI processes type: chebyshev Chebyshev: eigenvalue estimates: min = 0.0975151, max = 2.04782 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_4_) 8 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=250000, cols=250000 total: nonzeros=1725000, allocated nonzeros=1725000 total number of mallocs used during MatSetValues calls =0 Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=250000, cols=250000 total: nonzeros=1725000, allocated nonzeros=1725000 total number of mallocs used during MatSetValues calls =0 Residual norm 6.64872e-07 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./ex45 on a arch-linux2-c-debug named compute-5-2.local with 8 processors, by zlwei Fri Sep 14 18:04:45 2012 Using Petsc Development HG revision: 98bf11863c3be31b7c2af504314a500bc64d88c9 HG Date: Wed Aug 29 13:51:08 2012 -0500 Max Max/Min Avg Total Time (sec): 8.570e+00 1.00005 8.570e+00 Objects: 4.570e+02 1.00000 4.570e+02 Flops: 2.035e+08 1.00383 2.032e+08 1.625e+09 Flops/sec: 2.375e+07 1.00378 2.371e+07 1.896e+08 Memory: 3.716e+07 1.00000 2.973e+08 MPI Messages: 5.506e+03 1.10619 5.194e+03 4.155e+04 MPI Message Lengths: 8.593e+06 1.01613 1.639e+03 6.811e+07 MPI Reductions: 4.630e+03 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 8.5701e+00 100.0% 1.6252e+09 100.0% 4.155e+04 100.0% 1.639e+03 100.0% 4.629e+03 100.0% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ ########################################################## # # # WARNING!!! # # # # This code was compiled with a debugging option, # # To get timing results run ./configure # # using --with-debugging=no, the performance will # # be generally two or three times faster. 
# # # ########################################################## Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage KSPGMRESOrthog 91 1.0 3.1189e-01 1.0 4.86e+07 1.0 0.0e+00 0.0e+00 6.6e+02 4 24 0 0 14 4 24 0 0 14 1246 KSPSetUp 11 1.0 4.4149e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+01 1 0 0 0 1 1 0 0 0 1 0 KSPSolve 1 1.0 8.0593e+00 1.0 2.03e+08 1.0 4.1e+04 1.6e+03 4.6e+03 94100100 99 99 94100100 99 99 201 VecMDot 91 1.0 1.4309e-01 1.0 2.43e+07 1.0 0.0e+00 0.0e+00 9.1e+01 2 12 0 0 2 2 12 0 0 2 1358 VecNorm 123 1.0 2.2507e-02 1.1 2.50e+06 1.0 0.0e+00 0.0e+00 1.2e+02 0 1 0 0 3 0 1 0 0 3 884 VecScale 538 1.0 2.0931e-02 1.1 4.99e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 1903 VecCopy 135 1.0 1.1293e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 505 1.0 1.5996e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 864 1.0 7.1485e-02 1.1 1.53e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 8 0 0 0 1 8 0 0 0 1708 VecAYPX 832 1.0 7.6809e-02 1.1 9.43e+06 1.0 0.0e+00 0.0e+00 0.0e+00 1 5 0 0 0 1 5 0 0 0 981 VecMAXPY 122 1.0 1.0202e-01 1.0 2.66e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 13 0 0 0 1 13 0 0 0 2085 VecAssemblyBegin 53 1.0 3.2252e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.6e+02 0 0 0 0 3 0 0 0 0 3 0 VecAssemblyEnd 53 1.0 4.4584e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecPointwiseMult 668 1.0 1.1690e-01 1.0 6.06e+06 1.0 0.0e+00 0.0e+00 0.0e+00 1 3 0 0 0 1 3 0 0 0 414 VecScatterBegin 981 1.0 8.3254e-02 1.1 0.00e+00 0.0 3.8e+04 1.6e+03 0.0e+00 1 0 91 90 0 1 0 91 90 0 0 VecScatterEnd 981 1.0 6.4690e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecSetRandom 4 1.0 4.8918e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecNormalize 122 1.0 2.9767e-02 1.0 3.66e+06 1.0 0.0e+00 0.0e+00 1.2e+02 0 2 0 0 3 0 2 0 0 3 977 MatMult 716 1.0 1.6713e+00 1.0 9.31e+07 1.0 3.0e+04 1.8e+03 0.0e+00 19 46 71 77 0 19 46 71 77 0 445 MatMultAdd 104 1.0 8.3075e-02 1.0 1.88e+06 1.0 2.7e+03 2.7e+02 0.0e+00 1 1 6 1 0 1 1 6 1 0 181 MatMultTranspose 104 1.0 1.0354e-01 1.0 1.88e+06 1.0 2.7e+03 2.7e+02 2.1e+02 1 1 6 1 4 1 1 6 1 4 145 MatSolve 52 0.0 2.3191e-03 0.0 3.88e+05 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 167 MatLUFactorSym 1 1.0 4.6897e-04 4.7 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatLUFactorNum 1 1.0 4.5013e-0418.9 4.21e+04 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 94 MatConvert 4 1.0 2.0307e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+01 0 0 0 0 1 0 0 0 0 1 0 MatScale 4 1.0 6.1872e-03 1.0 5.35e+05 1.0 1.7e+02 1.6e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 689 MatAssemblyBegin 50 1.0 4.0176e-02 1.7 0.00e+00 0.0 3.8e+02 3.5e+02 5.6e+01 0 0 1 0 1 0 0 1 0 1 0 MatAssemblyEnd 50 1.0 1.3871e-01 1.0 0.00e+00 0.0 1.5e+03 3.9e+02 4.0e+02 2 0 4 1 9 2 0 4 1 9 0 MatGetRow 72562 1.0 2.7369e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0 MatGetRowIJ 1 0.0 4.1008e-05 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 1 0.0 4.0412e-04 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.5e-01 0 0 0 0 0 0 0 0 0 0 0 MatCoarsen 4 1.0 1.6034e-01 1.0 0.00e+00 0.0 1.3e+03 2.7e+03 1.2e+02 2 0 3 5 3 2 0 3 5 3 0 MatView 8 1.0 4.7970e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatPtAP 4 1.0 1.4446e-01 1.0 1.63e+06 1.1 1.4e+03 
6.3e+02 2.3e+02 2 1 3 1 5 2 1 3 1 5 88 MatPtAPSymbolic 4 1.0 1.0185e-01 1.1 0.00e+00 0.0 1.3e+03 5.6e+02 2.0e+02 1 0 3 1 4 1 0 3 1 4 0 MatPtAPNumeric 4 1.0 4.2606e-02 1.0 1.63e+06 1.1 1.2e+02 1.4e+03 2.4e+01 0 1 0 0 1 0 1 0 0 1 297 MatTrnMatMult 4 1.0 7.4679e-01 1.0 1.54e+07 1.0 1.1e+03 4.8e+03 2.5e+02 9 8 3 8 5 9 8 3 8 5 163 MatGetLocalMat 12 1.0 3.0601e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.2e+01 0 0 0 0 1 0 0 0 0 1 0 MatGetBrAoCol 4 1.0 1.4587e-02 1.7 0.00e+00 0.0 5.2e+02 1.1e+03 1.6e+01 0 0 1 1 0 0 0 1 1 0 0 MatGetSymTrans 8 1.0 1.3292e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCSetUp 2 1.0 3.9298e+00 1.0 3.29e+07 1.0 8.3e+03 2.0e+03 1.7e+03 46 16 20 25 38 46 16 20 25 38 67 PCSetUpOnBlocks 26 1.0 2.2025e-03 2.0 4.21e+04 0.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 0 0 0 0 0 0 19 PCApply 26 1.0 2.9146e+00 1.0 1.15e+08 1.0 3.2e+04 1.4e+03 2.3e+03 34 57 78 67 50 34 57 78 67 50 316 PCGAMGgraph_AGG 4 1.0 1.6139e+00 1.0 5.35e+05 1.0 5.2e+02 7.9e+02 1.9e+02 19 0 1 1 4 19 0 1 1 4 3 PCGAMGcoarse_AGG 4 1.0 1.3659e+00 1.0 1.54e+07 1.0 3.5e+03 3.3e+03 4.7e+02 16 8 8 17 10 16 8 8 17 10 89 PCGAMGProl_AGG 4 1.0 3.1176e-01 1.0 0.00e+00 0.0 1.1e+03 1.2e+03 2.0e+02 4 0 3 2 4 4 0 3 2 4 0 PCGAMGPOpt_AGG 4 1.0 2.5988e-05 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Container 1 1 548 0 Krylov Solver 11 11 162856 0 Vector 235 235 18591120 0 Vector Scatter 26 26 26936 0 Matrix 92 92 31827700 0 Matrix Coarsen 4 4 2448 0 Distributed Mesh 2 2 285240 0 Bipartite Graph 4 4 2736 0 Index Set 64 64 256456 0 IS L to G Mapping 1 1 138468 0 Preconditioner 11 11 10092 0 Viewer 2 1 712 0 PetscRandom 4 4 2432 0 ======================================================================================================================== Average time to get PetscTime(): 5.00679e-07 Average time for MPI_Barrier(): 0.000130177 Average time for zero size MPI_Send(): 2.22325e-05 #PETSc Option Table entries: -ksp_monitor -ksp_rtol 1.0e-7 -ksp_view -log_summary -pc_type gamg #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure run at: Wed Aug 29 14:54:25 2012 Configure options: --prefix=/work/zlwei/PETSc --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich ----------------------------------------- Libraries compiled on Wed Aug 29 14:54:25 2012 on firefox.bioinfo.ittc.ku.edu Machine characteristics: Linux-2.6.18-92.1.13.el5-x86_64-with-redhat-5.2-Final Using PETSc directory: /nfs/work/zlwei/PETSc/petsc-dev Using PETSc arch: arch-linux2-c-debug ----------------------------------------- Using C compiler: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline -O0 ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpif90 -Wall -Wno-unused-variable -g ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/include -I/nfs/work/zlwei/PETSc/petsc-dev/include -I/nfs/work/zlwei/PETSc/petsc-dev/include 
-I/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/include ----------------------------------------- Using C linker: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpicc Using Fortran linker: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpif90 Using libraries: -Wl,-rpath,/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -L/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -L/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -lflapack -lfblas -lm -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpichf90 -lgfortran -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lgcc_s -ldl ----------------------------------------- -------------- next part -------------- 0 KSP Residual norm 3.368148596765e+02 1 KSP Residual norm 7.884667648061e+01 2 KSP Residual norm 3.983220874680e+01 3 KSP Residual norm 2.529465334442e+01 4 KSP Residual norm 1.819644017854e+01 5 KSP Residual norm 1.458219316768e+01 6 KSP Residual norm 1.180412704465e+01 7 KSP Residual norm 9.575897241358e+00 8 KSP Residual norm 8.043264261713e+00 9 KSP Residual norm 7.018519352883e+00 10 KSP Residual norm 6.130476554332e+00 11 KSP Residual norm 5.397263048170e+00 12 KSP Residual norm 4.835967632485e+00 13 KSP Residual norm 4.352028777238e+00 14 KSP Residual norm 3.952712102274e+00 15 KSP Residual norm 3.617454495697e+00 16 KSP Residual norm 3.301898504445e+00 17 KSP Residual norm 3.027795288920e+00 18 KSP Residual norm 2.801330506731e+00 19 KSP Residual norm 2.600088323848e+00 20 KSP Residual norm 2.415396607825e+00 21 KSP Residual norm 2.251635941363e+00 22 KSP Residual norm 2.107710728118e+00 23 KSP Residual norm 1.976827356870e+00 24 KSP Residual norm 1.855722787536e+00 25 KSP Residual norm 1.744283789182e+00 26 KSP Residual norm 1.643979116571e+00 27 KSP Residual norm 1.555420193633e+00 28 KSP Residual norm 1.473610804642e+00 29 KSP Residual norm 1.395581630789e+00 30 KSP Residual norm 1.324035740635e+00 31 KSP Residual norm 1.289620876123e+00 32 KSP Residual norm 1.254163025674e+00 33 KSP Residual norm 1.218122418418e+00 34 KSP Residual norm 1.181980463870e+00 35 KSP Residual norm 1.145084266546e+00 36 KSP Residual norm 1.107771307421e+00 37 KSP Residual norm 1.070515918578e+00 38 KSP Residual norm 1.033345073830e+00 39 KSP Residual norm 9.957326457777e-01 40 KSP Residual norm 9.574090681547e-01 41 KSP Residual norm 9.206214332288e-01 42 KSP Residual norm 8.860402343860e-01 43 KSP Residual norm 8.519570071849e-01 44 KSP Residual norm 8.160171689292e-01 45 KSP Residual norm 7.780106459520e-01 46 KSP Residual norm 7.384806314831e-01 47 KSP Residual norm 7.011353257545e-01 48 KSP Residual norm 6.660776428339e-01 49 KSP Residual norm 6.290570146661e-01 50 KSP Residual norm 5.898901582810e-01 51 KSP Residual norm 5.530351127027e-01 52 KSP Residual norm 5.150136349357e-01 53 KSP Residual norm 4.769315717084e-01 54 KSP Residual norm 4.418695812249e-01 55 KSP Residual norm 4.055730558383e-01 56 KSP Residual norm 3.701573952578e-01 57 KSP Residual norm 3.405955774779e-01 58 KSP Residual norm 3.138542961303e-01 59 KSP Residual norm 2.904777931959e-01 60 KSP Residual norm 2.721221568117e-01 61 KSP Residual norm 2.637111203338e-01 62 KSP Residual norm 2.555282288716e-01 63 KSP Residual norm 2.457633020644e-01 64 KSP Residual norm 2.312064551884e-01 65 KSP Residual norm 2.168687024333e-01 66 KSP Residual norm 2.021462808115e-01 67 KSP Residual norm 1.882502502712e-01 68 KSP Residual norm 1.726156627947e-01 69 KSP Residual norm 
1.580948063184e-01 70 KSP Residual norm 1.450188243537e-01 71 KSP Residual norm 1.329090943840e-01 72 KSP Residual norm 1.224648275772e-01 73 KSP Residual norm 1.134430099592e-01 74 KSP Residual norm 1.062156089215e-01 75 KSP Residual norm 1.009515456891e-01 76 KSP Residual norm 9.677088026876e-02 77 KSP Residual norm 9.330462638461e-02 78 KSP Residual norm 9.014986471375e-02 79 KSP Residual norm 8.728725736359e-02 80 KSP Residual norm 8.474425436748e-02 81 KSP Residual norm 8.239085729749e-02 82 KSP Residual norm 8.004171069055e-02 83 KSP Residual norm 7.754583057709e-02 84 KSP Residual norm 7.503607926802e-02 85 KSP Residual norm 7.251754709396e-02 86 KSP Residual norm 7.023044020357e-02 87 KSP Residual norm 6.816259549138e-02 88 KSP Residual norm 6.615708630367e-02 89 KSP Residual norm 6.410877480267e-02 90 KSP Residual norm 6.220199693340e-02 91 KSP Residual norm 6.079561000422e-02 92 KSP Residual norm 5.945366054862e-02 93 KSP Residual norm 5.805745558808e-02 94 KSP Residual norm 5.647865842490e-02 95 KSP Residual norm 5.482579066632e-02 96 KSP Residual norm 5.327464699030e-02 97 KSP Residual norm 5.181785368265e-02 98 KSP Residual norm 5.026058189172e-02 99 KSP Residual norm 4.853529822466e-02 100 KSP Residual norm 4.672552830768e-02 101 KSP Residual norm 4.488451706047e-02 102 KSP Residual norm 4.274002202667e-02 103 KSP Residual norm 4.066715145826e-02 104 KSP Residual norm 3.879543017112e-02 105 KSP Residual norm 3.692545546597e-02 106 KSP Residual norm 3.496725881242e-02 107 KSP Residual norm 3.291047034156e-02 108 KSP Residual norm 3.073335561917e-02 109 KSP Residual norm 2.849664394983e-02 110 KSP Residual norm 2.626973767994e-02 111 KSP Residual norm 2.438406681556e-02 112 KSP Residual norm 2.253397732039e-02 113 KSP Residual norm 2.054090707797e-02 114 KSP Residual norm 1.890158567808e-02 115 KSP Residual norm 1.742573595014e-02 116 KSP Residual norm 1.629263326782e-02 117 KSP Residual norm 1.532685125519e-02 118 KSP Residual norm 1.465738126879e-02 119 KSP Residual norm 1.408292302474e-02 120 KSP Residual norm 1.353341128860e-02 121 KSP Residual norm 1.310193200424e-02 122 KSP Residual norm 1.268439577064e-02 123 KSP Residual norm 1.221270299177e-02 124 KSP Residual norm 1.153059978148e-02 125 KSP Residual norm 1.084782525529e-02 126 KSP Residual norm 1.008849505102e-02 127 KSP Residual norm 9.358912388407e-03 128 KSP Residual norm 8.541223977083e-03 129 KSP Residual norm 7.865522556463e-03 130 KSP Residual norm 7.286455417054e-03 131 KSP Residual norm 6.746096551092e-03 132 KSP Residual norm 6.265169639034e-03 133 KSP Residual norm 5.831333351878e-03 134 KSP Residual norm 5.460797382663e-03 135 KSP Residual norm 5.177705767837e-03 136 KSP Residual norm 4.933073975857e-03 137 KSP Residual norm 4.721768175681e-03 138 KSP Residual norm 4.524437438027e-03 139 KSP Residual norm 4.341955963482e-03 140 KSP Residual norm 4.188429974280e-03 141 KSP Residual norm 4.043862515122e-03 142 KSP Residual norm 3.918929706117e-03 143 KSP Residual norm 3.814454962740e-03 144 KSP Residual norm 3.716285727000e-03 145 KSP Residual norm 3.616166834928e-03 146 KSP Residual norm 3.517411257480e-03 147 KSP Residual norm 3.424011069705e-03 148 KSP Residual norm 3.333161789233e-03 149 KSP Residual norm 3.238552146236e-03 150 KSP Residual norm 3.148952887727e-03 151 KSP Residual norm 3.071169436807e-03 152 KSP Residual norm 2.995279685803e-03 153 KSP Residual norm 2.918890614973e-03 154 KSP Residual norm 2.841924652276e-03 155 KSP Residual norm 2.767745676000e-03 156 KSP Residual norm 
2.709880325144e-03 157 KSP Residual norm 2.655188186095e-03 158 KSP Residual norm 2.594800880316e-03 159 KSP Residual norm 2.511751585705e-03 160 KSP Residual norm 2.418560169069e-03 161 KSP Residual norm 2.323693463105e-03 162 KSP Residual norm 2.202281878316e-03 163 KSP Residual norm 2.081222945431e-03 164 KSP Residual norm 1.973162638634e-03 165 KSP Residual norm 1.862310198198e-03 166 KSP Residual norm 1.749849970665e-03 167 KSP Residual norm 1.633061071294e-03 168 KSP Residual norm 1.521124032557e-03 169 KSP Residual norm 1.408323314143e-03 170 KSP Residual norm 1.293048175498e-03 171 KSP Residual norm 1.198666442585e-03 172 KSP Residual norm 1.104292340266e-03 173 KSP Residual norm 9.903517641547e-04 174 KSP Residual norm 9.006840819784e-04 175 KSP Residual norm 8.220140598814e-04 176 KSP Residual norm 7.701687567256e-04 177 KSP Residual norm 7.277976116145e-04 178 KSP Residual norm 7.000140296237e-04 179 KSP Residual norm 6.773749425038e-04 180 KSP Residual norm 6.550713166809e-04 181 KSP Residual norm 6.359176664418e-04 182 KSP Residual norm 6.168972906949e-04 183 KSP Residual norm 5.950139987555e-04 184 KSP Residual norm 5.622068365562e-04 185 KSP Residual norm 5.273547552299e-04 186 KSP Residual norm 4.818810755826e-04 187 KSP Residual norm 4.384533123217e-04 188 KSP Residual norm 3.907464303241e-04 189 KSP Residual norm 3.586153163812e-04 190 KSP Residual norm 3.336501378016e-04 191 KSP Residual norm 3.104406491737e-04 192 KSP Residual norm 2.901902786842e-04 193 KSP Residual norm 2.716799088525e-04 194 KSP Residual norm 2.537323499807e-04 195 KSP Residual norm 2.393026806501e-04 196 KSP Residual norm 2.263905103671e-04 197 KSP Residual norm 2.157021715528e-04 198 KSP Residual norm 2.057920256098e-04 199 KSP Residual norm 1.969756726158e-04 200 KSP Residual norm 1.900711849490e-04 201 KSP Residual norm 1.834129243787e-04 202 KSP Residual norm 1.778746265038e-04 203 KSP Residual norm 1.740906692660e-04 204 KSP Residual norm 1.706199678117e-04 205 KSP Residual norm 1.668919206574e-04 206 KSP Residual norm 1.622105706626e-04 207 KSP Residual norm 1.572060039751e-04 208 KSP Residual norm 1.518496090051e-04 209 KSP Residual norm 1.463692706552e-04 210 KSP Residual norm 1.418011930173e-04 211 KSP Residual norm 1.377765629498e-04 212 KSP Residual norm 1.334751880052e-04 213 KSP Residual norm 1.291286531826e-04 214 KSP Residual norm 1.251832711544e-04 215 KSP Residual norm 1.218001179107e-04 216 KSP Residual norm 1.196976772828e-04 217 KSP Residual norm 1.177303742190e-04 218 KSP Residual norm 1.155947299539e-04 219 KSP Residual norm 1.122085992182e-04 220 KSP Residual norm 1.083653641467e-04 221 KSP Residual norm 1.043599353987e-04 222 KSP Residual norm 9.863046990455e-05 223 KSP Residual norm 9.288192270074e-05 224 KSP Residual norm 8.783813676585e-05 225 KSP Residual norm 8.246551518283e-05 226 KSP Residual norm 7.720911064459e-05 227 KSP Residual norm 7.171219551448e-05 228 KSP Residual norm 6.709483001659e-05 229 KSP Residual norm 6.232136142160e-05 230 KSP Residual norm 5.725941336029e-05 231 KSP Residual norm 5.315634584336e-05 232 KSP Residual norm 4.868189668074e-05 233 KSP Residual norm 4.242157170447e-05 234 KSP Residual norm 3.768648100701e-05 235 KSP Residual norm 3.337401865099e-05 KSP Object: 8 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000 tolerances: relative=1e-07, absolute=1e-50, divergence=10000 left preconditioning 
using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: 8 MPI processes type: bjacobi block Jacobi: number of blocks = 8 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (sub_) 1 MPI processes type: ilu ILU: out-of-place factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 using diagonal shift to prevent zero pivot matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=250000, cols=250000 package used to perform factorization: petsc total: nonzeros=1725000, allocated nonzeros=1725000 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=250000, cols=250000 total: nonzeros=1725000, allocated nonzeros=1725000 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=2000000, cols=2000000 total: nonzeros=13900000, allocated nonzeros=13900000 total number of mallocs used during MatSetValues calls =0 Residual norm 4.10372e-07 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./ex45 on a arch-linux2-c-debug named compute-5-2.local with 8 processors, by zlwei Fri Sep 14 18:02:30 2012 Using Petsc Development HG revision: 98bf11863c3be31b7c2af504314a500bc64d88c9 HG Date: Wed Aug 29 13:51:08 2012 -0500 Max Max/Min Avg Total Time (sec): 6.489e+01 1.00001 6.489e+01 Objects: 7.400e+01 1.00000 7.400e+01 Flops: 5.455e+09 1.00001 5.455e+09 4.364e+10 Flops/sec: 8.407e+07 1.00001 8.407e+07 6.725e+08 Memory: 1.397e+08 1.00000 1.118e+09 MPI Messages: 7.480e+02 1.00809 7.428e+02 5.942e+03 MPI Message Lengths: 2.440e+07 1.00000 3.285e+04 1.952e+08 MPI Reductions: 4.972e+03 1.00040 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 6.4885e+01 100.0% 4.3638e+10 100.0% 5.942e+03 100.0% 3.285e+04 100.0% 4.969e+03 99.9% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. 
Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ ########################################################## # # # WARNING!!! # # # # This code was compiled with a debugging option, # # To get timing results run ./configure # # using --with-debugging=no, the performance will # # be generally two or three times faster. # # # ########################################################## Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage KSPGMRESOrthog 235 1.0 2.4753e+01 1.0 3.58e+09 1.0 0.0e+00 0.0e+00 3.8e+03 38 66 0 0 77 38 66 0 0 77 1157 KSPSetUp 2 1.0 2.5209e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+01 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 1 1.0 6.4022e+01 1.0 5.45e+09 1.0 5.9e+03 3.3e+04 4.9e+03 99100 99 99 99 99100 99 99 99 681 VecMDot 235 1.0 1.1786e+01 1.1 1.79e+09 1.0 0.0e+00 0.0e+00 2.4e+02 18 33 0 0 5 18 33 0 0 5 1215 VecNorm 244 1.0 7.7111e-01 1.4 1.22e+08 1.0 0.0e+00 0.0e+00 2.4e+02 1 2 0 0 5 1 2 0 0 5 1266 VecScale 243 1.0 2.0378e-01 1.0 6.08e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 2385 VecCopy 8 1.0 4.7706e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 253 1.0 6.1563e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecAXPY 16 1.0 8.1648e-02 1.0 8.00e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 784 VecMAXPY 243 1.0 1.3352e+01 1.0 1.91e+09 1.0 0.0e+00 0.0e+00 0.0e+00 20 35 0 0 0 20 35 0 0 0 1143 VecScatterBegin 243 1.0 1.3329e-01 1.0 0.00e+00 0.0 5.8e+03 3.3e+04 0.0e+00 0 0 98100 0 0 0 98100 0 0 VecScatterEnd 243 1.0 1.6686e-01 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 243 1.0 9.7965e-01 1.3 1.82e+08 1.0 0.0e+00 0.0e+00 2.4e+02 1 3 0 0 5 1 3 0 0 5 1488 MatMult 243 1.0 1.0068e+01 1.0 7.84e+08 1.0 5.8e+03 3.3e+04 0.0e+00 15 14 98100 0 15 14 98100 0 623 MatSolve 243 1.0 9.0452e+00 1.0 7.78e+08 1.0 0.0e+00 0.0e+00 0.0e+00 14 14 0 0 0 14 14 0 0 0 688 MatLUFactorNum 1 1.0 1.2613e-01 1.0 5.19e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 328 MatILUFactorSym 1 1.0 1.2127e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyBegin 2 1.0 5.0238e-02 6.8 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 2 1.0 9.5135e-02 1.0 0.00e+00 0.0 4.8e+01 8.3e+03 2.3e+01 0 0 1 0 0 0 0 1 0 0 0 MatGetRowIJ 1 1.0 7.1526e-06 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 1 1.0 4.3743e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatView 3 3.0 5.2476e-04 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCSetUp 2 1.0 2.9335e-01 1.0 5.19e+06 1.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 0 0 0 0 0 0 141 PCSetUpOnBlocks 1 1.0 2.9270e-01 1.0 5.19e+06 1.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 141 PCApply 243 1.0 1.3113e+01 1.0 7.78e+08 1.0 0.0e+00 0.0e+00 4.9e+02 20 14 0 0 10 20 14 0 0 10 474 
------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Container 1 1 548 0 Krylov Solver 2 2 19360 0 Vector 43 43 74164072 0 Vector Scatter 3 3 3108 0 Matrix 4 4 53610212 0 Distributed Mesh 2 2 2111040 0 Bipartite Graph 4 4 2736 0 Index Set 10 10 2107424 0 IS L to G Mapping 1 1 1051368 0 Preconditioner 2 2 1784 0 Viewer 2 1 712 0 ======================================================================================================================== Average time to get PetscTime(): 5.96046e-07 Average time for MPI_Barrier(): 0.000142002 Average time for zero size MPI_Send(): 0.00040701 #PETSc Option Table entries: -ksp_monitor -ksp_rtol 1.0e-7 -ksp_view -log_summary #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure run at: Wed Aug 29 14:54:25 2012 Configure options: --prefix=/work/zlwei/PETSc --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich ----------------------------------------- Libraries compiled on Wed Aug 29 14:54:25 2012 on firefox.bioinfo.ittc.ku.edu Machine characteristics: Linux-2.6.18-92.1.13.el5-x86_64-with-redhat-5.2-Final Using PETSc directory: /nfs/work/zlwei/PETSc/petsc-dev Using PETSc arch: arch-linux2-c-debug ----------------------------------------- Using C compiler: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline -O0 ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpif90 -Wall -Wno-unused-variable -g ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/include -I/nfs/work/zlwei/PETSc/petsc-dev/include -I/nfs/work/zlwei/PETSc/petsc-dev/include -I/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/include ----------------------------------------- Using C linker: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpicc Using Fortran linker: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpif90 Using libraries: -Wl,-rpath,/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -L/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -L/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -lflapack -lfblas -lm -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpichf90 -lgfortran -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lgcc_s -ldl ----------------------------------------- -------------- next part -------------- 0 KSP Residual norm 3.783525866986e+02 1 KSP Residual norm 1.355893441623e+02 2 KSP Residual norm 7.589085973838e+01 3 KSP Residual norm 4.823882044088e+01 4 KSP Residual norm 3.576939460641e+01 5 KSP Residual norm 2.568586752716e+01 6 KSP Residual norm 1.911353651617e+01 7 KSP Residual norm 1.476612759706e+01 8 KSP Residual norm 1.120103269246e+01 9 KSP Residual norm 8.447905007266e+00 10 KSP Residual norm 6.370754282832e+00 11 KSP Residual norm 4.663740863807e+00 12 KSP Residual norm 3.270563368805e+00 13 KSP Residual norm 2.221723082951e+00 14 KSP Residual norm 1.499655110516e+00 15 KSP Residual norm 
1.025805172424e+00 16 KSP Residual norm 6.958772552651e-01 17 KSP Residual norm 4.398302154107e-01 18 KSP Residual norm 2.533473339850e-01 19 KSP Residual norm 1.446856653276e-01 20 KSP Residual norm 8.823403825208e-02 21 KSP Residual norm 5.562369474397e-02 22 KSP Residual norm 3.414214762893e-02 23 KSP Residual norm 2.080524442410e-02 24 KSP Residual norm 1.195406832279e-02 25 KSP Residual norm 6.116395185712e-03 26 KSP Residual norm 3.571727881359e-03 27 KSP Residual norm 2.211651069789e-03 28 KSP Residual norm 1.307637746982e-03 29 KSP Residual norm 8.576482161323e-04 30 KSP Residual norm 6.057261603377e-04 31 KSP Residual norm 5.157507148603e-04 32 KSP Residual norm 3.933614801888e-04 33 KSP Residual norm 2.865687664919e-04 34 KSP Residual norm 1.847542136621e-04 35 KSP Residual norm 1.141737708009e-04 36 KSP Residual norm 6.706587799191e-05 37 KSP Residual norm 4.120603316253e-05 38 KSP Residual norm 2.698388463745e-05 KSP Object: 8 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000 tolerances: relative=1e-07, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: 8 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=5 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 8 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 8 MPI processes type: bjacobi block Jacobi: number of blocks = 8 Local solve info for each block is in the following KSP and PC objects: [0] number of local blocks = 1, first local block number = 0 [0] local block number 0 KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 KSP Object: left preconditioning KSP Object: (mg_coarse_sub_) KSP Object: (mg_coarse_sub_) 1 MPI processes KSP Object: (mg_coarse_sub_) 1 MPI processes KSP Object: (mg_coarse_sub_) 1 MPI processes KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes (mg_coarse_sub_) 1 MPI processes type: preonly 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero type: preonly maximum iterations=10000, initial guess is zero type: preonly maximum iterations=10000, initial guess is zero type: preonly maximum iterations=10000, initial guess is zero maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 type: lu LU: out-of-place factorization maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test left preconditioning using NONE norm type for convergence test PC Object: tolerance for zero pivot 2.22045e-14 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning 
using NONE norm type for convergence test tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: PC Object: (mg_coarse_sub_) 1 MPI processes tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes type: lu matrix ordering: nd factor fill ratio given 5, needed 2.91134 Factored matrix follows: using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes PC Object: (mg_coarse_sub_) 1 MPI processes type: lu (mg_coarse_sub_) 1 MPI processes type: lu (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 Matrix Object: type: lu LU: out-of-place factorization LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd matrix ordering: nd factor fill ratio given 5, needed 0 Factored matrix follows: 1 MPI processes type: seqaij tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 0 factor fill ratio given 5, needed 0 Factored matrix follows: matrix ordering: nd factor fill ratio given 5, needed 0 factor fill ratio given 5, needed 0 Factored matrix follows: rows=718, cols=718 Factored matrix follows: Factored matrix follows: Matrix Object: 1 MPI processes package used to perform factorization: petsc Matrix Object: type: seqaij total: nonzeros=18324, allocated nonzeros=18324 Matrix Object: 1 MPI processes 1 MPI processes type: seqaij Matrix Object: 1 MPI processes Matrix Object: 1 MPI processes type: seqaij total number of mallocs used during MatSetValues calls =0 type: seqaij rows=0, cols=0 type: seqaij rows=0, cols=0 rows=0, cols=0 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 not using I-node routines rows=0, cols=0 rows=0, cols=0 total number of mallocs used during MatSetValues calls =0 package used to perform factorization: petsc package used to perform factorization: petsc package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 linear system matrix = precond matrix: package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 total number of mallocs used during MatSetValues calls =0 not using I-node routines Matrix Object: 1 MPI processes type: seqaij total number of mallocs used during MatSetValues calls =0 linear system matrix = precond matrix: not using I-node routines not using I-node routines not using I-node routines rows=718, cols=718 total: nonzeros=6294, allocated nonzeros=6294 not using I-node routines Matrix Object: 1 MPI processes type: seqaij total number of mallocs used during MatSetValues calls =0 linear system matrix = precond matrix: linear system matrix = precond matrix: linear system matrix = precond matrix: Matrix Object: not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij Matrix Object: 1 MPI processes 1 MPI processes type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 not using 
I-node routines - - - - - - - - - - - - - - - - - - Matrix Object: 1 MPI processes type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 type: seqaij rows=0, cols=0 rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 rows=0, cols=0 total number of mallocs used during MatSetValues calls =0 total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 total: nonzeros=0, allocated nonzeros=0 not using I-node routines total number of mallocs used during MatSetValues calls =0 not using I-node routines total number of mallocs used during MatSetValues calls =0 not using I-node routines not using I-node routines type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 0 Factored matrix follows: KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test Matrix Object: PC Object: (mg_coarse_sub_) 1 MPI processes 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 type: seqaij matrix ordering: nd factor fill ratio given 5, needed 0 Factored matrix follows: rows=0, cols=0 Matrix Object: 1 MPI processes type: seqaij package used to perform factorization: petsc rows=0, cols=0 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: not using I-node routines Matrix Object: 1 MPI processes type: seqaij rows=0, cols=0 linear system matrix = precond matrix: total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 not using I-node routines Matrix Object: 1 MPI processes type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 not using I-node routines [1] number of local blocks = 1, first local block number = 1 [1] local block number 0 - - - - - - - - - - - - - - - - - - [2] number of local blocks = 1, first local block number = 2 [2] local block number 0 - - - - - - - - - - - - - - - - - - [3] number of local blocks = 1, first local block number = 3 [3] local block number 0 - - - - - - - - - - - - - - - - - - [4] number of local blocks = 1, first local block number = 4 [4] local block number 0 - - - - - - - - - - - - - - - - - - [5] number of local blocks = 1, first local block number = 5 [5] local block number 0 - - - - - - - - - - - - - - - - - - [6] number of local blocks = 1, first local block number = 6 [6] local block number 0 - - - - - - - - - - - - - - - - - - [7] number of local blocks = 1, first local block number = 7 [7] local block number 0 - - - - - - - - - - - - - - - - - - linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=718, cols=718 total: nonzeros=6294, allocated nonzeros=6294 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 8 MPI processes type: chebyshev Chebyshev: eigenvalue estimates: min = 0.0710112, max = 1.49123 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning 
using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 8 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=3821, cols=3821 total: nonzeros=31479, allocated nonzeros=31479 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (mg_levels_2_) 8 MPI processes type: chebyshev Chebyshev: eigenvalue estimates: min = 0.071608, max = 1.50377 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_2_) 8 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=23143, cols=23143 total: nonzeros=230755, allocated nonzeros=230755 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (mg_levels_3_) 8 MPI processes type: chebyshev Chebyshev: eigenvalue estimates: min = 0.075852, max = 1.59289 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_3_) 8 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=224819, cols=224819 total: nonzeros=2625247, allocated nonzeros=2625247 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (mg_levels_4_) 8 MPI processes type: chebyshev Chebyshev: eigenvalue estimates: min = 0.0977111, max = 2.05193 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_4_) 8 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=2000000, cols=2000000 total: nonzeros=13900000, allocated nonzeros=13900000 total number of mallocs used during MatSetValues calls =0 Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 8 MPI processes type: mpiaij rows=2000000, cols=2000000 total: nonzeros=13900000, allocated nonzeros=13900000 total number of mallocs used during MatSetValues calls =0 Residual norm 4.49376e-07 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./ex45 on a arch-linux2-c-debug named compute-5-2.local with 8 processors, by zlwei Fri Sep 14 17:59:53 2012 Using Petsc Development HG revision: 98bf11863c3be31b7c2af504314a500bc64d88c9 HG Date: Wed Aug 29 13:51:08 2012 -0500 Max Max/Min Avg Total Time (sec): 7.150e+01 1.00001 7.150e+01 Objects: 4.570e+02 1.00000 4.570e+02 Flops: 2.338e+09 1.00241 2.335e+09 1.868e+10 Flops/sec: 3.270e+07 1.00241 3.266e+07 2.613e+08 Memory: 2.987e+08 1.00000 2.390e+09 MPI Messages: 7.807e+03 1.08415 7.544e+03 6.035e+04 MPI Message Lengths: 4.598e+07 1.00776 6.071e+03 3.664e+08 MPI Reductions: 6.099e+03 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 7.1498e+01 100.0% 1.8683e+10 100.0% 6.035e+04 100.0% 6.071e+03 100.0% 6.098e+03 100.0% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ ########################################################## # # # WARNING!!! # # # # This code was compiled with a debugging option, # # To get timing results run ./configure # # using --with-debugging=no, the performance will # # be generally two or three times faster. 
# # # ########################################################## Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage KSPGMRESOrthog 118 1.0 3.8889e+00 1.0 5.63e+08 1.0 0.0e+00 0.0e+00 8.8e+02 5 24 0 0 14 5 24 0 0 14 1158 KSPSetUp 11 1.0 9.3905e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+01 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 1 1.0 7.0933e+01 1.0 2.33e+09 1.0 6.0e+04 6.1e+03 6.0e+03 99100100100 99 99100100100 99 263 VecMDot 118 1.0 1.8252e+00 1.0 2.82e+08 1.0 0.0e+00 0.0e+00 1.2e+02 3 12 0 0 2 3 12 0 0 2 1234 VecNorm 165 1.0 1.4453e-01 1.2 2.68e+07 1.0 0.0e+00 0.0e+00 1.6e+02 0 1 0 0 3 0 1 0 0 3 1478 VecScale 804 1.0 1.8971e-01 1.0 5.82e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 2452 VecCopy 206 1.0 2.8914e-01 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 744 1.0 1.7086e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 1328 1.0 1.5434e+00 1.0 1.83e+08 1.0 0.0e+00 0.0e+00 0.0e+00 2 8 0 0 0 2 8 0 0 0 947 VecAYPX 1280 1.0 1.3966e+00 1.0 1.13e+08 1.0 0.0e+00 0.0e+00 0.0e+00 2 5 0 0 0 2 5 0 0 0 645 VecMAXPY 164 1.0 2.1316e+00 1.0 3.06e+08 1.0 0.0e+00 0.0e+00 0.0e+00 3 13 0 0 0 3 13 0 0 0 1149 VecAssemblyBegin 54 1.0 9.9215e-02 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 1.6e+02 0 0 0 0 3 0 0 0 0 3 0 VecAssemblyEnd 54 1.0 4.9758e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecPointwiseMult 1004 1.0 2.0735e+00 1.1 7.07e+07 1.0 0.0e+00 0.0e+00 0.0e+00 3 3 0 0 0 3 3 0 0 0 273 VecScatterBegin 1458 1.0 3.4748e-01 1.1 0.00e+00 0.0 5.6e+04 6.0e+03 0.0e+00 0 0 94 93 0 0 0 94 93 0 0 VecScatterEnd 1458 1.0 2.9766e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSetRandom 4 1.0 3.8469e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecNormalize 164 1.0 1.9133e-01 1.1 3.95e+07 1.0 0.0e+00 0.0e+00 1.6e+02 0 2 0 0 3 0 2 0 0 3 1644 MatMult 1080 1.0 1.5564e+01 1.0 1.11e+09 1.0 4.4e+04 6.9e+03 0.0e+00 22 47 73 83 0 22 47 73 83 0 569 MatMultAdd 160 1.0 7.8439e-01 1.0 2.25e+07 1.0 4.8e+03 9.2e+02 0.0e+00 1 1 8 1 0 1 1 8 1 0 229 MatMultTranspose 160 1.0 7.3742e-01 1.0 2.25e+07 1.0 4.8e+03 9.2e+02 3.2e+02 1 1 8 1 5 1 1 8 1 5 244 MatSolve 80 0.0 1.2254e-02 0.0 2.87e+06 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 235 MatLUFactorSym 1 1.0 1.6282e-0317.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatLUFactorNum 1 1.0 2.3890e-03107.7 3.37e+05 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 141 MatConvert 4 1.0 1.1830e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+01 0 0 0 0 0 0 0 0 0 0 0 MatScale 4 1.0 8.0437e-02 1.0 4.21e+06 1.0 1.7e+02 6.3e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 417 MatAssemblyBegin 50 1.0 1.9675e-01 2.4 0.00e+00 0.0 4.3e+02 1.2e+03 5.6e+01 0 0 1 0 1 0 0 1 0 1 0 MatAssemblyEnd 50 1.0 8.5181e-01 1.0 0.00e+00 0.0 1.5e+03 1.5e+03 4.0e+02 1 0 3 1 7 1 0 3 1 7 0 MatGetRow 563366 1.0 2.0994e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0 MatGetRowIJ 1 0.0 1.4687e-04 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 1 0.0 1.5841e-03 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.5e-01 0 0 0 0 0 0 0 0 0 0 0 MatCoarsen 4 1.0 1.4392e+00 1.0 0.00e+00 0.0 1.4e+03 9.7e+03 1.2e+02 2 0 2 4 2 2 0 2 4 2 0 MatView 8 1.0 4.0925e-03 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatPtAP 4 1.0 8.0547e-01 1.1 1.25e+07 1.0 
1.5e+03 2.3e+03 2.3e+02 1 1 2 1 4 1 1 2 1 4 123 MatPtAPSymbolic 4 1.0 5.3591e-01 1.1 0.00e+00 0.0 1.3e+03 2.1e+03 2.0e+02 1 0 2 1 3 1 0 2 1 3 0 MatPtAPNumeric 4 1.0 2.6955e-01 1.0 1.25e+07 1.0 1.4e+02 4.7e+03 2.4e+01 0 1 0 0 0 0 1 0 0 0 368 MatTrnMatMult 4 1.0 5.9091e+00 1.0 1.28e+08 1.0 1.1e+03 1.9e+04 2.5e+02 8 5 2 6 4 8 5 2 6 4 172 MatGetLocalMat 12 1.0 2.5454e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 3.2e+01 0 0 0 0 1 0 0 0 0 1 0 MatGetBrAoCol 4 1.0 6.7071e-02 2.8 0.00e+00 0.0 5.2e+02 4.2e+03 1.6e+01 0 0 1 1 0 0 0 1 1 0 0 MatGetSymTrans 8 1.0 1.0458e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCSetUp 2 1.0 2.9746e+01 1.0 2.64e+08 1.0 8.4e+03 7.7e+03 1.7e+03 42 11 14 18 29 42 11 14 18 29 71 PCSetUpOnBlocks 40 1.0 6.5384e-03 5.5 3.37e+05 0.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 0 0 0 0 0 0 52 PCApply 40 1.0 2.9190e+01 1.0 1.39e+09 1.0 5.1e+04 5.3e+03 3.6e+03 41 60 84 73 59 41 60 84 73 59 381 PCGAMGgraph_AGG 4 1.0 1.2728e+01 1.0 4.21e+06 1.0 5.2e+02 3.1e+03 1.9e+02 18 0 1 0 3 18 0 1 0 3 3 PCGAMGcoarse_AGG 4 1.0 1.1054e+01 1.0 1.28e+08 1.0 3.5e+03 1.2e+04 4.7e+02 15 5 6 12 8 15 5 6 12 8 92 PCGAMGProl_AGG 4 1.0 2.4394e+00 1.0 0.00e+00 0.0 1.1e+03 4.3e+03 2.0e+02 3 0 2 1 3 3 0 2 1 3 0 PCGAMGPOpt_AGG 4 1.0 3.3855e-05 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Container 1 1 548 0 Krylov Solver 11 11 162856 0 Vector 235 235 142154440 0 Vector Scatter 26 26 26936 0 Matrix 92 92 249866276 0 Matrix Coarsen 4 4 2448 0 Distributed Mesh 2 2 2111040 0 Bipartite Graph 4 4 2736 0 Index Set 64 64 1375492 0 IS L to G Mapping 1 1 1051368 0 Preconditioner 11 11 10092 0 Viewer 2 1 712 0 PetscRandom 4 4 2432 0 ======================================================================================================================== Average time to get PetscTime(): 5.96046e-07 Average time for MPI_Barrier(): 0.000152588 Average time for zero size MPI_Send(): 7.42376e-05 #PETSc Option Table entries: -ksp_monitor -ksp_rtol 1.0e-7 -ksp_view -log_summary -pc_type gamg #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure run at: Wed Aug 29 14:54:25 2012 Configure options: --prefix=/work/zlwei/PETSc --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich ----------------------------------------- Libraries compiled on Wed Aug 29 14:54:25 2012 on firefox.bioinfo.ittc.ku.edu Machine characteristics: Linux-2.6.18-92.1.13.el5-x86_64-with-redhat-5.2-Final Using PETSc directory: /nfs/work/zlwei/PETSc/petsc-dev Using PETSc arch: arch-linux2-c-debug ----------------------------------------- Using C compiler: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline -O0 ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpif90 -Wall -Wno-unused-variable -g ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/include -I/nfs/work/zlwei/PETSc/petsc-dev/include -I/nfs/work/zlwei/PETSc/petsc-dev/include 
-I/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/include ----------------------------------------- Using C linker: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpicc Using Fortran linker: /nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/bin/mpif90 Using libraries: -Wl,-rpath,/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -L/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -L/nfs/work/zlwei/PETSc/petsc-dev/arch-linux2-c-debug/lib -lflapack -lfblas -lm -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpichf90 -lgfortran -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lgcc_s -ldl ----------------------------------------- From jedbrown at mcs.anl.gov Fri Sep 14 18:11:56 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 14 Sep 2012 18:11:56 -0500 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: <5053B8F5.2050901@gmail.com> References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> <5053B26F.9050609@gmail.com> <5053B8F5.2050901@gmail.com> Message-ID: * * * ##########################################################* * # #* * # WARNING!!! #* * # #* * # This code was compiled with a debugging option, #* * # To get timing results run ./configure #* * # using --with-debugging=no, the performance will #* * # be generally two or three times faster. #* * # #* * ##########################################################* On Fri, Sep 14, 2012 at 6:08 PM, Zhenglun (Alan) Wei wrote: > I'm sorry about that. I attached the output files here with ' > -ksp_monitor -ksp_view -log_summary'. They are named after the grid size > and pc-type. > > cheers, > Alan > > On 9/14/2012 5:51 PM, Jed Brown wrote: > > On Fri, Sep 14, 2012 at 5:49 PM, Matthew Knepley wrote: > >> On Fri, Sep 14, 2012 at 5:40 PM, Zhenglun (Alan) Wei < >> zhenglun.wei at gmail.com> wrote: >> >>> Dear folks, >>> I did some test with -pc_type gamg with >>> /src/ksp/ksp/example/tutorial/ex45.c. It is not as good as default -pc_type >>> when my mesh (Cartisian) is 100*50*50; while it is a little bit better than >>> the default one when the mesh is 200*100*100. Therefore, I guess this type >>> of pc is good for larger problem. Is that ture? or is there any rule of >>> thumb for this type of preconditioner? BTW, I tested it with 8 processes. >>> >> >> When asking questions about convergence, always always ALWAYS send the >> output of -ksp_monitor -ksp_view. If >> you don't, we are just guessing blindly. >> > > And -log_summary because this is about performance. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Sep 14 18:14:40 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 14 Sep 2012 18:14:40 -0500 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> <5053B26F.9050609@gmail.com> <5053B8F5.2050901@gmail.com> Message-ID: Also, add -pc_gamg_agg_nsmooths 1 as Mark suggested. It will reduce the iteration count significantly at the expense of somewhat higher setup cost. On Fri, Sep 14, 2012 at 6:11 PM, Jed Brown wrote: > * > * > * ##########################################################* > * # #* > * # WARNING!!! 
#* > * # #* > * # This code was compiled with a debugging option, #* > * # To get timing results run ./configure #* > * # using --with-debugging=no, the performance will #* > * # be generally two or three times faster. #* > * # #* > * ##########################################################* > > On Fri, Sep 14, 2012 at 6:08 PM, Zhenglun (Alan) Wei < > zhenglun.wei at gmail.com> wrote: > >> I'm sorry about that. I attached the output files here with ' >> -ksp_monitor -ksp_view -log_summary'. They are named after the grid size >> and pc-type. >> >> cheers, >> Alan >> >> On 9/14/2012 5:51 PM, Jed Brown wrote: >> >> On Fri, Sep 14, 2012 at 5:49 PM, Matthew Knepley wrote: >> >>> On Fri, Sep 14, 2012 at 5:40 PM, Zhenglun (Alan) Wei < >>> zhenglun.wei at gmail.com> wrote: >>> >>>> Dear folks, >>>> I did some test with -pc_type gamg with >>>> /src/ksp/ksp/example/tutorial/ex45.c. It is not as good as default -pc_type >>>> when my mesh (Cartisian) is 100*50*50; while it is a little bit better than >>>> the default one when the mesh is 200*100*100. Therefore, I guess this type >>>> of pc is good for larger problem. Is that ture? or is there any rule of >>>> thumb for this type of preconditioner? BTW, I tested it with 8 processes. >>>> >>> >>> When asking questions about convergence, always always ALWAYS send the >>> output of -ksp_monitor -ksp_view. If >>> you don't, we are just guessing blindly. >>> >> >> And -log_summary because this is about performance. >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Sep 14 18:17:25 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 14 Sep 2012 18:17:25 -0500 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> <5053B26F.9050609@gmail.com> <5053B8F5.2050901@gmail.com> Message-ID: On Fri, Sep 14, 2012 at 6:14 PM, Jed Brown wrote: > Also, add -pc_gamg_agg_nsmooths 1 as Mark suggested. It will reduce the > iteration count significantly at the expense of somewhat higher setup cost. Unless I am reading this wrong, you have a bizarre coarse grid solver. It should be redundant LU but instead you have GMRES/BJacobi? Matt > > On Fri, Sep 14, 2012 at 6:11 PM, Jed Brown wrote: > >> * >> * >> * ##########################################################* >> * # #* >> * # WARNING!!! #* >> * # #* >> * # This code was compiled with a debugging option, #* >> * # To get timing results run ./configure #* >> * # using --with-debugging=no, the performance will #* >> * # be generally two or three times faster. #* >> * # #* >> * ##########################################################* >> >> On Fri, Sep 14, 2012 at 6:08 PM, Zhenglun (Alan) Wei < >> zhenglun.wei at gmail.com> wrote: >> >>> I'm sorry about that. I attached the output files here with ' >>> -ksp_monitor -ksp_view -log_summary'. They are named after the grid size >>> and pc-type. >>> >>> cheers, >>> Alan >>> >>> On 9/14/2012 5:51 PM, Jed Brown wrote: >>> >>> On Fri, Sep 14, 2012 at 5:49 PM, Matthew Knepley wrote: >>> >>>> On Fri, Sep 14, 2012 at 5:40 PM, Zhenglun (Alan) Wei < >>>> zhenglun.wei at gmail.com> wrote: >>>> >>>>> Dear folks, >>>>> I did some test with -pc_type gamg with >>>>> /src/ksp/ksp/example/tutorial/ex45.c. It is not as good as default -pc_type >>>>> when my mesh (Cartisian) is 100*50*50; while it is a little bit better than >>>>> the default one when the mesh is 200*100*100. 
Therefore, I guess this type >>>>> of pc is good for larger problem. Is that ture? or is there any rule of >>>>> thumb for this type of preconditioner? BTW, I tested it with 8 processes. >>>>> >>>> >>>> When asking questions about convergence, always always ALWAYS send >>>> the output of -ksp_monitor -ksp_view. If >>>> you don't, we are just guessing blindly. >>>> >>> >>> And -log_summary because this is about performance. >>> >>> >>> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Sep 14 18:29:14 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 14 Sep 2012 18:29:14 -0500 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> <5053B26F.9050609@gmail.com> <5053B8F5.2050901@gmail.com> Message-ID: On Fri, Sep 14, 2012 at 6:17 PM, Matthew Knepley wrote: > Unless I am reading this wrong, you have a bizarre coarse grid solver. It > should be redundant LU but instead > you have GMRES/BJacobi? > That is Mark's default. The proper coarse grid solve ends up being pretty expensive. Mark set the GAMG default to just do bjacobi because it was scaling better in practice. I don't know where this GMRES is coming from because it doesn't show up in my tests with 3.3. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Fri Sep 14 18:33:42 2012 From: mark.adams at columbia.edu (Mark F. Adams) Date: Fri, 14 Sep 2012 19:33:42 -0400 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> <5053B26F.9050609@gmail.com> <5053B8F5.2050901@gmail.com> Message-ID: <1BE2291F-0F6E-4367-B841-0380CEA83049@columbia.edu> Well this is probably my "bizarre coarse grid solver". I explicitly create the solver after reducing the processor to one proc. PETSc seems to make it a GMRES solver in PCSetUp_MG and I explicitly set it back to PREONLY but it looks like GMRES slipped in anyway. Mark On Sep 14, 2012, at 7:17 PM, Matthew Knepley wrote: > On Fri, Sep 14, 2012 at 6:14 PM, Jed Brown wrote: > Also, add -pc_gamg_agg_nsmooths 1 as Mark suggested. It will reduce the iteration count significantly at the expense of somewhat higher setup cost. > > Unless I am reading this wrong, you have a bizarre coarse grid solver. It should be redundant LU but instead > you have GMRES/BJacobi? > > Matt > > > On Fri, Sep 14, 2012 at 6:11 PM, Jed Brown wrote: > > ########################################################## > # # > # WARNING!!! # > # # > # This code was compiled with a debugging option, # > # To get timing results run ./configure # > # using --with-debugging=no, the performance will # > # be generally two or three times faster. # > # # > ########################################################## > > On Fri, Sep 14, 2012 at 6:08 PM, Zhenglun (Alan) Wei wrote: > I'm sorry about that. I attached the output files here with ' -ksp_monitor -ksp_view -log_summary'. They are named after the grid size and pc-type. 
> > cheers, > Alan > > On 9/14/2012 5:51 PM, Jed Brown wrote: >> On Fri, Sep 14, 2012 at 5:49 PM, Matthew Knepley wrote: >> On Fri, Sep 14, 2012 at 5:40 PM, Zhenglun (Alan) Wei wrote: >> Dear folks, >> I did some test with -pc_type gamg with /src/ksp/ksp/example/tutorial/ex45.c. It is not as good as default -pc_type when my mesh (Cartisian) is 100*50*50; while it is a little bit better than the default one when the mesh is 200*100*100. Therefore, I guess this type of pc is good for larger problem. Is that ture? or is there any rule of thumb for this type of preconditioner? BTW, I tested it with 8 processes. >> >> When asking questions about convergence, always always ALWAYS send the output of -ksp_monitor -ksp_view. If >> you don't, we are just guessing blindly. >> >> And -log_summary because this is about performance. > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Fri Sep 14 18:36:40 2012 From: mark.adams at columbia.edu (Mark F. Adams) Date: Fri, 14 Sep 2012 19:36:40 -0400 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> <5053B26F.9050609@gmail.com> <5053B8F5.2050901@gmail.com> Message-ID: <751FB2A5-535B-4D37-83E6-0B90D6C6ABC6@columbia.edu> On Sep 14, 2012, at 7:29 PM, Jed Brown wrote: > On Fri, Sep 14, 2012 at 6:17 PM, Matthew Knepley wrote: > Unless I am reading this wrong, you have a bizarre coarse grid solver. It should be redundant LU but instead > you have GMRES/BJacobi? > > That is Mark's default. The proper coarse grid solve ends up being pretty expensive. Mark set the GAMG default to just do bjacobi because it was scaling better in practice. I don't know where this GMRES is coming from because it doesn't show up in my tests with 3.3. FYI, The proper coarse grid solve should not be expensive. Redundant coarse grid solves do not make any sense to me unless you do it right and PETSc does not do it right. Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Sat Sep 15 02:16:40 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Sat, 15 Sep 2012 09:16:40 +0200 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> Message-ID: <50542B58.50103@gmail.com> On 14/9/2012 4:38 PM, Mark F. Adams wrote: > > On Sep 14, 2012, at 8:47 AM, Jed Brown > wrote: > >> On Fri, Sep 14, 2012 at 3:47 AM, TAY wee-beng > > wrote: >> >> Hi, >> >> I need to solve the Poisson eqn on my win7 machine. I'm currently >> using BCGS without preconditioner. I can't use HYPRE since I'm >> using Fortran and win7. It's rather slow. >> >> Is there a recommended solver and preconditioner to solve the >> Poisson eqn to get me started? >> >> >> What discretization? >> >> Run with -pc_type gamg to start. > > and > > -pc_gamg_agg_nsmooths 1 I am using non-uniform Cartesian grid to solve. What do you mean by the discretization? I 'm using finite volume method to obtain a system of linear eqns. I tried both -pc_type gamg and -pc_gamg_agg_nsmooths 1 but the solver breaks down. 
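(For concreteness, the run being described here would have been invoked roughly as below. This is a sketch rather than a command taken from the thread: the executable name comes from the error output that follows, the single process matches the rank-0 trace, and bcgs is one of the two KSP types the author mentions trying.)

    mpiexec -n 1 ibm3d_high_Re_staggered_AB2.exe -ksp_type bcgs -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_monitor -ksp_view

The same two GAMG options can equally be hard-wired in the C source before KSPSetFromOptions() is reached, using the petsc-3.3-era two-argument form of PetscOptionsSetValue():

    ierr = PetscOptionsSetValue("-pc_type","gamg");CHKERRQ(ierr);
    ierr = PetscOptionsSetValue("-pc_gamg_agg_nsmooths","1");CHKERRQ(ierr);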
The error msg is: /[0]PETSC ERROR: ------------------------------------------------------------------------// //[0]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero// //[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger// //[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors// //[0]PETSC ERROR: likely location of problem given in stack below// //[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------// //[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,// //[0]PETSC ERROR: INSTEAD the line number of the start of the function// //[0]PETSC ERROR: is given.// //[0]PETSC ERROR: [0] KSPComputeExtremeSingularValues_GMRES line 24 src/ksp/ksp/impls/gmres/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\impls\gmres\gmreig.c// //[0]PETSC ERROR: [0] KSPComputeExtremeSingularValues line 40 src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c// //[0]PETSC ERROR: [0] PCGAMGOptprol_AGG line 1294 src/ksp/pc/impls/gamg/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\impls\gamg\agg.c// //[0]PETSC ERROR: [0] PCSetUp_GAMG line 559 src/ksp/pc/impls/gamg/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\impls\gamg\gamg.c// //[0]PETSC ERROR: [0] PCSetUp line 810 src/ksp/pc/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\INTERF~1\precon.c// //[0]PETSC ERROR: [0] KSPSetUp line 182 src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c// //[0]PETSC ERROR: [0] KSPSolve line 351 src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c// //[0]PETSC ERROR: --------------------- Error Message ------------------------------------// //[0]PETSC ERROR: Signal received!// //[0]PETSC ERROR: ------------------------------------------------------------------------// //[0]PETSC ERROR: Petsc Development HG revision: d560f272ff1b6e7f4e28667ab8b55f0c7408979e HG Date: Wed Jul 25 14:42:03 2012 -0500// //[0]PETSC ERROR: See docs/changes/index.html for recent updates.// //[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.// //[0]PETSC ERROR: See docs/index.html for manual pages.// //[0]PETSC ERROR: ------------------------------------------------------------------------// //[0]PETSC ERROR: C:\Obj_tmp\ibm3d_high_Re_staggered_AB2\Debug\ibm3d_high_Re_staggered_AB2.exe on a petsc-3.3 named USER-PC by User Sat Sep 15 09:12:58 2012// //[0]PETSC ERROR: Libraries linked from /cygdrive/d/wtay/Lib/petsc-3.3-dev_win32_vs2008/lib// //[0]PETSC ERROR: Configure run at Thu Jul 26 11:01:27 2012// //[0]PETSC ERROR: Configure options --with-cc="win32fe cl" --with-fc="win32fe ifort" --with-cxx="win32fe cl" --with-mpi-dir=/cygdrive/c/MPICH2/ --download-f-blas-lapack=1 --prefix=/cygdrive/d/wtay/Lib/petsc-3.3-dev_win32_vs2008 --with-debugging=1 --useThreads=0// //[0]PETSC ERROR: ------------------------------------------------------------------------// //[0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file// //application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0/ Btw, I've used both GMRES and BCGS as the ksp. What other options can I try? Thanks > > (I should make this the default) > > Mark > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Sat Sep 15 05:57:18 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 15 Sep 2012 05:57:18 -0500 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: <50542B58.50103@gmail.com> References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> <50542B58.50103@gmail.com> Message-ID: On Sat, Sep 15, 2012 at 2:16 AM, TAY wee-beng wrote: > I am using non-uniform Cartesian grid to solve. What do you mean by the > discretization? I 'm using finite volume method to obtain a system of > linear eqns. > Is it symmetric? Since you are using petsc-dev, you should pull the latest. Then if the error below persists, run in a debugger with -fp_trap to find what causes the floating point exception. > > I tried both -pc_type gamg and -pc_gamg_agg_nsmooths 1 but the solver > breaks down. The error msg is: > > *[0]PETSC ERROR: > ------------------------------------------------------------------------** > **[0]PETSC ERROR: Caught signal number 8 FPE: Floating Point > Exception,probably divide by zero** > **[0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger** > **[0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find > memory corruption errors** > **[0]PETSC ERROR: likely location of problem given in stack below** > **[0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------** > **[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available,** > **[0]PETSC ERROR: INSTEAD the line number of the start of the > function** > **[0]PETSC ERROR: is given.** > **[0]PETSC ERROR: [0] KSPComputeExtremeSingularValues_GMRES line 24 > src/ksp/ksp/impls/gmres/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\impls\gmres\gmreig.c > ** > **[0]PETSC ERROR: [0] KSPComputeExtremeSingularValues line 40 > src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c > ** > **[0]PETSC ERROR: [0] PCGAMGOptprol_AGG line 1294 > src/ksp/pc/impls/gamg/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\impls\gamg\agg.c > ** > **[0]PETSC ERROR: [0] PCSetUp_GAMG line 559 > src/ksp/pc/impls/gamg/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\impls\gamg\gamg.c > ** > **[0]PETSC ERROR: [0] PCSetUp line 810 > src/ksp/pc/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\INTERF~1\precon.c > ** > **[0]PETSC ERROR: [0] KSPSetUp line 182 > src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c > ** > **[0]PETSC ERROR: [0] KSPSolve line 351 > src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c > ** > **[0]PETSC ERROR: --------------------- Error Message > ------------------------------------** > **[0]PETSC ERROR: Signal received!** > **[0]PETSC ERROR: > ------------------------------------------------------------------------** > **[0]PETSC ERROR: Petsc Development HG revision: > d560f272ff1b6e7f4e28667ab8b55f0c7408979e HG Date: Wed Jul 25 14:42:03 2012 > -0500** > **[0]PETSC ERROR: See docs/changes/index.html for recent updates.** > **[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.** > **[0]PETSC ERROR: See docs/index.html for manual pages.** > **[0]PETSC ERROR: > ------------------------------------------------------------------------** > **[0]PETSC ERROR: > C:\Obj_tmp\ibm3d_high_Re_staggered_AB2\Debug\ibm3d_high_Re_staggered_AB2.exe > on a petsc-3.3 named 
USER-PC by User Sat Sep 15 09:12:58 2012** > **[0]PETSC ERROR: Libraries linked from > /cygdrive/d/wtay/Lib/petsc-3.3-dev_win32_vs2008/lib** > **[0]PETSC ERROR: Configure run at Thu Jul 26 11:01:27 2012** > **[0]PETSC ERROR: Configure options --with-cc="win32fe cl" > --with-fc="win32fe ifort" --with-cxx="win32fe cl" > --with-mpi-dir=/cygdrive/c/MPICH2/ --download-f-blas-lapack=1 > --prefix=/cygdrive/d/wtay/Lib/petsc-3.3-dev_win32_vs2008 --with-debugging=1 > --useThreads=0** > **[0]PETSC ERROR: > ------------------------------------------------------------------------** > **[0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file** > **application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0* > > Btw, I've used both GMRES and BCGS as the ksp. What other options can I > try? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Sat Sep 15 09:32:55 2012 From: mark.adams at columbia.edu (Mark F. Adams) Date: Sat, 15 Sep 2012 10:32:55 -0400 Subject: [petsc-users] Recommended solver and preconditioner to solve Poisson eqn on win7 In-Reply-To: <50542B58.50103@gmail.com> References: <5052EF1F.9090004@gmail.com> <8BEF4C19-0C76-454E-800A-10265A272A25@columbia.edu> <50542B58.50103@gmail.com> Message-ID: <9B0AFBA2-ABB6-4071-8CC7-49F6B7A8EE74@columbia.edu> This is in GMRES, I would check for a zero on the diagonal. On Sep 15, 2012, at 3:16 AM, TAY wee-beng wrote: > On 14/9/2012 4:38 PM, Mark F. Adams wrote: >> >> On Sep 14, 2012, at 8:47 AM, Jed Brown wrote: >> >>> On Fri, Sep 14, 2012 at 3:47 AM, TAY wee-beng wrote: >>> Hi, >>> >>> I need to solve the Poisson eqn on my win7 machine. I'm currently using BCGS without preconditioner. I can't use HYPRE since I'm using Fortran and win7. It's rather slow. >>> >>> Is there a recommended solver and preconditioner to solve the Poisson eqn to get me started? >>> >>> What discretization? >>> >>> Run with -pc_type gamg to start. >> >> and >> >> -pc_gamg_agg_nsmooths 1 > > I am using non-uniform Cartesian grid to solve. What do you mean by the discretization? I 'm using finite volume method to obtain a system of linear eqns. > > I tried both -pc_type gamg and -pc_gamg_agg_nsmooths 1 but the solver breaks down. The error msg is: > > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. 
> [0]PETSC ERROR: [0] KSPComputeExtremeSingularValues_GMRES line 24 src/ksp/ksp/impls/gmres/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\impls\gmres\gmreig.c > [0]PETSC ERROR: [0] KSPComputeExtremeSingularValues line 40 src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c > [0]PETSC ERROR: [0] PCGAMGOptprol_AGG line 1294 src/ksp/pc/impls/gamg/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\impls\gamg\agg.c > [0]PETSC ERROR: [0] PCSetUp_GAMG line 559 src/ksp/pc/impls/gamg/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\impls\gamg\gamg.c > [0]PETSC ERROR: [0] PCSetUp line 810 src/ksp/pc/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\INTERF~1\precon.c > [0]PETSC ERROR: [0] KSPSetUp line 182 src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c > [0]PETSC ERROR: [0] KSPSolve line 351 src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Development HG revision: d560f272ff1b6e7f4e28667ab8b55f0c7408979e HG Date: Wed Jul 25 14:42:03 2012 -0500 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: C:\Obj_tmp\ibm3d_high_Re_staggered_AB2\Debug\ibm3d_high_Re_staggered_AB2.exe on a petsc-3.3 named USER-PC by User Sat Sep 15 09:12:58 2012 > [0]PETSC ERROR: Libraries linked from /cygdrive/d/wtay/Lib/petsc-3.3-dev_win32_vs2008/lib > [0]PETSC ERROR: Configure run at Thu Jul 26 11:01:27 2012 > [0]PETSC ERROR: Configure options --with-cc="win32fe cl" --with-fc="win32fe ifort" --with-cxx="win32fe cl" --with-mpi-dir=/cygdrive/c/MPICH2/ --download-f-blas-lapack=1 --prefix=/cygdrive/d/wtay/Lib/petsc-3.3-dev_win32_vs2008 --with-debugging=1 --useThreads=0 > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > Btw, I've used both GMRES and BCGS as the ksp. What other options can I try? > > Thanks >> >> (I should make this the default) >> >> Mark >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mendez8 at gmail.com Mon Sep 17 06:03:19 2012 From: mendez8 at gmail.com (=?ISO-8859-1?Q?Miguel_M=E9ndez?=) Date: Mon, 17 Sep 2012 13:03:19 +0200 Subject: [petsc-users] relocation truncated to fit: R_X86_64_PC32 error Message-ID: Hi everybody, I'm getting the infamous "relocation truncated to fit: R_X86_64_PC32" error when compiling a fortran program. I've tried many things, the last one of them being to recompile Petsc and Slepc libraries. This has solved all of the compile errors related to this relocation except for one: > /usr/local/software/slepc-3.0.0-p7-fpic/complex/build/lib/libslepc.a(cross.o): > In function `ShellMatGetDiagonal_CROSS': > cross.c:(.text+0x3ca): relocation truncated to fit: R_X86_64_PC32 against > symbol `MPIU_COMPLEX' defined in COMMON section in > /usr/local/software/petsc-3.0.0-p12-fpic/complex/build/lib/libpetsc.a(init.o) I don't even think I know where the error is, in Petsc or in Slepc? 
I've been through Intel forums and done as they recommend: recompile the libraries with "--mcmodel=medium" and "shared-intel". I've also tried combinations with "-fpic", but I always end up having the same error. I've searched the logs for messages about "libpetsc.a" building, but there's nothing about it. Who builds this exact library, and when? Why does it seem that the compile options I set are not applied when building this file? Thanks, Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Mon Sep 17 06:09:46 2012 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 17 Sep 2012 13:09:46 +0200 Subject: [petsc-users] relocation truncated to fit: R_X86_64_PC32 error In-Reply-To: References: Message-ID: <25E94FC2-6F1E-4462-8521-061BB5E7DB9F@dsic.upv.es> El 17/09/2012, a las 13:03, Miguel Méndez escribió: > Hi everybody, > > I'm getting the infamous "relocation truncated to fit: R_X86_64_PC32" error when compiling a fortran program. I've tried many things, the last one of them being to recompile Petsc and Slepc libraries. This has solved all of the compile errors related to this relocation except for one: > > /usr/local/software/slepc-3.0.0-p7-fpic/complex/build/lib/libslepc.a(cross.o): In function `ShellMatGetDiagonal_CROSS': > cross.c:(.text+0x3ca): relocation truncated to fit: R_X86_64_PC32 against symbol `MPIU_COMPLEX' defined in COMMON section in /usr/local/software/petsc-3.0.0-p12-fpic/complex/build/lib/libpetsc.a(init.o) > > I don't even think I know where the error is, in Petsc or in Slepc? I've been through Intel forums and done as they recommend: recompile the libraries with "--mcmodel=medium" and "shared-intel". I've also tried combinations with "-fpic", but I always end up having the same error. > > I've searched the logs for messages about "libpetsc.a" building, but there's nothing about it. Who builds this exact library, and when? Why does it seem that the compile options I set are not applied when building this file? > > Thanks, > > Miguel slepc-3.0.0 is more than 3 years old. I would suggest you upgrade to the latest version (3.3) and see if the problem persists. Jose From mendez8 at gmail.com Mon Sep 17 06:28:54 2012 From: mendez8 at gmail.com (=?ISO-8859-1?Q?Miguel_M=E9ndez?=) Date: Mon, 17 Sep 2012 13:28:54 +0200 Subject: [petsc-users] relocation truncated to fit: R_X86_64_PC32 error Message-ID: Hi, We have petsc and slepc 3.3 on the machine. The problem is that a user needs to compile and run an old code that uses slepc 3.0. If we were to invest a lot of time in this code (it's a big one), it would be in making the arrays allocatable rather than upgrading the use of slepc/petsc routines from 3.0 to 3.3. Regards, Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Sep 17 07:23:19 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 17 Sep 2012 07:23:19 -0500 Subject: [petsc-users] relocation truncated to fit: R_X86_64_PC32 error In-Reply-To: References: Message-ID: On Mon, Sep 17, 2012 at 6:03 AM, Miguel Méndez wrote: > Hi everybody, > > I'm getting the infamous "relocation truncated to fit: R_X86_64_PC32" > error when compiling a fortran program. I've tried many things, the last > one of them being to recompile Petsc and Slepc libraries.
This has solved > all of the compile errors related to this relocation except for one: > > > >> /usr/local/software/slepc-3.0.0-p7-fpic/complex/build/lib/libslepc.a(cross.o): >> In function `ShellMatGetDiagonal_CROSS': >> cross.c:(.text+0x3ca): relocation truncated to fit: R_X86_64_PC32 against >> symbol `MPIU_COMPLEX' defined in COMMON section in >> /usr/local/software/petsc-3.0.0-p12-fpic/complex/build/lib/libpetsc.a(init.o) > > I don't even think I know where the error is, in Petsc or in Slepc? I've > been through Intel forums and done as they recommend: recompile the > libraries with "--mcmodel=medium" and "shared-intel". I've also tried > combinations with "-fpic", but I always end up having the same error. > > I've searched the logs for messages about "libpetsc.a" building, but > there's nothing about it. Who builds this exact library, and when? Why does it > seem that the compile options I set are not applied when building this > file? > This comes from not building with -fPIC (or whatever the PIC flag is for your compiler). Find that source file, build it with PIC, and archive it into libslepc.a. Of course, upgrading would make this automatic. Matt > Thanks, > > Miguel > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.witkowski at tu-dresden.de Mon Sep 17 09:08:19 2012 From: thomas.witkowski at tu-dresden.de (Thomas Witkowski) Date: Mon, 17 Sep 2012 16:08:19 +0200 Subject: [petsc-users] Estimate the condition number of a matrix free operator Message-ID: <50572ED3.8030701@tu-dresden.de> I would like to estimate the condition number of a matrix-free operator. Is there any good way to do this with PETSc? Thomas From bsmith at mcs.anl.gov Mon Sep 17 09:13:27 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 17 Sep 2012 09:13:27 -0500 Subject: [petsc-users] Estimate the condition number of a matrix free operator In-Reply-To: <50572ED3.8030701@tu-dresden.de> References: <50572ED3.8030701@tu-dresden.de> Message-ID: http://www.mcs.anl.gov/petsc/documentation/faq.html#conditionnumber On Sep 17, 2012, at 9:08 AM, Thomas Witkowski wrote: > I would like to estimate the condition number of a matrix-free operator. Is there any good way to do this with PETSc? > > Thomas > From jedbrown at mcs.anl.gov Mon Sep 17 09:15:25 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 17 Sep 2012 09:15:25 -0500 Subject: [petsc-users] Estimate the condition number of a matrix free operator In-Reply-To: <50572ED3.8030701@tu-dresden.de> References: <50572ED3.8030701@tu-dresden.de> Message-ID: On Mon, Sep 17, 2012 at 9:08 AM, Thomas Witkowski < thomas.witkowski at tu-dresden.de> wrote: > I would like to estimate the condition number of a matrix-free operator. > Is there any good way to do this with PETSc? http://scicomp.stackexchange.com/questions/34/how-can-i-estimate-the-condition-number-of-a-large-sparse-matrix-using-petsc -------------- next part -------------- An HTML attachment was scrubbed... URL: From mendez8 at gmail.com Mon Sep 17 11:00:22 2012 From: mendez8 at gmail.com (=?ISO-8859-1?Q?Miguel_M=E9ndez?=) Date: Mon, 17 Sep 2012 18:00:22 +0200 Subject: [petsc-users] relocation truncated to fit: R_X86_64_PC32 error In-Reply-To: References: Message-ID: It was that! Thank you Matthew. I thought I had tested that one, it seems I hadn't.
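For the archives, the fix Matthew describes amounts to roughly the following (the library path is the one from the error message; the source location, compile flags and include flags are whatever the SLEPc build already uses, so treat this as a sketch rather than the exact commands):

    # recompile the offending object with the PIC flag, then put it back into the archive
    cd <directory in the slepc-3.0.0 source tree that contains cross.c>
    mpicc -fPIC $SLEPC_CFLAGS $SLEPC_INCLUDES -c cross.c -o cross.o
    ar r /usr/local/software/slepc-3.0.0-p7-fpic/complex/build/lib/libslepc.a cross.o
    ranlib /usr/local/software/slepc-3.0.0-p7-fpic/complex/build/lib/libslepc.a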
Cheers, Miguel On Mon, Sep 17, 2012 at 2:23 PM, Matthew Knepley wrote: > On Mon, Sep 17, 2012 at 6:03 AM, Miguel Méndez wrote: > >> Hi everybody, >> >> I'm getting the infamous "relocation truncated to fit: R_X86_64_PC32" >> error when compiling a fortran program. I've tried many things, the last >> one of them being to recompile Petsc and Slepc libraries. This has solved >> all of the compile errors related to this relocation except for one: >> >> >> >>> /usr/local/software/slepc-3.0.0-p7-fpic/complex/build/lib/libslepc.a(cross.o): >>> In function `ShellMatGetDiagonal_CROSS': >>> cross.c:(.text+0x3ca): relocation truncated to fit: R_X86_64_PC32 >>> against symbol `MPIU_COMPLEX' defined in COMMON section in >>> /usr/local/software/petsc-3.0.0-p12-fpic/complex/build/lib/libpetsc.a(init.o) >> >> >> I don't even think I know where the error is, in Petsc or in Slepc? I've >> been through Intel forums and done as they recommend: recompile the >> libraries with "--mcmodel=medium" and "shared-intel". I've also tried >> combinations with "-fpic", but I always end up having the same error. >> >> I've searched the logs for messages about "libpetsc.a" building, but >> there's nothing about it. Who builds this exact library, and when? Why does it >> seem that the compile options I set are not applied when building this >> file? >> > > This comes from not building with -fPIC (or whatever the PIC flag is for > your compiler). Find that source file, build > it with PIC, and archive it into libslepc.a. Of course, upgrading would make > this automatic. > > Matt > > >> Thanks, >> >> Miguel >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erocha.ssa at gmail.com Mon Sep 17 14:45:15 2012 From: erocha.ssa at gmail.com (Eduardo) Date: Mon, 17 Sep 2012 16:45:15 -0300 Subject: [petsc-users] Cannot convert error Message-ID: Does anyone know the reason for the following errors: error: cannot convert ‘_p_KSP**’ to ‘KSP {aka _p_KSP*}’ for argument ‘1’ to ‘PetscErrorCode KSPDestroy(KSP)’ error: cannot convert ‘_p_VecScatter**’ to ‘VecScatter {aka _p_VecScatter*}’ for argument ‘1’ to ‘PetscErrorCode VecScatterDestroy(VecScatter)’ Thanks in advance, Eduardo From errodrigues at inf.ufrgs.br Mon Sep 17 14:48:12 2012 From: errodrigues at inf.ufrgs.br (Eduardo Rocha Rodrigues) Date: Mon, 17 Sep 2012 16:48:12 -0300 Subject: [petsc-users] Cannot convert error Message-ID: Does anyone know the reason for the following errors: error: cannot convert ‘_p_KSP**’ to ‘KSP {aka _p_KSP*}’ for argument ‘1’ to ‘PetscErrorCode KSPDestroy(KSP)’ error: cannot convert ‘_p_VecScatter**’ to ‘VecScatter {aka _p_VecScatter*}’ for argument ‘1’ to ‘PetscErrorCode VecScatterDestroy(VecScatter)’ Thanks in advance, Eduardo From jedbrown at mcs.anl.gov Mon Sep 17 14:49:26 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 17 Sep 2012 14:49:26 -0500 Subject: [petsc-users] Cannot convert error In-Reply-To: References: Message-ID: On Mon, Sep 17, 2012 at 2:45 PM, Eduardo wrote: > Does anyone know the reason for the following errors: > > error: cannot convert ‘_p_KSP**’ to ‘KSP {aka _p_KSP*}’ for argument > ‘1’ to ‘PetscErrorCode KSPDestroy(KSP)’ > Sounds like you are updating some old code. The destroy methods were changed to take a reference a couple releases ago. Use KSPDestroy(&ksp) and VecScatterDestroy(&scatter).
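For example, the updated calls look like this (a minimal sketch with placeholder names, not code from your application):

    #include <petscksp.h>

    /* Sketch: cleanup using the current destroy signatures (since petsc-3.2),
       which take the address of the object and set it to NULL on success. */
    PetscErrorCode cleanup_example(KSP *ksp, VecScatter *scatter)
    {
      PetscErrorCode ierr;
      ierr = KSPDestroy(ksp);CHKERRQ(ierr);            /* old code passed the object itself */
      ierr = VecScatterDestroy(scatter);CHKERRQ(ierr); /* likewise for the scatter */
      return 0;
    }

The objects are declared as before (KSP ksp; VecScatter scatter;); only the destroy calls change.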
> error: cannot convert ‘_p_VecScatter**’ to ‘VecScatter {aka > _p_VecScatter*}’ for argument ‘1’ to ‘PetscErrorCode > VecScatterDestroy(VecScatter)’ > > Thanks in advance, > Eduardo > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrayver at gfz-potsdam.de Tue Sep 18 11:30:23 2012 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Tue, 18 Sep 2012 18:30:23 +0200 Subject: [petsc-users] MatMatSolve for MUMPS disappeared? Message-ID: <5058A19F.3020503@gfz-potsdam.de> Hello, Trying to use MatMatSolve in petsc-3.3-p3 results in: [0]PETSC ERROR: No support for this operation for this object type! [0]PETSC ERROR: MatMatSolve_MUMPS() is not implemented yet! The point is that in petsc-3.3-p2 it works like clockwork. There was one issue with it for sequential mode that has been fixed here: http://petsc.cs.iit.edu/petsc/releases/petsc-3.3/rev/8badc49a596e I'm wondering if it is really not supported anymore? Thanks. -- Regards, Alexander From zhenglun.wei at gmail.com Tue Sep 18 11:45:01 2012 From: zhenglun.wei at gmail.com (Zhenglun (Alan) Wei) Date: Tue, 18 Sep 2012 11:45:01 -0500 Subject: [petsc-users] How to check my current PETSc configuration? Message-ID: <5058A50D.9090201@gmail.com> Dear folks, I was trying to check my current PETSc configuration. Is there any command or file I can refer to? Specifically, I want to know if my PETSc is configured for a distributed-memory or shared-memory machine. thanks, Alan From knepley at gmail.com Tue Sep 18 11:50:14 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 18 Sep 2012 11:50:14 -0500 Subject: [petsc-users] How to check my current PETSc configuration? In-Reply-To: <5058A50D.9090201@gmail.com> References: <5058A50D.9090201@gmail.com> Message-ID: On Tue, Sep 18, 2012 at 11:45 AM, Zhenglun (Alan) Wei < zhenglun.wei at gmail.com> wrote: > Dear folks, > I was trying to check my current PETSc configuration. Is there any > command or file I can refer to? Specifically, I want to know if my PETSc is > configured for a distributed-memory or shared-memory machine. > $PETSC_DIR/$PETSC_ARCH/conf/configure.log The configuration information is at the end. Matt > thanks, > Alan > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Sep 18 11:50:38 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 18 Sep 2012 11:50:38 -0500 Subject: [petsc-users] How to check my current PETSc configuration? In-Reply-To: <5058A50D.9090201@gmail.com> References: <5058A50D.9090201@gmail.com> Message-ID: $PETSC_ARCH/conf/configure.log. On Sep 18, 2012 11:45 AM, "Zhenglun (Alan) Wei" wrote: > Dear folks, > I was trying to check my current PETSc configuration. Is there any > command or file I can refer to? Specifically, I want to know if my PETSc is > configured for a distributed-memory or shared-memory machine. > > thanks, > Alan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Tue Sep 18 13:14:41 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 18 Sep 2012 13:14:41 -0500 Subject: [petsc-users] MatMatSolve for MUMPS disappeared? In-Reply-To: <5058A19F.3020503@gfz-potsdam.de> References: <5058A19F.3020503@gfz-potsdam.de> Message-ID: MatMatSolve_MUMPS() has never been supported. It seems MUMPS supports multiple rhs now.
If you need it, we can implement MatSolve_MUMPS(). > > Trying to use MatMatSolve in petsc-3.3-p3 results in: > > [0]PETSC ERROR: No support for this operation for this object type! > [0]PETSC ERROR: MatMatSolve_MUMPS() is not implemented yet! > > The point is that in petsc-3.3-p2 it works like clocks. > It likely calls MatMatSolve_Basic(), which calls MatSolve_MUMPS() in a loop, not an efficient implementation. There was one issue with it for sequential mode that has been fixed here: > http://petsc.cs.iit.edu/petsc/**releases/petsc-3.3/rev/**8badc49a596e This change prevents calling of MatMatSolve_Basic(). How many rhs vectors (or number of columns in your rhs matrix)? MUMPS only supports centralized rhs. Scattering many rhs vectors to a sequential dense matrix is non-scalable. Hong > > I'm wondering if it is really not supported anymore? > Thanks. > > -- > Regards, > Alexander > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Lukasz.Kaczmarczyk at glasgow.ac.uk Tue Sep 18 17:19:42 2012 From: Lukasz.Kaczmarczyk at glasgow.ac.uk (Lukasz Kaczmarczyk) Date: Tue, 18 Sep 2012 23:19:42 +0100 Subject: [petsc-users] compilation problem, zoltan & parmetis Message-ID: <33D9B1A7-55B4-47F6-86D7-2291C1CEA41F@glasgow.ac.uk> Hallo, I have following proble with compilation (is the same error on MacOS and Ubuntu). Compilers: gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00) gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ./configure --with-fortran=0 --with-cc=/opt/build_for_gcc-mp-4.4/local/bin/mpicc --with-cxx=/opt/build_for_gcc-mp-4.4/local/bin/mpicxx --download-superlu_dist=1 --download-parmetis=1 -download-umfpack=1 -download-zoltan=1 --with-shared-libraries=0 =============================================================================== Configuring PETSc to compile on your system =============================================================================== =============================================================================== Compiling UMFPACK; this may take several minutes =============================================================================== TESTING: configureLibrary from PETSc.packages.parmetis(config/BuildSystem/config/package.py:433) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Did not find package METIS needed by parmetis. 
Enable the package using --with-metis or --download-metis ******************************************************************************* unimackiss:petsc-3.3-p3 likask$ ./configure --with-fortran=0 --with-cc=/opt/build_for_gcc-mp-4.4/local/bin/mpicc --with-cxx=/opt/build_for_gcc-mp-4.4/local/bin/mpicxx --download-superlu_dist=1 --download-metis=1 --download-parmetis=1 -download-umfpack=1 -download-zoltan=1 --with-shared-libraries=0 =============================================================================== Configuring PETSc to compile on your system =============================================================================== =============================================================================== Configuring METIS; this may take several minutes =============================================================================== =============================================================================== Compiling METIS; this may take several minutes =============================================================================== =============================================================================== Configuring ParMETIS; this may take several minutes =============================================================================== =============================================================================== Compiling ParMETIS; this may take several minutes =============================================================================== =============================================================================== Compiling superlu_dist; this may take several minutes =============================================================================== ************************************************************************************************** Please register to use Zoltan at http://www.cs.sandia.gov/Zoltan/Zoltan.html ************************************************************************************************** =============================================================================== Compiling zoltan; this may take several minutes =============================================================================== ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Error running make on ZOLTAN: Could not execute "cd /opt/build_for_gcc-mp-4.4/petsc-3.3-p3/externalpackages/Zoltan && make clean && make ZOLTAN_ARCH="darwin10.2.0-c-debug" CC="/opt/build_for_gcc-mp-4.4/local/bin/mpicc" CFLAGS=" -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline -O0 " AR="/usr/bin/ar cr" RANLIB="/usr/bin/ranlib -c" X_LIBS="['-L/opt/local/lib', '-lX11']" MPI_INCPATH="-I/opt/build_for_gcc-mp-4.4/local/include" PARMETIS_INCPATH="-I/opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/include" PARMETIS_LIBPATH="-L/opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/lib -L/opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/lib -lparmetis" zoltan": driver ch zz all lb order par rcb coloring oct phg util hsfc parmetis params timer ha reftree include Memory Communication DDirectory Timer shared Obj_generic exit 0 rm -f *.o libexzoltan.a rm -f *.o zoltanSimple zoltanExample1 subDirs phgExample rm -f *.o ex1 rm -f *.o zCPPExample1 zCPPExample2 Obj_generic fort fdriver fdriver_old exit 0 make libzoltan_mem.a (Re)Building dependency for ../Memory/mem.c... 
Compiling ../Memory/mem.c... creating library libzoltan_mem.a make libzoltan_comm.a (Re)Building dependency for ../Communication/comm_sort_ints.c... (Re)Building dependency for ../Communication/comm_resize.c... (Re)Building dependency for ../Communication/comm_exchange_sizes.c... (Re)Building dependency for ../Communication/comm_invert_map.c... (Re)Building dependency for ../Communication/comm_invert_plan.c... (Re)Building dependency for ../Communication/comm_info.c... (Re)Building dependency for ../Communication/comm_destroy.c... (Re)Building dependency for ../Communication/comm_do_reverse.c... (Re)Building dependency for ../Communication/comm_do.c... (Re)Building dependency for ../Communication/comm_create.c... Compiling ../Communication/comm_create.c... Compiling ../Communication/comm_do.c... Compiling ../Communication/comm_do_reverse.c... Compiling ../Communication/comm_destroy.c... Compiling ../Communication/comm_info.c... Compiling ../Communication/comm_invert_plan.c... Compiling ../Communication/comm_invert_map.c... Compiling ../Communication/comm_exchange_sizes.c... Compiling ../Communication/comm_resize.c... Compiling ../Communication/comm_sort_ints.c... creating library libzoltan_comm.a make libzoltan_dd.a (Re)Building dependency for ../shared/zoltan_align.c... (Re)Building dependency for ../shared/zoltan_id.c... (Re)Building dependency for ../DDirectory/DD_Set_Neighbor_Hash_Fn3.c... (Re)Building dependency for ../DDirectory/DD_Set_Neighbor_Hash_Fn2.c... (Re)Building dependency for ../DDirectory/DD_Set_Neighbor_Hash_Fn1.c... (Re)Building dependency for ../DDirectory/DD_Print.c... (Re)Building dependency for ../DDirectory/DD_Stats.c... (Re)Building dependency for ../DDirectory/DD_Hash2.c... (Re)Building dependency for ../DDirectory/DD_Set_Hash_Fn.c... (Re)Building dependency for ../DDirectory/DD_Update.c... (Re)Building dependency for ../DDirectory/DD_Remove.c... (Re)Building dependency for ../DDirectory/DD_Find.c... (Re)Building dependency for ../DDirectory/DD_Destroy.c... (Re)Building dependency for ../DDirectory/DD_Create.c... Compiling ../DDirectory/DD_Create.c... Compiling ../DDirectory/DD_Destroy.c... Compiling ../DDirectory/DD_Find.c... Compiling ../DDirectory/DD_Remove.c... Compiling ../DDirectory/DD_Update.c... Compiling ../DDirectory/DD_Set_Hash_Fn.c... Compiling ../DDirectory/DD_Hash2.c... Compiling ../DDirectory/DD_Stats.c... Compiling ../DDirectory/DD_Print.c... Compiling ../DDirectory/DD_Set_Neighbor_Hash_Fn1.c... Compiling ../DDirectory/DD_Set_Neighbor_Hash_Fn2.c... Compiling ../DDirectory/DD_Set_Neighbor_Hash_Fn3.c... Compiling ../shared/zoltan_id.c... Compiling ../shared/zoltan_align.c... creating library libzoltan_dd.a make libzoltan_timer.a (Re)Building dependency for ../Timer/timer.c... (Re)Building dependency for ../Timer/zoltan_timer.c... Compiling ../Timer/zoltan_timer.c... Compiling ../Timer/timer.c... creating library libzoltan_timer.a creating library libzoltan.a (Re)Building dependency for ../reftree/reftree_coarse_path.c... (Re)Building dependency for ../reftree/reftree_hash.c... (Re)Building dependency for ../reftree/reftree_part.c... (Re)Building dependency for ../reftree/reftree_build.c... (Re)Building dependency for ../ha/build_machine_desc.c... (Re)Building dependency for ../ha/get_processor_name.c... (Re)Building dependency for ../ha/divide_machine.c... (Re)Building dependency for ../timer/timer_params.c... (Re)Building dependency for ../phg/phg_patoh.c... (Re)Building dependency for ../phg/phg_parkway.c... 
(Re)Building dependency for ../phg/phg_order.c... (Re)Building dependency for ../phg/phg_comm.c... (Re)Building dependency for ../phg/phg_scale.c... (Re)Building dependency for ../phg/phg_util.c... (Re)Building dependency for ../phg/phg_rdivide.c... (Re)Building dependency for ../phg/phg_Vcycle.c... (Re)Building dependency for ../phg/phg_serialpartition.c... (Re)Building dependency for ../phg/phg_refinement.c... (Re)Building dependency for ../phg/phg_plot.c... (Re)Building dependency for ../phg/phg_match.c... (Re)Building dependency for ../phg/phg_gather.c... (Re)Building dependency for ../phg/phg_distrib.c... (Re)Building dependency for ../phg/phg_coarse.c... (Re)Building dependency for ../phg/phg_build_calls.c... (Re)Building dependency for ../phg/phg_build.c... (Re)Building dependency for ../phg/phg_hypergraph.c... (Re)Building dependency for ../phg/phg.c... (Re)Building dependency for ../parmetis/scatter_graph.c... (Re)Building dependency for ../parmetis/verify_graph.c... (Re)Building dependency for ../parmetis/build_graph.c... (Re)Building dependency for ../parmetis/parmetis_jostle.c... (Re)Building dependency for ../params/bind_param.c... (Re)Building dependency for ../params/free_params.c... (Re)Building dependency for ../params/key_params.c... (Re)Building dependency for ../params/print_params.c... (Re)Building dependency for ../params/check_param.c... (Re)Building dependency for ../params/assign_param_vals.c... (Re)Building dependency for ../params/set_param.c... (Re)Building dependency for ../hsfc/hsfc_point_assign.c... (Re)Building dependency for ../hsfc/hsfc_box_assign.c... (Re)Building dependency for ../hsfc/hsfc.c... (Re)Building dependency for ../hsfc/hsfc_hilbert.c... (Re)Building dependency for ../oct/oct_plot.c... (Re)Building dependency for ../oct/rootlist.c... (Re)Building dependency for ../oct/octree.c... (Re)Building dependency for ../oct/migtags.c... (Re)Building dependency for ../oct/migreg.c... (Re)Building dependency for ../oct/output.c... (Re)Building dependency for ../oct/migoct.c... (Re)Building dependency for ../oct/costs.c... (Re)Building dependency for ../oct/dfs.c... (Re)Building dependency for ../oct/octupdate.c... (Re)Building dependency for ../oct/oct_util.c... (Re)Building dependency for ../oct/octant.c... (Re)Building dependency for ../oct/msg.c... (Re)Building dependency for ../order/perm.c... (Re)Building dependency for ../order/order_struct.c... (Re)Building dependency for ../order/order.c... (Re)Building dependency for ../par/par_tflops_special.c... (Re)Building dependency for ../par/par_stats.c... (Re)Building dependency for ../par/par_sync.c... (Re)Building dependency for ../par/par_median.c... (Re)Building dependency for ../par/par_bisect.c... (Re)Building dependency for ../par/par_average.c... (Re)Building dependency for ../coloring/color_test.c... (Re)Building dependency for ../coloring/g2l_hash.c... (Re)Building dependency for ../coloring/coloring.c... (Re)Building dependency for ../rcb/shared.c... (Re)Building dependency for ../rcb/inertial3d.c... (Re)Building dependency for ../rcb/inertial2d.c... (Re)Building dependency for ../rcb/inertial1d.c... (Re)Building dependency for ../rcb/rib_util.c... (Re)Building dependency for ../rcb/rib.c... (Re)Building dependency for ../rcb/create_proc_list.c... (Re)Building dependency for ../rcb/point_assign.c... (Re)Building dependency for ../rcb/box_assign.c... (Re)Building dependency for ../rcb/rcb_box.c... (Re)Building dependency for ../rcb/rcb_util.c... (Re)Building dependency for ../rcb/rcb.c... 
(Re)Building dependency for ../all/all_allo.c... (Re)Building dependency for ../lb/lb_remap.c... (Re)Building dependency for ../lb/lb_set_part_sizes.c... (Re)Building dependency for ../lb/lb_part2proc.c... (Re)Building dependency for ../lb/lb_box_assign.c... (Re)Building dependency for ../lb/lb_point_assign.c... (Re)Building dependency for ../lb/lb_set_method.c... (Re)Building dependency for ../lb/lb_set_fn.c... (Re)Building dependency for ../lb/lb_migrate.c... (Re)Building dependency for ../lb/lb_invert.c... (Re)Building dependency for ../lb/lb_init.c... (Re)Building dependency for ../lb/lb_copy.c... (Re)Building dependency for ../lb/lb_free.c... (Re)Building dependency for ../lb/lb_eval.c... (Re)Building dependency for ../lb/lb_balance.c... (Re)Building dependency for ../zz/zz_rand.c... (Re)Building dependency for ../zz/zz_sort.c... (Re)Building dependency for ../zz/zz_heap.c... (Re)Building dependency for ../zz/zz_hash.c... (Re)Building dependency for ../zz/zz_gen_files.c... (Re)Building dependency for ../zz/zz_util.c... (Re)Building dependency for ../zz/zz_set_fn.c... (Re)Building dependency for ../zz/zz_init.c... (Re)Building dependency for ../zz/zz_struct.c... (Re)Building dependency for ../zz/zz_obj_list.c... (Re)Building dependency for ../zz/zz_coord.c... Compiling ../zz/zz_coord.c... Compiling ../zz/zz_obj_list.c... Compiling ../zz/zz_struct.c... Compiling ../zz/zz_init.c... Compiling ../zz/zz_set_fn.c... Compiling ../zz/zz_util.c... Compiling ../zz/zz_gen_files.c... Makefile:28: mem.d: No such file or directory ../Memory/mem.c: In function 'Zoltan_Array_Alloc': ../Memory/mem.c:162: warning: initialization discards qualifiers from pointer target type ../Memory/mem.c: In function 'Zoltan_Malloc': ../Memory/mem.c:280: warning: initialization discards qualifiers from pointer target type ../Memory/mem.c: In function 'Zoltan_Realloc': ../Memory/mem.c:345: warning: initialization discards qualifiers from pointer target type Makefile:28: comm_create.d: No such file or directory Makefile:28: comm_do.d: No such file or directory Makefile:28: comm_do_reverse.d: No such file or directory Makefile:28: comm_destroy.d: No such file or directory Makefile:28: comm_info.d: No such file or directory Makefile:28: comm_invert_plan.d: No such file or directory Makefile:28: comm_invert_map.d: No such file or directory Makefile:28: comm_exchange_sizes.d: No such file or directory Makefile:28: comm_resize.d: No such file or directory Makefile:28: comm_sort_ints.d: No such file or directory ../Communication/comm_create.c: In function 'Zoltan_Comm_Create': ../Communication/comm_create.c:63: warning: initialization discards qualifiers from pointer target type ../Communication/comm_create.c:79: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:123: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:124: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:125: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is 
of type 'const char *' ../Communication/comm_create.c:170: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:195: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:196: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:197: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:222: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:232: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:248: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:249: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:250: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:251: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:252: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:253: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:261: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:292: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:293: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c: In function 'Zoltan_Comm_Copy_To': ../Communication/comm_create.c:326: warning: initialization discards qualifiers from pointer target type ../Communication/comm_create.c:342: 
warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:348: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:349: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:350: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:351: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:352: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:353: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:354: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:355: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:356: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:357: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:358: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:359: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:360: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:361: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:362: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:364: 
warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_create.c:365: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do.c: In function 'Zoltan_Comm_Do_Post': ../Communication/comm_do.c:85: warning: initialization discards qualifiers from pointer target type ../Communication/comm_do.c:134: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do.c:180: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do.c:193: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do.c:195: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do.c:262: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do.c:334: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do.c: In function 'Zoltan_Comm_Do_Wait': ../Communication/comm_do.c:398: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c: In function 'Zoltan_Comm_Do_Reverse_Post': ../Communication/comm_do_reverse.c:66: warning: initialization discards qualifiers from pointer target type ../Communication/comm_do_reverse.c:87: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:115: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:117: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:122: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:124: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:125: warning: passing argument 2 of 'Zoltan_Free' 
discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:132: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:133: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:134: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:140: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:141: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:142: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c: In function 'Zoltan_Comm_Do_Reverse_Wait': ../Communication/comm_do_reverse.c:168: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:169: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:170: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:171: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:172: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:173: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:174: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:176: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_do_reverse.c:177: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is 
of type 'const char *' ../Communication/comm_do_reverse.c:178: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c: In function 'Zoltan_Comm_Destroy': ../Communication/comm_destroy.c:32: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:33: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:34: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:35: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:36: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:37: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:38: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:39: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:40: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:41: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:42: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:43: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:44: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:45: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:46: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: 
note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:47: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:48: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_destroy.c:51: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_info.c: In function 'Zoltan_Comm_Info': ../Communication/comm_info.c:56: warning: initialization discards qualifiers from pointer target type ../Communication/comm_invert_plan.c: In function 'Zoltan_Comm_Invert_Plan': ../Communication/comm_invert_plan.c:42: warning: initialization discards qualifiers from pointer target type ../Communication/comm_invert_plan.c:65: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:98: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:99: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:108: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:109: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:110: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:111: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:112: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:113: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:114: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:115: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' 
../Communication/comm_invert_plan.c:116: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:117: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:118: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:123: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:124: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_plan.c:125: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_map.c: In function 'Zoltan_Comm_Invert_Map': ../Communication/comm_invert_map.c:62: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_map.c:63: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_map.c:69: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_map.c:70: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_map.c:94: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_map.c:95: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_map.c:97: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_invert_map.c:98: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c: In function 'Zoltan_Comm_Resize': ../Communication/comm_resize.c:53: warning: initialization discards qualifiers from pointer target type ../Communication/comm_resize.c:70: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: 
expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:71: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:72: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:73: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:74: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:75: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:76: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:99: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:107: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:113: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:130: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:139: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:140: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:169: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:170: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:174: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:175: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but 
argument is of type 'const char *' ../Communication/comm_resize.c:202: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:214: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:224: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:225: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:243: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:244: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:256: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:257: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:258: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:259: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:260: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:261: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:262: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:263: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:264: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_resize.c:265: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const 
char *' ../Communication/comm_sort_ints.c: In function 'Zoltan_Comm_Sort_Ints': ../Communication/comm_sort_ints.c:52: warning: passing argument 3 of 'Zoltan_Calloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:62: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_sort_ints.c:53: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_sort_ints.c:54: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Communication/comm_sort_ints.c:78: warning: passing argument 1 of 'Zoltan_Multifree' discards qualifiers from pointer target type ../../include/zoltan_mem.h:69: note: expected 'char *' but argument is of type 'const char *' Makefile:28: DD_Create.d: No such file or directory Makefile:28: DD_Destroy.d: No such file or directory Makefile:28: DD_Find.d: No such file or directory Makefile:28: DD_Remove.d: No such file or directory Makefile:28: DD_Update.d: No such file or directory Makefile:28: DD_Set_Hash_Fn.d: No such file or directory Makefile:28: DD_Hash2.d: No such file or directory Makefile:28: DD_Stats.d: No such file or directory Makefile:28: DD_Print.d: No such file or directory Makefile:28: DD_Set_Neighbor_Hash_Fn1.d: No such file or directory Makefile:28: DD_Set_Neighbor_Hash_Fn2.d: No such file or directory Makefile:28: DD_Set_Neighbor_Hash_Fn3.d: No such file or directory Makefile:28: zoltan_id.d: No such file or directory Makefile:28: zoltan_align.d: No such file or directory ../DDirectory/DD_Create.c: In function 'Zoltan_DD_Create': ../DDirectory/DD_Create.c:45: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Create.c:80: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Create.c: In function 'Zoltan_DD_Copy_To': ../DDirectory/DD_Create.c:151: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Create.c:167: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Create.c: In function 'allocate_copy_list': ../DDirectory/DD_Create.c:200: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Destroy.c: In function 'Zoltan_DD_Destroy': ../DDirectory/DD_Destroy.c:46: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Destroy.c:63: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Destroy.c:75: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Find.c: In function 'Zoltan_DD_Find': ../DDirectory/DD_Find.c:58: warning: initialization discards qualifiers from pointer target type 
../DDirectory/DD_Find.c:77: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Find.c:90: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Find.c:93: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Find.c:129: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Find.c:197: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Find.c:198: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Find.c:199: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Find.c: In function 'DD_Find_Local': ../DDirectory/DD_Find.c:240: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Remove.c: In function 'Zoltan_DD_Remove': ../DDirectory/DD_Remove.c:55: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Remove.c:75: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Remove.c:88: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Remove.c:125: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Remove.c:161: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Remove.c:162: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Remove.c:163: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Remove.c: In function 'DD_Remove_Local': ../DDirectory/DD_Remove.c:196: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Remove.c:220: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Update.c: In function 'Zoltan_DD_Update': 
../DDirectory/DD_Update.c:63: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Update.c:91: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Update.c:104: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Update.c:155: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Update.c:200: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Update.c:201: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Update.c:202: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Update.c: In function 'DD_Update_Local': ../DDirectory/DD_Update.c:239: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Update.c:303: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Set_Hash_Fn.c: In function 'Zoltan_DD_Set_Hash_Fn': ../DDirectory/DD_Set_Hash_Fn.c:42: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Stats.c: In function 'Zoltan_DD_Stats': ../DDirectory/DD_Stats.c:48: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Print.c: In function 'Zoltan_DD_Print': ../DDirectory/DD_Print.c:42: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Set_Neighbor_Hash_Fn1.c: In function 'Zoltan_DD_Set_Neighbor_Hash_Fn1': ../DDirectory/DD_Set_Neighbor_Hash_Fn1.c:52: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Set_Neighbor_Hash_Fn2.c: In function 'Zoltan_DD_Set_Neighbor_Hash_Fn2': ../DDirectory/DD_Set_Neighbor_Hash_Fn2.c:75: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Set_Neighbor_Hash_Fn2.c:88: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Set_Neighbor_Hash_Fn2.c: In function 'dd_nh2': ../DDirectory/DD_Set_Neighbor_Hash_Fn2.c:119: warning: initialization discards qualifiers from pointer target type ../DDirectory/DD_Set_Neighbor_Hash_Fn2.c: In function 'dd_nh2_cleanup': ../DDirectory/DD_Set_Neighbor_Hash_Fn2.c:169: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../DDirectory/DD_Set_Neighbor_Hash_Fn3.c: In function 'Zoltan_DD_Set_Neighbor_Hash_Fn3': ../DDirectory/DD_Set_Neighbor_Hash_Fn3.c:55: warning: initialization discards qualifiers from 
pointer target type ../shared/zoltan_id.c: In function 'ZOLTAN_Malloc_ID': ../shared/zoltan_id.c:51: warning: initialization discards qualifiers from pointer target type Makefile:28: zoltan_timer.d: No such file or directory Makefile:28: timer.d: No such file or directory ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Copy_To': ../Timer/zoltan_timer.c:129: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Timer/zoltan_timer.c:136: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Create': ../Timer/zoltan_timer.c:158: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Timer/zoltan_timer.c:159: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Init': ../Timer/zoltan_timer.c:181: warning: initialization discards qualifiers from pointer target type ../Timer/zoltan_timer.c:190: warning: passing argument 3 of 'Zoltan_Realloc' discards qualifiers from pointer target type ../../include/zoltan_mem.h:64: note: expected 'char *' but argument is of type 'const char *' ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Reset': ../Timer/zoltan_timer.c:216: warning: initialization discards qualifiers from pointer target type ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_ChangeFlag': ../Timer/zoltan_timer.c:245: warning: initialization discards qualifiers from pointer target type ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Start': ../Timer/zoltan_timer.c:263: warning: initialization discards qualifiers from pointer target type ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Stop': ../Timer/zoltan_timer.c:306: warning: initialization discards qualifiers from pointer target type ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Print': ../Timer/zoltan_timer.c:358: warning: initialization discards qualifiers from pointer target type ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_PrintAll': ../Timer/zoltan_timer.c:393: warning: initialization discards qualifiers from pointer target type ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Destroy': ../Timer/zoltan_timer.c:411: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../Timer/zoltan_timer.c:412: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' Makefile:28: zz_coord.d: No such file or directory Makefile:28: zz_obj_list.d: No such file or directory Makefile:28: zz_struct.d: No such file or directory Makefile:28: zz_init.d: No such file or directory Makefile:28: zz_set_fn.d: No such file or directory Makefile:28: zz_util.d: No such file or directory Makefile:28: zz_gen_files.d: No such file or directory Makefile:28: zz_hash.d: No such file or directory Makefile:28: zz_heap.d: No such file or directory Makefile:28: zz_sort.d: No 
such file or directory Makefile:28: zz_rand.d: No such file or directory Makefile:28: lb_balance.d: No such file or directory Makefile:28: lb_eval.d: No such file or directory Makefile:28: lb_free.d: No such file or directory Makefile:28: lb_copy.d: No such file or directory Makefile:28: lb_init.d: No such file or directory Makefile:28: lb_invert.d: No such file or directory Makefile:28: lb_migrate.d: No such file or directory Makefile:28: lb_set_fn.d: No such file or directory Makefile:28: lb_set_method.d: No such file or directory Makefile:28: lb_point_assign.d: No such file or directory Makefile:28: lb_box_assign.d: No such file or directory Makefile:28: lb_part2proc.d: No such file or directory Makefile:28: lb_set_part_sizes.d: No such file or directory Makefile:28: lb_remap.d: No such file or directory Makefile:28: all_allo.d: No such file or directory Makefile:28: rcb.d: No such file or directory Makefile:28: rcb_util.d: No such file or directory Makefile:28: rcb_box.d: No such file or directory Makefile:28: box_assign.d: No such file or directory Makefile:28: point_assign.d: No such file or directory Makefile:28: create_proc_list.d: No such file or directory Makefile:28: rib.d: No such file or directory Makefile:28: rib_util.d: No such file or directory Makefile:28: inertial1d.d: No such file or directory Makefile:28: inertial2d.d: No such file or directory Makefile:28: inertial3d.d: No such file or directory Makefile:28: shared.d: No such file or directory Makefile:28: par_average.d: No such file or directory Makefile:28: par_bisect.d: No such file or directory Makefile:28: par_median.d: No such file or directory Makefile:28: par_sync.d: No such file or directory Makefile:28: par_stats.d: No such file or directory Makefile:28: par_tflops_special.d: No such file or directory Makefile:28: coloring.d: No such file or directory Makefile:28: g2l_hash.d: No such file or directory Makefile:28: color_test.d: No such file or directory Makefile:28: par_average.d: No such file or directory Makefile:28: par_bisect.d: No such file or directory Makefile:28: par_median.d: No such file or directory Makefile:28: par_sync.d: No such file or directory Makefile:28: par_stats.d: No such file or directory Makefile:28: par_tflops_special.d: No such file or directory Makefile:28: order.d: No such file or directory Makefile:28: order_struct.d: No such file or directory Makefile:28: perm.d: No such file or directory Makefile:28: msg.d: No such file or directory Makefile:28: octant.d: No such file or directory Makefile:28: oct_util.d: No such file or directory Makefile:28: octupdate.d: No such file or directory Makefile:28: dfs.d: No such file or directory Makefile:28: costs.d: No such file or directory Makefile:28: migoct.d: No such file or directory Makefile:28: output.d: No such file or directory Makefile:28: migreg.d: No such file or directory Makefile:28: migtags.d: No such file or directory Makefile:28: octree.d: No such file or directory Makefile:28: rootlist.d: No such file or directory Makefile:28: oct_plot.d: No such file or directory Makefile:28: hsfc_hilbert.d: No such file or directory Makefile:28: hsfc.d: No such file or directory Makefile:28: hsfc_box_assign.d: No such file or directory Makefile:28: hsfc_point_assign.d: No such file or directory Makefile:28: set_param.d: No such file or directory Makefile:28: assign_param_vals.d: No such file or directory Makefile:28: check_param.d: No such file or directory Makefile:28: print_params.d: No such file or directory Makefile:28: key_params.d: No 
such file or directory Makefile:28: free_params.d: No such file or directory Makefile:28: bind_param.d: No such file or directory Makefile:28: parmetis_jostle.d: No such file or directory Makefile:28: build_graph.d: No such file or directory Makefile:28: verify_graph.d: No such file or directory Makefile:28: scatter_graph.d: No such file or directory Makefile:28: phg.d: No such file or directory Makefile:28: phg_hypergraph.d: No such file or directory Makefile:28: phg_build.d: No such file or directory Makefile:28: phg_build_calls.d: No such file or directory Makefile:28: phg_coarse.d: No such file or directory Makefile:28: phg_distrib.d: No such file or directory Makefile:28: phg_gather.d: No such file or directory Makefile:28: phg_match.d: No such file or directory Makefile:28: phg_plot.d: No such file or directory Makefile:28: phg_refinement.d: No such file or directory Makefile:28: phg_serialpartition.d: No such file or directory Makefile:28: phg_Vcycle.d: No such file or directory Makefile:28: phg_rdivide.d: No such file or directory Makefile:28: phg_util.d: No such file or directory Makefile:28: phg_scale.d: No such file or directory Makefile:28: phg_comm.d: No such file or directory Makefile:28: phg_order.d: No such file or directory Makefile:28: phg_parkway.d: No such file or directory Makefile:28: phg_patoh.d: No such file or directory Makefile:28: timer_params.d: No such file or directory Makefile:28: divide_machine.d: No such file or directory Makefile:28: get_processor_name.d: No such file or directory Makefile:28: build_machine_desc.d: No such file or directory Makefile:28: reftree_build.d: No such file or directory Makefile:28: reftree_part.d: No such file or directory Makefile:28: reftree_hash.d: No such file or directory Makefile:28: reftree_coarse_path.d: No such file or directory ../zz/zz_coord.c:44: warning: initialization discards qualifiers from pointer target type ../zz/zz_coord.c:44: warning: initialization discards qualifiers from pointer target type ../zz/zz_coord.c:45: warning: initialization discards qualifiers from pointer target type ../zz/zz_coord.c:45: warning: initialization discards qualifiers from pointer target type ../zz/zz_coord.c:46: warning: initialization discards qualifiers from pointer target type ../zz/zz_coord.c:46: warning: initialization discards qualifiers from pointer target type ../zz/zz_coord.c: In function 'Zoltan_Get_Coordinates': ../zz/zz_coord.c:76: warning: initialization discards qualifiers from pointer target type ../zz/zz_coord.c:125: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_coord.c:177: warning: passing argument 2 of 'Zoltan_Bind_Param' discards qualifiers from pointer target type ../params/params_const.h:81: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_coord.c:179: warning: passing argument 2 of 'Zoltan_Bind_Param' discards qualifiers from pointer target type ../params/params_const.h:81: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_coord.c:180: warning: passing argument 2 of 'Zoltan_Bind_Param' discards qualifiers from pointer target type ../params/params_const.h:81: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_coord.c:395: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' 
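The long run of "discards qualifiers" messages above is all the same const-correctness complaint: a 'const char *' (typically a file- or function-name string) is handed to Zoltan's memory wrappers, whose prototypes in zoltan_mem.h declare that parameter as plain 'char *'. These are warnings only; the build does not fail on them. A minimal sketch that reproduces both diagnostic forms with gcc -Wall (hypothetical names, not Zoltan's actual code):

    #include <stdlib.h>

    /* Hypothetical stand-in for a Zoltan-style free wrapper: note the
     * plain 'char *' file-name parameter, as in zoltan_mem.h. */
    static void my_free(void **ptr, char *file, int line)
    {
      (void)file; (void)line;
      if (ptr != NULL && *ptr != NULL) {
        free(*ptr);
        *ptr = NULL;
      }
    }

    int main(void)
    {
      const char *yo = "caller_name";  /* const-qualified name string */
      char *alias = yo;                /* warning: initialization discards
                                          qualifiers from pointer target type */
      void *buf = malloc(16);

      my_free(&buf, yo, __LINE__);     /* warning: passing argument 2 of 'my_free'
                                          discards qualifiers from pointer target
                                          type; note: expected 'char *' but
                                          argument is of type 'const char *' */
      (void)alias;
      return 0;
    }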
../zz/zz_obj_list.c: In function 'Zoltan_Get_Obj_List': ../zz/zz_obj_list.c:44: warning: initialization discards qualifiers from pointer target type ../zz/zz_obj_list.c:83: warning: passing argument 2 of 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type ../Utilities/shared/zoltan_id.h:63: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_obj_list.c:87: warning: passing argument 2 of 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type ../Utilities/shared/zoltan_id.h:63: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_obj_list.c:92: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_obj_list.c:142: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_obj_list.c:180: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_obj_list.c:181: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_obj_list.c:182: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_struct.c: In function 'Zoltan_Create': ../zz/zz_struct.c:57: warning: initialization discards qualifiers from pointer target type ../zz/zz_struct.c:64: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_struct.c: In function 'Zoltan_Destroy': ../zz/zz_struct.c:176: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_set_fn.c: In function 'Zoltan_Set_Fn': ../zz/zz_set_fn.c:55: warning: initialization discards qualifiers from pointer target type ../zz/zz_util.c: In function 'Zoltan_Clean_String': ../zz/zz_util.c:56: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_util.c: In function 'Zoltan_Strdup': ../zz/zz_util.c:83: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' In file included from ../zz/zz_gen_files.c:21: ../parmetis/parmetis_jostle.h:35: error: expected declaration specifiers or '...' before 'idxtype' ../parmetis/parmetis_jostle.h:35: error: expected declaration specifiers or '...' before 'idxtype' ../parmetis/parmetis_jostle.h:35: error: expected declaration specifiers or '...' before 'idxtype' ../parmetis/parmetis_jostle.h:35: error: expected declaration specifiers or '...' 
before 'idxtype' ../parmetis/parmetis_jostle.h:35: error: conflicting types for 'METIS_NodeND' /opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/include/metis.h:224: note: previous declaration of 'METIS_NodeND' was here ../parmetis/parmetis_jostle.h:134: error: expected declaration specifiers or '...' before 'idxtype' ../parmetis/parmetis_jostle.h:134: error: expected declaration specifiers or '...' before 'idxtype' ../parmetis/parmetis_jostle.h:135: error: expected declaration specifiers or '...' before 'idxtype' ../parmetis/parmetis_jostle.h:135: error: expected declaration specifiers or '...' before 'idxtype' ../parmetis/parmetis_jostle.h:135: error: expected declaration specifiers or '...' before 'idxtype' ../parmetis/parmetis_jostle.h:138: error: expected ')' before '*' token ../parmetis/parmetis_jostle.h:144: error: expected declaration specifiers or '...' before 'idxtype' ../parmetis/parmetis_jostle.h:144: error: expected declaration specifiers or '...' before 'idxtype' ../parmetis/parmetis_jostle.h:144: error: expected declaration specifiers or '...' before 'idxtype' ../zz/zz_gen_files.c: In function 'Zoltan_Generate_Files': ../zz/zz_gen_files.c:93: warning: initialization discards qualifiers from pointer target type ../zz/zz_gen_files.c:107: warning: assignment discards qualifiers from pointer target type ../zz/zz_gen_files.c:123: warning: passing argument 9 of 'Zoltan_Build_Graph' from incompatible pointer type ../parmetis/parmetis_jostle.h:141: note: expected 'float **' but argument is of type 'int **' ../zz/zz_gen_files.c:123: error: too many arguments to function 'Zoltan_Build_Graph' ../zz/zz_gen_files.c:488: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:489: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:490: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:491: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:492: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:493: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:494: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:495: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:496: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:497: warning: passing argument 2 of 
'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:500: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:501: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:502: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:503: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:504: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c: In function 'turn_off_reduce_dimensions': ../zz/zz_gen_files.c:515: warning: initialization discards qualifiers from pointer target type ../zz/zz_gen_files.c:515: warning: initialization discards qualifiers from pointer target type ../zz/zz_gen_files.c:518: warning: passing argument 2 of 'Zoltan_Bind_Param' discards qualifiers from pointer target type ../params/params_const.h:81: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c: In function 'Zoltan_HG_Get_Pins': ../zz/zz_gen_files.c:533: warning: initialization discards qualifiers from pointer target type ../zz/zz_gen_files.c:553: warning: passing argument 2 of 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type ../Utilities/shared/zoltan_id.h:63: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:554: warning: passing argument 2 of 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type ../Utilities/shared/zoltan_id.h:63: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:556: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:559: warning: passing argument 1 of 'Zoltan_Multifree' discards qualifiers from pointer target type ../include/zoltan_mem.h:69: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:568: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:583: warning: passing argument 1 of 'Zoltan_Multifree' discards qualifiers from pointer target type ../include/zoltan_mem.h:69: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:602: warning: passing argument 1 of 'Zoltan_Multifree' discards qualifiers from pointer target type ../include/zoltan_mem.h:69: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c: In function 'fan_in_edge_global_ids': ../zz/zz_gen_files.c:661: warning: passing argument 3 of 'Zoltan_Calloc' discards qualifiers from pointer target type ../include/zoltan_mem.h:62: note: expected 'char *' but argument is of type 
'const char *' ../zz/zz_gen_files.c:664: warning: passing argument 2 of 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type ../Utilities/shared/zoltan_id.h:63: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:700: warning: passing argument 3 of 'Zoltan_Realloc' discards qualifiers from pointer target type ../include/zoltan_mem.h:64: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:725: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:730: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:731: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c:732: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c: In function 'augment_search_structure': ../zz/zz_gen_files.c:767: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *' ../zz/zz_gen_files.c: In function 'merge_gids': ../zz/zz_gen_files.c:796: warning: passing argument 3 of 'Zoltan_Realloc' discards qualifiers from pointer target type ../include/zoltan_mem.h:64: note: expected 'char *' but argument is of type 'const char *'
make[1]: *** [zz_gen_files.o] Error 1
make: *** [zoltan] Error 2
*******************************************************************************

From knepley at gmail.com  Tue Sep 18 17:22:12 2012
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 18 Sep 2012 17:22:12 -0500
Subject: [petsc-users] compilation problem, zoltan & parmetis
In-Reply-To: <33D9B1A7-55B4-47F6-86D7-2291C1CEA41F@glasgow.ac.uk>
References: <33D9B1A7-55B4-47F6-86D7-2291C1CEA41F@glasgow.ac.uk>
Message-ID:

On Tue, Sep 18, 2012 at 5:19 PM, Lukasz Kaczmarczyk <
Lukasz.Kaczmarczyk at glasgow.ac.uk> wrote:

> Hello,
>
> I have the following problem with compilation (it is the same error on
> MacOS and Ubuntu).

This is my fault. Zoltan is no longer used. I should have removed it
before the release.

   Matt

> Compilers:
> gcc version 4.2.1 (Based on Apple Inc.
build 5658) (LLVM build 2336.1.00) > gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) > > ./configure --with-fortran=0 > --with-cc=/opt/build_for_gcc-mp-4.4/local/bin/mpicc > --with-cxx=/opt/build_for_gcc-mp-4.4/local/bin/mpicxx > --download-superlu_dist=1 --download-parmetis=1 -download-umfpack=1 > -download-zoltan=1 --with-shared-libraries=0 > > =============================================================================== > Configuring PETSc to compile on your system > > =============================================================================== > =============================================================================== > > Compiling UMFPACK; this > may take several minutes > > > =============================================================================== > > TESTING: configureLibrary from > PETSc.packages.parmetis(config/BuildSystem/config/package.py:433) > > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > > ------------------------------------------------------------------------------- > Did not find package METIS needed by parmetis. > Enable the package using --with-metis or --download-metis > > ******************************************************************************* > > unimackiss:petsc-3.3-p3 likask$ ./configure --with-fortran=0 > --with-cc=/opt/build_for_gcc-mp-4.4/local/bin/mpicc > --with-cxx=/opt/build_for_gcc-mp-4.4/local/bin/mpicxx > --download-superlu_dist=1 --download-metis=1 --download-parmetis=1 > -download-umfpack=1 -download-zoltan=1 --with-shared-libraries=0 > > =============================================================================== > Configuring PETSc to compile on your system > > =============================================================================== > =============================================================================== > > Configuring METIS; this > may take several minutes > > > =============================================================================== > > > =============================================================================== > > Compiling METIS; this > may take several minutes > > > =============================================================================== > > > =============================================================================== > > Configuring ParMETIS; > this may take several minutes > > > =============================================================================== > > > =============================================================================== > > Compiling ParMETIS; this > may take several minutes > > > =============================================================================== > > > =============================================================================== > > Compiling superlu_dist; > this may take several minutes > > > =============================================================================== > > > ************************************************************************************************** > > Please register to use Zoltan at > http://www.cs.sandia.gov/Zoltan/Zoltan.html > > > ************************************************************************************************** > > > =============================================================================== > > Compiling zoltan; this > may take several minutes > > > =============================================================================== > > > > > > 
******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > > ------------------------------------------------------------------------------- > Error running make on ZOLTAN: Could not execute "cd > /opt/build_for_gcc-mp-4.4/petsc-3.3-p3/externalpackages/Zoltan && make > clean && make ZOLTAN_ARCH="darwin10.2.0-c-debug" > CC="/opt/build_for_gcc-mp-4.4/local/bin/mpicc" CFLAGS=" -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline > -O0 " AR="/usr/bin/ar cr" RANLIB="/usr/bin/ranlib -c" > X_LIBS="['-L/opt/local/lib', '-lX11']" > MPI_INCPATH="-I/opt/build_for_gcc-mp-4.4/local/include" > PARMETIS_INCPATH="-I/opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/include" > PARMETIS_LIBPATH="-L/opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/lib > -L/opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/lib > -lparmetis" zoltan": > driver > ch > zz > all > lb > order > par > rcb > coloring > oct > phg > util > hsfc > parmetis > params > timer > ha > reftree > include > Memory > Communication > DDirectory > Timer > shared > Obj_generic > exit 0 > rm -f *.o libexzoltan.a > rm -f *.o zoltanSimple zoltanExample1 subDirs phgExample > rm -f *.o ex1 > rm -f *.o zCPPExample1 zCPPExample2 > Obj_generic > fort > fdriver > fdriver_old > exit 0 > make libzoltan_mem.a > (Re)Building dependency for ../Memory/mem.c... > Compiling ../Memory/mem.c... > creating library libzoltan_mem.a > make libzoltan_comm.a > (Re)Building dependency for ../Communication/comm_sort_ints.c... > (Re)Building dependency for ../Communication/comm_resize.c... > (Re)Building dependency for ../Communication/comm_exchange_sizes.c... > (Re)Building dependency for ../Communication/comm_invert_map.c... > (Re)Building dependency for ../Communication/comm_invert_plan.c... > (Re)Building dependency for ../Communication/comm_info.c... > (Re)Building dependency for ../Communication/comm_destroy.c... > (Re)Building dependency for ../Communication/comm_do_reverse.c... > (Re)Building dependency for ../Communication/comm_do.c... > (Re)Building dependency for ../Communication/comm_create.c... > Compiling ../Communication/comm_create.c... > Compiling ../Communication/comm_do.c... > Compiling ../Communication/comm_do_reverse.c... > Compiling ../Communication/comm_destroy.c... > Compiling ../Communication/comm_info.c... > Compiling ../Communication/comm_invert_plan.c... > Compiling ../Communication/comm_invert_map.c... > Compiling ../Communication/comm_exchange_sizes.c... > Compiling ../Communication/comm_resize.c... > Compiling ../Communication/comm_sort_ints.c... > creating library libzoltan_comm.a > make libzoltan_dd.a > (Re)Building dependency for ../shared/zoltan_align.c... > (Re)Building dependency for ../shared/zoltan_id.c... > (Re)Building dependency for ../DDirectory/DD_Set_Neighbor_Hash_Fn3.c... > (Re)Building dependency for ../DDirectory/DD_Set_Neighbor_Hash_Fn2.c... > (Re)Building dependency for ../DDirectory/DD_Set_Neighbor_Hash_Fn1.c... > (Re)Building dependency for ../DDirectory/DD_Print.c... > (Re)Building dependency for ../DDirectory/DD_Stats.c... > (Re)Building dependency for ../DDirectory/DD_Hash2.c... > (Re)Building dependency for ../DDirectory/DD_Set_Hash_Fn.c... > (Re)Building dependency for ../DDirectory/DD_Update.c... > (Re)Building dependency for ../DDirectory/DD_Remove.c... > (Re)Building dependency for ../DDirectory/DD_Find.c... 
> Compiling ../DDirectory/DD_Create.c...
> [... compilation of the remaining DDirectory/*.c and shared/*.c files ...]
> creating library libzoltan_dd.a
> make libzoltan_timer.a
> (Re)Building dependency for ../Timer/timer.c...
> (Re)Building dependency for ../Timer/zoltan_timer.c...
> Compiling ../Timer/zoltan_timer.c...
> Compiling ../Timer/timer.c...
> creating library libzoltan_timer.a
> creating library libzoltan.a
> (Re)Building dependency for ../reftree/reftree_coarse_path.c...
> [... dependency rebuilds for the reftree, ha, timer, phg, parmetis, params, hsfc, oct, order, par, coloring, rcb, all, lb and zz source files ...]
> Compiling ../zz/zz_coord.c...
> Compiling ../zz/zz_obj_list.c...
> Compiling ../zz/zz_struct.c...
> Compiling ../zz/zz_init.c...
> Compiling ../zz/zz_set_fn.c...
> Compiling ../zz/zz_util.c...
> Compiling ../zz/zz_gen_files.c...
> Makefile:28: mem.d: No such file or directory
> [... the same "Makefile:28: <file>.d: No such file or directory" message is printed once per source file as the dependency files are regenerated ...]
> ../Memory/mem.c: In function 'Zoltan_Array_Alloc':
> ../Memory/mem.c:162: warning: initialization discards qualifiers from pointer target type
> ../Communication/comm_create.c: In function 'Zoltan_Comm_Create':
> ../Communication/comm_create.c:79: warning: passing argument 2 of 'Zoltan_Malloc' discards qualifiers from pointer target type
> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of type 'const char *'
> ../Communication/comm_create.c:222: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type
> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *'
> ../Communication/comm_sort_ints.c: In function 'Zoltan_Comm_Sort_Ints':
> ../Communication/comm_sort_ints.c:52: warning: passing argument 3 of 'Zoltan_Calloc' discards qualifiers from pointer target type
> ../../include/zoltan_mem.h:62: note: expected 'char *' but argument is of type 'const char *'
> ../Communication/comm_sort_ints.c:78: warning: passing argument 1 of 'Zoltan_Multifree' discards qualifiers from pointer target type
> ../../include/zoltan_mem.h:69: note: expected 'char *' but argument is of type 'const char *'
> ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Init':
> ../Timer/zoltan_timer.c:190: warning: passing argument 3 of 'Zoltan_Realloc' discards qualifiers from pointer target type
> ../../include/zoltan_mem.h:64: note: expected 'char *' but argument is of type 'const char *'
> ../zz/zz_coord.c: In function 'Zoltan_Get_Coordinates':
> ../zz/zz_coord.c:177: warning: passing argument 2 of 'Zoltan_Bind_Param' discards qualifiers from pointer target type
> ../params/params_const.h:81: note: expected 'char *' but argument is of type 'const char *'
> ../zz/zz_obj_list.c: In function 'Zoltan_Get_Obj_List':
> ../zz/zz_obj_list.c:83: warning: passing argument 2 of 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type
> ../Utilities/shared/zoltan_id.h:63: note: expected 'char *' but argument is of type 'const char *'
> [... the same "discards qualifiers from pointer target type" warnings are repeated for essentially every Zoltan_Malloc/Zoltan_Free call in the Communication, DDirectory, shared, Timer and zz sources ...]
> In file included from ../zz/zz_gen_files.c:21:
../zz/zz_gen_files.c:21: > ../parmetis/parmetis_jostle.h:35: error: expected declaration specifiers > or '...' before 'idxtype' > ../parmetis/parmetis_jostle.h:35: error: expected declaration specifiers > or '...' before 'idxtype' > ../parmetis/parmetis_jostle.h:35: error: expected declaration specifiers > or '...' before 'idxtype' > ../parmetis/parmetis_jostle.h:35: error: expected declaration specifiers > or '...' before 'idxtype' > ../parmetis/parmetis_jostle.h:35: error: conflicting types for > 'METIS_NodeND' > /opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/include/metis.h:224: > note: previous declaration of 'METIS_NodeND' was here > ../parmetis/parmetis_jostle.h:134: error: expected declaration specifiers > or '...' before 'idxtype' > ../parmetis/parmetis_jostle.h:134: error: expected declaration specifiers > or '...' before 'idxtype' > ../parmetis/parmetis_jostle.h:135: error: expected declaration specifiers > or '...' before 'idxtype' > ../parmetis/parmetis_jostle.h:135: error: expected declaration specifiers > or '...' before 'idxtype' > ../parmetis/parmetis_jostle.h:135: error: expected declaration specifiers > or '...' before 'idxtype' > ../parmetis/parmetis_jostle.h:138: error: expected ')' before '*' token > ../parmetis/parmetis_jostle.h:144: error: expected declaration specifiers > or '...' before 'idxtype' > ../parmetis/parmetis_jostle.h:144: error: expected declaration specifiers > or '...' before 'idxtype' > ../parmetis/parmetis_jostle.h:144: error: expected declaration specifiers > or '...' before 'idxtype' > ../zz/zz_gen_files.c: In function 'Zoltan_Generate_Files': > ../zz/zz_gen_files.c:93: warning: initialization discards qualifiers from > pointer target type > ../zz/zz_gen_files.c:107: warning: assignment discards qualifiers from > pointer target type > ../zz/zz_gen_files.c:123: warning: passing argument 9 of > 'Zoltan_Build_Graph' from incompatible pointer type > ../parmetis/parmetis_jostle.h:141: note: expected 'float **' but argument > is of type 'int **' > ../zz/zz_gen_files.c:123: error: too many arguments to function > 'Zoltan_Build_Graph' > ../zz/zz_gen_files.c:488: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:489: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:490: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:491: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:492: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:493: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:494: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > 
../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:495: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:496: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:497: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:500: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:501: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:502: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:503: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:504: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c: In function 'turn_off_reduce_dimensions': > ../zz/zz_gen_files.c:515: warning: initialization discards qualifiers from > pointer target type > ../zz/zz_gen_files.c:515: warning: initialization discards qualifiers from > pointer target type > ../zz/zz_gen_files.c:518: warning: passing argument 2 of > 'Zoltan_Bind_Param' discards qualifiers from pointer target type > ../params/params_const.h:81: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c: In function 'Zoltan_HG_Get_Pins': > ../zz/zz_gen_files.c:533: warning: initialization discards qualifiers from > pointer target type > ../zz/zz_gen_files.c:553: warning: passing argument 2 of > 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type > ../Utilities/shared/zoltan_id.h:63: note: expected 'char *' but argument > is of type 'const char *' > ../zz/zz_gen_files.c:554: warning: passing argument 2 of > 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type > ../Utilities/shared/zoltan_id.h:63: note: expected 'char *' but argument > is of type 'const char *' > ../zz/zz_gen_files.c:556: warning: passing argument 2 of 'Zoltan_Malloc' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:559: warning: passing argument 1 of > 'Zoltan_Multifree' discards qualifiers from pointer target type > ../include/zoltan_mem.h:69: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:568: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 
'const char *' > ../zz/zz_gen_files.c:583: warning: passing argument 1 of > 'Zoltan_Multifree' discards qualifiers from pointer target type > ../include/zoltan_mem.h:69: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:602: warning: passing argument 1 of > 'Zoltan_Multifree' discards qualifiers from pointer target type > ../include/zoltan_mem.h:69: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c: In function 'fan_in_edge_global_ids': > ../zz/zz_gen_files.c:661: warning: passing argument 3 of 'Zoltan_Calloc' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:62: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:664: warning: passing argument 2 of > 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type > ../Utilities/shared/zoltan_id.h:63: note: expected 'char *' but argument > is of type 'const char *' > ../zz/zz_gen_files.c:700: warning: passing argument 3 of 'Zoltan_Realloc' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:64: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:725: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:730: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:731: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c:732: warning: passing argument 2 of 'Zoltan_Free' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c: In function 'augment_search_structure': > ../zz/zz_gen_files.c:767: warning: passing argument 2 of 'Zoltan_Malloc' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of > type 'const char *' > ../zz/zz_gen_files.c: In function 'merge_gids': > ../zz/zz_gen_files.c:796: warning: passing argument 3 of 'Zoltan_Realloc' > discards qualifiers from pointer target type > ../include/zoltan_mem.h:64: note: expected 'char *' but argument is of > type 'const char *' > make[1]: *** [zz_gen_files.o] Error 1 > make: *** [zoltan] Error 2 > > ******************************************************************************* > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrayver at gfz-potsdam.de Wed Sep 19 02:52:51 2012 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Wed, 19 Sep 2012 09:52:51 +0200 Subject: [petsc-users] MatMatSolve for MUMPS disappeared? In-Reply-To: References: <5058A19F.3020503@gfz-potsdam.de> Message-ID: <505979D3.3080500@gfz-potsdam.de> On 18.09.2012 20:14, Hong Zhang wrote: > MatMatSolve_MUMPS() is never been supported. > It seems mumps support multiple rhs now. > If you need it, we can implement MatSolve_MUMPS(). Hi Hong, Thanks for reply. 
Is MatSolve in any way more efficient than KSPSolve? As far as I understand for both cases one ends up with a loop over RHS, but KSPSolve allows to use iterative solvers (I don't need them at the moment though). I would then use latter option and do this loop myself. > > There was one issue with it for sequential mode that has been > fixed here: > http://petsc.cs.iit.edu/petsc/releases/petsc-3.3/rev/8badc49a596e > > > This change prevents calling of MatMatSolve_Basic(). > How many rhs vectors (or number of columns in your rhs matrix)? > MUMPS only supports centralized rhs. Scattering many rhs vectors to > a sequential dense matrix is non-scalable. That is right. The request on that has recently appeared in the mumps mailing list and they replied there are no plans to develop it. I have usually < 10^3 and this is not a bottleneck, so I'm fine with a "inefficient" loop over RHS. But, for those who are chasing performance and have many sprase RHS (e.g., this is the case when one needs to solve many adjoint problems with delta function as a RHS) I can say that using MUMPS' multiple RHS mode is a way more efficient than solving them in a loop. This is probably related to some internals where MUMPS is able to take advantage of getting many RHS at once. -- Regards, Alexander -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrayver at gfz-potsdam.de Wed Sep 19 03:07:04 2012 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Wed, 19 Sep 2012 10:07:04 +0200 Subject: [petsc-users] Problem with -pc_type gamg In-Reply-To: References: <418C5B02-3BE5-4AE3-B794-41DD7DBBB440@columbia.edu> <16E864F0-6EB6-4187-8204-C42ACC83B524@gmail.com> <55A171C2-5721-4788-BFF2-3582D162A517@columbia.edu> <7F28E5D0-7DCC-46A2-A96C-DCB84B011249@gmail.com> Message-ID: <50597D28.1090402@gfz-potsdam.de> Randall, I can add a few more practical notes on the EM problem. GAMG doesn't work efficiently on curl-curl problems for the reasons mentioned in Jed's message. One would need to consider the null space explicitly as in [1] or [2]. Furthermore, even mild grid stretching slows down convergence, thus semi-coarsening is another essential thing to consider. Here you can find some tests: http://proceedings.fyper.com/eccomascfd2006/documents/560.pdf 1. Hiptmair R. 1998. Multigrid method for Maxwell's equations. SIAM Journal on Numerical Analysis 36, 204-225. 2. Arnold D.N., Falk R.S. and Winther R. 2000. Multigrid in H(div) and H(curl). Numerische Mathematik 85, 197-217. On 15.09.2012 00:18, Jed Brown wrote: > On Fri, Sep 14, 2012 at 5:11 PM, Randall Mackie > wrote: > > If you would be interested, I could dump the matrix and send it to > you to see if you can figure out a fix. > I have no idea if GAMG would even be a good preconditioner (this > is a ill-conditioned EM > > > This is very important information. What specific EM system are you > solving? Is the shift, positive, negative or complex? What > discretization. What scale do you need to solve and how > performance-sensitive is the application. > > These problems can be huge rabbit holes for multilevel methods, > depending on the parameter range and necessary scale. Efficient > solvers will require extra work since black-box methods cannot cheaply > determine things like the large curl-curl null space. > > PETSc folks: We should make an example using auxiliary space > preconditioning so that we can have an FAQ on this. > > problem), but > I have reasons to believe that MG in general, if done right, would > work.
I was hoping to test this > with the GAMG preconditioner, without having to do too much work > on interpolation operators, etc. > > -- Regards, Alexander -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Sep 19 06:17:27 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 19 Sep 2012 06:17:27 -0500 Subject: [petsc-users] MatMatSolve for MUMPS disappeared? In-Reply-To: <505979D3.3080500@gfz-potsdam.de> References: <5058A19F.3020503@gfz-potsdam.de> <505979D3.3080500@gfz-potsdam.de> Message-ID: On Wed, Sep 19, 2012 at 2:52 AM, Alexander Grayver wrote: > Is MatSolve in any way more efficient than KSPSolve? > As far as I understand for both cases one ends up with a loop over RHS, > but KSPSolve allows to use iterative solvers (I don't need them at the > moment though). > I would then use latter option and do this loop myself. > The loop over KSPSolve configured with -ksp_type preonly -pc_type lu will do the same thing as MatMatSolve_Basic. If you think you might want to use iterative methods, this is definitely the way to go. Adding support for block Krylov methods (that inherently handle multiple right hand sides) is something we've been discussing for a while and will eventually happen, but I can't promise a time. > That is right. The request on that has recently appeared in the mumps > mailing list and they replied there are no plans to develop it. > I have usually < 10^3 and this is not a bottleneck, so I'm fine with a > "inefficient" loop over RHS. > How large are the vectors? It's easy for a small number of right hand sides to overflow memory on rank 0. > > But, for those who are chasing performance and have many sprase RHS (e.g., > this is the case when one needs to solve many adjoint problems with delta > function as a RHS) I can say that using MUMPS' multiple RHS mode is a way > more efficient than solving them in a loop. This is probably related to > some internals where MUMPS is able to take advantage of getting many RHS at > once. > There are significant efficiencies to handling many RHS at once, but even if the right hand sides are sparse, the solutions are dense. The savings from doing special things for the sparse RHS are thus only incremental. -------------- next part -------------- An HTML attachment was scrubbed... URL: From erocha.ssa at gmail.com Wed Sep 19 06:25:02 2012 From: erocha.ssa at gmail.com (Eduardo) Date: Wed, 19 Sep 2012 08:25:02 -0300 Subject: [petsc-users] Cannot convert error In-Reply-To: References: Message-ID: Is there a macro I can get the Petsc release? I'd like to link my code with libraries that are using a old version of Petsc. However, in the future these libraries may change to a newer version. So I want to write my code with ifdefs so that I can change from one version to the other easily. Thanks in advance, Eduardo On Mon, Sep 17, 2012 at 4:49 PM, Jed Brown wrote: > On Mon, Sep 17, 2012 at 2:45 PM, Eduardo wrote: >> >> Does anyone know the reason for the following errors: >> >> error: cannot convert ?_p_KSP**? to ?KSP {aka _p_KSP*}? for argument >> ?1? to ?PetscErrorCode KSPDestroy(KSP)? > > > Sounds like you are updating some old code. The destroy methods were changed > to take a reference a couple releases ago. Use KSPDestroy(&ksp) and > VecScatterDestroy(&scatter). > >> >> error: cannot convert ?_p_VecScatter**? to ?VecScatter {aka >> _p_VecScatter*}? for argument ?1? to ?PetscErrorCode >> VecScatterDestroy(VecScatter)? 
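For completeness, a minimal sketch of the call-site change described above; the variable names (ksp, scatter) are hypothetical, and CHKERRQ assumes the calls sit inside a function returning PetscErrorCode:

PetscErrorCode ierr;
/* older releases: KSPDestroy(ksp); VecScatterDestroy(scatter);           */
/* newer releases (around petsc-3.2 onward): pass the address of the      */
/* object; the handle is zeroed on return                                 */
ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
ierr = VecScatterDestroy(&scatter);CHKERRQ(ierr);

The same pattern applies to the other XXXDestroy() routines, which is why the version guards discussed in the next message are handy when one source tree must build against both old and new PETSc.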
>> >> Thanks in advance, >> Eduardo > > From agrayver at gfz-potsdam.de Wed Sep 19 06:43:02 2012 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Wed, 19 Sep 2012 13:43:02 +0200 Subject: [petsc-users] MatMatSolve for MUMPS disappeared? In-Reply-To: References: <5058A19F.3020503@gfz-potsdam.de> <505979D3.3080500@gfz-potsdam.de> Message-ID: <5059AFC6.7070103@gfz-potsdam.de> On 19.09.2012 13:17, Jed Brown wrote: > On Wed, Sep 19, 2012 at 2:52 AM, Alexander Grayver > > wrote: > > Is MatSolve in any way more efficient than KSPSolve? > As far as I understand for both cases one ends up with a loop over > RHS, > but KSPSolve allows to use iterative solvers (I don't need them at > the moment though). > I would then use latter option and do this loop myself. > > > The loop over KSPSolve configured with -ksp_type preonly -pc_type lu > will do the same thing as MatMatSolve_Basic. If you think you might > want to use iterative methods, this is definitely the way to go. > Adding support for block Krylov methods (that inherently handle > multiple right hand sides) is something we've been discussing for a > while and will eventually happen, but I can't promise a time. Jed, I will stick to that option then and implement that loop myself. Thanks. > That is right. The request on that has recently appeared in the > mumps mailing list and they replied there are no plans to develop it. > I have usually < 10^3 and this is not a bottleneck, so I'm fine > with a "inefficient" loop over RHS. > > > How large are the vectors? It's easy for a small number of right hand > sides to overflow memory on rank 0. On the order of 10^6 so far. I guess memory has never been a problem because internally PETSc used MatMatSolve_Basic for MatMatSolve with MUMPS. Thus you never gather more than one rhs. > > > But, for those who are chasing performance and have many sprase > RHS (e.g., this is the case when one needs to solve many adjoint > problems with delta function as a RHS) I can say that using MUMPS' > multiple RHS mode is a way more efficient than solving them in a > loop. This is probably related to some internals where MUMPS is > able to take advantage of getting many RHS at once. > > > There are significant efficiencies to handling many RHS at once, but > even if the right hand sides are sparse, the solutions are dense. The > savings from doing special things for the sparse RHS are thus only > incremental. -- Regards, Alexander -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Sep 19 06:49:36 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 19 Sep 2012 06:49:36 -0500 Subject: [petsc-users] Cannot convert error In-Reply-To: References: Message-ID: On Wed, Sep 19, 2012 at 6:25 AM, Eduardo wrote: > Is there a macro I can get the Petsc release? I'd like to link my code > with libraries that are using a old version of Petsc. However, in the > future these libraries may change to a newer version. So I want to > write my code with ifdefs so that I can change from one version to the > other easily. > See include/petscversion.h Starting in petsc-3.3, there are convenience macros PETSC_VERSION_LT() and PETSC_VERSION_LE(). You can add to your headers: #if !defined(PETSC_VERSION_LT) # define PETSC_VERSION_LT (... copy from petscversion.h) #endif then in your source, #if PETSC_VERSION_LE(3,3,0) /* last version doing it the old way */ the old way #else the new way #endif -------------- next part -------------- An HTML attachment was scrubbed... 
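Before the MUMPS thread continues below, here is a rough sketch of the column-by-column loop being discussed, assuming the KSP has already been configured with a direct factorization (e.g. -ksp_type preonly -pc_type lu, optionally -pc_factor_mat_solver_package mumps) and that B is a dense matrix whose columns are the right-hand sides; the function name is made up for illustration:

#include <petscksp.h>

PetscErrorCode SolveColumns(KSP ksp, Mat B)
{
  PetscErrorCode ierr;
  PetscInt       j, N;
  Vec            b, x;

  PetscFunctionBegin;
  ierr = MatGetSize(B, PETSC_NULL, &N);CHKERRQ(ierr); /* N = number of right-hand sides */
  ierr = MatGetVecs(B, PETSC_NULL, &b);CHKERRQ(ierr); /* b has the row layout of B */
  ierr = VecDuplicate(b, &x);CHKERRQ(ierr);
  for (j = 0; j < N; j++) {
    ierr = MatGetColumnVector(B, b, j);CHKERRQ(ierr); /* b <- column j of B */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);         /* reuses the single factorization */
    /* use x here, e.g. copy it into column j of a solution matrix */
  }
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The preconditioner is factored only once, during the first KSPSolve(), so each subsequent column costs a pair of triangular solves; that is essentially what MatMatSolve_Basic() does internally.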
URL: From jedbrown at mcs.anl.gov Wed Sep 19 07:00:13 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 19 Sep 2012 07:00:13 -0500 Subject: [petsc-users] MatMatSolve for MUMPS disappeared? In-Reply-To: <5059AFC6.7070103@gfz-potsdam.de> References: <5058A19F.3020503@gfz-potsdam.de> <505979D3.3080500@gfz-potsdam.de> <5059AFC6.7070103@gfz-potsdam.de> Message-ID: On Wed, Sep 19, 2012 at 6:43 AM, Alexander Grayver wrote: > On the order of 10^6 so far. > So 10^3 vectors each of size 10^6 is already several GB which is larger than local (NUMA) memory for a cluster node (and larger than the entire memory of some nodes). The spill on rank 0 will cause lots memory allocated by rank 0 later to spill into the memory bus/NUMA region of other sockets/dies of the shared memory node (think of a four socket system, for example). This can easily slow the rest of your program down by a factor of 3 or more. > I guess memory has never been a problem because internally PETSc used > MatMatSolve_Basic for MatMatSolve with MUMPS. Thus you never gather more > than one rhs. > Yup, likely. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Sep 19 07:55:55 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 19 Sep 2012 07:55:55 -0500 Subject: [petsc-users] Cannot convert error In-Reply-To: References: Message-ID: <9E1B4D71-E1FA-4EF1-96FC-7B059D2ADAB1@mcs.anl.gov> On Sep 19, 2012, at 6:49 AM, Jed Brown wrote: > On Wed, Sep 19, 2012 at 6:25 AM, Eduardo wrote: > Is there a macro I can get the Petsc release? I'd like to link my code > with libraries that are using a old version of Petsc. However, in the > future these libraries may change to a newer version. So I want to > write my code with ifdefs so that I can change from one version to the > other easily. > > See include/petscversion.h > > Starting in petsc-3.3, there are convenience macros PETSC_VERSION_LT() and PETSC_VERSION_LE(). You can add to your headers: > > #if !defined(PETSC_VERSION_LT) > # define PETSC_VERSION_LT (... copy from petscversion.h) > #endif > > then in your source, > > #if PETSC_VERSION_LE(3,3,0) /* last version doing it the old way */ > the old way > #else > the new way > #endif Since PETSc is user installable we don't recommend taking this route. We recommend just always developing and user your code with the latest PETSc release. Barry From erocha.ssa at gmail.com Wed Sep 19 08:12:34 2012 From: erocha.ssa at gmail.com (Eduardo) Date: Wed, 19 Sep 2012 10:12:34 -0300 Subject: [petsc-users] Cannot convert error In-Reply-To: <9E1B4D71-E1FA-4EF1-96FC-7B059D2ADAB1@mcs.anl.gov> References: <9E1B4D71-E1FA-4EF1-96FC-7B059D2ADAB1@mcs.anl.gov> Message-ID: Well, that is true if you develop the whole code yourself. But what if you have to use some third-party code that uses an older version of Petsc?? The problem is that my own code uses Petsc and some library that also uses Petsc (but an older version). You cannot rule out this possibility. Eduardo On Wed, Sep 19, 2012 at 9:55 AM, Barry Smith wrote: > > On Sep 19, 2012, at 6:49 AM, Jed Brown wrote: > >> On Wed, Sep 19, 2012 at 6:25 AM, Eduardo wrote: >> Is there a macro I can get the Petsc release? I'd like to link my code >> with libraries that are using a old version of Petsc. However, in the >> future these libraries may change to a newer version. So I want to >> write my code with ifdefs so that I can change from one version to the >> other easily. 
>> >> See include/petscversion.h >> >> Starting in petsc-3.3, there are convenience macros PETSC_VERSION_LT() and PETSC_VERSION_LE(). You can add to your headers: >> >> #if !defined(PETSC_VERSION_LT) >> # define PETSC_VERSION_LT (... copy from petscversion.h) >> #endif >> >> then in your source, >> >> #if PETSC_VERSION_LE(3,3,0) /* last version doing it the old way */ >> the old way >> #else >> the new way >> #endif > > Since PETSc is user installable we don't recommend taking this route. We recommend just always developing and user your code with the latest PETSc release. > > Barry > From bsmith at mcs.anl.gov Wed Sep 19 09:49:08 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 19 Sep 2012 09:49:08 -0500 Subject: [petsc-users] Cannot convert error In-Reply-To: References: <9E1B4D71-E1FA-4EF1-96FC-7B059D2ADAB1@mcs.anl.gov> Message-ID: On Sep 19, 2012, at 8:12 AM, Eduardo wrote: > Well, that is true if you develop the whole code yourself. But what if > you have to use some third-party code that uses an older version of > Petsc?? The problem is that my own code uses Petsc and some library > that also uses Petsc (but an older version). You cannot rule out this > possibility. Yes, that is a problem. Hence we do provide the macros to make it as easy as possible to support multiple versions. But we only recommend doing it when you have to. Barry > > Eduardo > > On Wed, Sep 19, 2012 at 9:55 AM, Barry Smith wrote: >> >> On Sep 19, 2012, at 6:49 AM, Jed Brown wrote: >> >>> On Wed, Sep 19, 2012 at 6:25 AM, Eduardo wrote: >>> Is there a macro I can get the Petsc release? I'd like to link my code >>> with libraries that are using a old version of Petsc. However, in the >>> future these libraries may change to a newer version. So I want to >>> write my code with ifdefs so that I can change from one version to the >>> other easily. >>> >>> See include/petscversion.h >>> >>> Starting in petsc-3.3, there are convenience macros PETSC_VERSION_LT() and PETSC_VERSION_LE(). You can add to your headers: >>> >>> #if !defined(PETSC_VERSION_LT) >>> # define PETSC_VERSION_LT (... copy from petscversion.h) >>> #endif >>> >>> then in your source, >>> >>> #if PETSC_VERSION_LE(3,3,0) /* last version doing it the old way */ >>> the old way >>> #else >>> the new way >>> #endif >> >> Since PETSc is user installable we don't recommend taking this route. We recommend just always developing and user your code with the latest PETSc release. >> >> Barry >> From C.Klaij at marin.nl Wed Sep 19 10:01:36 2012 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Wed, 19 Sep 2012 15:01:36 +0000 Subject: [petsc-users] difference between left and right pc Message-ID: I'm solving a system with GMRES using the same preconditioner either on the left or on the right. 
For left preconditioning I get two orders of reduction for the preconditioned residual in 20 its: 0 KSP preconditioned resid norm 2.980694554053e+01 true resid norm 7.058057578378e-05 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 1.121717063399e+01 true resid norm 2.239445995669e+00 ||r(i)||/||b|| 3.172892783603e+04 2 KSP preconditioned resid norm 8.419094257245e+00 true resid norm 4.482707776056e+00 ||r(i)||/||b|| 6.351191848857e+04 3 KSP preconditioned resid norm 6.113655853636e+00 true resid norm 1.759217056899e+00 ||r(i)||/||b|| 2.492494623858e+04 4 KSP preconditioned resid norm 4.949889403847e+00 true resid norm 3.572848898052e-01 ||r(i)||/||b|| 5.062085224406e+03 5 KSP preconditioned resid norm 4.187220822242e+00 true resid norm 7.071876172117e-01 ||r(i)||/||b|| 1.001957846558e+04 6 KSP preconditioned resid norm 3.598699773848e+00 true resid norm 6.751395101318e-01 ||r(i)||/||b|| 9.565514344910e+03 7 KSP preconditioned resid norm 3.024026574700e+00 true resid norm 5.529170377011e-01 ||r(i)||/||b|| 7.833841415447e+03 8 KSP preconditioned resid norm 2.609636515722e+00 true resid norm 5.689992696649e-01 ||r(i)||/||b|| 8.061697759565e+03 9 KSP preconditioned resid norm 2.254221020819e+00 true resid norm 4.949259965429e-01 ||r(i)||/||b|| 7.012212510975e+03 10 KSP preconditioned resid norm 1.873529244708e+00 true resid norm 6.109183824231e-01 ||r(i)||/||b|| 8.655616302912e+03 11 KSP preconditioned resid norm 1.505474576580e+00 true resid norm 4.363808762555e-01 ||r(i)||/||b|| 6.182733300340e+03 12 KSP preconditioned resid norm 1.273391808351e+00 true resid norm 5.799473619663e-01 ||r(i)||/||b|| 8.216812565299e+03 13 KSP preconditioned resid norm 1.092596045026e+00 true resid norm 5.341297417537e-01 ||r(i)||/||b|| 7.567659172829e+03 14 KSP preconditioned resid norm 9.145639963916e-01 true resid norm 4.424631524670e-01 ||r(i)||/||b|| 6.268908230820e+03 15 KSP preconditioned resid norm 7.619506249149e-01 true resid norm 4.154893466277e-01 ||r(i)||/||b|| 5.886737845558e+03 16 KSP preconditioned resid norm 6.305034569873e-01 true resid norm 4.166530590059e-01 ||r(i)||/||b|| 5.903225559994e+03 17 KSP preconditioned resid norm 5.020718919136e-01 true resid norm 3.542538361268e-01 ||r(i)||/||b|| 5.019140637390e+03 18 KSP preconditioned resid norm 4.099172843566e-01 true resid norm 2.942812083953e-01 ||r(i)||/||b|| 4.169436209997e+03 19 KSP preconditioned resid norm 3.456791256934e-01 true resid norm 2.474858759247e-01 ||r(i)||/||b|| 3.506430390747e+03 20 KSP preconditioned resid norm 2.730195605094e-01 true resid norm 2.641558094323e-01 ||r(i)||/||b|| 3.742613410260e+03 For right preconditioning I do not get any reduction: 0 KSP unpreconditioned resid norm 7.058057578378e-05 true resid norm 7.058057578378e-05 ||r(i)||/||b|| 1.000000000000e+00 1 KSP unpreconditioned resid norm 7.054747142321e-05 true resid norm 7.054747142321e-05 ||r(i)||/||b|| 9.995309706643e-01 2 KSP unpreconditioned resid norm 7.020651831374e-05 true resid norm 7.020651657757e-05 ||r(i)||/||b|| 9.947002528379e-01 3 KSP unpreconditioned resid norm 7.006225380599e-05 true resid norm 7.006225529373e-05 ||r(i)||/||b|| 9.926563295312e-01 4 KSP unpreconditioned resid norm 7.004188290578e-05 true resid norm 7.004188381810e-05 ||r(i)||/||b|| 9.923677023076e-01 5 KSP unpreconditioned resid norm 7.004130975499e-05 true resid norm 7.004131048416e-05 ||r(i)||/||b|| 9.923595791954e-01 6 KSP unpreconditioned resid norm 7.002915081650e-05 true resid norm 7.002915219093e-05 ||r(i)||/||b|| 9.921873180158e-01 7 KSP 
unpreconditioned resid norm 6.992906439247e-05 true resid norm 6.992905409646e-05 ||r(i)||/||b|| 9.907691077879e-01 8 KSP unpreconditioned resid norm 6.992498998319e-05 true resid norm 6.992497553136e-05 ||r(i)||/||b|| 9.907113218454e-01 9 KSP unpreconditioned resid norm 6.992334551935e-05 true resid norm 6.992333044667e-05 ||r(i)||/||b|| 9.906880139498e-01 10 KSP unpreconditioned resid norm 6.992269976389e-05 true resid norm 6.992268439725e-05 ||r(i)||/||b|| 9.906788605898e-01 11 KSP unpreconditioned resid norm 6.992074987133e-05 true resid norm 6.992073650172e-05 ||r(i)||/||b|| 9.906512624085e-01 12 KSP unpreconditioned resid norm 6.991044260131e-05 true resid norm 6.991042957959e-05 ||r(i)||/||b|| 9.905052318325e-01 13 KSP unpreconditioned resid norm 6.990672948921e-05 true resid norm 6.990672691791e-05 ||r(i)||/||b|| 9.904527717663e-01 14 KSP unpreconditioned resid norm 6.990672944080e-05 true resid norm 6.990672690106e-05 ||r(i)||/||b|| 9.904527715275e-01 15 KSP unpreconditioned resid norm 6.990484339200e-05 true resid norm 6.990483829521e-05 ||r(i)||/||b|| 9.904260133744e-01 16 KSP unpreconditioned resid norm 6.990392558763e-05 true resid norm 6.990391880630e-05 ||r(i)||/||b|| 9.904129858679e-01 17 KSP unpreconditioned resid norm 6.990024258014e-05 true resid norm 6.990024073952e-05 ||r(i)||/||b|| 9.903608742673e-01 18 KSP unpreconditioned resid norm 6.989684197988e-05 true resid norm 6.989683754968e-05 ||r(i)||/||b|| 9.903126571793e-01 19 KSP unpreconditioned resid norm 6.985738628710e-05 true resid norm 6.985745310045e-05 ||r(i)||/||b|| 9.897546502659e-01 20 KSP unpreconditioned resid norm 6.984955654109e-05 true resid norm 6.984951941662e-05 ||r(i)||/||b|| 9.896422442146e-01 The solution is ok for left preconditioning but stuck to zero (the initial guess) for right preconditioning. Shouldn't left and right preconditiong give similar results, what could be the reason for this behaviour? dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From knepley at gmail.com Wed Sep 19 10:09:54 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 19 Sep 2012 10:09:54 -0500 Subject: [petsc-users] difference between left and right pc In-Reply-To: References: Message-ID: On Wed, Sep 19, 2012 at 10:01 AM, Klaij, Christiaan wrote: > I'm solving a system with GMRES using the same preconditioner either on > the left or on the right. > For left preconditioning I get two orders of reduction for the > preconditioned residual in 20 its: > Notice here that your "preconditioner" is far from one. It manages to blow up the true residual by 5 orders of magnitude, from which it never recovers. The right preconditioning just avoids being so screwed up. 
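The practical difference at play here: with the default left preconditioning, GMRES builds and reports the preconditioned residual B^(-1)(b - Ax), so the first column of the log says little about the true residual, while with right preconditioning it works directly with b - Ax. A hedged sketch of how to switch sides and watch the true residual in both cases (option and routine names as in petsc-3.3; 'ksp' is the application's solver object):

  run-time:  -ksp_type gmres -ksp_pc_side right -ksp_monitor_true_residual

  in code:   ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
             ierr = KSPSetPCSide(ksp, PC_RIGHT);CHKERRQ(ierr);  /* GMRES defaults to PC_LEFT */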
Matt > 0 KSP preconditioned resid norm 2.980694554053e+01 true resid norm > 7.058057578378e-05 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 1.121717063399e+01 true resid norm > 2.239445995669e+00 ||r(i)||/||b|| 3.172892783603e+04 > 2 KSP preconditioned resid norm 8.419094257245e+00 true resid norm > 4.482707776056e+00 ||r(i)||/||b|| 6.351191848857e+04 > 3 KSP preconditioned resid norm 6.113655853636e+00 true resid norm > 1.759217056899e+00 ||r(i)||/||b|| 2.492494623858e+04 > 4 KSP preconditioned resid norm 4.949889403847e+00 true resid norm > 3.572848898052e-01 ||r(i)||/||b|| 5.062085224406e+03 > 5 KSP preconditioned resid norm 4.187220822242e+00 true resid norm > 7.071876172117e-01 ||r(i)||/||b|| 1.001957846558e+04 > 6 KSP preconditioned resid norm 3.598699773848e+00 true resid norm > 6.751395101318e-01 ||r(i)||/||b|| 9.565514344910e+03 > 7 KSP preconditioned resid norm 3.024026574700e+00 true resid norm > 5.529170377011e-01 ||r(i)||/||b|| 7.833841415447e+03 > 8 KSP preconditioned resid norm 2.609636515722e+00 true resid norm > 5.689992696649e-01 ||r(i)||/||b|| 8.061697759565e+03 > 9 KSP preconditioned resid norm 2.254221020819e+00 true resid norm > 4.949259965429e-01 ||r(i)||/||b|| 7.012212510975e+03 > 10 KSP preconditioned resid norm 1.873529244708e+00 true resid norm > 6.109183824231e-01 ||r(i)||/||b|| 8.655616302912e+03 > 11 KSP preconditioned resid norm 1.505474576580e+00 true resid norm > 4.363808762555e-01 ||r(i)||/||b|| 6.182733300340e+03 > 12 KSP preconditioned resid norm 1.273391808351e+00 true resid norm > 5.799473619663e-01 ||r(i)||/||b|| 8.216812565299e+03 > 13 KSP preconditioned resid norm 1.092596045026e+00 true resid norm > 5.341297417537e-01 ||r(i)||/||b|| 7.567659172829e+03 > 14 KSP preconditioned resid norm 9.145639963916e-01 true resid norm > 4.424631524670e-01 ||r(i)||/||b|| 6.268908230820e+03 > 15 KSP preconditioned resid norm 7.619506249149e-01 true resid norm > 4.154893466277e-01 ||r(i)||/||b|| 5.886737845558e+03 > 16 KSP preconditioned resid norm 6.305034569873e-01 true resid norm > 4.166530590059e-01 ||r(i)||/||b|| 5.903225559994e+03 > 17 KSP preconditioned resid norm 5.020718919136e-01 true resid norm > 3.542538361268e-01 ||r(i)||/||b|| 5.019140637390e+03 > 18 KSP preconditioned resid norm 4.099172843566e-01 true resid norm > 2.942812083953e-01 ||r(i)||/||b|| 4.169436209997e+03 > 19 KSP preconditioned resid norm 3.456791256934e-01 true resid norm > 2.474858759247e-01 ||r(i)||/||b|| 3.506430390747e+03 > 20 KSP preconditioned resid norm 2.730195605094e-01 true resid norm > 2.641558094323e-01 ||r(i)||/||b|| 3.742613410260e+03 > > For right preconditioning I do not get any reduction: > > 0 KSP unpreconditioned resid norm 7.058057578378e-05 true resid norm > 7.058057578378e-05 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 7.054747142321e-05 true resid norm > 7.054747142321e-05 ||r(i)||/||b|| 9.995309706643e-01 > 2 KSP unpreconditioned resid norm 7.020651831374e-05 true resid norm > 7.020651657757e-05 ||r(i)||/||b|| 9.947002528379e-01 > 3 KSP unpreconditioned resid norm 7.006225380599e-05 true resid norm > 7.006225529373e-05 ||r(i)||/||b|| 9.926563295312e-01 > 4 KSP unpreconditioned resid norm 7.004188290578e-05 true resid norm > 7.004188381810e-05 ||r(i)||/||b|| 9.923677023076e-01 > 5 KSP unpreconditioned resid norm 7.004130975499e-05 true resid norm > 7.004131048416e-05 ||r(i)||/||b|| 9.923595791954e-01 > 6 KSP unpreconditioned resid norm 7.002915081650e-05 true resid norm > 7.002915219093e-05 ||r(i)||/||b|| 
9.921873180158e-01 > 7 KSP unpreconditioned resid norm 6.992906439247e-05 true resid norm > 6.992905409646e-05 ||r(i)||/||b|| 9.907691077879e-01 > 8 KSP unpreconditioned resid norm 6.992498998319e-05 true resid norm > 6.992497553136e-05 ||r(i)||/||b|| 9.907113218454e-01 > 9 KSP unpreconditioned resid norm 6.992334551935e-05 true resid norm > 6.992333044667e-05 ||r(i)||/||b|| 9.906880139498e-01 > 10 KSP unpreconditioned resid norm 6.992269976389e-05 true resid norm > 6.992268439725e-05 ||r(i)||/||b|| 9.906788605898e-01 > 11 KSP unpreconditioned resid norm 6.992074987133e-05 true resid norm > 6.992073650172e-05 ||r(i)||/||b|| 9.906512624085e-01 > 12 KSP unpreconditioned resid norm 6.991044260131e-05 true resid norm > 6.991042957959e-05 ||r(i)||/||b|| 9.905052318325e-01 > 13 KSP unpreconditioned resid norm 6.990672948921e-05 true resid norm > 6.990672691791e-05 ||r(i)||/||b|| 9.904527717663e-01 > 14 KSP unpreconditioned resid norm 6.990672944080e-05 true resid norm > 6.990672690106e-05 ||r(i)||/||b|| 9.904527715275e-01 > 15 KSP unpreconditioned resid norm 6.990484339200e-05 true resid norm > 6.990483829521e-05 ||r(i)||/||b|| 9.904260133744e-01 > 16 KSP unpreconditioned resid norm 6.990392558763e-05 true resid norm > 6.990391880630e-05 ||r(i)||/||b|| 9.904129858679e-01 > 17 KSP unpreconditioned resid norm 6.990024258014e-05 true resid norm > 6.990024073952e-05 ||r(i)||/||b|| 9.903608742673e-01 > 18 KSP unpreconditioned resid norm 6.989684197988e-05 true resid norm > 6.989683754968e-05 ||r(i)||/||b|| 9.903126571793e-01 > 19 KSP unpreconditioned resid norm 6.985738628710e-05 true resid norm > 6.985745310045e-05 ||r(i)||/||b|| 9.897546502659e-01 > 20 KSP unpreconditioned resid norm 6.984955654109e-05 true resid norm > 6.984951941662e-05 ||r(i)||/||b|| 9.896422442146e-01 > > The solution is ok for left preconditioning but stuck to zero (the initial > guess) for right preconditioning. > Shouldn't left and right preconditiong give similar results, what could be > the reason for this behaviour? > > > dr. ir. Christiaan Klaij > CFD Researcher > Research & Development > E mailto:C.Klaij at marin.nl > T +31 317 49 33 44 > > MARIN > 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands > T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Sep 19 10:47:29 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 19 Sep 2012 10:47:29 -0500 Subject: [petsc-users] difference between left and right pc In-Reply-To: References: Message-ID: <542190C7-F9A7-4036-A471-C4BDF50F8388@mcs.anl.gov> Is the "preconditioner" singular? How about the matrix? Is the "preconditioner" a linear operator? Did you try KSPFGMRES on it? On Sep 19, 2012, at 10:01 AM, "Klaij, Christiaan" wrote: > I'm solving a system with GMRES using the same preconditioner either on the left or on the right. > For left preconditioning I get two orders of reduction for the preconditioned residual in 20 its: > > > The solution is ok for left preconditioning but stuck to zero (the initial guess) for right preconditioning. Is it really ok? The true residual norm is much much worse in the left case after 20 iterations than the right case. How are you judging the solution quality? 
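On the FGMRES question: FGMRES tolerates a preconditioner that changes from one application to the next (for example an inner Krylov solve or a nonlinear cycle), and it is right-preconditioned by construction, so its monitor reflects the true residual. A minimal way to try it, assuming the solver is configured from the options database:

  -ksp_type fgmres

or, in code, ierr = KSPSetType(ksp, KSPFGMRES);CHKERRQ(ierr); with everything else left unchanged.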
Barry > Shouldn't left and right preconditiong give similar results, what could be the reason for this behaviour? > > > dr. ir. Christiaan Klaij > CFD Researcher > Research & Development > E mailto:C.Klaij at marin.nl > T +31 317 49 33 44 > > MARIN > 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands > T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl > From ling.zou at inl.gov Wed Sep 19 13:54:17 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Wed, 19 Sep 2012 12:54:17 -0600 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: Dear Matt, Thanks again for helping me on the '-snes_type test' issue. This is a great tool which helped us make really good progress recently. I have couple of more questions about the '-snes_type test'. From my understanding, this finite difference method is based on a tiny perturbation from a base solution vector. This base solution vector, from my observation, seems to be [1, 1, 1, ....]. However, this solution vector sometimes is very far from real physics, which causes issue, for example a dependent pressure variable get negative value. I wonder, 1), if this [1, 1, 1, ...] base solution vector is always used during the real simulation, for example when using the finite difference preconditioner. Or this base solution vector will change as solution gets updated? 2), is it possible to choose a different base solution, for example, [1000, 1000, 1000, 10, 10, 10, 1.e9, 1.e9, 1.e9] as the base solution vector when using the '-snes_type test' option to test my hand-coded Jacobian. Best Regards, Ling Ling On Wed, Sep 5, 2012 at 11:35 AM, Matthew Knepley wrote: > On Wed, Sep 5, 2012 at 12:24 PM, Zou (Non-US), Ling wrote: > >> Dear All, >> >> I am trying to use the option '-snes_type test' to test my coded >> Jacobian. I tested with different snes options and it gives me >> different answers. I wonder if someone could give me a hint what is >> wrong with my settings. >> >> The command line looks like this: >> ./my-code-opt -i test.i -snes_type test -snes_test_display >> >> 1), when using 'petsc_option = -snes' in my input file, it says the >> Finite difference Jacobian is very different than the Hand-coded >> Jacobian >> > > This means your hand-coded routine is likely wrong. > > >> 2), when using 'petsc_option = -snes_fd' in my input file, it says the >> Finite difference Jacobian is idential to the Hand-coded Jacobian >> > > snes_fd replaces your hand-coded routine with our FD routine, so it > is of course the same as our FD routine. > > >> 3), when using 'petsc_option = -snes_mf_operator', it gives error >> messages like: >> "Invalid argument! Cannot test with alternative preconditioner!" >> > > This is inappropriate for testing. > > Matt > > >> Thanks in advance. >> >> Ling >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Sep 19 14:03:10 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 19 Sep 2012 14:03:10 -0500 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: On Wed, Sep 19, 2012 at 1:54 PM, Zou (Non-US), Ling wrote: > Dear Matt, > > Thanks again for helping me on the '-snes_type test' issue. 
This is a > great tool which helped us make really good progress recently. > > I have couple of more questions about the '-snes_type test'. From my > understanding, this finite difference method is based on a tiny > perturbation from a base solution vector. This base solution vector, from > my observation, seems to be [1, 1, 1, ....]. However, this solution vector > sometimes is very far from real physics, which causes issue, for example a > dependent pressure variable get negative value. > > I wonder, > 1), if this [1, 1, 1, ...] base solution vector is always used during the > real simulation, for example when using the finite difference > preconditioner. Or this base solution vector will change as solution gets > updated? > -snes_type test tries three different states: your user-defined state, then constant -1.0, then constant +1.0. It does not solve the system so you can't continue stepping, but you can just look at the result from the first test. > 2), is it possible to choose a different base solution, for example, > [1000, 1000, 1000, 10, 10, 10, 1.e9, 1.e9, 1.e9] as the base solution > vector when using the '-snes_type test' option to test my hand-coded > Jacobian. > You can also use -snes_compare_explicit which shows the matrix (only linearized around your actual state) and actually solves the system. There is also -snes_compare_explicit_draw. If you know that you preallocated correctly, -snes_compare_coloring is better. Details here. http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESComputeJacobian.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Wed Sep 19 14:12:54 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Wed, 19 Sep 2012 13:12:54 -0600 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: On Wed, Sep 19, 2012 at 1:03 PM, Jed Brown wrote: > On Wed, Sep 19, 2012 at 1:54 PM, Zou (Non-US), Ling wrote: > >> Dear Matt, >> >> Thanks again for helping me on the '-snes_type test' issue. This is a >> great tool which helped us make really good progress recently. >> >> I have couple of more questions about the '-snes_type test'. From my >> understanding, this finite difference method is based on a tiny >> perturbation from a base solution vector. This base solution vector, from >> my observation, seems to be [1, 1, 1, ....]. However, this solution vector >> sometimes is very far from real physics, which causes issue, for example a >> dependent pressure variable get negative value. >> >> I wonder, >> 1), if this [1, 1, 1, ...] base solution vector is always used during the >> real simulation, for example when using the finite difference >> preconditioner. Or this base solution vector will change as solution gets >> updated? >> > > -snes_type test tries three different states: your user-defined state, > then constant -1.0, then constant +1.0. It does not solve the system so you > can't continue stepping, but you can just look at the result from the first > test. > Ahhh....... I see, that's why I always see three comparisons there. The first one should help me better when dealing with real simulation case. > > >> 2), is it possible to choose a different base solution, for example, >> [1000, 1000, 1000, 10, 10, 10, 1.e9, 1.e9, 1.e9] as the base solution >> vector when using the '-snes_type test' option to test my hand-coded >> Jacobian. 
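Since the first of the three tests linearizes around whatever state is handed to SNESSolve(), one way to exercise the Jacobian at a physically meaningful point is simply to fill the solution vector before calling SNESSolve(); a rough sketch, where 'snes' and 'U' are the application's existing SNES and solution vector and the value is a placeholder:

  /* fill U with a representative state instead of the default */
  ierr = VecSet(U, 1.0e5);CHKERRQ(ierr);                /* or VecSetValues()/the usual initial condition */
  ierr = SNESSolve(snes, PETSC_NULL, U);CHKERRQ(ierr);  /* with -snes_type test -snes_test_display this only
                                                           runs the Jacobian comparisons, the first one at U */

The -1.0 and +1.0 tests that follow are fixed, so for a strongly state-dependent Jacobian the first comparison is the one to judge by.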
>> > > You can also use -snes_compare_explicit which shows the matrix (only > linearized around your actual state) and actually solves the system. There > is also -snes_compare_explicit_draw. If you know that you preallocated > correctly, -snes_compare_coloring is better. Details here. > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESComputeJacobian.html > This is something new to me. I'd take a look at the link. Thanks a lot, Jed. Best, Ling -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Wed Sep 19 14:56:26 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Wed, 19 Sep 2012 13:56:26 -0600 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: No Jacobian output showed when I did this, ./my-project-opt -i input.i -snes_compare_explicit Any suggestion? Ling On Wed, Sep 19, 2012 at 1:12 PM, Zou (Non-US), Ling wrote: > > > On Wed, Sep 19, 2012 at 1:03 PM, Jed Brown wrote: > >> On Wed, Sep 19, 2012 at 1:54 PM, Zou (Non-US), Ling wrote: >> >>> Dear Matt, >>> >>> Thanks again for helping me on the '-snes_type test' issue. This is a >>> great tool which helped us make really good progress recently. >>> >>> I have couple of more questions about the '-snes_type test'. From my >>> understanding, this finite difference method is based on a tiny >>> perturbation from a base solution vector. This base solution vector, from >>> my observation, seems to be [1, 1, 1, ....]. However, this solution vector >>> sometimes is very far from real physics, which causes issue, for example a >>> dependent pressure variable get negative value. >>> >>> I wonder, >>> 1), if this [1, 1, 1, ...] base solution vector is always used during >>> the real simulation, for example when using the finite difference >>> preconditioner. Or this base solution vector will change as solution gets >>> updated? >>> >> >> -snes_type test tries three different states: your user-defined state, >> then constant -1.0, then constant +1.0. It does not solve the system so you >> can't continue stepping, but you can just look at the result from the first >> test. >> > > Ahhh....... I see, that's why I always see three comparisons there. The > first one should help me better when dealing with real simulation case. > >> >> >>> 2), is it possible to choose a different base solution, for example, >>> [1000, 1000, 1000, 10, 10, 10, 1.e9, 1.e9, 1.e9] as the base solution >>> vector when using the '-snes_type test' option to test my hand-coded >>> Jacobian. >>> >> >> You can also use -snes_compare_explicit which shows the matrix (only >> linearized around your actual state) and actually solves the system. There >> is also -snes_compare_explicit_draw. If you know that you preallocated >> correctly, -snes_compare_coloring is better. Details here. >> >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESComputeJacobian.html >> > > This is something new to me. I'd take a look at the link. Thanks a lot, > Jed. > > Best, > > Ling > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Sep 19 15:31:30 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 19 Sep 2012 15:31:30 -0500 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: Are you using an older version of PETSc? 
On Wed, Sep 19, 2012 at 2:56 PM, Zou (Non-US), Ling wrote: > No Jacobian output showed when I did this, > > ./my-project-opt -i input.i -snes_compare_explicit > > Any suggestion? > > Ling > > > > On Wed, Sep 19, 2012 at 1:12 PM, Zou (Non-US), Ling wrote: > >> >> >> On Wed, Sep 19, 2012 at 1:03 PM, Jed Brown wrote: >> >>> On Wed, Sep 19, 2012 at 1:54 PM, Zou (Non-US), Ling wrote: >>> >>>> Dear Matt, >>>> >>>> Thanks again for helping me on the '-snes_type test' issue. This is a >>>> great tool which helped us make really good progress recently. >>>> >>>> I have couple of more questions about the '-snes_type test'. From my >>>> understanding, this finite difference method is based on a tiny >>>> perturbation from a base solution vector. This base solution vector, from >>>> my observation, seems to be [1, 1, 1, ....]. However, this solution vector >>>> sometimes is very far from real physics, which causes issue, for example a >>>> dependent pressure variable get negative value. >>>> >>>> I wonder, >>>> 1), if this [1, 1, 1, ...] base solution vector is always used during >>>> the real simulation, for example when using the finite difference >>>> preconditioner. Or this base solution vector will change as solution gets >>>> updated? >>>> >>> >>> -snes_type test tries three different states: your user-defined state, >>> then constant -1.0, then constant +1.0. It does not solve the system so you >>> can't continue stepping, but you can just look at the result from the first >>> test. >>> >> >> Ahhh....... I see, that's why I always see three comparisons there. The >> first one should help me better when dealing with real simulation case. >> >>> >>> >>>> 2), is it possible to choose a different base solution, for example, >>>> [1000, 1000, 1000, 10, 10, 10, 1.e9, 1.e9, 1.e9] as the base solution >>>> vector when using the '-snes_type test' option to test my hand-coded >>>> Jacobian. >>>> >>> >>> You can also use -snes_compare_explicit which shows the matrix (only >>> linearized around your actual state) and actually solves the system. There >>> is also -snes_compare_explicit_draw. If you know that you preallocated >>> correctly, -snes_compare_coloring is better. Details here. >>> >>> >>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESComputeJacobian.html >>> >> >> This is something new to me. I'd take a look at the link. Thanks a lot, >> Jed. >> >> Best, >> >> Ling >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Wed Sep 19 15:34:16 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Wed, 19 Sep 2012 14:34:16 -0600 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: ================================================== Framework Information: SVN Revision: 13656 PETSc Version: 3.1.0 Current Time: Wed Sep 19 14:17:01 2012 Executable Timestamp: Wed Sep 19 14:14:46 2012 ================================================== I believe my version is 3.1.0. Ling On Wed, Sep 19, 2012 at 2:31 PM, Jed Brown wrote: > Are you using an older version of PETSc? > > > On Wed, Sep 19, 2012 at 2:56 PM, Zou (Non-US), Ling wrote: > >> No Jacobian output showed when I did this, >> >> ./my-project-opt -i input.i -snes_compare_explicit >> >> Any suggestion? 
>> >> Ling >> >> >> >> On Wed, Sep 19, 2012 at 1:12 PM, Zou (Non-US), Ling wrote: >> >>> >>> >>> On Wed, Sep 19, 2012 at 1:03 PM, Jed Brown wrote: >>> >>>> On Wed, Sep 19, 2012 at 1:54 PM, Zou (Non-US), Ling wrote: >>>> >>>>> Dear Matt, >>>>> >>>>> Thanks again for helping me on the '-snes_type test' issue. This is a >>>>> great tool which helped us make really good progress recently. >>>>> >>>>> I have couple of more questions about the '-snes_type test'. From my >>>>> understanding, this finite difference method is based on a tiny >>>>> perturbation from a base solution vector. This base solution vector, from >>>>> my observation, seems to be [1, 1, 1, ....]. However, this solution vector >>>>> sometimes is very far from real physics, which causes issue, for example a >>>>> dependent pressure variable get negative value. >>>>> >>>>> I wonder, >>>>> 1), if this [1, 1, 1, ...] base solution vector is always used during >>>>> the real simulation, for example when using the finite difference >>>>> preconditioner. Or this base solution vector will change as solution gets >>>>> updated? >>>>> >>>> >>>> -snes_type test tries three different states: your user-defined state, >>>> then constant -1.0, then constant +1.0. It does not solve the system so you >>>> can't continue stepping, but you can just look at the result from the first >>>> test. >>>> >>> >>> Ahhh....... I see, that's why I always see three comparisons there. The >>> first one should help me better when dealing with real simulation case. >>> >>>> >>>> >>>>> 2), is it possible to choose a different base solution, for example, >>>>> [1000, 1000, 1000, 10, 10, 10, 1.e9, 1.e9, 1.e9] as the base solution >>>>> vector when using the '-snes_type test' option to test my hand-coded >>>>> Jacobian. >>>>> >>>> >>>> You can also use -snes_compare_explicit which shows the matrix (only >>>> linearized around your actual state) and actually solves the system. There >>>> is also -snes_compare_explicit_draw. If you know that you preallocated >>>> correctly, -snes_compare_coloring is better. Details here. >>>> >>>> >>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESComputeJacobian.html >>>> >>> >>> This is something new to me. I'd take a look at the link. Thanks a lot, >>> Jed. >>> >>> Best, >>> >>> Ling >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Sep 19 15:35:06 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 19 Sep 2012 15:35:06 -0500 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: On Wed, Sep 19, 2012 at 3:34 PM, Zou (Non-US), Ling wrote: > ================================================== > Framework Information: > SVN Revision: 13656 > PETSc Version: 3.1.0 > Current Time: Wed Sep 19 14:17:01 2012 > Executable Timestamp: Wed Sep 19 14:14:46 2012 > ================================================== > > I believe my version is 3.1.0. > If you upgrade to the latest release, it will work. Matt > Ling > > > > On Wed, Sep 19, 2012 at 2:31 PM, Jed Brown wrote: > >> Are you using an older version of PETSc? >> >> >> On Wed, Sep 19, 2012 at 2:56 PM, Zou (Non-US), Ling wrote: >> >>> No Jacobian output showed when I did this, >>> >>> ./my-project-opt -i input.i -snes_compare_explicit >>> >>> Any suggestion? 
>>> >>> Ling >>> >>> >>> >>> On Wed, Sep 19, 2012 at 1:12 PM, Zou (Non-US), Ling wrote: >>> >>>> >>>> >>>> On Wed, Sep 19, 2012 at 1:03 PM, Jed Brown wrote: >>>> >>>>> On Wed, Sep 19, 2012 at 1:54 PM, Zou (Non-US), Ling wrote: >>>>> >>>>>> Dear Matt, >>>>>> >>>>>> Thanks again for helping me on the '-snes_type test' issue. This is a >>>>>> great tool which helped us make really good progress recently. >>>>>> >>>>>> I have couple of more questions about the '-snes_type test'. From my >>>>>> understanding, this finite difference method is based on a tiny >>>>>> perturbation from a base solution vector. This base solution vector, from >>>>>> my observation, seems to be [1, 1, 1, ....]. However, this solution vector >>>>>> sometimes is very far from real physics, which causes issue, for example a >>>>>> dependent pressure variable get negative value. >>>>>> >>>>>> I wonder, >>>>>> 1), if this [1, 1, 1, ...] base solution vector is always used during >>>>>> the real simulation, for example when using the finite difference >>>>>> preconditioner. Or this base solution vector will change as solution gets >>>>>> updated? >>>>>> >>>>> >>>>> -snes_type test tries three different states: your user-defined state, >>>>> then constant -1.0, then constant +1.0. It does not solve the system so you >>>>> can't continue stepping, but you can just look at the result from the first >>>>> test. >>>>> >>>> >>>> Ahhh....... I see, that's why I always see three comparisons there. The >>>> first one should help me better when dealing with real simulation case. >>>> >>>>> >>>>> >>>>>> 2), is it possible to choose a different base solution, for example, >>>>>> [1000, 1000, 1000, 10, 10, 10, 1.e9, 1.e9, 1.e9] as the base solution >>>>>> vector when using the '-snes_type test' option to test my hand-coded >>>>>> Jacobian. >>>>>> >>>>> >>>>> You can also use -snes_compare_explicit which shows the matrix (only >>>>> linearized around your actual state) and actually solves the system. There >>>>> is also -snes_compare_explicit_draw. If you know that you preallocated >>>>> correctly, -snes_compare_coloring is better. Details here. >>>>> >>>>> >>>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESComputeJacobian.html >>>>> >>>> >>>> This is something new to me. I'd take a look at the link. Thanks a lot, >>>> Jed. >>>> >>>> Best, >>>> >>>> Ling >>>> >>>> >>> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Sep 19 15:37:28 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 19 Sep 2012 15:37:28 -0500 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: Upgrade to petsc-3.3 On Wed, Sep 19, 2012 at 3:34 PM, Zou (Non-US), Ling wrote: > ================================================== > Framework Information: > SVN Revision: 13656 > PETSc Version: 3.1.0 > Current Time: Wed Sep 19 14:17:01 2012 > Executable Timestamp: Wed Sep 19 14:14:46 2012 > ================================================== > > I believe my version is 3.1.0. > > Ling > > > > On Wed, Sep 19, 2012 at 2:31 PM, Jed Brown wrote: > >> Are you using an older version of PETSc? 
>> >> >> On Wed, Sep 19, 2012 at 2:56 PM, Zou (Non-US), Ling wrote: >> >>> No Jacobian output showed when I did this, >>> >>> ./my-project-opt -i input.i -snes_compare_explicit >>> >>> Any suggestion? >>> >>> Ling >>> >>> >>> >>> On Wed, Sep 19, 2012 at 1:12 PM, Zou (Non-US), Ling wrote: >>> >>>> >>>> >>>> On Wed, Sep 19, 2012 at 1:03 PM, Jed Brown wrote: >>>> >>>>> On Wed, Sep 19, 2012 at 1:54 PM, Zou (Non-US), Ling wrote: >>>>> >>>>>> Dear Matt, >>>>>> >>>>>> Thanks again for helping me on the '-snes_type test' issue. This is a >>>>>> great tool which helped us make really good progress recently. >>>>>> >>>>>> I have couple of more questions about the '-snes_type test'. From my >>>>>> understanding, this finite difference method is based on a tiny >>>>>> perturbation from a base solution vector. This base solution vector, from >>>>>> my observation, seems to be [1, 1, 1, ....]. However, this solution vector >>>>>> sometimes is very far from real physics, which causes issue, for example a >>>>>> dependent pressure variable get negative value. >>>>>> >>>>>> I wonder, >>>>>> 1), if this [1, 1, 1, ...] base solution vector is always used during >>>>>> the real simulation, for example when using the finite difference >>>>>> preconditioner. Or this base solution vector will change as solution gets >>>>>> updated? >>>>>> >>>>> >>>>> -snes_type test tries three different states: your user-defined state, >>>>> then constant -1.0, then constant +1.0. It does not solve the system so you >>>>> can't continue stepping, but you can just look at the result from the first >>>>> test. >>>>> >>>> >>>> Ahhh....... I see, that's why I always see three comparisons there. The >>>> first one should help me better when dealing with real simulation case. >>>> >>>>> >>>>> >>>>>> 2), is it possible to choose a different base solution, for example, >>>>>> [1000, 1000, 1000, 10, 10, 10, 1.e9, 1.e9, 1.e9] as the base solution >>>>>> vector when using the '-snes_type test' option to test my hand-coded >>>>>> Jacobian. >>>>>> >>>>> >>>>> You can also use -snes_compare_explicit which shows the matrix (only >>>>> linearized around your actual state) and actually solves the system. There >>>>> is also -snes_compare_explicit_draw. If you know that you preallocated >>>>> correctly, -snes_compare_coloring is better. Details here. >>>>> >>>>> >>>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESComputeJacobian.html >>>>> >>>> >>>> This is something new to me. I'd take a look at the link. Thanks a lot, >>>> Jed. >>>> >>>> Best, >>>> >>>> Ling >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Wed Sep 19 15:40:31 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Wed, 19 Sep 2012 14:40:31 -0600 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: Thank you Matt. I am working under MOOSE framework, so I am afraid I am not able to do it alone. I will seek help from them. Thanks Ling On Wed, Sep 19, 2012 at 2:35 PM, Matthew Knepley wrote: > On Wed, Sep 19, 2012 at 3:34 PM, Zou (Non-US), Ling wrote: > >> ================================================== >> Framework Information: >> SVN Revision: 13656 >> PETSc Version: 3.1.0 >> Current Time: Wed Sep 19 14:17:01 2012 >> Executable Timestamp: Wed Sep 19 14:14:46 2012 >> ================================================== >> >> I believe my version is 3.1.0. >> > > If you upgrade to the latest release, it will work. 
> > Matt > > >> Ling >> >> >> >> On Wed, Sep 19, 2012 at 2:31 PM, Jed Brown wrote: >> >>> Are you using an older version of PETSc? >>> >>> >>> On Wed, Sep 19, 2012 at 2:56 PM, Zou (Non-US), Ling wrote: >>> >>>> No Jacobian output showed when I did this, >>>> >>>> ./my-project-opt -i input.i -snes_compare_explicit >>>> >>>> Any suggestion? >>>> >>>> Ling >>>> >>>> >>>> >>>> On Wed, Sep 19, 2012 at 1:12 PM, Zou (Non-US), Ling wrote: >>>> >>>>> >>>>> >>>>> On Wed, Sep 19, 2012 at 1:03 PM, Jed Brown wrote: >>>>> >>>>>> On Wed, Sep 19, 2012 at 1:54 PM, Zou (Non-US), Ling >>>>> > wrote: >>>>>> >>>>>>> Dear Matt, >>>>>>> >>>>>>> Thanks again for helping me on the '-snes_type test' issue. This is >>>>>>> a great tool which helped us make really good progress recently. >>>>>>> >>>>>>> I have couple of more questions about the '-snes_type test'. From my >>>>>>> understanding, this finite difference method is based on a tiny >>>>>>> perturbation from a base solution vector. This base solution vector, from >>>>>>> my observation, seems to be [1, 1, 1, ....]. However, this solution vector >>>>>>> sometimes is very far from real physics, which causes issue, for example a >>>>>>> dependent pressure variable get negative value. >>>>>>> >>>>>>> I wonder, >>>>>>> 1), if this [1, 1, 1, ...] base solution vector is always used >>>>>>> during the real simulation, for example when using the finite difference >>>>>>> preconditioner. Or this base solution vector will change as solution gets >>>>>>> updated? >>>>>>> >>>>>> >>>>>> -snes_type test tries three different states: your user-defined >>>>>> state, then constant -1.0, then constant +1.0. It does not solve the system >>>>>> so you can't continue stepping, but you can just look at the result from >>>>>> the first test. >>>>>> >>>>> >>>>> Ahhh....... I see, that's why I always see three comparisons there. >>>>> The first one should help me better when dealing with real simulation case. >>>>> >>>>>> >>>>>> >>>>>>> 2), is it possible to choose a different base solution, for example, >>>>>>> [1000, 1000, 1000, 10, 10, 10, 1.e9, 1.e9, 1.e9] as the base solution >>>>>>> vector when using the '-snes_type test' option to test my hand-coded >>>>>>> Jacobian. >>>>>>> >>>>>> >>>>>> You can also use -snes_compare_explicit which shows the matrix (only >>>>>> linearized around your actual state) and actually solves the system. There >>>>>> is also -snes_compare_explicit_draw. If you know that you preallocated >>>>>> correctly, -snes_compare_coloring is better. Details here. >>>>>> >>>>>> >>>>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESComputeJacobian.html >>>>>> >>>>> >>>>> This is something new to me. I'd take a look at the link. Thanks a >>>>> lot, Jed. >>>>> >>>>> Best, >>>>> >>>>> Ling >>>>> >>>>> >>>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Wed Sep 19 15:40:43 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Wed, 19 Sep 2012 14:40:43 -0600 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: Thanks Jed. 
Ling On Wed, Sep 19, 2012 at 2:37 PM, Jed Brown wrote: > Upgrade to petsc-3.3 > > > On Wed, Sep 19, 2012 at 3:34 PM, Zou (Non-US), Ling wrote: > >> ================================================== >> Framework Information: >> SVN Revision: 13656 >> PETSc Version: 3.1.0 >> Current Time: Wed Sep 19 14:17:01 2012 >> Executable Timestamp: Wed Sep 19 14:14:46 2012 >> ================================================== >> >> I believe my version is 3.1.0. >> >> Ling >> >> >> >> On Wed, Sep 19, 2012 at 2:31 PM, Jed Brown wrote: >> >>> Are you using an older version of PETSc? >>> >>> >>> On Wed, Sep 19, 2012 at 2:56 PM, Zou (Non-US), Ling wrote: >>> >>>> No Jacobian output showed when I did this, >>>> >>>> ./my-project-opt -i input.i -snes_compare_explicit >>>> >>>> Any suggestion? >>>> >>>> Ling >>>> >>>> >>>> >>>> On Wed, Sep 19, 2012 at 1:12 PM, Zou (Non-US), Ling wrote: >>>> >>>>> >>>>> >>>>> On Wed, Sep 19, 2012 at 1:03 PM, Jed Brown wrote: >>>>> >>>>>> On Wed, Sep 19, 2012 at 1:54 PM, Zou (Non-US), Ling >>>>> > wrote: >>>>>> >>>>>>> Dear Matt, >>>>>>> >>>>>>> Thanks again for helping me on the '-snes_type test' issue. This is >>>>>>> a great tool which helped us make really good progress recently. >>>>>>> >>>>>>> I have couple of more questions about the '-snes_type test'. From my >>>>>>> understanding, this finite difference method is based on a tiny >>>>>>> perturbation from a base solution vector. This base solution vector, from >>>>>>> my observation, seems to be [1, 1, 1, ....]. However, this solution vector >>>>>>> sometimes is very far from real physics, which causes issue, for example a >>>>>>> dependent pressure variable get negative value. >>>>>>> >>>>>>> I wonder, >>>>>>> 1), if this [1, 1, 1, ...] base solution vector is always used >>>>>>> during the real simulation, for example when using the finite difference >>>>>>> preconditioner. Or this base solution vector will change as solution gets >>>>>>> updated? >>>>>>> >>>>>> >>>>>> -snes_type test tries three different states: your user-defined >>>>>> state, then constant -1.0, then constant +1.0. It does not solve the system >>>>>> so you can't continue stepping, but you can just look at the result from >>>>>> the first test. >>>>>> >>>>> >>>>> Ahhh....... I see, that's why I always see three comparisons there. >>>>> The first one should help me better when dealing with real simulation case. >>>>> >>>>>> >>>>>> >>>>>>> 2), is it possible to choose a different base solution, for example, >>>>>>> [1000, 1000, 1000, 10, 10, 10, 1.e9, 1.e9, 1.e9] as the base solution >>>>>>> vector when using the '-snes_type test' option to test my hand-coded >>>>>>> Jacobian. >>>>>>> >>>>>> >>>>>> You can also use -snes_compare_explicit which shows the matrix (only >>>>>> linearized around your actual state) and actually solves the system. There >>>>>> is also -snes_compare_explicit_draw. If you know that you preallocated >>>>>> correctly, -snes_compare_coloring is better. Details here. >>>>>> >>>>>> >>>>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESComputeJacobian.html >>>>>> >>>>> >>>>> This is something new to me. I'd take a look at the link. Thanks a >>>>> lot, Jed. >>>>> >>>>> Best, >>>>> >>>>> Ling >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Wed Sep 19 15:42:16 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 19 Sep 2012 15:42:16 -0500 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: On Wed, Sep 19, 2012 at 3:40 PM, Zou (Non-US), Ling wrote: > Thank you Matt. I am working under MOOSE framework, so I am afraid I am > not able to do it alone. I will seek help from them. 3.3 should be fine, we used it when I visited last month. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Wed Sep 19 15:43:36 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Wed, 19 Sep 2012 14:43:36 -0600 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: good to know, I will try it. Ling On Wed, Sep 19, 2012 at 2:42 PM, Jed Brown wrote: > On Wed, Sep 19, 2012 at 3:40 PM, Zou (Non-US), Ling wrote: > >> Thank you Matt. I am working under MOOSE framework, so I am afraid I am >> not able to do it alone. I will seek help from them. > > > 3.3 should be fine, we used it when I visited last month. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karpeev at mcs.anl.gov Wed Sep 19 15:46:53 2012 From: karpeev at mcs.anl.gov (Dmitry Karpeev) Date: Wed, 19 Sep 2012 15:46:53 -0500 Subject: [petsc-users] using '-snes_type test' to test my coded Jacobian In-Reply-To: References: Message-ID: I use Moose with petsc-3.3 essentially every day, and it works for everything I do. There shouldn't be a problem in building the whole herd stack with petsc-3.3. Dmitry. On Wed, Sep 19, 2012 at 3:43 PM, Zou (Non-US), Ling wrote: > good to know, I will try it. > > Ling > > > On Wed, Sep 19, 2012 at 2:42 PM, Jed Brown wrote: > >> On Wed, Sep 19, 2012 at 3:40 PM, Zou (Non-US), Ling wrote: >> >>> Thank you Matt. I am working under MOOSE framework, so I am afraid I am >>> not able to do it alone. I will seek help from them. >> >> >> 3.3 should be fine, we used it when I visited last month. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zyzhang at nuaa.edu.cn Wed Sep 19 22:45:01 2012 From: zyzhang at nuaa.edu.cn (Zhang) Date: Thu, 20 Sep 2012 11:45:01 +0800 (CST) Subject: [petsc-users] undefined reference to SNESDMMeshComputeJacobian Message-ID: <1088432.9bc5.139e1c758f4.Coremail.zyzhang@nuaa.edu.cn> Hi, I tried to make snes/ex12.c in petsc-dev, but SNESDMMeshComputeJacobian can not be found. It is only declared in petsc-dev/include/petscsnes.h. By the way, the configure I used for this version is ./configure --with-shared-libraries=1 --with-dynamic-loading=1 --with-x=1 --with-blas-lapack-dir=/usr/lib/lapack --with-valgrind=1 --download-openmpi --with-cc=gcc --with-fc=gfortran --with-clanguage=C++ --with-c++-support=1 --with-sieve=1 --with-opt-sieve=1 --with-fiat=1 --download-scientificpython --download-fiat --download-generator --download-triangle --with-ctetgen --download-chaco --download-boost=1 --download-ctetgen Thank you first for any suggestion, Zhenyu -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Thu Sep 20 02:11:06 2012 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Thu, 20 Sep 2012 07:11:06 +0000 Subject: [petsc-users] difference between left and right pc Message-ID: > > I'm solving a system with GMRES using the same preconditioner either on > > the left or on the right. 
> > For left preconditioning I get two orders of reduction for the > > preconditioned residual in 20 its: > > > > Notice here that your "preconditioner" is far from one. It manages to blow > up the true residual by > 5 orders of magnitude, from which it never recovers. The right > preconditioning just avoids being > so screwed up. > > Matt Yes, I noticed that. It does recover 2 orders in 20 its, and it can recover 5 orders and beyond in a few hundred its. What I don't understand is how the same preconditioner applied to the right "just avoids being so screwed up". Chris dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From C.Klaij at marin.nl Thu Sep 20 02:29:03 2012 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Thu, 20 Sep 2012 07:29:03 +0000 Subject: [petsc-users] difference between left and right pc Message-ID: > Is the "preconditioner" singular? How about the matrix? Is > the "preconditioner" a linear operator? Did you try KSPFGMRES > on it? The matrix comes from incompressible Navier-Stokes and the preconditioner is of Schur complement type (SIMPLE). Both are shells. FGMRES (and GCR) give the same result as GMRES with right PC. > > > I'm solving a system with GMRES using the same preconditioner > > either on the left or on the right. For left preconditioning I > > get two orders of reduction for the preconditioned residual in > > 20 its: The solution is ok for left preconditioning but stuck > > to zero (the initial guess) for right preconditioning. > > Is it really ok? The true residual norm is much much worse in > the left case after 20 iterations than the right case. How > are you judging the solution quality? I'm judging quality by looking at the physics. With right preconditioner the flow is standing still. With left preconditioning, even though it is only the first non-linear iteration, the flow looks more or less ok (right direction, high pressure in stagnation point etc...) > > Barry dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From knepley at gmail.com Thu Sep 20 06:19:31 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 20 Sep 2012 06:19:31 -0500 Subject: [petsc-users] difference between left and right pc In-Reply-To: References: Message-ID: On Thu, Sep 20, 2012 at 2:11 AM, Klaij, Christiaan wrote: > > > I'm solving a system with GMRES using the same preconditioner either on > > > the left or on the right. > > > For left preconditioning I get two orders of reduction for the > > > preconditioned residual in 20 its: > > > > > > > Notice here that your "preconditioner" is far from one. It manages to > blow > > up the true residual by > > 5 orders of magnitude, from which it never recovers. The right > > preconditioning just avoids being > > so screwed up. > > > > Matt > > Yes, I noticed that. It does recover 2 orders in 20 its, and it > can recover 5 orders and beyond in a few hundred its. What I > don't understand is how the same preconditioner applied to the > right "just avoids being so screwed up". > Suppose that your preconditioner has a huge null space, and b fits into it. Then right preconditioning would do nothing at all. 
Some tiny bit would creep through since Ab is not entirely in it, but there would be a small preconditioned residual with large true residual. Matt > Chris > > > dr. ir. Christiaan Klaij > CFD Researcher > Research & Development > E mailto:C.Klaij at marin.nl > T +31 317 49 33 44 > > MARIN > 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands > T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Fri Sep 21 01:44:45 2012 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Fri, 21 Sep 2012 06:44:45 +0000 Subject: [petsc-users] difference between left and right pc Message-ID: > > > > I'm solving a system with GMRES using the same preconditioner either on > > > > the left or on the right. > > > > For left preconditioning I get two orders of reduction for the > > > > preconditioned residual in 20 its: > > > > > > > > > > Notice here that your "preconditioner" is far from one. It manages to > > blow > > > up the true residual by > > > 5 orders of magnitude, from which it never recovers. The right > > > preconditioning just avoids being > > > so screwed up. > > > > > > Matt > > > > Yes, I noticed that. It does recover 2 orders in 20 its, and it > > can recover 5 orders and beyond in a few hundred its. What I > > don't understand is how the same preconditioner applied to the > > right "just avoids being so screwed up". > > > > Suppose that your preconditioner has a huge null space, and b fits > into it. Then right preconditioning would do nothing at all. Some tiny > bit would creep through since Ab is not entirely in it, but there would > be a small preconditioned residual with large true residual. > > Matt Thanks Matt. But don't you mean *left* preconditioning would do nothing at all? If P^{-1} has a null space and b fits into it then P^{-1} A x = P^{-1} b => A x = 0 => x = 0. That's not what I'm seeing. Chris dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From C.Klaij at marin.nl Fri Sep 21 03:29:04 2012 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Fri, 21 Sep 2012 08:29:04 +0000 Subject: [petsc-users] difference between left and right pc Message-ID: When I use zero initial guess, GMRES with left PC gives a huge jump in true resisdual between iteration 0 and 1 and GMRES with right PC is stuck, the solution remains zero, as mentioned before. When I use the Knoll trick, both issues are gone (!) and I do get similar results for left and right preconditioning, both for the iteration count and for the physics of the solution. I didn't expect such a difference, did you? If so, why? Somehow it must be related to the rhs being quite small. 
GMRES, left PC, initial guess zero: 0 KSP preconditioned resid norm 2.980694554053e+01 true resid norm 7.058057578378e-05 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 1.121717063399e+01 true resid norm 2.239445995669e+00 ||r(i)||/||b|| 3.172892783603e+04 2 KSP preconditioned resid norm 8.419094257245e+00 true resid norm 4.482707776056e+00 ||r(i)||/||b|| 6.351191848857e+04 3 KSP preconditioned resid norm 6.113655853636e+00 true resid norm 1.759217056899e+00 ||r(i)||/||b|| 2.492494623858e+04 4 KSP preconditioned resid norm 4.949889403847e+00 true resid norm 3.572848898052e-01 ||r(i)||/||b|| 5.062085224406e+03 5 KSP preconditioned resid norm 4.187220822242e+00 true resid norm 7.071876172117e-01 ||r(i)||/||b|| 1.001957846558e+04 6 KSP preconditioned resid norm 3.598699773848e+00 true resid norm 6.751395101318e-01 ||r(i)||/||b|| 9.565514344910e+03 7 KSP preconditioned resid norm 3.024026574700e+00 true resid norm 5.529170377011e-01 ||r(i)||/||b|| 7.833841415447e+03 8 KSP preconditioned resid norm 2.609636515722e+00 true resid norm 5.689992696649e-01 ||r(i)||/||b|| 8.061697759565e+03 9 KSP preconditioned resid norm 2.254221020819e+00 true resid norm 4.949259965429e-01 ||r(i)||/||b|| 7.012212510975e+03 10 KSP preconditioned resid norm 1.873529244708e+00 true resid norm 6.109183824231e-01 ||r(i)||/||b|| 8.655616302912e+03 11 KSP preconditioned resid norm 1.505474576580e+00 true resid norm 4.363808762555e-01 ||r(i)||/||b|| 6.182733300340e+03 12 KSP preconditioned resid norm 1.273391808351e+00 true resid norm 5.799473619663e-01 ||r(i)||/||b|| 8.216812565299e+03 13 KSP preconditioned resid norm 1.092596045026e+00 true resid norm 5.341297417537e-01 ||r(i)||/||b|| 7.567659172829e+03 14 KSP preconditioned resid norm 9.145639963916e-01 true resid norm 4.424631524670e-01 ||r(i)||/||b|| 6.268908230820e+03 15 KSP preconditioned resid norm 7.619506249149e-01 true resid norm 4.154893466277e-01 ||r(i)||/||b|| 5.886737845558e+03 16 KSP preconditioned resid norm 6.305034569873e-01 true resid norm 4.166530590059e-01 ||r(i)||/||b|| 5.903225559994e+03 17 KSP preconditioned resid norm 5.020718919136e-01 true resid norm 3.542538361268e-01 ||r(i)||/||b|| 5.019140637390e+03 18 KSP preconditioned resid norm 4.099172843566e-01 true resid norm 2.942812083953e-01 ||r(i)||/||b|| 4.169436209997e+03 19 KSP preconditioned resid norm 3.456791256934e-01 true resid norm 2.474858759247e-01 ||r(i)||/||b|| 3.506430390747e+03 20 KSP preconditioned resid norm 2.730195605094e-01 true resid norm 2.641558094323e-01 ||r(i)||/||b|| 3.742613410260e+03 GMRES, right PC, initial guess zero: 0 KSP unpreconditioned resid norm 7.058057578378e-05 true resid norm 7.058057578378e-05 ||r(i)||/||b|| 1.000000000000e+00 1 KSP unpreconditioned resid norm 7.054747142321e-05 true resid norm 7.054747142321e-05 ||r(i)||/||b|| 9.995309706643e-01 2 KSP unpreconditioned resid norm 7.020651831374e-05 true resid norm 7.020651831373e-05 ||r(i)||/||b|| 9.947002774360e-01 3 KSP unpreconditioned resid norm 7.006225380599e-05 true resid norm 7.006225374905e-05 ||r(i)||/||b|| 9.926563076458e-01 4 KSP unpreconditioned resid norm 7.004188290578e-05 true resid norm 7.004188287852e-05 ||r(i)||/||b|| 9.923676889953e-01 5 KSP unpreconditioned resid norm 7.004130975499e-05 true resid norm 7.004130973557e-05 ||r(i)||/||b|| 9.923595685891e-01 6 KSP unpreconditioned resid norm 7.002915081650e-05 true resid norm 7.002915072237e-05 ||r(i)||/||b|| 9.921872972090e-01 7 KSP unpreconditioned resid norm 6.992906439247e-05 true resid norm 6.992906454226e-05 
||r(i)||/||b|| 9.907692557861e-01 8 KSP unpreconditioned resid norm 6.992498998319e-05 true resid norm 6.992499016218e-05 ||r(i)||/||b|| 9.907115291379e-01 9 KSP unpreconditioned resid norm 6.992334551935e-05 true resid norm 6.992334572069e-05 ||r(i)||/||b|| 9.906882303552e-01 10 KSP unpreconditioned resid norm 6.992269976389e-05 true resid norm 6.992269995062e-05 ||r(i)||/||b|| 9.906790809531e-01 11 KSP unpreconditioned resid norm 6.992074987133e-05 true resid norm 6.992075003662e-05 ||r(i)||/||b|| 9.906514541736e-01 12 KSP unpreconditioned resid norm 6.991044260131e-05 true resid norm 6.991044279866e-05 ||r(i)||/||b|| 9.905054191230e-01 13 KSP unpreconditioned resid norm 6.990672948921e-05 true resid norm 6.990672970817e-05 ||r(i)||/||b|| 9.904528112992e-01 14 KSP unpreconditioned resid norm 6.990672944080e-05 true resid norm 6.990672965979e-05 ||r(i)||/||b|| 9.904528106138e-01 15 KSP unpreconditioned resid norm 6.990484339200e-05 true resid norm 6.990484361213e-05 ||r(i)||/||b|| 9.904260887057e-01 16 KSP unpreconditioned resid norm 6.990392558763e-05 true resid norm 6.990392579857e-05 ||r(i)||/||b|| 9.904130849359e-01 17 KSP unpreconditioned resid norm 6.990024258014e-05 true resid norm 6.990024281929e-05 ||r(i)||/||b|| 9.903609037340e-01 18 KSP unpreconditioned resid norm 6.989684197988e-05 true resid norm 6.989684218329e-05 ||r(i)||/||b|| 9.903127228292e-01 19 KSP unpreconditioned resid norm 6.985738628710e-05 true resid norm 6.985738637250e-05 ||r(i)||/||b|| 9.897537048507e-01 20 KSP unpreconditioned resid norm 6.984955654109e-05 true resid norm 6.984955670609e-05 ||r(i)||/||b|| 9.896427725393e-01 GMRES, left PC, initial guess Knoll: 0 KSP preconditioned resid norm 2.536595064974e+01 true resid norm 3.944974940985e-01 ||r(i)||/||b|| 5.589320995439e+03 1 KSP preconditioned resid norm 9.971908215661e+00 true resid norm 4.077906575518e-01 ||r(i)||/||b|| 5.777661247778e+03 2 KSP preconditioned resid norm 6.198762212035e+00 true resid norm 3.853236164713e-01 ||r(i)||/||b|| 5.459343625245e+03 3 KSP preconditioned resid norm 4.951817153938e+00 true resid norm 4.674909493321e-01 ||r(i)||/||b|| 6.623507163843e+03 4 KSP preconditioned resid norm 4.188493255513e+00 true resid norm 6.015128222440e-01 ||r(i)||/||b|| 8.522356407046e+03 5 KSP preconditioned resid norm 3.598679986172e+00 true resid norm 6.793369544706e-01 ||r(i)||/||b|| 9.624984592810e+03 6 KSP preconditioned resid norm 3.026306816528e+00 true resid norm 6.207257182597e-01 ||r(i)||/||b|| 8.794568638278e+03 7 KSP preconditioned resid norm 2.611366914822e+00 true resid norm 4.740155530307e-01 ||r(i)||/||b|| 6.715949080421e+03 8 KSP preconditioned resid norm 2.255031232002e+00 true resid norm 5.349008778459e-01 ||r(i)||/||b|| 7.578584786338e+03 9 KSP preconditioned resid norm 1.876627495146e+00 true resid norm 5.138940584869e-01 ||r(i)||/||b|| 7.280955883120e+03 10 KSP preconditioned resid norm 1.506188962283e+00 true resid norm 4.643670549998e-01 ||r(i)||/||b|| 6.579247191499e+03 11 KSP preconditioned resid norm 1.273921387013e+00 true resid norm 5.547288999142e-01 ||r(i)||/||b|| 7.859512249002e+03 12 KSP preconditioned resid norm 1.093007569456e+00 true resid norm 5.471894647153e-01 ||r(i)||/||b|| 7.752691992647e+03 13 KSP preconditioned resid norm 9.148792210071e-01 true resid norm 4.290679437670e-01 ||r(i)||/||b|| 6.079122180604e+03 14 KSP preconditioned resid norm 7.621434670141e-01 true resid norm 4.204495201638e-01 ||r(i)||/||b|| 5.957014596364e+03 15 KSP preconditioned resid norm 6.309047197329e-01 true resid norm 
4.055181123700e-01 ||r(i)||/||b|| 5.745463363919e+03 16 KSP preconditioned resid norm 5.022199545208e-01 true resid norm 3.536062790063e-01 ||r(i)||/||b|| 5.009965915970e+03 17 KSP preconditioned resid norm 4.100790738955e-01 true resid norm 2.905349530225e-01 ||r(i)||/||b|| 4.116358499434e+03 18 KSP preconditioned resid norm 3.458485004841e-01 true resid norm 2.487595183782e-01 ||r(i)||/||b|| 3.524475616921e+03 19 KSP preconditioned resid norm 2.732837304652e-01 true resid norm 2.634708100274e-01 ||r(i)||/||b|| 3.732908198915e+03 20 KSP preconditioned resid norm 2.148737365384e-01 true resid norm 2.428458971510e-01 ||r(i)||/||b|| 3.440690224672e+03 GMRES, right PC, initial guess Knoll: 0 KSP unpreconditioned resid norm 3.944974940985e-01 true resid norm 3.944974940985e-01 ||r(i)||/||b|| 5.589320995439e+03 1 KSP unpreconditioned resid norm 3.940913664585e-01 true resid norm 3.940913578719e-01 ||r(i)||/||b|| 5.583566774507e+03 2 KSP unpreconditioned resid norm 3.605826275419e-01 true resid norm 3.605793788632e-01 ||r(i)||/||b|| 5.108762217636e+03 3 KSP unpreconditioned resid norm 3.525824891368e-01 true resid norm 3.525778821245e-01 ||r(i)||/||b|| 4.995395379099e+03 4 KSP unpreconditioned resid norm 3.517641531250e-01 true resid norm 3.517589349038e-01 ||r(i)||/||b|| 4.983792367766e+03 5 KSP unpreconditioned resid norm 3.325758429609e-01 true resid norm 3.325658592167e-01 ||r(i)||/||b|| 4.711860954995e+03 6 KSP unpreconditioned resid norm 3.247494282670e-01 true resid norm 3.247370694407e-01 ||r(i)||/||b|| 4.600941063948e+03 7 KSP unpreconditioned resid norm 3.189071703462e-01 true resid norm 3.188945126635e-01 ||r(i)||/||b|| 4.518162527328e+03 8 KSP unpreconditioned resid norm 3.151473946748e-01 true resid norm 3.151329240142e-01 ||r(i)||/||b|| 4.464867571775e+03 9 KSP unpreconditioned resid norm 3.051927212838e-01 true resid norm 3.051755614688e-01 ||r(i)||/||b|| 4.323789627385e+03 10 KSP unpreconditioned resid norm 3.002500146185e-01 true resid norm 3.002270739730e-01 ||r(i)||/||b|| 4.253678446783e+03 11 KSP unpreconditioned resid norm 2.901039932221e-01 true resid norm 2.900782484433e-01 ||r(i)||/||b|| 4.109887815762e+03 12 KSP unpreconditioned resid norm 2.841009118718e-01 true resid norm 2.840678386064e-01 ||r(i)||/||b|| 4.024731102742e+03 13 KSP unpreconditioned resid norm 2.688743733473e-01 true resid norm 2.688391744653e-01 ||r(i)||/||b|| 3.808968281711e+03 14 KSP unpreconditioned resid norm 2.610702842020e-01 true resid norm 2.610215351071e-01 ||r(i)||/||b|| 3.698206372058e+03 15 KSP unpreconditioned resid norm 2.446243742637e-01 true resid norm 2.445600184232e-01 ||r(i)||/||b|| 3.464976244631e+03 16 KSP unpreconditioned resid norm 2.349762947397e-01 true resid norm 2.348876685032e-01 ||r(i)||/||b|| 3.327936417276e+03 17 KSP unpreconditioned resid norm 2.114220662561e-01 true resid norm 2.113137631497e-01 ||r(i)||/||b|| 2.993936515863e+03 18 KSP unpreconditioned resid norm 1.946947209126e-01 true resid norm 1.945481276255e-01 ||r(i)||/||b|| 2.756397570650e+03 19 KSP unpreconditioned resid norm 1.769817110500e-01 true resid norm 1.767956706672e-01 ||r(i)||/||b|| 2.504877137994e+03 20 KSP unpreconditioned resid norm 1.591399642978e-01 true resid norm 1.589097747697e-01 ||r(i)||/||b|| 2.251466115217e+03 dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. 
Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From xiaohu.guo at stfc.ac.uk Fri Sep 21 06:26:50 2012 From: xiaohu.guo at stfc.ac.uk (Xiaohu Guo) Date: Fri, 21 Sep 2012 12:26:50 +0100 Subject: [petsc-users] compilation problem, zoltan & parmetis In-Reply-To: References: Message-ID: <505C4EFA.3070203@stfc.ac.uk> Dear Matt, Just curious, why PETSc drop zoltan ? Thanks ! Best Regards Xiaohu On 18/09/2012 23:22, petsc-users-request at mcs.anl.gov wrote: > Send petsc-users mailing list submissions to > petsc-users at mcs.anl.gov > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > or, via email, send a message with subject or body 'help' to > petsc-users-request at mcs.anl.gov > > You can reach the person managing the list at > petsc-users-owner at mcs.anl.gov > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of petsc-users digest..." > > > Today's Topics: > > 1. Re: compilation problem, zoltan & parmetis (Matthew Knepley) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 18 Sep 2012 17:22:12 -0500 > From: Matthew Knepley > To: PETSc users list > Subject: Re: [petsc-users] compilation problem, zoltan & parmetis > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > On Tue, Sep 18, 2012 at 5:19 PM, Lukasz Kaczmarczyk < > Lukasz.Kaczmarczyk at glasgow.ac.uk> wrote: > >> Hallo, >> >> I have following proble with compilation (is the same error on MacOS and >> Ubuntu). >> > This is my fault. Zoltan is no longer used. I should have removed it before > the release. > > Matt > > >> Compilers: >> gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00) >> gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) >> >> ./configure --with-fortran=0 >> --with-cc=/opt/build_for_gcc-mp-4.4/local/bin/mpicc >> --with-cxx=/opt/build_for_gcc-mp-4.4/local/bin/mpicxx >> --download-superlu_dist=1 --download-parmetis=1 -download-umfpack=1 >> -download-zoltan=1 --with-shared-libraries=0 >> >> =============================================================================== >> Configuring PETSc to compile on your system >> >> =============================================================================== >> =============================================================================== >> >> Compiling UMFPACK; this >> may take several minutes >> >> >> =============================================================================== >> >> TESTING: configureLibrary from >> PETSc.packages.parmetis(config/BuildSystem/config/package.py:433) >> >> >> ******************************************************************************* >> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for >> details): >> >> ------------------------------------------------------------------------------- >> Did not find package METIS needed by parmetis. 
>> Enable the package using --with-metis or --download-metis >> >> ******************************************************************************* >> >> unimackiss:petsc-3.3-p3 likask$ ./configure --with-fortran=0 >> --with-cc=/opt/build_for_gcc-mp-4.4/local/bin/mpicc >> --with-cxx=/opt/build_for_gcc-mp-4.4/local/bin/mpicxx >> --download-superlu_dist=1 --download-metis=1 --download-parmetis=1 >> -download-umfpack=1 -download-zoltan=1 --with-shared-libraries=0 >> >> =============================================================================== >> Configuring PETSc to compile on your system >> >> =============================================================================== >> =============================================================================== >> >> Configuring METIS; this >> may take several minutes >> >> >> =============================================================================== >> >> >> =============================================================================== >> >> Compiling METIS; this >> may take several minutes >> >> >> =============================================================================== >> >> >> =============================================================================== >> >> Configuring ParMETIS; >> this may take several minutes >> >> >> =============================================================================== >> >> >> =============================================================================== >> >> Compiling ParMETIS; this >> may take several minutes >> >> >> =============================================================================== >> >> >> =============================================================================== >> >> Compiling superlu_dist; >> this may take several minutes >> >> >> =============================================================================== >> >> >> ************************************************************************************************** >> >> Please register to use Zoltan at >> http://www.cs.sandia.gov/Zoltan/Zoltan.html >> >> >> ************************************************************************************************** >> >> >> =============================================================================== >> >> Compiling zoltan; this >> may take several minutes >> >> >> =============================================================================== >> >> >> >> >> >> ******************************************************************************* >> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for >> details): >> >> ------------------------------------------------------------------------------- >> Error running make on ZOLTAN: Could not execute "cd >> /opt/build_for_gcc-mp-4.4/petsc-3.3-p3/externalpackages/Zoltan && make >> clean && make ZOLTAN_ARCH="darwin10.2.0-c-debug" >> CC="/opt/build_for_gcc-mp-4.4/local/bin/mpicc" CFLAGS=" -Wall >> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline >> -O0 " AR="/usr/bin/ar cr" RANLIB="/usr/bin/ranlib -c" >> X_LIBS="['-L/opt/local/lib', '-lX11']" >> MPI_INCPATH="-I/opt/build_for_gcc-mp-4.4/local/include" >> PARMETIS_INCPATH="-I/opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/include" >> PARMETIS_LIBPATH="-L/opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/lib >> -L/opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/lib >> -lparmetis" zoltan": >> driver >> ch >> zz >> all >> lb >> order >> par >> rcb >> coloring >> oct >> phg >> util >> hsfc >> parmetis >> params >> timer >> ha 
>> reftree >> include >> Memory >> Communication >> DDirectory >> Timer >> shared >> Obj_generic >> exit 0 >> rm -f *.o libexzoltan.a >> rm -f *.o zoltanSimple zoltanExample1 subDirs phgExample >> rm -f *.o ex1 >> rm -f *.o zCPPExample1 zCPPExample2 >> Obj_generic >> fort >> fdriver >> fdriver_old >> exit 0 >> make libzoltan_mem.a >> (Re)Building dependency for ../Memory/mem.c... >> Compiling ../Memory/mem.c... >> creating library libzoltan_mem.a >> make libzoltan_comm.a >> (Re)Building dependency for ../Communication/comm_sort_ints.c... >> (Re)Building dependency for ../Communication/comm_resize.c... >> (Re)Building dependency for ../Communication/comm_exchange_sizes.c... >> (Re)Building dependency for ../Communication/comm_invert_map.c... >> (Re)Building dependency for ../Communication/comm_invert_plan.c... >> (Re)Building dependency for ../Communication/comm_info.c... >> (Re)Building dependency for ../Communication/comm_destroy.c... >> (Re)Building dependency for ../Communication/comm_do_reverse.c... >> (Re)Building dependency for ../Communication/comm_do.c... >> (Re)Building dependency for ../Communication/comm_create.c... >> Compiling ../Communication/comm_create.c... >> Compiling ../Communication/comm_do.c... >> Compiling ../Communication/comm_do_reverse.c... >> Compiling ../Communication/comm_destroy.c... >> Compiling ../Communication/comm_info.c... >> Compiling ../Communication/comm_invert_plan.c... >> Compiling ../Communication/comm_invert_map.c... >> Compiling ../Communication/comm_exchange_sizes.c... >> Compiling ../Communication/comm_resize.c... >> Compiling ../Communication/comm_sort_ints.c... >> creating library libzoltan_comm.a >> make libzoltan_dd.a >> (Re)Building dependency for ../shared/zoltan_align.c... >> (Re)Building dependency for ../shared/zoltan_id.c... >> (Re)Building dependency for ../DDirectory/DD_Set_Neighbor_Hash_Fn3.c... >> (Re)Building dependency for ../DDirectory/DD_Set_Neighbor_Hash_Fn2.c... >> (Re)Building dependency for ../DDirectory/DD_Set_Neighbor_Hash_Fn1.c... >> (Re)Building dependency for ../DDirectory/DD_Print.c... >> (Re)Building dependency for ../DDirectory/DD_Stats.c... >> (Re)Building dependency for ../DDirectory/DD_Hash2.c... >> (Re)Building dependency for ../DDirectory/DD_Set_Hash_Fn.c... >> (Re)Building dependency for ../DDirectory/DD_Update.c... >> (Re)Building dependency for ../DDirectory/DD_Remove.c... >> (Re)Building dependency for ../DDirectory/DD_Find.c... >> (Re)Building dependency for ../DDirectory/DD_Destroy.c... >> (Re)Building dependency for ../DDirectory/DD_Create.c... >> Compiling ../DDirectory/DD_Create.c... >> Compiling ../DDirectory/DD_Destroy.c... >> Compiling ../DDirectory/DD_Find.c... >> Compiling ../DDirectory/DD_Remove.c... >> Compiling ../DDirectory/DD_Update.c... >> Compiling ../DDirectory/DD_Set_Hash_Fn.c... >> Compiling ../DDirectory/DD_Hash2.c... >> Compiling ../DDirectory/DD_Stats.c... >> Compiling ../DDirectory/DD_Print.c... >> Compiling ../DDirectory/DD_Set_Neighbor_Hash_Fn1.c... >> Compiling ../DDirectory/DD_Set_Neighbor_Hash_Fn2.c... >> Compiling ../DDirectory/DD_Set_Neighbor_Hash_Fn3.c... >> Compiling ../shared/zoltan_id.c... >> Compiling ../shared/zoltan_align.c... >> creating library libzoltan_dd.a >> make libzoltan_timer.a >> (Re)Building dependency for ../Timer/timer.c... >> (Re)Building dependency for ../Timer/zoltan_timer.c... >> Compiling ../Timer/zoltan_timer.c... >> Compiling ../Timer/timer.c... 
>> creating library libzoltan_timer.a >> creating library libzoltan.a >> (Re)Building dependency for ../reftree/reftree_coarse_path.c... >> (Re)Building dependency for ../reftree/reftree_hash.c... >> (Re)Building dependency for ../reftree/reftree_part.c... >> (Re)Building dependency for ../reftree/reftree_build.c... >> (Re)Building dependency for ../ha/build_machine_desc.c... >> (Re)Building dependency for ../ha/get_processor_name.c... >> (Re)Building dependency for ../ha/divide_machine.c... >> (Re)Building dependency for ../timer/timer_params.c... >> (Re)Building dependency for ../phg/phg_patoh.c... >> (Re)Building dependency for ../phg/phg_parkway.c... >> (Re)Building dependency for ../phg/phg_order.c... >> (Re)Building dependency for ../phg/phg_comm.c... >> (Re)Building dependency for ../phg/phg_scale.c... >> (Re)Building dependency for ../phg/phg_util.c... >> (Re)Building dependency for ../phg/phg_rdivide.c... >> (Re)Building dependency for ../phg/phg_Vcycle.c... >> (Re)Building dependency for ../phg/phg_serialpartition.c... >> (Re)Building dependency for ../phg/phg_refinement.c... >> (Re)Building dependency for ../phg/phg_plot.c... >> (Re)Building dependency for ../phg/phg_match.c... >> (Re)Building dependency for ../phg/phg_gather.c... >> (Re)Building dependency for ../phg/phg_distrib.c... >> (Re)Building dependency for ../phg/phg_coarse.c... >> (Re)Building dependency for ../phg/phg_build_calls.c... >> (Re)Building dependency for ../phg/phg_build.c... >> (Re)Building dependency for ../phg/phg_hypergraph.c... >> (Re)Building dependency for ../phg/phg.c... >> (Re)Building dependency for ../parmetis/scatter_graph.c... >> (Re)Building dependency for ../parmetis/verify_graph.c... >> (Re)Building dependency for ../parmetis/build_graph.c... >> (Re)Building dependency for ../parmetis/parmetis_jostle.c... >> (Re)Building dependency for ../params/bind_param.c... >> (Re)Building dependency for ../params/free_params.c... >> (Re)Building dependency for ../params/key_params.c... >> (Re)Building dependency for ../params/print_params.c... >> (Re)Building dependency for ../params/check_param.c... >> (Re)Building dependency for ../params/assign_param_vals.c... >> (Re)Building dependency for ../params/set_param.c... >> (Re)Building dependency for ../hsfc/hsfc_point_assign.c... >> (Re)Building dependency for ../hsfc/hsfc_box_assign.c... >> (Re)Building dependency for ../hsfc/hsfc.c... >> (Re)Building dependency for ../hsfc/hsfc_hilbert.c... >> (Re)Building dependency for ../oct/oct_plot.c... >> (Re)Building dependency for ../oct/rootlist.c... >> (Re)Building dependency for ../oct/octree.c... >> (Re)Building dependency for ../oct/migtags.c... >> (Re)Building dependency for ../oct/migreg.c... >> (Re)Building dependency for ../oct/output.c... >> (Re)Building dependency for ../oct/migoct.c... >> (Re)Building dependency for ../oct/costs.c... >> (Re)Building dependency for ../oct/dfs.c... >> (Re)Building dependency for ../oct/octupdate.c... >> (Re)Building dependency for ../oct/oct_util.c... >> (Re)Building dependency for ../oct/octant.c... >> (Re)Building dependency for ../oct/msg.c... >> (Re)Building dependency for ../order/perm.c... >> (Re)Building dependency for ../order/order_struct.c... >> (Re)Building dependency for ../order/order.c... >> (Re)Building dependency for ../par/par_tflops_special.c... >> (Re)Building dependency for ../par/par_stats.c... >> (Re)Building dependency for ../par/par_sync.c... >> (Re)Building dependency for ../par/par_median.c... 
>> (Re)Building dependency for ../par/par_bisect.c... >> (Re)Building dependency for ../par/par_average.c... >> (Re)Building dependency for ../coloring/color_test.c... >> (Re)Building dependency for ../coloring/g2l_hash.c... >> (Re)Building dependency for ../coloring/coloring.c... >> (Re)Building dependency for ../rcb/shared.c... >> (Re)Building dependency for ../rcb/inertial3d.c... >> (Re)Building dependency for ../rcb/inertial2d.c... >> (Re)Building dependency for ../rcb/inertial1d.c... >> (Re)Building dependency for ../rcb/rib_util.c... >> (Re)Building dependency for ../rcb/rib.c... >> (Re)Building dependency for ../rcb/create_proc_list.c... >> (Re)Building dependency for ../rcb/point_assign.c... >> (Re)Building dependency for ../rcb/box_assign.c... >> (Re)Building dependency for ../rcb/rcb_box.c... >> (Re)Building dependency for ../rcb/rcb_util.c... >> (Re)Building dependency for ../rcb/rcb.c... >> (Re)Building dependency for ../all/all_allo.c... >> (Re)Building dependency for ../lb/lb_remap.c... >> (Re)Building dependency for ../lb/lb_set_part_sizes.c... >> (Re)Building dependency for ../lb/lb_part2proc.c... >> (Re)Building dependency for ../lb/lb_box_assign.c... >> (Re)Building dependency for ../lb/lb_point_assign.c... >> (Re)Building dependency for ../lb/lb_set_method.c... >> (Re)Building dependency for ../lb/lb_set_fn.c... >> (Re)Building dependency for ../lb/lb_migrate.c... >> (Re)Building dependency for ../lb/lb_invert.c... >> (Re)Building dependency for ../lb/lb_init.c... >> (Re)Building dependency for ../lb/lb_copy.c... >> (Re)Building dependency for ../lb/lb_free.c... >> (Re)Building dependency for ../lb/lb_eval.c... >> (Re)Building dependency for ../lb/lb_balance.c... >> (Re)Building dependency for ../zz/zz_rand.c... >> (Re)Building dependency for ../zz/zz_sort.c... >> (Re)Building dependency for ../zz/zz_heap.c... >> (Re)Building dependency for ../zz/zz_hash.c... >> (Re)Building dependency for ../zz/zz_gen_files.c... >> (Re)Building dependency for ../zz/zz_util.c... >> (Re)Building dependency for ../zz/zz_set_fn.c... >> (Re)Building dependency for ../zz/zz_init.c... >> (Re)Building dependency for ../zz/zz_struct.c... >> (Re)Building dependency for ../zz/zz_obj_list.c... >> (Re)Building dependency for ../zz/zz_coord.c... >> Compiling ../zz/zz_coord.c... >> Compiling ../zz/zz_obj_list.c... >> Compiling ../zz/zz_struct.c... >> Compiling ../zz/zz_init.c... >> Compiling ../zz/zz_set_fn.c... >> Compiling ../zz/zz_util.c... >> Compiling ../zz/zz_gen_files.c... 
>> Makefile:28: mem.d: No such file or directory
>> ../Memory/mem.c: In function 'Zoltan_Array_Alloc':
>> ../Memory/mem.c:162: warning: initialization discards qualifiers from
>> pointer target type
>> [... the same warning in 'Zoltan_Malloc' (mem.c:280) and 'Zoltan_Realloc'
>> (mem.c:345) ...]
>> Makefile:28: comm_create.d: No such file or directory
>> [... the same "Makefile:28: <object>.d: No such file or directory" message
>> for the other Communication objects ...]
>> ../Communication/comm_create.c: In function 'Zoltan_Comm_Create':
>> ../Communication/comm_create.c:79: warning: passing argument 2 of
>> 'Zoltan_Malloc' discards qualifiers from pointer target type
>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of
>> type 'const char *'
>> [... the same "discards qualifiers from pointer target type" warning, with
>> the matching zoltan_mem.h note, repeated for essentially every call to
>> Zoltan_Malloc and Zoltan_Free in comm_create.c, comm_do.c,
>> comm_do_reverse.c, comm_destroy.c, comm_info.c, comm_invert_plan.c,
>> comm_invert_map.c and comm_resize.c ...]
>> ../Communication/comm_sort_ints.c: In function 'Zoltan_Comm_Sort_Ints':
>> ../Communication/comm_sort_ints.c:52: warning: passing argument 3 of
>> 'Zoltan_Calloc' discards qualifiers from pointer target type
>> ../../include/zoltan_mem.h:62: note: expected 'char *' but argument is of
>> type 'const char *'
>> [... the same warning for the Zoltan_Malloc calls at comm_sort_ints.c:53-54
>> and the Zoltan_Multifree call at comm_sort_ints.c:78 ...]
>> Makefile:28: DD_Create.d: No such file or directory
>> [... the same message for the remaining DDirectory, shared and Timer
>> objects ...]
>> ../DDirectory/DD_Create.c: In function 'Zoltan_DD_Create':
>> ../DDirectory/DD_Create.c:45: warning: initialization discards qualifiers
>> from pointer target type
>> [... the same "initialization/passing argument ... discards qualifiers from
>> pointer target type" warnings, each with the matching zoltan_mem.h note,
>> throughout DD_Create.c, DD_Destroy.c, DD_Find.c, DD_Remove.c, DD_Update.c,
>> DD_Set_Hash_Fn.c, DD_Stats.c, DD_Print.c, DD_Set_Neighbor_Hash_Fn1/2/3.c,
>> ../shared/zoltan_id.c and ../Timer/zoltan_timer.c ...]
>> Makefile:28: zz_coord.d: No such file or directory
>> [... the same "Makefile:28: <object>.d: No such file or directory" message
>> for every remaining Zoltan object (zz_*, lb_*, all_allo, rcb/rib, par_*,
>> coloring, order, octree, hsfc, params, parmetis_jostle, build_graph,
>> phg_*, reftree_*, ...) ...]
>> ../zz/zz_coord.c:44: warning: initialization discards qualifiers from
>> pointer target type
>> [... further "discards qualifiers" warnings for Zoltan_Malloc, Zoltan_Free,
>> Zoltan_Bind_Param and ZOLTAN_Malloc_ID calls in ../zz/zz_coord.c,
>> ../zz/zz_obj_list.c and ../zz/zz_struct.c ...]
>> [... more of the same "discards qualifiers" warnings in ../zz/zz_struct.c,
>> ../zz/zz_set_fn.c and ../zz/zz_util.c ...]
>> In file included from ../zz/zz_gen_files.c:21:
>> ../parmetis/parmetis_jostle.h:35: error: expected declaration specifiers
>> or '...' before 'idxtype'  [repeated four times]
>> ../parmetis/parmetis_jostle.h:35: error: conflicting types for
>> 'METIS_NodeND'
>> /opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/include/metis.h:224:
>> note: previous declaration of 'METIS_NodeND' was here
>> ../parmetis/parmetis_jostle.h:134: error: expected declaration specifiers
>> or '...' before 'idxtype'  [repeated twice]
>> ../parmetis/parmetis_jostle.h:135: error: expected declaration specifiers
>> or '...' before 'idxtype'  [repeated three times]
>> ../parmetis/parmetis_jostle.h:138: error: expected ')' before '*' token
>> ../parmetis/parmetis_jostle.h:144: error: expected declaration specifiers
>> or '...' before 'idxtype'  [repeated three times]
>> ../zz/zz_gen_files.c: In function 'Zoltan_Generate_Files':
>> ../zz/zz_gen_files.c:93: warning: initialization discards qualifiers from
>> pointer target type
>> ../zz/zz_gen_files.c:107: warning: assignment discards qualifiers from
>> pointer target type
>> ../zz/zz_gen_files.c:123: warning: passing argument 9 of
>> 'Zoltan_Build_Graph' from incompatible pointer type
>> ../parmetis/parmetis_jostle.h:141: note: expected 'float **' but argument
>> is of type 'int **'
>> ../zz/zz_gen_files.c:123: error: too many arguments to function
>> 'Zoltan_Build_Graph'
>> ../zz/zz_gen_files.c:488: warning: passing argument 2 of 'Zoltan_Free'
>> discards qualifiers from pointer target type
>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of
>> type 'const char *'
>> [... the same Zoltan_Free warning, with the matching zoltan_mem.h:61 note,
>> for zz_gen_files.c lines 489-497 and 500-503 ...]
../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >> type 'const char *' >> ../zz/zz_gen_files.c:504: warning: passing argument 2 of 'Zoltan_Free' >> discards qualifiers from pointer target type >> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >> type 'const char *' >> ../zz/zz_gen_files.c: In function 'turn_off_reduce_dimensions': >> ../zz/zz_gen_files.c:515: warning: initialization discards qualifiers from >> pointer target type >> ../zz/zz_gen_files.c:515: warning: initialization discards qualifiers from >> pointer target type >> ../zz/zz_gen_files.c:518: warning: passing argument 2 of >> 'Zoltan_Bind_Param' discards qualifiers from pointer target type >> ../params/params_const.h:81: note: expected 'char *' but argument is of >> type 'const char *' >> ../zz/zz_gen_files.c: In function 'Zoltan_HG_Get_Pins': >> ../zz/zz_gen_files.c:533: warning: initialization discards qualifiers from >> pointer target type >> ../zz/zz_gen_files.c:553: warning: passing argument 2 of >> 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type >> ../Utilities/shared/zoltan_id.h:63: note: expected 'char *' but argument >> is of type 'const char *' >> ../zz/zz_gen_files.c:554: warning: passing argument 2 of >> 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type >> ../Utilities/shared/zoltan_id.h:63: note: expected 'char *' but argument >> is of type 'const char *' >> ../zz/zz_gen_files.c:556: warning: passing argument 2 of 'Zoltan_Malloc' >> discards qualifiers from pointer target type >> ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >> type 'const char *' >> ../zz/zz_gen_files.c:559: warning: passing argument 1 of >> 'Zoltan_Multifree' discards qualifiers from pointer target type >> ../include/zoltan_mem.h:69: note: expected 'char *' but argument is of >> type 'const char *' >> ../zz/zz_gen_files.c:568: warning: passing argument 2 of 'Zoltan_Free' >> discards qualifiers from pointer target type >> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >> type 'const char *' >> ../zz/zz_gen_files.c:583: warning: passing argument 1 of >> 'Zoltan_Multifree' discards qualifiers from pointer target type >> ../include/zoltan_mem.h:69: note: expected 'char *' but argument is of >> type 'const char *' >> ../zz/zz_gen_files.c:602: warning: passing argument 1 of >> 'Zoltan_Multifree' discards qualifiers from pointer target type >> ../include/zoltan_mem.h:69: note: expected 'char *' but argument is of >> type 'const char *' >> ../zz/zz_gen_files.c: In function 'fan_in_edge_global_ids': >> ../zz/zz_gen_files.c:661: warning: passing argument 3 of 'Zoltan_Calloc' >> discards qualifiers from pointer target type >> ../include/zoltan_mem.h:62: note: expected 'char *' but argument is of >> type 'const char *' >> ../zz/zz_gen_files.c:664: warning: passing argument 2 of >> 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type >> ../Utilities/shared/zoltan_id.h:63: note: expected 'char *' but argument >> is of type 'const char *' >> ../zz/zz_gen_files.c:700: warning: passing argument 3 of 'Zoltan_Realloc' >> discards qualifiers from pointer target type >> ../include/zoltan_mem.h:64: note: expected 'char *' but argument is of >> type 'const char *' >> ../zz/zz_gen_files.c:725: warning: passing argument 2 of 'Zoltan_Free' >> discards qualifiers from pointer target type >> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >> type 'const char *' >> ../zz/zz_gen_files.c:730: warning: passing 
argument 2 of 'Zoltan_Free'
>> discards qualifiers from pointer target type
>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of
>> type 'const char *'
>> ../zz/zz_gen_files.c:731: warning: passing argument 2 of 'Zoltan_Free'
>> discards qualifiers from pointer target type
>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of
>> type 'const char *'
>> ../zz/zz_gen_files.c:732: warning: passing argument 2 of 'Zoltan_Free'
>> discards qualifiers from pointer target type
>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of
>> type 'const char *'
>> ../zz/zz_gen_files.c: In function 'augment_search_structure':
>> ../zz/zz_gen_files.c:767: warning: passing argument 2 of 'Zoltan_Malloc'
>> discards qualifiers from pointer target type
>> ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of
>> type 'const char *'
>> ../zz/zz_gen_files.c: In function 'merge_gids':
>> ../zz/zz_gen_files.c:796: warning: passing argument 3 of 'Zoltan_Realloc'
>> discards qualifiers from pointer target type
>> ../include/zoltan_mem.h:64: note: expected 'char *' but argument is of
>> type 'const char *'
>> make[1]: *** [zz_gen_files.o] Error 1
>> make: *** [zoltan] Error 2
>>
>> *******************************************************************************
>

From knepley at gmail.com  Fri Sep 21 07:28:28 2012
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 21 Sep 2012 07:28:28 -0500
Subject: [petsc-users] compilation problem, zoltan & parmetis
In-Reply-To: <505C4EFA.3070203@stfc.ac.uk>
References: <505C4EFA.3070203@stfc.ac.uk>
Message-ID: 

On Fri, Sep 21, 2012 at 6:26 AM, Xiaohu Guo  wrote:

> Dear Matt,
>
> Just curious, why did PETSc drop Zoltan? Thanks!
>

It's work to maintain, and since the only graph partitioner we use is
ParMetis, we did not need it anymore.

    Matt

> Best Regards
> Xiaohu
>
> On 18/09/2012 23:22, petsc-users-request at mcs.anl.gov wrote:
>
>> Message: 1
>> Date: Tue, 18 Sep 2012 17:22:12 -0500
>> From: Matthew Knepley
>> To: PETSc users list
>> Subject: Re: [petsc-users] compilation problem, zoltan & parmetis
>>
>> On Tue, Sep 18, 2012 at 5:19 PM, Lukasz Kaczmarczyk <
>> Lukasz.Kaczmarczyk at glasgow.ac.uk> wrote:
>>
>>> Hallo,
>>>
>>> I have the following problem with compilation (it is the same error on
>>> MacOS and Ubuntu).
>>>
>> This is my fault. Zoltan is no longer used. I should have removed it
>> before the release.
>>
>>    Matt
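For anyone hitting the same failure: given Matt's remark that Zoltan is no
longer used in petsc-3.3, the simplest way out should be to rerun the
reporter's configure line below without the -download-zoltan=1 option,
keeping METIS/ParMETIS for partitioning. A sketch only, not a tested recipe;
the compiler paths are the reporter's own and will differ on other machines:

    ./configure --with-fortran=0 \
      --with-cc=/opt/build_for_gcc-mp-4.4/local/bin/mpicc \
      --with-cxx=/opt/build_for_gcc-mp-4.4/local/bin/mpicxx \
      --download-superlu_dist=1 --download-metis=1 --download-parmetis=1 \
      -download-umfpack=1 --with-shared-libraries=0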
>>> Compilers:
>>> gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)
>>> gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
>>>
>>> ./configure --with-fortran=0
>>> --with-cc=/opt/build_for_gcc-mp-4.4/local/bin/mpicc
>>> --with-cxx=/opt/build_for_gcc-mp-4.4/local/bin/mpicxx
>>> --download-superlu_dist=1 --download-parmetis=1 -download-umfpack=1
>>> -download-zoltan=1 --with-shared-libraries=0
>>>
>>> ===============================================================================
>>>              Configuring PETSc to compile on your system
>>> ===============================================================================
>>> ===============================================================================
>>>              Compiling UMFPACK; this may take several minutes
>>> ===============================================================================
>>>       TESTING: configureLibrary from
>>> PETSc.packages.parmetis(config/BuildSystem/config/package.py:433)
>>> *******************************************************************************
>>>          UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for details):
>>> -------------------------------------------------------------------------------
>>> Did not find package METIS needed by parmetis.
>>> Enable the package using --with-metis or --download-metis
>>> *******************************************************************************
>>>
>>> unimackiss:petsc-3.3-p3 likask$ ./configure --with-fortran=0
>>> --with-cc=/opt/build_for_gcc-mp-4.4/local/bin/mpicc
>>> --with-cxx=/opt/build_for_gcc-mp-4.4/local/bin/mpicxx
>>> --download-superlu_dist=1 --download-metis=1 --download-parmetis=1
>>> -download-umfpack=1 -download-zoltan=1 --with-shared-libraries=0
>>>
>>> ===============================================================================
>>>              Configuring PETSc to compile on your system
>>> ===============================================================================
>>> ===============================================================================
>>>              Configuring METIS; this may take several minutes
>>> ===============================================================================
>>> ===============================================================================
>>>              Compiling METIS; this may take several minutes
>>> ===============================================================================
>>> ===============================================================================
>>>              Configuring ParMETIS; this may take several minutes
>>> ===============================================================================
>>> ===============================================================================
>>>              Compiling ParMETIS; this may take several minutes
>>> ===============================================================================
>>> ===============================================================================
>>>              Compiling superlu_dist; this may take several minutes
>>> ===============================================================================
>>> ********************************************************************************************************
>>>              Please register to use Zoltan at
http://www.cs.sandia.gov/**Zoltan/Zoltan.html >>> >>> >>> **************************************************************** >>> **************************************** >>> >>> >>> ==============================**==============================** >>> =================== >>> >>> Compiling zoltan; >>> this >>> may take several minutes >>> >>> >>> ==============================**==============================** >>> =================== >>> >>> >>> >>> >>> >>> **************************************************************** >>> ******************* >>> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log >>> for >>> details): >>> >>> ------------------------------**------------------------------** >>> ------------------- >>> Error running make on ZOLTAN: Could not execute "cd >>> /opt/build_for_gcc-mp-4.4/**petsc-3.3-p3/externalpackages/**Zoltan && >>> make >>> clean && make ZOLTAN_ARCH="darwin10.2.0-c-**debug" >>> CC="/opt/build_for_gcc-mp-4.4/**local/bin/mpicc" CFLAGS=" -Wall >>> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline >>> -O0 " AR="/usr/bin/ar cr" RANLIB="/usr/bin/ranlib -c" >>> X_LIBS="['-L/opt/local/lib', '-lX11']" >>> MPI_INCPATH="-I/opt/build_for_**gcc-mp-4.4/local/include" >>> PARMETIS_INCPATH="-I/opt/**build_for_gcc-mp-4.4/petsc-3.** >>> 3-p3/darwin10.2.0-c-debug/**include" >>> PARMETIS_LIBPATH="-L/opt/**build_for_gcc-mp-4.4/petsc-3.** >>> 3-p3/darwin10.2.0-c-debug/lib >>> -L/opt/build_for_gcc-mp-4.4/**petsc-3.3-p3/darwin10.2.0-c-**debug/lib >>> -lparmetis" zoltan": >>> driver >>> ch >>> zz >>> all >>> lb >>> order >>> par >>> rcb >>> coloring >>> oct >>> phg >>> util >>> hsfc >>> parmetis >>> params >>> timer >>> ha >>> reftree >>> include >>> Memory >>> Communication >>> DDirectory >>> Timer >>> shared >>> Obj_generic >>> exit 0 >>> rm -f *.o libexzoltan.a >>> rm -f *.o zoltanSimple zoltanExample1 subDirs phgExample >>> rm -f *.o ex1 >>> rm -f *.o zCPPExample1 zCPPExample2 >>> Obj_generic >>> fort >>> fdriver >>> fdriver_old >>> exit 0 >>> make libzoltan_mem.a >>> (Re)Building dependency for ../Memory/mem.c... >>> Compiling ../Memory/mem.c... >>> creating library libzoltan_mem.a >>> make libzoltan_comm.a >>> (Re)Building dependency for ../Communication/comm_sort_**ints.c... >>> (Re)Building dependency for ../Communication/comm_resize.**c... >>> (Re)Building dependency for ../Communication/comm_**exchange_sizes.c... >>> (Re)Building dependency for ../Communication/comm_invert_**map.c... >>> (Re)Building dependency for ../Communication/comm_invert_**plan.c... >>> (Re)Building dependency for ../Communication/comm_info.c..**. >>> (Re)Building dependency for ../Communication/comm_destroy.**c... >>> (Re)Building dependency for ../Communication/comm_do_**reverse.c... >>> (Re)Building dependency for ../Communication/comm_do.c... >>> (Re)Building dependency for ../Communication/comm_create.**c... >>> Compiling ../Communication/comm_create.**c... >>> Compiling ../Communication/comm_do.c... >>> Compiling ../Communication/comm_do_**reverse.c... >>> Compiling ../Communication/comm_destroy.**c... >>> Compiling ../Communication/comm_info.c..**. >>> Compiling ../Communication/comm_invert_**plan.c... >>> Compiling ../Communication/comm_invert_**map.c... >>> Compiling ../Communication/comm_**exchange_sizes.c... >>> Compiling ../Communication/comm_resize.**c... >>> Compiling ../Communication/comm_sort_**ints.c... >>> creating library libzoltan_comm.a >>> make libzoltan_dd.a >>> (Re)Building dependency for ../shared/zoltan_align.c... 
>>> (Re)Building dependency for ../shared/zoltan_id.c... >>> (Re)Building dependency for ../DDirectory/DD_Set_Neighbor_** >>> Hash_Fn3.c... >>> (Re)Building dependency for ../DDirectory/DD_Set_Neighbor_** >>> Hash_Fn2.c... >>> (Re)Building dependency for ../DDirectory/DD_Set_Neighbor_** >>> Hash_Fn1.c... >>> (Re)Building dependency for ../DDirectory/DD_Print.c... >>> (Re)Building dependency for ../DDirectory/DD_Stats.c... >>> (Re)Building dependency for ../DDirectory/DD_Hash2.c... >>> (Re)Building dependency for ../DDirectory/DD_Set_Hash_Fn.**c... >>> (Re)Building dependency for ../DDirectory/DD_Update.c... >>> (Re)Building dependency for ../DDirectory/DD_Remove.c... >>> (Re)Building dependency for ../DDirectory/DD_Find.c... >>> (Re)Building dependency for ../DDirectory/DD_Destroy.c... >>> (Re)Building dependency for ../DDirectory/DD_Create.c... >>> Compiling ../DDirectory/DD_Create.c... >>> Compiling ../DDirectory/DD_Destroy.c... >>> Compiling ../DDirectory/DD_Find.c... >>> Compiling ../DDirectory/DD_Remove.c... >>> Compiling ../DDirectory/DD_Update.c... >>> Compiling ../DDirectory/DD_Set_Hash_Fn.**c... >>> Compiling ../DDirectory/DD_Hash2.c... >>> Compiling ../DDirectory/DD_Stats.c... >>> Compiling ../DDirectory/DD_Print.c... >>> Compiling ../DDirectory/DD_Set_Neighbor_**Hash_Fn1.c... >>> Compiling ../DDirectory/DD_Set_Neighbor_**Hash_Fn2.c... >>> Compiling ../DDirectory/DD_Set_Neighbor_**Hash_Fn3.c... >>> Compiling ../shared/zoltan_id.c... >>> Compiling ../shared/zoltan_align.c... >>> creating library libzoltan_dd.a >>> make libzoltan_timer.a >>> (Re)Building dependency for ../Timer/timer.c... >>> (Re)Building dependency for ../Timer/zoltan_timer.c... >>> Compiling ../Timer/zoltan_timer.c... >>> Compiling ../Timer/timer.c... >>> creating library libzoltan_timer.a >>> creating library libzoltan.a >>> (Re)Building dependency for ../reftree/reftree_coarse_**path.c... >>> (Re)Building dependency for ../reftree/reftree_hash.c... >>> (Re)Building dependency for ../reftree/reftree_part.c... >>> (Re)Building dependency for ../reftree/reftree_build.c... >>> (Re)Building dependency for ../ha/build_machine_desc.c... >>> (Re)Building dependency for ../ha/get_processor_name.c... >>> (Re)Building dependency for ../ha/divide_machine.c... >>> (Re)Building dependency for ../timer/timer_params.c... >>> (Re)Building dependency for ../phg/phg_patoh.c... >>> (Re)Building dependency for ../phg/phg_parkway.c... >>> (Re)Building dependency for ../phg/phg_order.c... >>> (Re)Building dependency for ../phg/phg_comm.c... >>> (Re)Building dependency for ../phg/phg_scale.c... >>> (Re)Building dependency for ../phg/phg_util.c... >>> (Re)Building dependency for ../phg/phg_rdivide.c... >>> (Re)Building dependency for ../phg/phg_Vcycle.c... >>> (Re)Building dependency for ../phg/phg_serialpartition.c..**. >>> (Re)Building dependency for ../phg/phg_refinement.c... >>> (Re)Building dependency for ../phg/phg_plot.c... >>> (Re)Building dependency for ../phg/phg_match.c... >>> (Re)Building dependency for ../phg/phg_gather.c... >>> (Re)Building dependency for ../phg/phg_distrib.c... >>> (Re)Building dependency for ../phg/phg_coarse.c... >>> (Re)Building dependency for ../phg/phg_build_calls.c... >>> (Re)Building dependency for ../phg/phg_build.c... >>> (Re)Building dependency for ../phg/phg_hypergraph.c... >>> (Re)Building dependency for ../phg/phg.c... >>> (Re)Building dependency for ../parmetis/scatter_graph.c... >>> (Re)Building dependency for ../parmetis/verify_graph.c... 
>>> (Re)Building dependency for ../parmetis/build_graph.c... >>> (Re)Building dependency for ../parmetis/parmetis_jostle.c.**.. >>> (Re)Building dependency for ../params/bind_param.c... >>> (Re)Building dependency for ../params/free_params.c... >>> (Re)Building dependency for ../params/key_params.c... >>> (Re)Building dependency for ../params/print_params.c... >>> (Re)Building dependency for ../params/check_param.c... >>> (Re)Building dependency for ../params/assign_param_vals.c.**.. >>> (Re)Building dependency for ../params/set_param.c... >>> (Re)Building dependency for ../hsfc/hsfc_point_assign.c... >>> (Re)Building dependency for ../hsfc/hsfc_box_assign.c... >>> (Re)Building dependency for ../hsfc/hsfc.c... >>> (Re)Building dependency for ../hsfc/hsfc_hilbert.c... >>> (Re)Building dependency for ../oct/oct_plot.c... >>> (Re)Building dependency for ../oct/rootlist.c... >>> (Re)Building dependency for ../oct/octree.c... >>> (Re)Building dependency for ../oct/migtags.c... >>> (Re)Building dependency for ../oct/migreg.c... >>> (Re)Building dependency for ../oct/output.c... >>> (Re)Building dependency for ../oct/migoct.c... >>> (Re)Building dependency for ../oct/costs.c... >>> (Re)Building dependency for ../oct/dfs.c... >>> (Re)Building dependency for ../oct/octupdate.c... >>> (Re)Building dependency for ../oct/oct_util.c... >>> (Re)Building dependency for ../oct/octant.c... >>> (Re)Building dependency for ../oct/msg.c... >>> (Re)Building dependency for ../order/perm.c... >>> (Re)Building dependency for ../order/order_struct.c... >>> (Re)Building dependency for ../order/order.c... >>> (Re)Building dependency for ../par/par_tflops_special.c... >>> (Re)Building dependency for ../par/par_stats.c... >>> (Re)Building dependency for ../par/par_sync.c... >>> (Re)Building dependency for ../par/par_median.c... >>> (Re)Building dependency for ../par/par_bisect.c... >>> (Re)Building dependency for ../par/par_average.c... >>> (Re)Building dependency for ../coloring/color_test.c... >>> (Re)Building dependency for ../coloring/g2l_hash.c... >>> (Re)Building dependency for ../coloring/coloring.c... >>> (Re)Building dependency for ../rcb/shared.c... >>> (Re)Building dependency for ../rcb/inertial3d.c... >>> (Re)Building dependency for ../rcb/inertial2d.c... >>> (Re)Building dependency for ../rcb/inertial1d.c... >>> (Re)Building dependency for ../rcb/rib_util.c... >>> (Re)Building dependency for ../rcb/rib.c... >>> (Re)Building dependency for ../rcb/create_proc_list.c... >>> (Re)Building dependency for ../rcb/point_assign.c... >>> (Re)Building dependency for ../rcb/box_assign.c... >>> (Re)Building dependency for ../rcb/rcb_box.c... >>> (Re)Building dependency for ../rcb/rcb_util.c... >>> (Re)Building dependency for ../rcb/rcb.c... >>> (Re)Building dependency for ../all/all_allo.c... >>> (Re)Building dependency for ../lb/lb_remap.c... >>> (Re)Building dependency for ../lb/lb_set_part_sizes.c... >>> (Re)Building dependency for ../lb/lb_part2proc.c... >>> (Re)Building dependency for ../lb/lb_box_assign.c... >>> (Re)Building dependency for ../lb/lb_point_assign.c... >>> (Re)Building dependency for ../lb/lb_set_method.c... >>> (Re)Building dependency for ../lb/lb_set_fn.c... >>> (Re)Building dependency for ../lb/lb_migrate.c... >>> (Re)Building dependency for ../lb/lb_invert.c... >>> (Re)Building dependency for ../lb/lb_init.c... >>> (Re)Building dependency for ../lb/lb_copy.c... >>> (Re)Building dependency for ../lb/lb_free.c... >>> (Re)Building dependency for ../lb/lb_eval.c... 
>>> (Re)Building dependency for ../lb/lb_balance.c... >>> (Re)Building dependency for ../zz/zz_rand.c... >>> (Re)Building dependency for ../zz/zz_sort.c... >>> (Re)Building dependency for ../zz/zz_heap.c... >>> (Re)Building dependency for ../zz/zz_hash.c... >>> (Re)Building dependency for ../zz/zz_gen_files.c... >>> (Re)Building dependency for ../zz/zz_util.c... >>> (Re)Building dependency for ../zz/zz_set_fn.c... >>> (Re)Building dependency for ../zz/zz_init.c... >>> (Re)Building dependency for ../zz/zz_struct.c... >>> (Re)Building dependency for ../zz/zz_obj_list.c... >>> (Re)Building dependency for ../zz/zz_coord.c... >>> Compiling ../zz/zz_coord.c... >>> Compiling ../zz/zz_obj_list.c... >>> Compiling ../zz/zz_struct.c... >>> Compiling ../zz/zz_init.c... >>> Compiling ../zz/zz_set_fn.c... >>> Compiling ../zz/zz_util.c... >>> Compiling ../zz/zz_gen_files.c... >>> Makefile:28: mem.d: No such file or directory >>> ../Memory/mem.c: In function 'Zoltan_Array_Alloc': >>> ../Memory/mem.c:162: warning: initialization discards qualifiers from >>> pointer target type >>> ../Memory/mem.c: In function 'Zoltan_Malloc': >>> ../Memory/mem.c:280: warning: initialization discards qualifiers from >>> pointer target type >>> ../Memory/mem.c: In function 'Zoltan_Realloc': >>> ../Memory/mem.c:345: warning: initialization discards qualifiers from >>> pointer target type >>> Makefile:28: comm_create.d: No such file or directory >>> Makefile:28: comm_do.d: No such file or directory >>> Makefile:28: comm_do_reverse.d: No such file or directory >>> Makefile:28: comm_destroy.d: No such file or directory >>> Makefile:28: comm_info.d: No such file or directory >>> Makefile:28: comm_invert_plan.d: No such file or directory >>> Makefile:28: comm_invert_map.d: No such file or directory >>> Makefile:28: comm_exchange_sizes.d: No such file or directory >>> Makefile:28: comm_resize.d: No such file or directory >>> Makefile:28: comm_sort_ints.d: No such file or directory >>> ../Communication/comm_create.**c: In function 'Zoltan_Comm_Create': >>> ../Communication/comm_create.**c:63: warning: initialization discards >>> qualifiers from pointer target type >>> ../Communication/comm_create.**c:79: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:123: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:124: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:125: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:170: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:195: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' 
but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:196: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:197: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:222: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:232: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:248: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:249: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:250: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:251: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:252: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:253: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:261: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:292: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:293: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c: In function 'Zoltan_Comm_Copy_To': >>> ../Communication/comm_create.**c:326: warning: initialization discards >>> qualifiers from pointer target type >>> ../Communication/comm_create.**c:342: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of 
>>> type 'const char *' >>> ../Communication/comm_create.**c:348: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:349: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:350: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:351: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:352: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:353: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:354: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:355: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:356: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:357: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:358: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:359: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:360: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:361: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:362: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> 
../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:364: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_create.**c:365: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do.c: In function 'Zoltan_Comm_Do_Post': >>> ../Communication/comm_do.c:85: warning: initialization discards >>> qualifiers >>> from pointer target type >>> ../Communication/comm_do.c:**134: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do.c:**180: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do.c:**193: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do.c:**195: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do.c:**262: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do.c:**334: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do.c: In function 'Zoltan_Comm_Do_Wait': >>> ../Communication/comm_do.c:**398: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c: In function >>> 'Zoltan_Comm_Do_Reverse_Post': >>> ../Communication/comm_do_**reverse.c:66: warning: initialization >>> discards >>> qualifiers from pointer target type >>> ../Communication/comm_do_**reverse.c:87: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:115: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:117: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:122: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> 
../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:124: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:125: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:132: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:133: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:134: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:140: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:141: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:142: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c: In function >>> 'Zoltan_Comm_Do_Reverse_Wait': >>> ../Communication/comm_do_**reverse.c:168: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:169: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:170: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:171: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:172: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:173: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> 
type 'const char *' >>> ../Communication/comm_do_**reverse.c:174: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:176: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:177: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_do_**reverse.c:178: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c: In function 'Zoltan_Comm_Destroy': >>> ../Communication/comm_destroy.**c:32: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:33: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:34: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:35: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:36: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:37: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:38: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:39: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:40: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:41: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:42: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers 
from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:43: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:44: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:45: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:46: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:47: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:48: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_destroy.**c:51: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_info.c: In function 'Zoltan_Comm_Info': >>> ../Communication/comm_info.c:**56: warning: initialization discards >>> qualifiers from pointer target type >>> ../Communication/comm_invert_**plan.c: In function >>> 'Zoltan_Comm_Invert_Plan': >>> ../Communication/comm_invert_**plan.c:42: warning: initialization >>> discards >>> qualifiers from pointer target type >>> ../Communication/comm_invert_**plan.c:65: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:98: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:99: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:108: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:109: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:110: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> 
../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:111: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:112: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:113: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:114: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:115: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:116: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:117: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:118: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:123: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:124: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**plan.c:125: warning: passing argument 2 >>> of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**map.c: In function >>> 'Zoltan_Comm_Invert_Map': >>> ../Communication/comm_invert_**map.c:62: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**map.c:63: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**map.c:69: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> 
../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**map.c:70: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**map.c:94: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**map.c:95: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**map.c:97: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_invert_**map.c:98: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c: In function 'Zoltan_Comm_Resize': >>> ../Communication/comm_resize.**c:53: warning: initialization discards >>> qualifiers from pointer target type >>> ../Communication/comm_resize.**c:70: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:71: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:72: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:73: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:74: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:75: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:76: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:99: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:107: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> 
../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:113: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:130: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:139: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:140: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:169: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:170: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:174: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:175: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:202: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:214: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:224: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:225: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:243: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:244: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:256: warning: passing argument 2 of >>> 'Zoltan_Free' discards 
qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:257: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:258: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:259: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:260: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:261: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:262: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:263: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:264: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_resize.**c:265: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_sort_**ints.c: In function >>> 'Zoltan_Comm_Sort_Ints': >>> ../Communication/comm_sort_**ints.c:52: warning: passing argument 3 of >>> 'Zoltan_Calloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:62: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_sort_**ints.c:53: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_sort_**ints.c:54: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Communication/comm_sort_**ints.c:78: warning: passing argument 1 of >>> 'Zoltan_Multifree' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:69: note: expected 'char *' but argument is of >>> type 'const char *' >>> Makefile:28: DD_Create.d: No such file or directory >>> Makefile:28: DD_Destroy.d: No such file or directory >>> Makefile:28: DD_Find.d: No such file or directory >>> Makefile:28: DD_Remove.d: No such file or 
directory >>> Makefile:28: DD_Update.d: No such file or directory >>> Makefile:28: DD_Set_Hash_Fn.d: No such file or directory >>> Makefile:28: DD_Hash2.d: No such file or directory >>> Makefile:28: DD_Stats.d: No such file or directory >>> Makefile:28: DD_Print.d: No such file or directory >>> Makefile:28: DD_Set_Neighbor_Hash_Fn1.d: No such file or directory >>> Makefile:28: DD_Set_Neighbor_Hash_Fn2.d: No such file or directory >>> Makefile:28: DD_Set_Neighbor_Hash_Fn3.d: No such file or directory >>> Makefile:28: zoltan_id.d: No such file or directory >>> Makefile:28: zoltan_align.d: No such file or directory >>> ../DDirectory/DD_Create.c: In function 'Zoltan_DD_Create': >>> ../DDirectory/DD_Create.c:45: warning: initialization discards qualifiers >>> from pointer target type >>> ../DDirectory/DD_Create.c:80: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Create.c: In function 'Zoltan_DD_Copy_To': >>> ../DDirectory/DD_Create.c:151: warning: initialization discards >>> qualifiers >>> from pointer target type >>> ../DDirectory/DD_Create.c:167: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Create.c: In function 'allocate_copy_list': >>> ../DDirectory/DD_Create.c:200: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Destroy.c: In function 'Zoltan_DD_Destroy': >>> ../DDirectory/DD_Destroy.c:46: warning: initialization discards >>> qualifiers >>> from pointer target type >>> ../DDirectory/DD_Destroy.c:63: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Destroy.c:75: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Find.c: In function 'Zoltan_DD_Find': >>> ../DDirectory/DD_Find.c:58: warning: initialization discards qualifiers >>> from pointer target type >>> ../DDirectory/DD_Find.c:77: warning: passing argument 2 of >>> 'Zoltan_Malloc' >>> discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Find.c:90: warning: passing argument 2 of >>> 'Zoltan_Malloc' >>> discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Find.c:93: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Find.c:129: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Find.c:197: warning: passing argument 2 of 'Zoltan_Free' >>> discards 
qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Find.c:198: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Find.c:199: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Find.c: In function 'DD_Find_Local': >>> ../DDirectory/DD_Find.c:240: warning: initialization discards qualifiers >>> from pointer target type >>> ../DDirectory/DD_Remove.c: In function 'Zoltan_DD_Remove': >>> ../DDirectory/DD_Remove.c:55: warning: initialization discards qualifiers >>> from pointer target type >>> ../DDirectory/DD_Remove.c:75: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Remove.c:88: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Remove.c:125: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Remove.c:161: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Remove.c:162: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Remove.c:163: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Remove.c: In function 'DD_Remove_Local': >>> ../DDirectory/DD_Remove.c:196: warning: initialization discards >>> qualifiers >>> from pointer target type >>> ../DDirectory/DD_Remove.c:220: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Update.c: In function 'Zoltan_DD_Update': >>> ../DDirectory/DD_Update.c:63: warning: initialization discards qualifiers >>> from pointer target type >>> ../DDirectory/DD_Update.c:91: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Update.c:104: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Update.c:155: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but 
argument is of >>> type 'const char *' >>> ../DDirectory/DD_Update.c:200: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Update.c:201: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Update.c:202: warning: passing argument 2 of >>> 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Update.c: In function 'DD_Update_Local': >>> ../DDirectory/DD_Update.c:239: warning: initialization discards >>> qualifiers >>> from pointer target type >>> ../DDirectory/DD_Update.c:303: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Set_Hash_Fn.**c: In function 'Zoltan_DD_Set_Hash_Fn': >>> ../DDirectory/DD_Set_Hash_Fn.**c:42: warning: initialization discards >>> qualifiers from pointer target type >>> ../DDirectory/DD_Stats.c: In function 'Zoltan_DD_Stats': >>> ../DDirectory/DD_Stats.c:48: warning: initialization discards qualifiers >>> from pointer target type >>> ../DDirectory/DD_Print.c: In function 'Zoltan_DD_Print': >>> ../DDirectory/DD_Print.c:42: warning: initialization discards qualifiers >>> from pointer target type >>> ../DDirectory/DD_Set_Neighbor_**Hash_Fn1.c: In function >>> 'Zoltan_DD_Set_Neighbor_Hash_**Fn1': >>> ../DDirectory/DD_Set_Neighbor_**Hash_Fn1.c:52: warning: initialization >>> discards qualifiers from pointer target type >>> ../DDirectory/DD_Set_Neighbor_**Hash_Fn2.c: In function >>> 'Zoltan_DD_Set_Neighbor_Hash_**Fn2': >>> ../DDirectory/DD_Set_Neighbor_**Hash_Fn2.c:75: warning: initialization >>> discards qualifiers from pointer target type >>> ../DDirectory/DD_Set_Neighbor_**Hash_Fn2.c:88: warning: passing >>> argument 2 >>> of 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Set_Neighbor_**Hash_Fn2.c: In function 'dd_nh2': >>> ../DDirectory/DD_Set_Neighbor_**Hash_Fn2.c:119: warning: initialization >>> discards qualifiers from pointer target type >>> ../DDirectory/DD_Set_Neighbor_**Hash_Fn2.c: In function >>> 'dd_nh2_cleanup': >>> ../DDirectory/DD_Set_Neighbor_**Hash_Fn2.c:169: warning: passing >>> argument 2 >>> of 'Zoltan_Free' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../DDirectory/DD_Set_Neighbor_**Hash_Fn3.c: In function >>> 'Zoltan_DD_Set_Neighbor_Hash_**Fn3': >>> ../DDirectory/DD_Set_Neighbor_**Hash_Fn3.c:55: warning: initialization >>> discards qualifiers from pointer target type >>> ../shared/zoltan_id.c: In function 'ZOLTAN_Malloc_ID': >>> ../shared/zoltan_id.c:51: warning: initialization discards qualifiers >>> from >>> pointer target type >>> Makefile:28: zoltan_timer.d: No such file or directory >>> Makefile:28: timer.d: No such file or directory >>> ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Copy_To': >>> ../Timer/zoltan_timer.c:129: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards 
qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Timer/zoltan_timer.c:136: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Create': >>> ../Timer/zoltan_timer.c:158: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Timer/zoltan_timer.c:159: warning: passing argument 2 of >>> 'Zoltan_Malloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Init': >>> ../Timer/zoltan_timer.c:181: warning: initialization discards qualifiers >>> from pointer target type >>> ../Timer/zoltan_timer.c:190: warning: passing argument 3 of >>> 'Zoltan_Realloc' discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:64: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Reset': >>> ../Timer/zoltan_timer.c:216: warning: initialization discards qualifiers >>> from pointer target type >>> ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_ChangeFlag': >>> ../Timer/zoltan_timer.c:245: warning: initialization discards qualifiers >>> from pointer target type >>> ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Start': >>> ../Timer/zoltan_timer.c:263: warning: initialization discards qualifiers >>> from pointer target type >>> ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Stop': >>> ../Timer/zoltan_timer.c:306: warning: initialization discards qualifiers >>> from pointer target type >>> ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Print': >>> ../Timer/zoltan_timer.c:358: warning: initialization discards qualifiers >>> from pointer target type >>> ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_PrintAll': >>> ../Timer/zoltan_timer.c:393: warning: initialization discards qualifiers >>> from pointer target type >>> ../Timer/zoltan_timer.c: In function 'Zoltan_Timer_Destroy': >>> ../Timer/zoltan_timer.c:411: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../Timer/zoltan_timer.c:412: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> Makefile:28: zz_coord.d: No such file or directory >>> Makefile:28: zz_obj_list.d: No such file or directory >>> Makefile:28: zz_struct.d: No such file or directory >>> Makefile:28: zz_init.d: No such file or directory >>> Makefile:28: zz_set_fn.d: No such file or directory >>> Makefile:28: zz_util.d: No such file or directory >>> Makefile:28: zz_gen_files.d: No such file or directory >>> Makefile:28: zz_hash.d: No such file or directory >>> Makefile:28: zz_heap.d: No such file or directory >>> Makefile:28: zz_sort.d: No such file or directory >>> Makefile:28: zz_rand.d: No such file or directory >>> Makefile:28: lb_balance.d: No such file or directory >>> Makefile:28: lb_eval.d: No such file or 
directory >>> Makefile:28: lb_free.d: No such file or directory >>> Makefile:28: lb_copy.d: No such file or directory >>> Makefile:28: lb_init.d: No such file or directory >>> Makefile:28: lb_invert.d: No such file or directory >>> Makefile:28: lb_migrate.d: No such file or directory >>> Makefile:28: lb_set_fn.d: No such file or directory >>> Makefile:28: lb_set_method.d: No such file or directory >>> Makefile:28: lb_point_assign.d: No such file or directory >>> Makefile:28: lb_box_assign.d: No such file or directory >>> Makefile:28: lb_part2proc.d: No such file or directory >>> Makefile:28: lb_set_part_sizes.d: No such file or directory >>> Makefile:28: lb_remap.d: No such file or directory >>> Makefile:28: all_allo.d: No such file or directory >>> Makefile:28: rcb.d: No such file or directory >>> Makefile:28: rcb_util.d: No such file or directory >>> Makefile:28: rcb_box.d: No such file or directory >>> Makefile:28: box_assign.d: No such file or directory >>> Makefile:28: point_assign.d: No such file or directory >>> Makefile:28: create_proc_list.d: No such file or directory >>> Makefile:28: rib.d: No such file or directory >>> Makefile:28: rib_util.d: No such file or directory >>> Makefile:28: inertial1d.d: No such file or directory >>> Makefile:28: inertial2d.d: No such file or directory >>> Makefile:28: inertial3d.d: No such file or directory >>> Makefile:28: shared.d: No such file or directory >>> Makefile:28: par_average.d: No such file or directory >>> Makefile:28: par_bisect.d: No such file or directory >>> Makefile:28: par_median.d: No such file or directory >>> Makefile:28: par_sync.d: No such file or directory >>> Makefile:28: par_stats.d: No such file or directory >>> Makefile:28: par_tflops_special.d: No such file or directory >>> Makefile:28: coloring.d: No such file or directory >>> Makefile:28: g2l_hash.d: No such file or directory >>> Makefile:28: color_test.d: No such file or directory >>> Makefile:28: par_average.d: No such file or directory >>> Makefile:28: par_bisect.d: No such file or directory >>> Makefile:28: par_median.d: No such file or directory >>> Makefile:28: par_sync.d: No such file or directory >>> Makefile:28: par_stats.d: No such file or directory >>> Makefile:28: par_tflops_special.d: No such file or directory >>> Makefile:28: order.d: No such file or directory >>> Makefile:28: order_struct.d: No such file or directory >>> Makefile:28: perm.d: No such file or directory >>> Makefile:28: msg.d: No such file or directory >>> Makefile:28: octant.d: No such file or directory >>> Makefile:28: oct_util.d: No such file or directory >>> Makefile:28: octupdate.d: No such file or directory >>> Makefile:28: dfs.d: No such file or directory >>> Makefile:28: costs.d: No such file or directory >>> Makefile:28: migoct.d: No such file or directory >>> Makefile:28: output.d: No such file or directory >>> Makefile:28: migreg.d: No such file or directory >>> Makefile:28: migtags.d: No such file or directory >>> Makefile:28: octree.d: No such file or directory >>> Makefile:28: rootlist.d: No such file or directory >>> Makefile:28: oct_plot.d: No such file or directory >>> Makefile:28: hsfc_hilbert.d: No such file or directory >>> Makefile:28: hsfc.d: No such file or directory >>> Makefile:28: hsfc_box_assign.d: No such file or directory >>> Makefile:28: hsfc_point_assign.d: No such file or directory >>> Makefile:28: set_param.d: No such file or directory >>> Makefile:28: assign_param_vals.d: No such file or directory >>> Makefile:28: check_param.d: No such file or directory 
>>> Makefile:28: print_params.d: No such file or directory >>> Makefile:28: key_params.d: No such file or directory >>> Makefile:28: free_params.d: No such file or directory >>> Makefile:28: bind_param.d: No such file or directory >>> Makefile:28: parmetis_jostle.d: No such file or directory >>> Makefile:28: build_graph.d: No such file or directory >>> Makefile:28: verify_graph.d: No such file or directory >>> Makefile:28: scatter_graph.d: No such file or directory >>> Makefile:28: phg.d: No such file or directory >>> Makefile:28: phg_hypergraph.d: No such file or directory >>> Makefile:28: phg_build.d: No such file or directory >>> Makefile:28: phg_build_calls.d: No such file or directory >>> Makefile:28: phg_coarse.d: No such file or directory >>> Makefile:28: phg_distrib.d: No such file or directory >>> Makefile:28: phg_gather.d: No such file or directory >>> Makefile:28: phg_match.d: No such file or directory >>> Makefile:28: phg_plot.d: No such file or directory >>> Makefile:28: phg_refinement.d: No such file or directory >>> Makefile:28: phg_serialpartition.d: No such file or directory >>> Makefile:28: phg_Vcycle.d: No such file or directory >>> Makefile:28: phg_rdivide.d: No such file or directory >>> Makefile:28: phg_util.d: No such file or directory >>> Makefile:28: phg_scale.d: No such file or directory >>> Makefile:28: phg_comm.d: No such file or directory >>> Makefile:28: phg_order.d: No such file or directory >>> Makefile:28: phg_parkway.d: No such file or directory >>> Makefile:28: phg_patoh.d: No such file or directory >>> Makefile:28: timer_params.d: No such file or directory >>> Makefile:28: divide_machine.d: No such file or directory >>> Makefile:28: get_processor_name.d: No such file or directory >>> Makefile:28: build_machine_desc.d: No such file or directory >>> Makefile:28: reftree_build.d: No such file or directory >>> Makefile:28: reftree_part.d: No such file or directory >>> Makefile:28: reftree_hash.d: No such file or directory >>> Makefile:28: reftree_coarse_path.d: No such file or directory >>> ../zz/zz_coord.c:44: warning: initialization discards qualifiers from >>> pointer target type >>> ../zz/zz_coord.c:44: warning: initialization discards qualifiers from >>> pointer target type >>> ../zz/zz_coord.c:45: warning: initialization discards qualifiers from >>> pointer target type >>> ../zz/zz_coord.c:45: warning: initialization discards qualifiers from >>> pointer target type >>> ../zz/zz_coord.c:46: warning: initialization discards qualifiers from >>> pointer target type >>> ../zz/zz_coord.c:46: warning: initialization discards qualifiers from >>> pointer target type >>> ../zz/zz_coord.c: In function 'Zoltan_Get_Coordinates': >>> ../zz/zz_coord.c:76: warning: initialization discards qualifiers from >>> pointer target type >>> ../zz/zz_coord.c:125: warning: passing argument 2 of 'Zoltan_Malloc' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_coord.c:177: warning: passing argument 2 of 'Zoltan_Bind_Param' >>> discards qualifiers from pointer target type >>> ../params/params_const.h:81: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_coord.c:179: warning: passing argument 2 of 'Zoltan_Bind_Param' >>> discards qualifiers from pointer target type >>> ../params/params_const.h:81: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_coord.c:180: warning: passing argument 2 of 'Zoltan_Bind_Param' >>> 
discards qualifiers from pointer target type >>> ../params/params_const.h:81: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_coord.c:395: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_obj_list.c: In function 'Zoltan_Get_Obj_List': >>> ../zz/zz_obj_list.c:44: warning: initialization discards qualifiers from >>> pointer target type >>> ../zz/zz_obj_list.c:83: warning: passing argument 2 of 'ZOLTAN_Malloc_ID' >>> discards qualifiers from pointer target type >>> ../Utilities/shared/zoltan_id.**h:63: note: expected 'char *' but >>> argument >>> is of type 'const char *' >>> ../zz/zz_obj_list.c:87: warning: passing argument 2 of 'ZOLTAN_Malloc_ID' >>> discards qualifiers from pointer target type >>> ../Utilities/shared/zoltan_id.**h:63: note: expected 'char *' but >>> argument >>> is of type 'const char *' >>> ../zz/zz_obj_list.c:92: warning: passing argument 2 of 'Zoltan_Malloc' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_obj_list.c:142: warning: passing argument 2 of 'Zoltan_Malloc' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_obj_list.c:180: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_obj_list.c:181: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_obj_list.c:182: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_struct.c: In function 'Zoltan_Create': >>> ../zz/zz_struct.c:57: warning: initialization discards qualifiers from >>> pointer target type >>> ../zz/zz_struct.c:64: warning: passing argument 2 of 'Zoltan_Malloc' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_struct.c: In function 'Zoltan_Destroy': >>> ../zz/zz_struct.c:176: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_set_fn.c: In function 'Zoltan_Set_Fn': >>> ../zz/zz_set_fn.c:55: warning: initialization discards qualifiers from >>> pointer target type >>> ../zz/zz_util.c: In function 'Zoltan_Clean_String': >>> ../zz/zz_util.c:56: warning: passing argument 2 of 'Zoltan_Malloc' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_util.c: In function 'Zoltan_Strdup': >>> ../zz/zz_util.c:83: warning: passing argument 2 of 'Zoltan_Malloc' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> In file included from ../zz/zz_gen_files.c:21: >>> ../parmetis/parmetis_jostle.h:**35: 
error: expected declaration specifiers or '...' before 'idxtype'
>>> ../parmetis/parmetis_jostle.h:35: error: expected declaration specifiers or '...' before 'idxtype'
>>> ../parmetis/parmetis_jostle.h:35: error: expected declaration specifiers or '...' before 'idxtype'
>>> ../parmetis/parmetis_jostle.h:35: error: expected declaration specifiers or '...' before 'idxtype'
>>> ../parmetis/parmetis_jostle.h:35: error: conflicting types for 'METIS_NodeND'
>>> /opt/build_for_gcc-mp-4.4/petsc-3.3-p3/darwin10.2.0-c-debug/include/metis.h:224: note: previous declaration of 'METIS_NodeND' was here
>>> ../parmetis/parmetis_jostle.h:134: error: expected declaration specifiers or '...' before 'idxtype'
>>> ../parmetis/parmetis_jostle.h:134: error: expected declaration specifiers or '...' before 'idxtype'
>>> ../parmetis/parmetis_jostle.h:135: error: expected declaration specifiers or '...' before 'idxtype'
>>> ../parmetis/parmetis_jostle.h:135: error: expected declaration specifiers or '...' before 'idxtype'
>>> ../parmetis/parmetis_jostle.h:135: error: expected declaration specifiers or '...' before 'idxtype'
>>> ../parmetis/parmetis_jostle.h:138: error: expected ')' before '*' token
>>> ../parmetis/parmetis_jostle.h:144: error: expected declaration specifiers or '...' before 'idxtype'
>>> ../parmetis/parmetis_jostle.h:144: error: expected declaration specifiers or '...' before 'idxtype'
>>> ../parmetis/parmetis_jostle.h:144: error: expected declaration specifiers or '...' before 'idxtype'
>>> ../zz/zz_gen_files.c: In function 'Zoltan_Generate_Files':
>>> ../zz/zz_gen_files.c:93: warning: initialization discards qualifiers from pointer target type
>>> ../zz/zz_gen_files.c:107: warning: assignment discards qualifiers from pointer target type
>>> ../zz/zz_gen_files.c:123: warning: passing argument 9 of 'Zoltan_Build_Graph' from incompatible pointer type
>>> ../parmetis/parmetis_jostle.h:141: note: expected 'float **' but argument is of type 'int **'
>>> ../zz/zz_gen_files.c:123: error: too many arguments to function 'Zoltan_Build_Graph'
>>> ../zz/zz_gen_files.c:488: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type
>>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *'
>>> ../zz/zz_gen_files.c:489: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type
>>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *'
>>> ../zz/zz_gen_files.c:490: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type
>>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *'
>>> ../zz/zz_gen_files.c:491: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type
>>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *'
>>> ../zz/zz_gen_files.c:492: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type
>>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type 'const char *'
>>> ../zz/zz_gen_files.c:493: warning: passing argument 2 of 'Zoltan_Free' discards qualifiers from pointer target type
>>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of type
'const char *' >>> ../zz/zz_gen_files.c:494: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:495: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:496: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:497: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:500: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:501: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:502: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:503: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:504: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c: In function 'turn_off_reduce_dimensions': >>> ../zz/zz_gen_files.c:515: warning: initialization discards qualifiers >>> from >>> pointer target type >>> ../zz/zz_gen_files.c:515: warning: initialization discards qualifiers >>> from >>> pointer target type >>> ../zz/zz_gen_files.c:518: warning: passing argument 2 of >>> 'Zoltan_Bind_Param' discards qualifiers from pointer target type >>> ../params/params_const.h:81: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c: In function 'Zoltan_HG_Get_Pins': >>> ../zz/zz_gen_files.c:533: warning: initialization discards qualifiers >>> from >>> pointer target type >>> ../zz/zz_gen_files.c:553: warning: passing argument 2 of >>> 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type >>> ../Utilities/shared/zoltan_id.**h:63: note: expected 'char *' but >>> argument >>> is of type 'const char *' >>> ../zz/zz_gen_files.c:554: warning: passing argument 2 of >>> 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type >>> ../Utilities/shared/zoltan_id.**h:63: note: expected 'char *' but >>> argument >>> is of type 'const char *' >>> ../zz/zz_gen_files.c:556: warning: passing argument 2 of 'Zoltan_Malloc' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:559: warning: passing argument 1 of >>> 'Zoltan_Multifree' discards qualifiers from pointer target type >>> 
../include/zoltan_mem.h:69: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:568: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:583: warning: passing argument 1 of >>> 'Zoltan_Multifree' discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:69: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:602: warning: passing argument 1 of >>> 'Zoltan_Multifree' discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:69: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c: In function 'fan_in_edge_global_ids': >>> ../zz/zz_gen_files.c:661: warning: passing argument 3 of 'Zoltan_Calloc' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:62: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:664: warning: passing argument 2 of >>> 'ZOLTAN_Malloc_ID' discards qualifiers from pointer target type >>> ../Utilities/shared/zoltan_id.**h:63: note: expected 'char *' but >>> argument >>> is of type 'const char *' >>> ../zz/zz_gen_files.c:700: warning: passing argument 3 of 'Zoltan_Realloc' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:64: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:725: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:730: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:731: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c:732: warning: passing argument 2 of 'Zoltan_Free' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:61: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c: In function 'augment_search_structure': >>> ../zz/zz_gen_files.c:767: warning: passing argument 2 of 'Zoltan_Malloc' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:63: note: expected 'char *' but argument is of >>> type 'const char *' >>> ../zz/zz_gen_files.c: In function 'merge_gids': >>> ../zz/zz_gen_files.c:796: warning: passing argument 3 of 'Zoltan_Realloc' >>> discards qualifiers from pointer target type >>> ../include/zoltan_mem.h:64: note: expected 'char *' but argument is of >>> type 'const char *' >>> make[1]: *** [zz_gen_files.o] Error 1 >>> make: *** [zoltan] Error 2 >>> >>> **************************************************************** >>> ******************* >>> >>> >>> >>> >> > -- > Scanned by iCritical. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Fri Sep 21 07:45:54 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 21 Sep 2012 07:45:54 -0500 Subject: [petsc-users] difference between left and right pc In-Reply-To: References: Message-ID: What happens if you use -pc_type lu ? On Sep 21, 2012, at 3:29 AM, "Klaij, Christiaan" wrote: > > When I use zero initial guess, GMRES with left PC gives a huge > jump in true resisdual between iteration 0 and 1 and GMRES with > right PC is stuck, the solution remains zero, as mentioned before. > > When I use the Knoll trick, both issues are gone (!) and I do get > similar results for left and right preconditioning, both for the > iteration count and for the physics of the solution. > > I didn't expect such a difference, did you? If so, why? Somehow > it must be related to the rhs being quite small. > > > GMRES, left PC, initial guess zero: > 0 KSP preconditioned resid norm 2.980694554053e+01 true resid norm 7.058057578378e-05 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 1.121717063399e+01 true resid norm 2.239445995669e+00 ||r(i)||/||b|| 3.172892783603e+04 > 2 KSP preconditioned resid norm 8.419094257245e+00 true resid norm 4.482707776056e+00 ||r(i)||/||b|| 6.351191848857e+04 > 3 KSP preconditioned resid norm 6.113655853636e+00 true resid norm 1.759217056899e+00 ||r(i)||/||b|| 2.492494623858e+04 > 4 KSP preconditioned resid norm 4.949889403847e+00 true resid norm 3.572848898052e-01 ||r(i)||/||b|| 5.062085224406e+03 > 5 KSP preconditioned resid norm 4.187220822242e+00 true resid norm 7.071876172117e-01 ||r(i)||/||b|| 1.001957846558e+04 > 6 KSP preconditioned resid norm 3.598699773848e+00 true resid norm 6.751395101318e-01 ||r(i)||/||b|| 9.565514344910e+03 > 7 KSP preconditioned resid norm 3.024026574700e+00 true resid norm 5.529170377011e-01 ||r(i)||/||b|| 7.833841415447e+03 > 8 KSP preconditioned resid norm 2.609636515722e+00 true resid norm 5.689992696649e-01 ||r(i)||/||b|| 8.061697759565e+03 > 9 KSP preconditioned resid norm 2.254221020819e+00 true resid norm 4.949259965429e-01 ||r(i)||/||b|| 7.012212510975e+03 > 10 KSP preconditioned resid norm 1.873529244708e+00 true resid norm 6.109183824231e-01 ||r(i)||/||b|| 8.655616302912e+03 > 11 KSP preconditioned resid norm 1.505474576580e+00 true resid norm 4.363808762555e-01 ||r(i)||/||b|| 6.182733300340e+03 > 12 KSP preconditioned resid norm 1.273391808351e+00 true resid norm 5.799473619663e-01 ||r(i)||/||b|| 8.216812565299e+03 > 13 KSP preconditioned resid norm 1.092596045026e+00 true resid norm 5.341297417537e-01 ||r(i)||/||b|| 7.567659172829e+03 > 14 KSP preconditioned resid norm 9.145639963916e-01 true resid norm 4.424631524670e-01 ||r(i)||/||b|| 6.268908230820e+03 > 15 KSP preconditioned resid norm 7.619506249149e-01 true resid norm 4.154893466277e-01 ||r(i)||/||b|| 5.886737845558e+03 > 16 KSP preconditioned resid norm 6.305034569873e-01 true resid norm 4.166530590059e-01 ||r(i)||/||b|| 5.903225559994e+03 > 17 KSP preconditioned resid norm 5.020718919136e-01 true resid norm 3.542538361268e-01 ||r(i)||/||b|| 5.019140637390e+03 > 18 KSP preconditioned resid norm 4.099172843566e-01 true resid norm 2.942812083953e-01 ||r(i)||/||b|| 4.169436209997e+03 > 19 KSP preconditioned resid norm 3.456791256934e-01 true resid norm 2.474858759247e-01 ||r(i)||/||b|| 3.506430390747e+03 > 20 KSP preconditioned resid norm 2.730195605094e-01 true resid norm 2.641558094323e-01 ||r(i)||/||b|| 3.742613410260e+03 > > > GMRES, right PC, initial guess zero: > 0 KSP unpreconditioned resid norm 7.058057578378e-05 
true resid norm 7.058057578378e-05 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 7.054747142321e-05 true resid norm 7.054747142321e-05 ||r(i)||/||b|| 9.995309706643e-01 > 2 KSP unpreconditioned resid norm 7.020651831374e-05 true resid norm 7.020651831373e-05 ||r(i)||/||b|| 9.947002774360e-01 > 3 KSP unpreconditioned resid norm 7.006225380599e-05 true resid norm 7.006225374905e-05 ||r(i)||/||b|| 9.926563076458e-01 > 4 KSP unpreconditioned resid norm 7.004188290578e-05 true resid norm 7.004188287852e-05 ||r(i)||/||b|| 9.923676889953e-01 > 5 KSP unpreconditioned resid norm 7.004130975499e-05 true resid norm 7.004130973557e-05 ||r(i)||/||b|| 9.923595685891e-01 > 6 KSP unpreconditioned resid norm 7.002915081650e-05 true resid norm 7.002915072237e-05 ||r(i)||/||b|| 9.921872972090e-01 > 7 KSP unpreconditioned resid norm 6.992906439247e-05 true resid norm 6.992906454226e-05 ||r(i)||/||b|| 9.907692557861e-01 > 8 KSP unpreconditioned resid norm 6.992498998319e-05 true resid norm 6.992499016218e-05 ||r(i)||/||b|| 9.907115291379e-01 > 9 KSP unpreconditioned resid norm 6.992334551935e-05 true resid norm 6.992334572069e-05 ||r(i)||/||b|| 9.906882303552e-01 > 10 KSP unpreconditioned resid norm 6.992269976389e-05 true resid norm 6.992269995062e-05 ||r(i)||/||b|| 9.906790809531e-01 > 11 KSP unpreconditioned resid norm 6.992074987133e-05 true resid norm 6.992075003662e-05 ||r(i)||/||b|| 9.906514541736e-01 > 12 KSP unpreconditioned resid norm 6.991044260131e-05 true resid norm 6.991044279866e-05 ||r(i)||/||b|| 9.905054191230e-01 > 13 KSP unpreconditioned resid norm 6.990672948921e-05 true resid norm 6.990672970817e-05 ||r(i)||/||b|| 9.904528112992e-01 > 14 KSP unpreconditioned resid norm 6.990672944080e-05 true resid norm 6.990672965979e-05 ||r(i)||/||b|| 9.904528106138e-01 > 15 KSP unpreconditioned resid norm 6.990484339200e-05 true resid norm 6.990484361213e-05 ||r(i)||/||b|| 9.904260887057e-01 > 16 KSP unpreconditioned resid norm 6.990392558763e-05 true resid norm 6.990392579857e-05 ||r(i)||/||b|| 9.904130849359e-01 > 17 KSP unpreconditioned resid norm 6.990024258014e-05 true resid norm 6.990024281929e-05 ||r(i)||/||b|| 9.903609037340e-01 > 18 KSP unpreconditioned resid norm 6.989684197988e-05 true resid norm 6.989684218329e-05 ||r(i)||/||b|| 9.903127228292e-01 > 19 KSP unpreconditioned resid norm 6.985738628710e-05 true resid norm 6.985738637250e-05 ||r(i)||/||b|| 9.897537048507e-01 > 20 KSP unpreconditioned resid norm 6.984955654109e-05 true resid norm 6.984955670609e-05 ||r(i)||/||b|| 9.896427725393e-01 > > > GMRES, left PC, initial guess Knoll: > 0 KSP preconditioned resid norm 2.536595064974e+01 true resid norm 3.944974940985e-01 ||r(i)||/||b|| 5.589320995439e+03 > 1 KSP preconditioned resid norm 9.971908215661e+00 true resid norm 4.077906575518e-01 ||r(i)||/||b|| 5.777661247778e+03 > 2 KSP preconditioned resid norm 6.198762212035e+00 true resid norm 3.853236164713e-01 ||r(i)||/||b|| 5.459343625245e+03 > 3 KSP preconditioned resid norm 4.951817153938e+00 true resid norm 4.674909493321e-01 ||r(i)||/||b|| 6.623507163843e+03 > 4 KSP preconditioned resid norm 4.188493255513e+00 true resid norm 6.015128222440e-01 ||r(i)||/||b|| 8.522356407046e+03 > 5 KSP preconditioned resid norm 3.598679986172e+00 true resid norm 6.793369544706e-01 ||r(i)||/||b|| 9.624984592810e+03 > 6 KSP preconditioned resid norm 3.026306816528e+00 true resid norm 6.207257182597e-01 ||r(i)||/||b|| 8.794568638278e+03 > 7 KSP preconditioned resid norm 2.611366914822e+00 true resid norm 4.740155530307e-01 
||r(i)||/||b|| 6.715949080421e+03 > 8 KSP preconditioned resid norm 2.255031232002e+00 true resid norm 5.349008778459e-01 ||r(i)||/||b|| 7.578584786338e+03 > 9 KSP preconditioned resid norm 1.876627495146e+00 true resid norm 5.138940584869e-01 ||r(i)||/||b|| 7.280955883120e+03 > 10 KSP preconditioned resid norm 1.506188962283e+00 true resid norm 4.643670549998e-01 ||r(i)||/||b|| 6.579247191499e+03 > 11 KSP preconditioned resid norm 1.273921387013e+00 true resid norm 5.547288999142e-01 ||r(i)||/||b|| 7.859512249002e+03 > 12 KSP preconditioned resid norm 1.093007569456e+00 true resid norm 5.471894647153e-01 ||r(i)||/||b|| 7.752691992647e+03 > 13 KSP preconditioned resid norm 9.148792210071e-01 true resid norm 4.290679437670e-01 ||r(i)||/||b|| 6.079122180604e+03 > 14 KSP preconditioned resid norm 7.621434670141e-01 true resid norm 4.204495201638e-01 ||r(i)||/||b|| 5.957014596364e+03 > 15 KSP preconditioned resid norm 6.309047197329e-01 true resid norm 4.055181123700e-01 ||r(i)||/||b|| 5.745463363919e+03 > 16 KSP preconditioned resid norm 5.022199545208e-01 true resid norm 3.536062790063e-01 ||r(i)||/||b|| 5.009965915970e+03 > 17 KSP preconditioned resid norm 4.100790738955e-01 true resid norm 2.905349530225e-01 ||r(i)||/||b|| 4.116358499434e+03 > 18 KSP preconditioned resid norm 3.458485004841e-01 true resid norm 2.487595183782e-01 ||r(i)||/||b|| 3.524475616921e+03 > 19 KSP preconditioned resid norm 2.732837304652e-01 true resid norm 2.634708100274e-01 ||r(i)||/||b|| 3.732908198915e+03 > 20 KSP preconditioned resid norm 2.148737365384e-01 true resid norm 2.428458971510e-01 ||r(i)||/||b|| 3.440690224672e+03 > > > GMRES, right PC, initial guess Knoll: > 0 KSP unpreconditioned resid norm 3.944974940985e-01 true resid norm 3.944974940985e-01 ||r(i)||/||b|| 5.589320995439e+03 > 1 KSP unpreconditioned resid norm 3.940913664585e-01 true resid norm 3.940913578719e-01 ||r(i)||/||b|| 5.583566774507e+03 > 2 KSP unpreconditioned resid norm 3.605826275419e-01 true resid norm 3.605793788632e-01 ||r(i)||/||b|| 5.108762217636e+03 > 3 KSP unpreconditioned resid norm 3.525824891368e-01 true resid norm 3.525778821245e-01 ||r(i)||/||b|| 4.995395379099e+03 > 4 KSP unpreconditioned resid norm 3.517641531250e-01 true resid norm 3.517589349038e-01 ||r(i)||/||b|| 4.983792367766e+03 > 5 KSP unpreconditioned resid norm 3.325758429609e-01 true resid norm 3.325658592167e-01 ||r(i)||/||b|| 4.711860954995e+03 > 6 KSP unpreconditioned resid norm 3.247494282670e-01 true resid norm 3.247370694407e-01 ||r(i)||/||b|| 4.600941063948e+03 > 7 KSP unpreconditioned resid norm 3.189071703462e-01 true resid norm 3.188945126635e-01 ||r(i)||/||b|| 4.518162527328e+03 > 8 KSP unpreconditioned resid norm 3.151473946748e-01 true resid norm 3.151329240142e-01 ||r(i)||/||b|| 4.464867571775e+03 > 9 KSP unpreconditioned resid norm 3.051927212838e-01 true resid norm 3.051755614688e-01 ||r(i)||/||b|| 4.323789627385e+03 > 10 KSP unpreconditioned resid norm 3.002500146185e-01 true resid norm 3.002270739730e-01 ||r(i)||/||b|| 4.253678446783e+03 > 11 KSP unpreconditioned resid norm 2.901039932221e-01 true resid norm 2.900782484433e-01 ||r(i)||/||b|| 4.109887815762e+03 > 12 KSP unpreconditioned resid norm 2.841009118718e-01 true resid norm 2.840678386064e-01 ||r(i)||/||b|| 4.024731102742e+03 > 13 KSP unpreconditioned resid norm 2.688743733473e-01 true resid norm 2.688391744653e-01 ||r(i)||/||b|| 3.808968281711e+03 > 14 KSP unpreconditioned resid norm 2.610702842020e-01 true resid norm 2.610215351071e-01 ||r(i)||/||b|| 3.698206372058e+03 > 15 KSP 
unpreconditioned resid norm 2.446243742637e-01 true resid norm 2.445600184232e-01 ||r(i)||/||b|| 3.464976244631e+03
> 16 KSP unpreconditioned resid norm 2.349762947397e-01 true resid norm 2.348876685032e-01 ||r(i)||/||b|| 3.327936417276e+03
> 17 KSP unpreconditioned resid norm 2.114220662561e-01 true resid norm 2.113137631497e-01 ||r(i)||/||b|| 2.993936515863e+03
> 18 KSP unpreconditioned resid norm 1.946947209126e-01 true resid norm 1.945481276255e-01 ||r(i)||/||b|| 2.756397570650e+03
> 19 KSP unpreconditioned resid norm 1.769817110500e-01 true resid norm 1.767956706672e-01 ||r(i)||/||b|| 2.504877137994e+03
> 20 KSP unpreconditioned resid norm 1.591399642978e-01 true resid norm 1.589097747697e-01 ||r(i)||/||b|| 2.251466115217e+03
>
>
> dr. ir. Christiaan Klaij
> CFD Researcher
> Research & Development
> E mailto:C.Klaij at marin.nl
> T +31 317 49 33 44
>
> MARIN
> 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
> T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
>

From markus.mayr at outlook.com  Fri Sep 21 08:17:13 2012
From: markus.mayr at outlook.com (Markus Mayr)
Date: Fri, 21 Sep 2012 15:17:13 +0200
Subject: [petsc-users] Custom Matrix- and Vector-Class
Message-ID:

Dear list,

I would like to use PETSc's linear solver for matrix-free methods, but I
think that I need to create a custom vector type in order to write an
efficient matrix-vector multiplication. First, this is my problem:

I would like to solve a system of linear equations

  /  A | B  \   /  x  \   /  a  \
  | ---+--- | * | --- | = | --- |
  \  C | D  /   \  y  /   \  b  /

The problem is that only A is a PETSc matrix. For the other matrices, an
external library provides a matrix-vector multiplication for PETSc vectors.
I would like to use GMRES to solve this system.

If I want to use GMRES, I have to provide a matrix-vector multiplication
for the whole matrix, right? A, B, C, D and all vectors involved are
distributed across the same set of processes. This is why I think it would
be inefficient to use PETSc's MPIVec vector class. Is it? Or can I obtain
the required parts of the vector efficiently, i.e. with few communication
operations involved?

I think the best solution would be to write a vector class myself that
contains an array of PETSc vectors. I took a look at the PETSc source code
and it does not look too hard, but first:

1.) I wanted to get some feedback, because I am a beginner with PETSc and
there might be a way to do this in PETSc more easily.
2.) I do not know which functions I have to implement in order to get
GMRES to work. Is there a comprehensive list or something? Or does PETSc
return meaningful error messages about missing functions?
3.) If you do not mind, could you give a short example of how new vector
classes are created once the vector operations struct is created? I could
not find this yet.

Thanks for your help!

Best regards,
Markus Mayr

From knepley at gmail.com  Fri Sep 21 08:19:49 2012
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 21 Sep 2012 08:19:49 -0500
Subject: [petsc-users] Custom Matrix- and Vector-Class
In-Reply-To:
References:
Message-ID:

On Fri, Sep 21, 2012 at 8:17 AM, Markus Mayr wrote:

> Dear list,
>
> I would like to use PETSc's linear solver for matrix-free methods, but I
> think that I need to create a custom vector type in order to write an
> efficient matrix-vector multiplication.
First, this is my problem: > > I would like to solve a system of linear equations > > / A | B \ / x \ / a \ > | -----+----- | * | --- | = | --- | > \ C | D / \ y / \ b / > > The problem is that only A is a PETSc matrix. For the other matrices, an > external library provides a matrix-vector multiplication for PETSc vectors. > I would like to use GMRES to solve this system. > > If I would like to use GMRES, I have to provide a matrix-vector > multiplication for the whole matrix, right?. A, B, C, D and all vectors > involved are distributed across the same set of processes. This is why I > think it would be inefficient to use PETSc's MPIVec vector class. Is it? Or > can I obtain the required parts of the vector efficiently, i.e. with little > communication operations involved? > I think the best way to do this is to use the MATNEST matrix type, and then put in MATSHELL entries for B, C, and D. PETSc will automatically break down and reassemble the vectors for you. Matt > I think, it would be the best solution, to write a vector class myself, > that contains an array of PETSc vector. I took a look at the PETSc source > code and it does not look too hard, but first: > > 1.) I wanted to get some feedback, because I am a beginner with PETSc and > there might be a way to do this in PETSc more easily. > 2.) I do not know which functions I have to implement in order to get > GMRES to work. Is there a comprehensive list or something? Or does PETSc > return meaningful error messages about missing functions? > 3.) If you do not mind, could you give a short example how new vector > classes are created once the vector operations struct is created? I could > not find this yet. > > Thanks for your help! > > Best regards, > Markus Mayr -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From subramanya.g at gmail.com Fri Sep 21 14:20:53 2012 From: subramanya.g at gmail.com (Subramanya G) Date: Fri, 21 Sep 2012 12:20:53 -0700 Subject: [petsc-users] Boundary conditions using DMComplex Message-ID: I have a small question about using DMComplex. How does one keep track of external boundaries. I found no methods to check if a particular node/face belonged to a particular external edge set. Also, Is it possible to set up a problem over a part of the mesh instead of the entire mesh? Thanks Subramanya G Sadasiva, Graduate Research Assistant, Hierarchical Design and Characterization Laboratory, School of Mechanical Engineering, Purdue University. "The art of structure is where to put the holes" Robert Le Ricolais, 1894-1977 From huangsc at gmail.com Fri Sep 21 14:45:55 2012 From: huangsc at gmail.com (Shao-Ching Huang) Date: Fri, 21 Sep 2012 12:45:55 -0700 Subject: [petsc-users] ksppreonly question Message-ID: Hi I am trying to write the procedure of a legacy code in PETSc (for comparison purposes). Instead of solving the system "Ax = b", the legacy code iterates on something like: A1*x(n+1) = b - A2*x(n) where A=A1+A2 and "n" is the iteration index. It seems to me that I need to use ksppreonly (http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPPREONLY.html). My question is: how do I update the right-hand-side, b - A2*x(n), after every iteration using PETSc API? Thanks. 
Shao-Ching From knepley at gmail.com Fri Sep 21 17:09:43 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 21 Sep 2012 17:09:43 -0500 Subject: [petsc-users] ksppreonly question In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 2:45 PM, Shao-Ching Huang wrote: > Hi > > I am trying to write the procedure of a legacy code in PETSc (for > comparison purposes). Instead of solving the system "Ax = b", the > legacy code iterates on something like: > > A1*x(n+1) = b - A2*x(n) > > where A=A1+A2 and "n" is the iteration index. It seems to me that I > need to use ksppreonly > ( > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPPREONLY.html > ). > This kind of splitting is common. PETSc has Richardson, Jacobi, SOR that do this. What are the matrices A1 and A2? Matt > My question is: how do I update the right-hand-side, b - A2*x(n), > after every iteration using PETSc API? > > Thanks. > > Shao-Ching > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From huangsc at gmail.com Fri Sep 21 17:13:15 2012 From: huangsc at gmail.com (Shao-Ching Huang) Date: Fri, 21 Sep 2012 15:13:15 -0700 Subject: [petsc-users] ksppreonly question In-Reply-To: References: Message-ID: Thanks. A1 is part of the Laplacian operator discretized by finite volume method on non-Cartesian mesh. On Fri, Sep 21, 2012 at 3:09 PM, Matthew Knepley wrote: > On Fri, Sep 21, 2012 at 2:45 PM, Shao-Ching Huang wrote: >> >> Hi >> >> I am trying to write the procedure of a legacy code in PETSc (for >> comparison purposes). Instead of solving the system "Ax = b", the >> legacy code iterates on something like: >> >> A1*x(n+1) = b - A2*x(n) >> >> where A=A1+A2 and "n" is the iteration index. It seems to me that I >> need to use ksppreonly >> >> (http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPPREONLY.html). > > > This kind of splitting is common. PETSc has Richardson, Jacobi, SOR that do > this. What > are the matrices A1 and A2? > > Matt > >> >> My question is: how do I update the right-hand-side, b - A2*x(n), >> after every iteration using PETSc API? >> >> Thanks. >> >> Shao-Ching > > > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener From knepley at gmail.com Fri Sep 21 17:14:25 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 21 Sep 2012 17:14:25 -0500 Subject: [petsc-users] ksppreonly question In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 5:13 PM, Shao-Ching Huang wrote: > Thanks. A1 is part of the Laplacian operator discretized by finite > volume method on non-Cartesian mesh. > That is not very specific. Can you be more specific? Matt > On Fri, Sep 21, 2012 at 3:09 PM, Matthew Knepley > wrote: > > On Fri, Sep 21, 2012 at 2:45 PM, Shao-Ching Huang > wrote: > >> > >> Hi > >> > >> I am trying to write the procedure of a legacy code in PETSc (for > >> comparison purposes). Instead of solving the system "Ax = b", the > >> legacy code iterates on something like: > >> > >> A1*x(n+1) = b - A2*x(n) > >> > >> where A=A1+A2 and "n" is the iteration index. It seems to me that I > >> need to use ksppreonly > >> > >> ( > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPPREONLY.html > ). 
> > > > > > This kind of splitting is common. PETSc has Richardson, Jacobi, SOR that > do > > this. What > > are the matrices A1 and A2? > > > > Matt > > > >> > >> My question is: how do I update the right-hand-side, b - A2*x(n), > >> after every iteration using PETSc API? > >> > >> Thanks. > >> > >> Shao-Ching > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments > > is infinitely more interesting than any results to which their > experiments > > lead. > > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From huangsc at gmail.com Fri Sep 21 17:14:40 2012 From: huangsc at gmail.com (Shao-Ching Huang) Date: Fri, 21 Sep 2012 15:14:40 -0700 Subject: [petsc-users] ksppreonly question In-Reply-To: References: Message-ID: ... and A2 is the rest of the Laplacian. On Fri, Sep 21, 2012 at 3:13 PM, Shao-Ching Huang wrote: > Thanks. A1 is part of the Laplacian operator discretized by finite > volume method on non-Cartesian mesh. From jedbrown at mcs.anl.gov Fri Sep 21 17:15:49 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 21 Sep 2012 17:15:49 -0500 Subject: [petsc-users] ksppreonly question In-Reply-To: References: Message-ID: Like which part? Lower and upper triangular in some ordering? That's exactly what PCSOR does. On Fri, Sep 21, 2012 at 5:14 PM, Shao-Ching Huang wrote: > ... and A2 is the rest of the Laplacian. -------------- next part -------------- An HTML attachment was scrubbed... URL: From huangsc at gmail.com Fri Sep 21 17:15:56 2012 From: huangsc at gmail.com (Shao-Ching Huang) Date: Fri, 21 Sep 2012 15:15:56 -0700 Subject: [petsc-users] ksppreonly question In-Reply-To: References: Message-ID: Diffusion term from incompressible Navier Stokes equation. Is that what you are asking? Thanks. On Fri, Sep 21, 2012 at 3:14 PM, Matthew Knepley wrote: > On Fri, Sep 21, 2012 at 5:13 PM, Shao-Ching Huang wrote: >> >> Thanks. A1 is part of the Laplacian operator discretized by finite >> volume method on non-Cartesian mesh. > > > That is not very specific. Can you be more specific? > > Matt > >> >> On Fri, Sep 21, 2012 at 3:09 PM, Matthew Knepley >> wrote: >> > On Fri, Sep 21, 2012 at 2:45 PM, Shao-Ching Huang >> > wrote: >> >> >> >> Hi >> >> >> >> I am trying to write the procedure of a legacy code in PETSc (for >> >> comparison purposes). Instead of solving the system "Ax = b", the >> >> legacy code iterates on something like: >> >> >> >> A1*x(n+1) = b - A2*x(n) >> >> >> >> where A=A1+A2 and "n" is the iteration index. It seems to me that I >> >> need to use ksppreonly >> >> >> >> >> >> (http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPPREONLY.html). >> > >> > >> > This kind of splitting is common. PETSc has Richardson, Jacobi, SOR that >> > do >> > this. What >> > are the matrices A1 and A2? >> > >> > Matt >> > >> >> >> >> My question is: how do I update the right-hand-side, b - A2*x(n), >> >> after every iteration using PETSc API? >> >> >> >> Thanks. >> >> >> >> Shao-Ching >> > >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> > experiments >> > is infinitely more interesting than any results to which their >> > experiments >> > lead. 
>> > -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener From jedbrown at mcs.anl.gov Fri Sep 21 17:17:50 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 21 Sep 2012 17:17:50 -0500 Subject: [petsc-users] ksppreonly question In-Reply-To: References: Message-ID: So, not a Laplacian at all. Rather, a vector operator involving the symmetrized gradient of the velocity (or, not symmetrized if you are cheating due to nice boundary conditions). But what splitting are you using? On Fri, Sep 21, 2012 at 5:15 PM, Shao-Ching Huang wrote: > Diffusion term from incompressible Navier Stokes equation. Is that > what you are asking? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From huangsc at gmail.com Fri Sep 21 17:21:06 2012 From: huangsc at gmail.com (Shao-Ching Huang) Date: Fri, 21 Sep 2012 15:21:06 -0700 Subject: [petsc-users] ksppreonly question In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 3:15 PM, Jed Brown wrote: > Like which part? Lower and upper triangular in some ordering? That's exactly > what PCSOR does. > In this particular finite volume discretization, the flux normal to a face involves the cell-center values on each side of the face (1), plus values from neighboring nodes (2) [due to non-orthogonal mesh cell shape]. The A1 part include coefficients from (1). A2 includes those in (2). From huangsc at gmail.com Fri Sep 21 17:25:47 2012 From: huangsc at gmail.com (Shao-Ching Huang) Date: Fri, 21 Sep 2012 15:25:47 -0700 Subject: [petsc-users] ksppreonly question In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 3:17 PM, Jed Brown wrote: > So, not a Laplacian at all. Rather, a vector operator involving the > symmetrized gradient of the velocity (or, not symmetrized if you are > cheating due to nice boundary conditions). But what splitting are you using? You are right. I loosely called it Laplacian because the non-zero entries locations are the same as a standard Laplacian. Sorry for the confusion. To be accurate, it is a Laplacian plus additional diagonal terms. From jedbrown at mcs.anl.gov Fri Sep 21 17:26:31 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 21 Sep 2012 17:26:31 -0500 Subject: [petsc-users] ksppreonly question In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 5:21 PM, Shao-Ching Huang wrote: > In this particular finite volume discretization, the flux normal to a > face involves the cell-center values on each side of the face (1), > plus values from neighboring nodes (2) [due to non-orthogonal mesh > cell shape]. The A1 part include coefficients from (1). A2 includes > those in (2). > 1. Call KSPSetOperators(ksp,A,A1,flag) You can make A in the above a MATSHELL that applies A1 + A2 matrix-free (or just the A2 part). 2. Use any Krylov method. The specific method -ksp_type richardson will do the defect-correction version of what you have written, but a real Krylov method will almost certainly perform much better. Note that A1^{-1} will be applied using whatever method you choose (via -pc_type). A V-cycle of algebraic multigrid should work very well. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Fri Sep 21 17:30:28 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 21 Sep 2012 17:30:28 -0500 Subject: [petsc-users] ksppreonly question In-Reply-To: References: Message-ID: On Sep 21, 2012, at 5:26 PM, Jed Brown wrote: > On Fri, Sep 21, 2012 at 5:21 PM, Shao-Ching Huang wrote: > In this particular finite volume discretization, the flux normal to a > face involves the cell-center values on each side of the face (1), > plus values from neighboring nodes (2) [due to non-orthogonal mesh > cell shape]. The A1 part include coefficients from (1). A2 includes > those in (2). > > 1. Call KSPSetOperators(ksp,A,A1,flag) > > You can make A in the above a MATSHELL that applies A1 + A2 matrix-free (or just the A2 part). > > 2. Use any Krylov method. The specific method -ksp_type richardson will do the defect-correction version of what you have written, but a real Krylov method will almost certainly perform much better. Note that A1^{-1} will be applied using whatever method you choose (via -pc_type). A V-cycle of algebraic multigrid should work very well. To mimic the exact old algorithm for comparison purposes I don't think you can get this directly with KSP you'll need to manage the "outer" iteration yourself, something like for (n=0; n References: Message-ID: On Fri, Sep 21, 2012 at 3:30 PM, Barry Smith wrote: > Your KSP solve could use any solver you like (what does the old code use?, you should use the same thing for comparison purposes) This old code uses this method: http://en.wikipedia.org/wiki/Stone_method From knepley at gmail.com Fri Sep 21 17:42:14 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 21 Sep 2012 17:42:14 -0500 Subject: [petsc-users] Boundary conditions using DMComplex In-Reply-To: References: Message-ID: On Fri, Sep 21, 2012 at 2:20 PM, Subramanya G wrote: > I have a small question about using DMComplex. > How does one keep track of external boundaries. I found no methods to > check if a particular node/face belonged to a particular external > edge set. > First you want to distinguish between a) Topological boundaries For this you use Labels, so you can mark parts of the mesh using DMCcomplexSetLabelValue() b) Algebraic Boundary Conditions These are specified in the PetscSection using PetscSectionSetConstraintDof() and PetscSectionSetConstraintIndices(). To construct these, you typically use the information in the mesh labels. > Also, Is it possible to set up a problem over a part of the mesh > instead of the entire mesh? > This is very easy. Just create a PetscSection that only has unknowns over part of the mesh. Matt > Thanks > > > Subramanya G Sadasiva, > > Graduate Research Assistant, > Hierarchical Design and Characterization Laboratory, > School of Mechanical Engineering, > Purdue University. > > "The art of structure is where to put the holes" > Robert Le Ricolais, 1894-1977 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Sep 21 17:50:34 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 21 Sep 2012 17:50:34 -0500 Subject: [petsc-users] ksppreonly question In-Reply-To: References: Message-ID: <70835364-BF16-4342-A7C0-83AAD1169F8D@mcs.anl.gov> Ok, so it is using a FULL LU factorization of A1. 
Hence with what I outlined below you would use -ksp_type preonly -pc_type lu If you reorganize the iteration then in exact arithmetic it is what we in PETSc call Richardson's method with preconditioner defined from M (the LU of M) so I was wrong; you can do as Jed suggested KSPSetOperations(ksp, A,A1,?.) and run with -ksp_type richardson to mimic the old algorithm. Simply switch to -ksp_type gmres and you have the late 80's version of the algorithm Barry On Sep 21, 2012, at 5:30 PM, Barry Smith wrote: > > On Sep 21, 2012, at 5:26 PM, Jed Brown wrote: > >> On Fri, Sep 21, 2012 at 5:21 PM, Shao-Ching Huang wrote: >> In this particular finite volume discretization, the flux normal to a >> face involves the cell-center values on each side of the face (1), >> plus values from neighboring nodes (2) [due to non-orthogonal mesh >> cell shape]. The A1 part include coefficients from (1). A2 includes >> those in (2). >> >> 1. Call KSPSetOperators(ksp,A,A1,flag) >> >> You can make A in the above a MATSHELL that applies A1 + A2 matrix-free (or just the A2 part). >> >> 2. Use any Krylov method. The specific method -ksp_type richardson will do the defect-correction version of what you have written, but a real Krylov method will almost certainly perform much better. Note that A1^{-1} will be applied using whatever method you choose (via -pc_type). A V-cycle of algebraic multigrid should work very well. > > To mimic the exact old algorithm for comparison purposes > I don't think you can get this directly with KSP you'll need to manage the "outer" iteration yourself, something like > > for (n=0; n MatMultAdd(A2,x,b,c) where A2 is the opposite sign of your A2 above > KSPSolve(ksp,c,x); > } > Your KSP solve could use any solver you like (what does the old code use?, you should use the same thing for comparison purposes) > > Of course, this is only for comparison purposes, no one in 2012 except in a legacy code would use such a primitive nested solver. > > Barry > From huangsc at gmail.com Fri Sep 21 18:07:53 2012 From: huangsc at gmail.com (Shao-Ching Huang) Date: Fri, 21 Sep 2012 16:07:53 -0700 Subject: [petsc-users] ksppreonly question In-Reply-To: <70835364-BF16-4342-A7C0-83AAD1169F8D@mcs.anl.gov> References: <70835364-BF16-4342-A7C0-83AAD1169F8D@mcs.anl.gov> Message-ID: I will try the Richardson procedure Jed suggested. Thank you! On Fri, Sep 21, 2012 at 3:50 PM, Barry Smith wrote: > > Ok, so it is using a FULL LU factorization of A1. Hence with what I outlined below you would use -ksp_type preonly -pc_type lu > > If you reorganize the iteration then in exact arithmetic it is what we in PETSc call Richardson's method with preconditioner defined from M (the LU of M) so I was wrong; you can do as Jed suggested > KSPSetOperations(ksp, A,A1,?.) and run with -ksp_type richardson to mimic the old algorithm. Simply switch to -ksp_type gmres and you have the late 80's version of the algorithm > > > > Barry > > > On Sep 21, 2012, at 5:30 PM, Barry Smith wrote: > >> >> On Sep 21, 2012, at 5:26 PM, Jed Brown wrote: >> >>> On Fri, Sep 21, 2012 at 5:21 PM, Shao-Ching Huang wrote: >>> In this particular finite volume discretization, the flux normal to a >>> face involves the cell-center values on each side of the face (1), >>> plus values from neighboring nodes (2) [due to non-orthogonal mesh >>> cell shape]. The A1 part include coefficients from (1). A2 includes >>> those in (2). >>> >>> 1. 
Call KSPSetOperators(ksp,A,A1,flag) >>> >>> You can make A in the above a MATSHELL that applies A1 + A2 matrix-free (or just the A2 part). >>> >>> 2. Use any Krylov method. The specific method -ksp_type richardson will do the defect-correction version of what you have written, but a real Krylov method will almost certainly perform much better. Note that A1^{-1} will be applied using whatever method you choose (via -pc_type). A V-cycle of algebraic multigrid should work very well. >> >> To mimic the exact old algorithm for comparison purposes >> I don't think you can get this directly with KSP you'll need to manage the "outer" iteration yourself, something like >> >> for (n=0; n> MatMultAdd(A2,x,b,c) where A2 is the opposite sign of your A2 above >> KSPSolve(ksp,c,x); >> } >> Your KSP solve could use any solver you like (what does the old code use?, you should use the same thing for comparison purposes) >> >> Of course, this is only for comparison purposes, no one in 2012 except in a legacy code would use such a primitive nested solver. >> >> Barry >> > From huangsc at gmail.com Fri Sep 21 18:15:37 2012 From: huangsc at gmail.com (Shao-Ching Huang) Date: Fri, 21 Sep 2012 16:15:37 -0700 Subject: [petsc-users] ksppreonly question In-Reply-To: References: <70835364-BF16-4342-A7C0-83AAD1169F8D@mcs.anl.gov> Message-ID: I am reading this page, http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPRICHARDSON.html It says: "This method often (usually) will not converge unless scale is very small. It is described in " and seems to miss a reference there. On Fri, Sep 21, 2012 at 4:07 PM, Shao-Ching Huang wrote: > I will try the Richardson procedure Jed suggested. Thank you! > > > On Fri, Sep 21, 2012 at 3:50 PM, Barry Smith wrote: >> >> Ok, so it is using a FULL LU factorization of A1. Hence with what I outlined below you would use -ksp_type preonly -pc_type lu >> >> If you reorganize the iteration then in exact arithmetic it is what we in PETSc call Richardson's method with preconditioner defined from M (the LU of M) so I was wrong; you can do as Jed suggested >> KSPSetOperations(ksp, A,A1,?.) and run with -ksp_type richardson to mimic the old algorithm. Simply switch to -ksp_type gmres and you have the late 80's version of the algorithm >> >> >> >> Barry >> >> >> On Sep 21, 2012, at 5:30 PM, Barry Smith wrote: >> >>> >>> On Sep 21, 2012, at 5:26 PM, Jed Brown wrote: >>> >>>> On Fri, Sep 21, 2012 at 5:21 PM, Shao-Ching Huang wrote: >>>> In this particular finite volume discretization, the flux normal to a >>>> face involves the cell-center values on each side of the face (1), >>>> plus values from neighboring nodes (2) [due to non-orthogonal mesh >>>> cell shape]. The A1 part include coefficients from (1). A2 includes >>>> those in (2). >>>> >>>> 1. Call KSPSetOperators(ksp,A,A1,flag) >>>> >>>> You can make A in the above a MATSHELL that applies A1 + A2 matrix-free (or just the A2 part). >>>> >>>> 2. Use any Krylov method. The specific method -ksp_type richardson will do the defect-correction version of what you have written, but a real Krylov method will almost certainly perform much better. Note that A1^{-1} will be applied using whatever method you choose (via -pc_type). A V-cycle of algebraic multigrid should work very well. 
>>> >>> To mimic the exact old algorithm for comparison purposes >>> I don't think you can get this directly with KSP you'll need to manage the "outer" iteration yourself, something like >>> >>> for (n=0; n>> MatMultAdd(A2,x,b,c) where A2 is the opposite sign of your A2 above >>> KSPSolve(ksp,c,x); >>> } >>> Your KSP solve could use any solver you like (what does the old code use?, you should use the same thing for comparison purposes) >>> >>> Of course, this is only for comparison purposes, no one in 2012 except in a legacy code would use such a primitive nested solver. >>> >>> Barry >>> >> From jedbrown at mcs.anl.gov Fri Sep 21 18:18:22 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 21 Sep 2012 18:18:22 -0500 Subject: [petsc-users] ksppreonly question In-Reply-To: References: <70835364-BF16-4342-A7C0-83AAD1169F8D@mcs.anl.gov> Message-ID: On Fri, Sep 21, 2012 at 6:15 PM, Shao-Ching Huang wrote: > I am reading this page, > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPRICHARDSON.html > It says: > > "This method often (usually) will not converge unless scale is very > small. That statement applies if you use a general preconditioner that does not bound the preconditioned spectrum. With LU, you are fine with a scale of 1 (or close). > It is described in " > > and seems to miss a reference there. > The reference is further down the page. Thanks for pointing out the anomaly. > > > On Fri, Sep 21, 2012 at 4:07 PM, Shao-Ching Huang > wrote: > > I will try the Richardson procedure Jed suggested. Thank you! > > > > > > On Fri, Sep 21, 2012 at 3:50 PM, Barry Smith wrote: > >> > >> Ok, so it is using a FULL LU factorization of A1. Hence with what I > outlined below you would use -ksp_type preonly -pc_type lu > >> > >> If you reorganize the iteration then in exact arithmetic it is what > we in PETSc call Richardson's method with preconditioner defined from M > (the LU of M) so I was wrong; you can do as Jed suggested > >> KSPSetOperations(ksp, A,A1,?.) and run with -ksp_type richardson to > mimic the old algorithm. Simply switch to -ksp_type gmres and you have the > late 80's version of the algorithm > >> > >> > >> > >> Barry > >> > >> > >> On Sep 21, 2012, at 5:30 PM, Barry Smith wrote: > >> > >>> > >>> On Sep 21, 2012, at 5:26 PM, Jed Brown wrote: > >>> > >>>> On Fri, Sep 21, 2012 at 5:21 PM, Shao-Ching Huang > wrote: > >>>> In this particular finite volume discretization, the flux normal to a > >>>> face involves the cell-center values on each side of the face (1), > >>>> plus values from neighboring nodes (2) [due to non-orthogonal mesh > >>>> cell shape]. The A1 part include coefficients from (1). A2 includes > >>>> those in (2). > >>>> > >>>> 1. Call KSPSetOperators(ksp,A,A1,flag) > >>>> > >>>> You can make A in the above a MATSHELL that applies A1 + A2 > matrix-free (or just the A2 part). > >>>> > >>>> 2. Use any Krylov method. The specific method -ksp_type richardson > will do the defect-correction version of what you have written, but a real > Krylov method will almost certainly perform much better. Note that A1^{-1} > will be applied using whatever method you choose (via -pc_type). A V-cycle > of algebraic multigrid should work very well. 
> >>> > >>> To mimic the exact old algorithm for comparison purposes > >>> I don't think you can get this directly with KSP you'll need to > manage the "outer" iteration yourself, something like > >>> > >>> for (n=0; n >>> MatMultAdd(A2,x,b,c) where A2 is the opposite sign of your A2 > above > >>> KSPSolve(ksp,c,x); > >>> } > >>> Your KSP solve could use any solver you like (what does the old code > use?, you should use the same thing for comparison purposes) > >>> > >>> Of course, this is only for comparison purposes, no one in 2012 except > in a legacy code would use such a primitive nested solver. > >>> > >>> Barry > >>> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From huangsc at gmail.com Fri Sep 21 18:20:27 2012 From: huangsc at gmail.com (Shao-Ching Huang) Date: Fri, 21 Sep 2012 16:20:27 -0700 Subject: [petsc-users] ksppreonly question In-Reply-To: References: <70835364-BF16-4342-A7C0-83AAD1169F8D@mcs.anl.gov> Message-ID: On Fri, Sep 21, 2012 at 4:18 PM, Jed Brown wrote: > The reference is further down the page Thanks. That's what I guessed after sending last email... From huangsc at gmail.com Sat Sep 22 00:33:13 2012 From: huangsc at gmail.com (Shao-Ching Huang) Date: Fri, 21 Sep 2012 22:33:13 -0700 Subject: [petsc-users] ksppreonly question In-Reply-To: References: <70835364-BF16-4342-A7C0-83AAD1169F8D@mcs.anl.gov> Message-ID: Jed, Is this equivalent to setting up a SNES where the (constant) Jacobian is my A1 matrix, similar to ex35.c in the snes directory (despite with different PC type)? Thanks. Shao-Ching On Fri, Sep 21, 2012 at 4:18 PM, Jed Brown wrote: > On Fri, Sep 21, 2012 at 6:15 PM, Shao-Ching Huang wrote: >> >> I am reading this page, >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPRICHARDSON.html >> It says: >> >> "This method often (usually) will not converge unless scale is very >> small. > > > That statement applies if you use a general preconditioner that does not > bound the preconditioned spectrum. With LU, you are fine with a scale of 1 > (or close). > >> >> It is described in " >> >> and seems to miss a reference there. > > > The reference is further down the page. Thanks for pointing out the anomaly. > >> >> >> >> On Fri, Sep 21, 2012 at 4:07 PM, Shao-Ching Huang >> wrote: >> > I will try the Richardson procedure Jed suggested. Thank you! >> > >> > >> > On Fri, Sep 21, 2012 at 3:50 PM, Barry Smith wrote: >> >> >> >> Ok, so it is using a FULL LU factorization of A1. Hence with what I >> >> outlined below you would use -ksp_type preonly -pc_type lu >> >> >> >> If you reorganize the iteration then in exact arithmetic it is what >> >> we in PETSc call Richardson's method with preconditioner defined from M (the >> >> LU of M) so I was wrong; you can do as Jed suggested >> >> KSPSetOperations(ksp, A,A1,?.) and run with -ksp_type richardson to >> >> mimic the old algorithm. Simply switch to -ksp_type gmres and you have the >> >> late 80's version of the algorithm >> >> >> >> >> >> >> >> Barry >> >> >> >> >> >> On Sep 21, 2012, at 5:30 PM, Barry Smith wrote: >> >> >> >>> >> >>> On Sep 21, 2012, at 5:26 PM, Jed Brown wrote: >> >>> >> >>>> On Fri, Sep 21, 2012 at 5:21 PM, Shao-Ching Huang >> >>>> wrote: >> >>>> In this particular finite volume discretization, the flux normal to a >> >>>> face involves the cell-center values on each side of the face (1), >> >>>> plus values from neighboring nodes (2) [due to non-orthogonal mesh >> >>>> cell shape]. The A1 part include coefficients from (1). 
A2 includes >> >>>> those in (2). >> >>>> >> >>>> 1. Call KSPSetOperators(ksp,A,A1,flag) >> >>>> >> >>>> You can make A in the above a MATSHELL that applies A1 + A2 >> >>>> matrix-free (or just the A2 part). >> >>>> >> >>>> 2. Use any Krylov method. The specific method -ksp_type richardson >> >>>> will do the defect-correction version of what you have written, but a real >> >>>> Krylov method will almost certainly perform much better. Note that A1^{-1} >> >>>> will be applied using whatever method you choose (via -pc_type). A V-cycle >> >>>> of algebraic multigrid should work very well. >> >>> >> >>> To mimic the exact old algorithm for comparison purposes >> >>> I don't think you can get this directly with KSP you'll need to >> >>> manage the "outer" iteration yourself, something like >> >>> >> >>> for (n=0; n> >>> MatMultAdd(A2,x,b,c) where A2 is the opposite sign of your A2 >> >>> above >> >>> KSPSolve(ksp,c,x); >> >>> } >> >>> Your KSP solve could use any solver you like (what does the old code >> >>> use?, you should use the same thing for comparison purposes) >> >>> >> >>> Of course, this is only for comparison purposes, no one in 2012 except >> >>> in a legacy code would use such a primitive nested solver. >> >>> >> >>> Barry >> >>> >> >> > > From jedbrown at mcs.anl.gov Sat Sep 22 00:42:48 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 22 Sep 2012 00:42:48 -0500 Subject: [petsc-users] ksppreonly question In-Reply-To: References: <70835364-BF16-4342-A7C0-83AAD1169F8D@mcs.anl.gov> Message-ID: On Sat, Sep 22, 2012 at 12:33 AM, Shao-Ching Huang wrote: > Is this equivalent to setting up a SNES where the (constant) Jacobian > is my A1 matrix, similar to ex35.c in the snes directory (despite with > different PC type)? > Sure, make your residual function F(x) = A x - b and "Jacobian" J = A1 This is the defect correction version of your algorithm. You can make it better by bringing in the concept of orthogonality/spectral adaptivity either in via -snes_type ngmres or, recognizing that this system is linear, by making the operator A available to the linear solver. You can do that by (a) -snes_mf_operator which will finite difference F to compute the action or (b) by filling the first Jacobian slot in the SNES compute Jacobian callback with a MATSHELL that applies the entire A. I would start with (a) because it is simpler, especially if your underlying problem is nonlinear. -------------- next part -------------- An HTML attachment was scrubbed... URL: From huangsc at gmail.com Sat Sep 22 02:42:04 2012 From: huangsc at gmail.com (Shao-Ching Huang) Date: Sat, 22 Sep 2012 00:42:04 -0700 Subject: [petsc-users] ksppreonly question In-Reply-To: References: <70835364-BF16-4342-A7C0-83AAD1169F8D@mcs.anl.gov> Message-ID: Sounds good. Many thanks! On Fri, Sep 21, 2012 at 10:42 PM, Jed Brown wrote: > On Sat, Sep 22, 2012 at 12:33 AM, Shao-Ching Huang > wrote: >> >> Is this equivalent to setting up a SNES where the (constant) Jacobian >> is my A1 matrix, similar to ex35.c in the snes directory (despite with >> different PC type)? > > > Sure, make your residual function > > F(x) = A x - b > > and "Jacobian" > > J = A1 > > This is the defect correction version of your algorithm. You can make it > better by bringing in the concept of orthogonality/spectral adaptivity > either in via -snes_type ngmres or, recognizing that this system is linear, > by making the operator A available to the linear solver. 
You can do that by > (a) -snes_mf_operator which will finite difference F to compute the action > or (b) by filling the first Jacobian slot in the SNES compute Jacobian > callback with a MATSHELL that applies the entire A. I would start with (a) > because it is simpler, especially if your underlying problem is nonlinear. From yifli82 at gmail.com Sun Sep 23 13:02:50 2012 From: yifli82 at gmail.com (Yifei Li) Date: Sun, 23 Sep 2012 14:02:50 -0400 Subject: [petsc-users] C compiler you provided with -with-cc=win32fe cl does not work Message-ID: Hi, I tried to build petsc on a maching running 32-bit Windows XP with SP3. I got the above error after running configure Attached is the log file. Any help is appreciated. Yifei -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 29876 bytes Desc: not available URL: From chetan.jhurani at gmail.com Sun Sep 23 13:33:43 2012 From: chetan.jhurani at gmail.com (Chetan Jhurani) Date: Sun, 23 Sep 2012 11:33:43 -0700 Subject: [petsc-users] C compiler you provided with -with-cc=win32fe cl does not work In-Reply-To: References: Message-ID: <505f5609.ca4f420a.347d.ffff99bd@mx.google.com> Try after extracting petsc in a directory which does not contain a space in its name. Chetan From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Yifei Li Sent: Sunday, September 23, 2012 11:03 AM To: PETSc users list Subject: [petsc-users] C compiler you provided with -with-cc=win32fe cl does not work Hi, I tried to build petsc on a maching running 32-bit Windows XP with SP3. I got the above error after running configure Attached is the log file. Any help is appreciated. Yifei -------------- next part -------------- An HTML attachment was scrubbed... URL: From wenleix at cs.cornell.edu Sun Sep 23 16:58:21 2012 From: wenleix at cs.cornell.edu (Wenlei Xie) Date: Sun, 23 Sep 2012 17:58:21 -0400 Subject: [petsc-users] Number of Blocks in the Block Jacobi preconditionr Message-ID: <2AFD2DA6-E530-4F93-A337-6E7D8F022969@cs.cornell.edu> Hi, I am using PETSc to solve linear systems with Block Jacobi preconditioner, and I am currently studying the trade off between the number of blocks. In my experiments I tried a very sparse 1M*1M matrix and try different number of blocks. I expect the result would be, if I use less blocks (thus each block has larger size), then the convergence rate would increase but the time spend on each iteration would would also increase; while for more blocks (and each block has smaller size), both the convergence rate and the time on each iteration will be decreased. However, the result turns out to that the time on each iteration also decreased as the block size increased. Is it because I misused it? I am use Richardson method as the Krylov method, and ILU as the solver inside each block matrix. The matrix contains around 3.5M non-zeros entries and I use METIS to partition it into blocks. Thank you! 
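For reference, a sketch of the command line for this kind of block-count sweep, assuming an executable named ./solver that already loads the matrix; note that -pc_bjacobi_blocks uses PETSc's default contiguous row blocks rather than a METIS partition, so it only approximates the setup described above:

  ./solver -ksp_type richardson -ksp_max_it 50 \
           -pc_type bjacobi -pc_bjacobi_blocks 200 \
           -sub_ksp_type preonly -sub_pc_type ilu -log_summary

Replacing -sub_pc_type ilu with -sub_pc_type lu gives the variant discussed in the replies, where the cost per block grows superlinearly with the block size.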
Best, Wenlei From jedbrown at mcs.anl.gov Sun Sep 23 17:03:01 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 23 Sep 2012 17:03:01 -0500 Subject: [petsc-users] Number of Blocks in the Block Jacobi preconditionr In-Reply-To: <2AFD2DA6-E530-4F93-A337-6E7D8F022969@cs.cornell.edu> References: <2AFD2DA6-E530-4F93-A337-6E7D8F022969@cs.cornell.edu> Message-ID: On Sun, Sep 23, 2012 at 4:58 PM, Wenlei Xie wrote: > I am using PETSc to solve linear systems with Block Jacobi preconditioner, > and I am currently studying the trade off between the number of blocks. > In my experiments I tried a very sparse 1M*1M matrix and try different > number of blocks. I expect the result would be, if I use less blocks (thus > each block has larger size), then the convergence rate would increase but > the time spend on each iteration would would also increase; while for more > blocks (and each block has smaller size), both the convergence rate and the > time on each iteration will be decreased. However, the result turns out to > that the time on each iteration also decreased as the block size increased. > Is it because I misused it? > > I am use Richardson method as the Krylov method, and ILU as the solver > inside each block matrix. The matrix contains around 3.5M non-zeros entries > and I use METIS to partition it into blocks. > The work required for ILU is essentially linear in the subdomain size so the only possible speedup comes from potentially better cache reuse by small blocks. With the default -sub_ksp_type preonly, that isn't significant yet. To see the effect you expect, use -sub_pc_type lu. The cost of the direct solve (especially in 3D) grows noticeably superlinearly, thus large subdomains become much more expensive than the overhead of decomposing the problem at that granularity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wenleix at cs.cornell.edu Sun Sep 23 18:14:40 2012 From: wenleix at cs.cornell.edu (Wenlei Xie) Date: Sun, 23 Sep 2012 19:14:40 -0400 Subject: [petsc-users] Number of Blocks in the Block Jacobi preconditionr In-Reply-To: References: <2AFD2DA6-E530-4F93-A337-6E7D8F022969@cs.cornell.edu> Message-ID: <5C7C0027-EBFB-4758-87DD-7D0AD9252B83@cs.cornell.edu> Thank you! However as I observed if there are too many blocks (say 200), the performance would decrease drastically. Here are some numbers (collected under ILU as the preconditioner in the sub block -- I am going to test it with LU) #Blocks Setup Time Solving Time (50 Iterations) 100 0.5s 17.5s 200 0.76s 19.1s 300 2.75s 26.5s 500 7.39s 43.64s 1000 39.13s 165.83s It looks like both the Setup Time and the Solving Time increase significantly as the number of blocks are more than 200. Is this a normal result? Thanks! Wenlei On Sep 23, 2012, at 6:03 PM, Jed Brown wrote: On Sun, Sep 23, 2012 at 4:58 PM, Wenlei Xie > wrote: I am using PETSc to solve linear systems with Block Jacobi preconditioner, and I am currently studying the trade off between the number of blocks. In my experiments I tried a very sparse 1M*1M matrix and try different number of blocks. I expect the result would be, if I use less blocks (thus each block has larger size), then the convergence rate would increase but the time spend on each iteration would would also increase; while for more blocks (and each block has smaller size), both the convergence rate and the time on each iteration will be decreased. However, the result turns out to that the time on each iteration also decreased as the block size increased. 
Is it because I misused it? I am use Richardson method as the Krylov method, and ILU as the solver inside each block matrix. The matrix contains around 3.5M non-zeros entries and I use METIS to partition it into blocks. The work required for ILU is essentially linear in the subdomain size so the only possible speedup comes from potentially better cache reuse by small blocks. With the default -sub_ksp_type preonly, that isn't significant yet. To see the effect you expect, use -sub_pc_type lu. The cost of the direct solve (especially in 3D) grows noticeably superlinearly, thus large subdomains become much more expensive than the overhead of decomposing the problem at that granularity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Sep 23 19:11:20 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 23 Sep 2012 19:11:20 -0500 Subject: [petsc-users] Number of Blocks in the Block Jacobi preconditionr In-Reply-To: <5C7C0027-EBFB-4758-87DD-7D0AD9252B83@cs.cornell.edu> References: <2AFD2DA6-E530-4F93-A337-6E7D8F022969@cs.cornell.edu> <5C7C0027-EBFB-4758-87DD-7D0AD9252B83@cs.cornell.edu> Message-ID: On Sun, Sep 23, 2012 at 6:14 PM, Wenlei Xie wrote: > Thank you! However as I observed if there are too many blocks (say 200), > the performance would decrease drastically. Here are some numbers > (collected under ILU as the preconditioner in the sub block -- I am going > to test it with LU) > #Blocks Setup Time Solving Time (50 Iterations) > 100 0.5s 17.5s > 200 0.76s 19.1s > 300 2.75s 26.5s > 500 7.39s 43.64s > 1000 39.13s 165.83s > > It looks like both the Setup Time and the Solving Time increase > significantly as the number of blocks are more than 200. Is this a normal > result? > It does not surprise me. A couple factors 1. Ghost values get copied many times as the subdomains shrink. 2. If you are running in debug mode, there is overhead of going through the KSP interface for tiny subdomains. The overhead is small in optimized mode unless there are only a handful of degrees per subdomain. This overhead would only grow linearly with number of subdomains so it does not explain your super-linear slow-down. -------------- next part -------------- An HTML attachment was scrubbed... URL: From subramanya.g at gmail.com Sun Sep 23 23:32:17 2012 From: subramanya.g at gmail.com (Subramanya G) Date: Sun, 23 Sep 2012 21:32:17 -0700 Subject: [petsc-users] Boundary conditions using DMComplex In-Reply-To: References: Message-ID: Hi Matt, I got what you were saying about the boundary conditions. But I am unable to figure out which routines to use for selecting only a section of the mesh while setting up a Petsc Section. The DMComplexCreateSection does not seem to offer anyway of restricting the mesh. Thanks, Subramanya G Sadasiva, Graduate Research Assistant, Hierarchical Design and Characterization Laboratory, School of Mechanical Engineering, Purdue University. "The art of structure is where to put the holes" Robert Le Ricolais, 1894-1977 On Fri, Sep 21, 2012 at 3:42 PM, Matthew Knepley wrote: > On Fri, Sep 21, 2012 at 2:20 PM, Subramanya G > wrote: >> >> I have a small question about using DMComplex. >> How does one keep track of external boundaries. I found no methods to >> check if a particular node/face belonged to a particular external >> edge set. 
> > > First you want to distinguish between > > a) Topological boundaries > > For this you use Labels, so you can mark parts of the mesh using > DMCcomplexSetLabelValue() > > b) Algebraic Boundary Conditions > > These are specified in the PetscSection using PetscSectionSetConstraintDof() > and > PetscSectionSetConstraintIndices(). To construct these, you typically use > the information > in the mesh labels. > >> >> Also, Is it possible to set up a problem over a part of the mesh >> instead of the entire mesh? > > > This is very easy. Just create a PetscSection that only has unknowns over > part of the mesh. > > Matt > >> >> Thanks >> >> >> Subramanya G Sadasiva, >> >> Graduate Research Assistant, >> Hierarchical Design and Characterization Laboratory, >> School of Mechanical Engineering, >> Purdue University. >> >> "The art of structure is where to put the holes" >> Robert Le Ricolais, 1894-1977 > > > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener From knepley at gmail.com Mon Sep 24 07:51:07 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 24 Sep 2012 08:51:07 -0400 Subject: [petsc-users] Boundary conditions using DMComplex In-Reply-To: References: Message-ID: On Mon, Sep 24, 2012 at 12:32 AM, Subramanya G wrote: > Hi Matt, > I got what you were saying about the boundary conditions. But I am > unable to figure out which routines to use for selecting only a > section of the mesh while setting up a Petsc Section. The > DMComplexCreateSection does not seem to offer anyway of restricting > the mesh. > Take a look at the source for DMComplexCreateMesh(): http://petsc.cs.iit.edu/petsc/petsc-dev/annotate/5b880935157f/src/dm/impls/complex/complex.c#l5069 It just loops over ever point in the domain (DMComplexGetChart()) and sets a number of unknowns on it using PetscSectionSetDof(). After that is finished, it calls PetscSectionSetUp(). Its a little more complicated with boundary conditions, but you can check this function for the specifics. If you want just part of the domain, only call PetscSectionSetDof() on part of the points. Matt > Thanks, > > > Subramanya G Sadasiva, > > Graduate Research Assistant, > Hierarchical Design and Characterization Laboratory, > School of Mechanical Engineering, > Purdue University. > > "The art of structure is where to put the holes" > Robert Le Ricolais, 1894-1977 > > > On Fri, Sep 21, 2012 at 3:42 PM, Matthew Knepley > wrote: > > On Fri, Sep 21, 2012 at 2:20 PM, Subramanya G > > wrote: > >> > >> I have a small question about using DMComplex. > >> How does one keep track of external boundaries. I found no methods to > >> check if a particular node/face belonged to a particular external > >> edge set. > > > > > > First you want to distinguish between > > > > a) Topological boundaries > > > > For this you use Labels, so you can mark parts of the mesh using > > DMCcomplexSetLabelValue() > > > > b) Algebraic Boundary Conditions > > > > These are specified in the PetscSection using > PetscSectionSetConstraintDof() > > and > > PetscSectionSetConstraintIndices(). To construct these, you typically use > > the information > > in the mesh labels. > > > >> > >> Also, Is it possible to set up a problem over a part of the mesh > >> instead of the entire mesh? > > > > > > This is very easy. Just create a PetscSection that only has unknowns over > > part of the mesh. 
> > > > Matt > > > >> > >> Thanks > >> > >> > >> Subramanya G Sadasiva, > >> > >> Graduate Research Assistant, > >> Hierarchical Design and Characterization Laboratory, > >> School of Mechanical Engineering, > >> Purdue University. > >> > >> "The art of structure is where to put the holes" > >> Robert Le Ricolais, 1894-1977 > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments > > is infinitely more interesting than any results to which their > experiments > > lead. > > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Mon Sep 24 08:36:03 2012 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Mon, 24 Sep 2012 13:36:03 +0000 Subject: [petsc-users] difference between left and right pc Message-ID: > What happens if you use -pc_type lu ? Barry, I can't do that, it's a matshell. Chris > > > On Sep 21, 2012, at 3:29 AM, "Klaij, Christiaan" wrote: > > > > > When I use zero initial guess, GMRES with left PC gives a huge > > jump in true resisdual between iteration 0 and 1 and GMRES with > > right PC is stuck, the solution remains zero, as mentioned before. > > > > When I use the Knoll trick, both issues are gone (!) and I do get > > similar results for left and right preconditioning, both for the > > iteration count and for the physics of the solution. > > > > I didn't expect such a difference, did you? If so, why? Somehow > > it must be related to the rhs being quite small. dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From jedbrown at mcs.anl.gov Mon Sep 24 08:41:03 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 24 Sep 2012 08:41:03 -0500 Subject: [petsc-users] difference between left and right pc In-Reply-To: References: Message-ID: On Mon, Sep 24, 2012 at 8:36 AM, Klaij, Christiaan wrote: > I can't do that, it's a matshell. You can use MatComputeExplicitOperator or MatFDColoring to get the entries. Also, make sure the MATSHELL is really, truly a linear operator. You'll get very confusing results if you accidentally leave some nonlinearity inside that function. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Vincent.De-Groof at uibk.ac.at Tue Sep 25 09:59:16 2012 From: Vincent.De-Groof at uibk.ac.at (De Groof, Vincent Frans Maria) Date: Tue, 25 Sep 2012 14:59:16 +0000 Subject: [petsc-users] problem with spooles solver Message-ID: <17A78B9D13564547AC894B88C15967470DBD1713@XMBX4.uibk.ac.at> Hi All, I'm having a problem with the spooles direct solver. I have it working, but unfortunately it seems to be lacking robustness. I'm dealing with the stiffness matrices of a FE program, where some iterations converge, and for others I get an error. It might be important to mention I'm dealing with ill-conditioned matrices (condition number ~10^10). Any suggestions to solve this problem? thanks. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] MatFactorNumeric_SeqSpooles line 132 src/mat/impls/aij/seq/spooles/spooles.c [0]PETSC ERROR: [0] MatCholeskyFactorNumeric line 2950 src/mat/interface/matrix.c [0]PETSC ERROR: [0] PCSetUp_Cholesky line 92 src/ksp/pc/impls/factor/cholesky/cholesky.c [0]PETSC ERROR: [0] PCSetUp line 810 src/ksp/pc/interface/precon.c [0]PETSC ERROR: [0] KSPSetUp line 182 src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: [0] KSPSolve line 351 src/ksp/ksp/interface/itfunc.c -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Sep 25 10:06:11 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 25 Sep 2012 10:06:11 -0500 Subject: [petsc-users] problem with spooles solver In-Reply-To: <17A78B9D13564547AC894B88C15967470DBD1713@XMBX4.uibk.ac.at> References: <17A78B9D13564547AC894B88C15967470DBD1713@XMBX4.uibk.ac.at> Message-ID: Spooles is no longer supported. Suggest trying SuperLU, Cholmod, and MUMPS. On Tue, Sep 25, 2012 at 9:59 AM, De Groof, Vincent Frans Maria < Vincent.De-Groof at uibk.ac.at> wrote: > Hi All, > > > I'm having a problem with the spooles direct solver. I have it working, > but unfortunately it seems to be lacking robustness. I'm dealing with the > stiffness matrices of a FE program, where some iterations converge, and for > others I get an error. It might be important to mention I'm dealing with > ill-conditioned matrices (condition number ~10^10). Any suggestions to > solve this problem? > > > thanks. > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. 
> [0]PETSC ERROR: [0] MatFactorNumeric_SeqSpooles line 132 > src/mat/impls/aij/seq/spooles/spooles.c > [0]PETSC ERROR: [0] MatCholeskyFactorNumeric line 2950 > src/mat/interface/matrix.c > [0]PETSC ERROR: [0] PCSetUp_Cholesky line 92 > src/ksp/pc/impls/factor/cholesky/cholesky.c > [0]PETSC ERROR: [0] PCSetUp line 810 src/ksp/pc/interface/precon.c > [0]PETSC ERROR: [0] KSPSetUp line 182 src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: [0] KSPSolve line 351 src/ksp/ksp/interface/itfunc.c > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.mousel at gmail.com Tue Sep 25 16:44:59 2012 From: john.mousel at gmail.com (John Mousel) Date: Tue, 25 Sep 2012 16:44:59 -0500 Subject: [petsc-users] MatSetValues on same column multiple times Message-ID: If I'm adding to a particular column in multiple passes, do I need to preallocate enough memory for each pass, or just 1 for each column in the row? Right now I'm drastically over-allocating, and I'm trying to reduce this. If I give the true number of columns in the row, I keep getting new entry errors. New nonzero at (8045,60) caused a malloc! I have it spit into phases where I set zeros in the non-zero pattern, then I come back and fill in entries in a bulk pass, and then do a second pass to clean up entries in special rows. Could someone clarify how this works. John -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Sep 25 16:50:38 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 25 Sep 2012 17:50:38 -0400 Subject: [petsc-users] MatSetValues on same column multiple times In-Reply-To: References: Message-ID: On Tue, Sep 25, 2012 at 5:44 PM, John Mousel wrote: > > If I'm adding to a particular column in multiple passes, do I need to > preallocate enough memory for each pass, or just 1 for each column in the > row? > Right now I'm drastically over-allocating, and I'm trying to reduce this. > If I give the true number of columns in the row, I keep getting new entry > errors. > > New nonzero at (8045,60) caused a malloc! > You have to be more specific. It sounds like what you are doing is This will not work since assembly squeezes out extra space. Why are you assembling in between? If you need to clear buffers, you can just to MAT_ASSEMBLY_FLUSH. Matt > I have it spit into phases where I set zeros in the non-zero pattern, then > I come back and fill in entries in a bulk pass, and then do a second pass > to clean up entries in special rows. > Could someone clarify how this works. > > John > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.mousel at gmail.com Tue Sep 25 16:56:23 2012 From: john.mousel at gmail.com (John Mousel) Date: Tue, 25 Sep 2012 16:56:23 -0500 Subject: [petsc-users] MatSetValues on same column multiple times In-Reply-To: References: Message-ID: To be more clear, I do the following: What I want to clarify is if I touch the same column more than once, say I'm looping over neighbors in an octree mesh, and it's just easier to insert them multiple times as I find them for fluxes in different directions instead of finding one exact coefficient for each neighbor, how much preallocation would I need to specify? The number of times it gets touched, or just one? 
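A small illustrative sketch of that situation (hypothetical 10x10 sequential matrix with placeholder values): column 3 of row 0 is touched three times across two MatSetValues calls with ADD_VALUES, yet it only needs to be counted once in the preallocation:

  Mat         A;
  PetscInt    row = 0;
  PetscInt    cols1[2] = {1,3}, cols2[3] = {3,3,5};
  PetscScalar v1[2] = {1.0,2.0}, v2[3] = {0.5,0.5,4.0};

  MatCreateSeqAIJ(PETSC_COMM_SELF,10,10,3,PETSC_NULL,&A); /* row 0 has 3 distinct columns: 1, 3, 5 */
  MatSetValues(A,1,&row,2,cols1,v1,ADD_VALUES);
  MatSetValues(A,1,&row,3,cols2,v2,ADD_VALUES);  /* column 3 repeated: contributions are summed, no new malloc */
  MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);
  MatDestroy(&A);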
John On Tue, Sep 25, 2012 at 4:50 PM, Matthew Knepley wrote: > On Tue, Sep 25, 2012 at 5:44 PM, John Mousel wrote: > >> >> If I'm adding to a particular column in multiple passes, do I need to >> preallocate enough memory for each pass, or just 1 for each column in the >> row? >> Right now I'm drastically over-allocating, and I'm trying to reduce this. >> If I give the true number of columns in the row, I keep getting new entry >> errors. >> >> New nonzero at (8045,60) caused a malloc! >> > > You have to be more specific. It sounds like what you are doing is > > > > > > > This will not work since assembly squeezes out extra space. Why are you > assembling in between? > If you need to clear buffers, you can just to MAT_ASSEMBLY_FLUSH. > > Matt > > >> I have it spit into phases where I set zeros in the non-zero pattern, >> then I come back and fill in entries in a bulk pass, and then do a second >> pass to clean up entries in special rows. >> Could someone clarify how this works. >> >> John >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Sep 25 16:58:13 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 25 Sep 2012 17:58:13 -0400 Subject: [petsc-users] MatSetValues on same column multiple times In-Reply-To: References: Message-ID: On Tue, Sep 25, 2012 at 5:56 PM, John Mousel wrote: > To be more clear, I do the following: > > > > > > > > > What I want to clarify is if I touch the same column more than once, say > I'm looping over neighbors in an octree mesh, and it's just easier to > insert them multiple times as I find them for fluxes in different > directions instead of finding one exact coefficient for each neighbor, how > much preallocation would I need to specify? The number of times it gets > touched, or just one? > Just once. This is exactly how we operate with DMCreateMatrix(). You can test it in any of the examples, like SNES ex5. Matt > John > > > On Tue, Sep 25, 2012 at 4:50 PM, Matthew Knepley wrote: > >> On Tue, Sep 25, 2012 at 5:44 PM, John Mousel wrote: >> >>> >>> If I'm adding to a particular column in multiple passes, do I need to >>> preallocate enough memory for each pass, or just 1 for each column in the >>> row? >>> Right now I'm drastically over-allocating, and I'm trying to reduce >>> this. If I give the true number of columns in the row, I keep getting new >>> entry errors. >>> >>> New nonzero at (8045,60) caused a malloc! >>> >> >> You have to be more specific. It sounds like what you are doing is >> >> >> >> >> >> >> This will not work since assembly squeezes out extra space. Why are you >> assembling in between? >> If you need to clear buffers, you can just to MAT_ASSEMBLY_FLUSH. >> >> Matt >> >> >>> I have it spit into phases where I set zeros in the non-zero pattern, >>> then I come back and fill in entries in a bulk pass, and then do a second >>> pass to clean up entries in special rows. >>> Could someone clarify how this works. >>> >>> John >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. 
>> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.mousel at gmail.com Tue Sep 25 17:01:12 2012 From: john.mousel at gmail.com (John Mousel) Date: Tue, 25 Sep 2012 17:01:12 -0500 Subject: [petsc-users] MatSetValues on same column multiple times In-Reply-To: References: Message-ID: Thanks Matt. Are you allowed to insert the same column multiple times in one call to MatSetValues though? On Tue, Sep 25, 2012 at 4:58 PM, Matthew Knepley wrote: > On Tue, Sep 25, 2012 at 5:56 PM, John Mousel wrote: > >> To be more clear, I do the following: >> >> >> >> >> >> >> >> >> What I want to clarify is if I touch the same column more than once, say >> I'm looping over neighbors in an octree mesh, and it's just easier to >> insert them multiple times as I find them for fluxes in different >> directions instead of finding one exact coefficient for each neighbor, how >> much preallocation would I need to specify? The number of times it gets >> touched, or just one? >> > > Just once. This is exactly how we operate with DMCreateMatrix(). You can > test it in any of the examples, like SNES ex5. > > Matt > > >> John >> >> >> On Tue, Sep 25, 2012 at 4:50 PM, Matthew Knepley wrote: >> >>> On Tue, Sep 25, 2012 at 5:44 PM, John Mousel wrote: >>> >>>> >>>> If I'm adding to a particular column in multiple passes, do I need to >>>> preallocate enough memory for each pass, or just 1 for each column in the >>>> row? >>>> Right now I'm drastically over-allocating, and I'm trying to reduce >>>> this. If I give the true number of columns in the row, I keep getting new >>>> entry errors. >>>> >>>> New nonzero at (8045,60) caused a malloc! >>>> >>> >>> You have to be more specific. It sounds like what you are doing is >>> >>> >>> >>> >>> >>> >>> This will not work since assembly squeezes out extra space. Why are you >>> assembling in between? >>> If you need to clear buffers, you can just to MAT_ASSEMBLY_FLUSH. >>> >>> Matt >>> >>> >>>> I have it spit into phases where I set zeros in the non-zero pattern, >>>> then I come back and fill in entries in a bulk pass, and then do a second >>>> pass to clean up entries in special rows. >>>> Could someone clarify how this works. >>>> >>>> John >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Sep 25 17:19:57 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 25 Sep 2012 18:19:57 -0400 Subject: [petsc-users] MatSetValues on same column multiple times In-Reply-To: References: Message-ID: On Tue, Sep 25, 2012 at 6:01 PM, John Mousel wrote: > Thanks Matt. Are you allowed to insert the same column multiple times in > one call to MatSetValues though? > Should not be a problem. If it is, its a bug. 
Matt > On Tue, Sep 25, 2012 at 4:58 PM, Matthew Knepley wrote: > >> On Tue, Sep 25, 2012 at 5:56 PM, John Mousel wrote: >> >>> To be more clear, I do the following: >>> >>> >>> >>> >>> >>> >>> >>> >>> What I want to clarify is if I touch the same column more than once, say >>> I'm looping over neighbors in an octree mesh, and it's just easier to >>> insert them multiple times as I find them for fluxes in different >>> directions instead of finding one exact coefficient for each neighbor, how >>> much preallocation would I need to specify? The number of times it gets >>> touched, or just one? >>> >> >> Just once. This is exactly how we operate with DMCreateMatrix(). You can >> test it in any of the examples, like SNES ex5. >> >> Matt >> >> >>> John >>> >>> >>> On Tue, Sep 25, 2012 at 4:50 PM, Matthew Knepley wrote: >>> >>>> On Tue, Sep 25, 2012 at 5:44 PM, John Mousel wrote: >>>> >>>>> >>>>> If I'm adding to a particular column in multiple passes, do I need to >>>>> preallocate enough memory for each pass, or just 1 for each column in the >>>>> row? >>>>> Right now I'm drastically over-allocating, and I'm trying to reduce >>>>> this. If I give the true number of columns in the row, I keep getting new >>>>> entry errors. >>>>> >>>>> New nonzero at (8045,60) caused a malloc! >>>>> >>>> >>>> You have to be more specific. It sounds like what you are doing is >>>> >>>> >>>> >>>> >>>> >>>> >>>> This will not work since assembly squeezes out extra space. Why are you >>>> assembling in between? >>>> If you need to clear buffers, you can just to MAT_ASSEMBLY_FLUSH. >>>> >>>> Matt >>>> >>>> >>>>> I have it spit into phases where I set zeros in the non-zero pattern, >>>>> then I come back and fill in entries in a bulk pass, and then do a second >>>>> pass to clean up entries in special rows. >>>>> Could someone clarify how this works. >>>>> >>>>> John >>>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Wed Sep 26 01:50:23 2012 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Wed, 26 Sep 2012 06:50:23 +0000 Subject: [petsc-users] difference between left and right pc Message-ID: > > I can't do that, it's a matshell. > > > You can use MatComputeExplicitOperator or MatFDColoring to get the entries. > Also, make sure the MATSHELL is really, truly a linear operator. You'll get > very confusing results if you accidentally leave some nonlinearity inside > that function. Thanks for pointing this out, very handy indeed for debugging shells. dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. 
Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From member at linkedin.com Wed Sep 26 15:16:28 2012 From: member at linkedin.com (Matt Funk via LinkedIn) Date: Wed, 26 Sep 2012 20:16:28 +0000 (UTC) Subject: [petsc-users] Invitation to connect on LinkedIn Message-ID: <445686782.85240.1348690588513.JavaMail.app@ela4-app2313.prod> LinkedIn ------------ Matt Funk requested to add you as a connection on LinkedIn: ------------------------------------------ Scott, I'd like to add you to my professional network on LinkedIn. - Matt Accept invitation from Matt Funk http://www.linkedin.com/e/-r9oj6w-h7kvjyai-9/NPBLyes6_CJvfaFX95qTY0Fn_yVIxe9EWtXp/blk/I179357050_155/3wOtCVFbmdxnSVFbm8JrnpKqlZJrmZzbmNJpjRQnOpBtn9QfmhBt71BoSd1p65Lr6lOfPkRclYMdj0TdjcVdP59bRpgs4Jjq79EbP4QdjcOejoOe3wLrCBxbOYWrSlI/eml-comm_invm-b-in_ac-inv28/?hs=false&tok=09T-kSGSRw_lo1 View profile of Matt Funk http://www.linkedin.com/e/-r9oj6w-h7kvjyai-9/rso/129573335/0zOc/name/158557338_I179357050_155/?hs=false&tok=0V_VC_QF1w_lo1 ------------------------------------------ You are receiving Invitation emails. This email was intended for Scott Kruger. Learn why this is included: http://www.linkedin.com/e/-r9oj6w-h7kvjyai-9/plh/http%3A%2F%2Fhelp%2Elinkedin%2Ecom%2Fapp%2Fanswers%2Fdetail%2Fa_id%2F4788/-GXI/?hs=false&tok=3J-H54WSBw_lo1 (c) 2012, LinkedIn Corporation. 2029 Stierlin Ct, Mountain View, CA 94043, USA. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Wed Sep 26 16:03:03 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 26 Sep 2012 23:03:03 +0200 Subject: [petsc-users] Error when using PCDestroy Message-ID: <50636D87.2000109@gmail.com> Hi, When I run a code in debug mode with -log_summary, there is no problem. However, in optimized mode, I encounter error "Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range" It happens at : call PCDestroy(pc_semi_x,ierr) Is there any reason why it works well in debug mode but not in optimized mode? -- Yours sincerely, TAY wee-beng From bsmith at mcs.anl.gov Wed Sep 26 16:04:57 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 26 Sep 2012 16:04:57 -0500 Subject: [petsc-users] Error when using PCDestroy In-Reply-To: <50636D87.2000109@gmail.com> References: <50636D87.2000109@gmail.com> Message-ID: On Sep 26, 2012, at 4:03 PM, TAY wee-beng wrote: > Hi, > > When I run a code in debug mode with -log_summary, there is no problem. However, in optimized mode, I encounter error "Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range" > > It happens at : > > call PCDestroy(pc_semi_x,ierr) > > Is there any reason why it works well in debug mode but not in optimized mode? Memory corruption. Run first debug then optimized version under valgrind. > > -- > Yours sincerely, > > TAY wee-beng > From renzhengyong at gmail.com Wed Sep 26 16:19:30 2012 From: renzhengyong at gmail.com (renzhengyong) Date: Wed, 26 Sep 2012 23:19:30 +0200 Subject: [petsc-users] out-of-core mumps via petsc Message-ID: <50637162.2050008@gmail.com> Dear Petsc users and developers, Could you please inform me that how can I use the out-of-core version of MUMPS via PETSc to solve Ax=b ? Thanks in advance Zhengyong -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hzhang at mcs.anl.gov Wed Sep 26 17:23:23 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Wed, 26 Sep 2012 17:23:23 -0500 Subject: [petsc-users] out-of-core mumps via petsc In-Reply-To: <50637162.2050008@gmail.com> References: <50637162.2050008@gmail.com> Message-ID: renzhengyong : > Dear Petsc users and developers, > > Could you please inform me that how can I use the out-of-core version of > MUMPS via PETSc to solve Ax=b ? > We do not have this support in Petsc/mumps interface. Hong -------------- next part -------------- An HTML attachment was scrubbed... URL: From jzhong at scsolutions.com Wed Sep 26 19:05:10 2012 From: jzhong at scsolutions.com (Jinquan Zhong) Date: Thu, 27 Sep 2012 00:05:10 +0000 Subject: [petsc-users] Setting initial guess for KSP solver Message-ID: <7237384C7A8F2642A8854B751619FA316F2F16DA@exchange.scsolutions.com> Hi folks, Is there a way to input my initial guess x0 into the KSP context before using KSPsolve() ? I know exactly the initial guess x0 would be from other packages. Do you have an example for that? Thanks, Jinquan PS: -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Sep 26 19:20:31 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 26 Sep 2012 19:20:31 -0500 Subject: [petsc-users] Setting initial guess for KSP solver In-Reply-To: <7237384C7A8F2642A8854B751619FA316F2F16DA@exchange.scsolutions.com> References: <7237384C7A8F2642A8854B751619FA316F2F16DA@exchange.scsolutions.com> Message-ID: KSPSetInitialGuessNonzero() On Wed, Sep 26, 2012 at 7:05 PM, Jinquan Zhong wrote: > Hi folks,**** > > ** ** > > Is there a way to input my initial guess x0 into the KSP context before > using KSPsolve() ? I know exactly the initial guess x0 would be from other > packages. Do you have an example for that?**** > > ** ** > > Thanks,**** > > ** ** > > Jinquan**** > > ** ** > > PS: **** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kruger at txcorp.com Wed Sep 26 19:23:25 2012 From: kruger at txcorp.com (Scott Kruger) Date: Wed, 26 Sep 2012 18:23:25 -0600 Subject: [petsc-users] Invitation to connect on LinkedIn In-Reply-To: <445686782.85240.1348690588513.JavaMail.app@ela4-app2313.prod> References: <445686782.85240.1348690588513.JavaMail.app@ela4-app2313.prod> Message-ID: <50639C7D.7060108@txcorp.com> Apologies -- do not know how this happened. On 9/26/12 2:16 PM, Matt Funk via LinkedIn wrote: > LinkedIn Logo > > > Scott, > > Matt Funk wants to connect with you on LinkedIn. > > Matt Funk > Software Engineer at SAP AG View Profile ? > > > > Accept > > > > > You are receiving Invitation emails. Unsubscribe > . > > > This email was intended for Scott Kruger (Scientist/VP at Tech-X > Corporation). Learn why we included this > . > ? 2012, LinkedIn Corporation. 2029 Stierlin Ct. Mountain View, CA 94043, > USA > -- Scott Kruger Tech-X Corporation kruger at txcorp.com 5621 Arapahoe Ave Phone: (720) 974-1841 Boulder, CO 80303 Fax: (303) 448-7756 From jzhong at scsolutions.com Wed Sep 26 19:33:24 2012 From: jzhong at scsolutions.com (Jinquan Zhong) Date: Thu, 27 Sep 2012 00:33:24 +0000 Subject: [petsc-users] Setting initial guess for KSP solver In-Reply-To: References: <7237384C7A8F2642A8854B751619FA316F2F16DA@exchange.scsolutions.com> Message-ID: <7237384C7A8F2642A8854B751619FA316F2F1717@exchange.scsolutions.com> Jed, PetscErrorCode KSPSetInitialGuessNonzero(KSP ksp,PetscBool flg) does not provide the option to input x0 though. 
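[Editor's sketch: as Jed's follow-up below spells out, the initial guess is supplied simply by filling the x vector that is handed to KSPSolve(); KSPSetInitialGuessNonzero() only tells the solver not to zero it first. The 1-D Laplacian below (borrowed from KSP ex1) is a stand-in for the real system, and the constant guess of 0.5 stands in for values copied in from another package. Error checking is omitted.]

#include <petscksp.h>
int main(int argc, char **argv)
{
  KSP         ksp;
  Mat         A;
  Vec         x, b;
  PetscInt    i, n = 10, col[3];
  PetscScalar v[3];

  PetscInitialize(&argc, &argv, NULL, NULL);
  MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 3, NULL, &A);
  for (i = 0; i < n; i++) {                       /* standard tridiagonal [-1 2 -1] */
    col[0] = i - 1; col[1] = i; col[2] = i + 1;
    v[0] = -1.0; v[1] = 2.0; v[2] = -1.0;
    if (i == 0)          MatSetValues(A, 1, &i, 2, &col[1], &v[1], INSERT_VALUES);
    else if (i == n - 1) MatSetValues(A, 1, &i, 2, col, v, INSERT_VALUES);
    else                 MatSetValues(A, 1, &i, 3, col, v, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  VecCreateSeq(PETSC_COMM_SELF, n, &b);
  VecDuplicate(b, &x);
  VecSet(b, 1.0);
  VecSet(x, 0.5);                                 /* <-- the initial guess x0 goes here  */

  KSPCreate(PETSC_COMM_SELF, &ksp);
  KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);
  KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);     /* do not zero x before solving        */
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);                            /* x enters as guess, returns solution */

  KSPDestroy(&ksp); MatDestroy(&A); VecDestroy(&x); VecDestroy(&b);
  PetscFinalize();
  return 0;
}

Running this with and without the VecSet/KSPSetInitialGuessNonzero pair under -ksp_monitor shows the guess changing only the starting residual, not the converged answer.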
Jinquan From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Jed Brown Sent: Wednesday, September 26, 2012 5:21 PM To: PETSc users list Subject: Re: [petsc-users] Setting initial guess for KSP solver KSPSetInitialGuessNonzero() On Wed, Sep 26, 2012 at 7:05 PM, Jinquan Zhong > wrote: Hi folks, Is there a way to input my initial guess x0 into the KSP context before using KSPsolve() ? I know exactly the initial guess x0 would be from other packages. Do you have an example for that? Thanks, Jinquan PS: -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Sep 26 19:36:43 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 26 Sep 2012 19:36:43 -0500 Subject: [petsc-users] Setting initial guess for KSP solver In-Reply-To: <7237384C7A8F2642A8854B751619FA316F2F1717@exchange.scsolutions.com> References: <7237384C7A8F2642A8854B751619FA316F2F16DA@exchange.scsolutions.com> <7237384C7A8F2642A8854B751619FA316F2F1717@exchange.scsolutions.com> Message-ID: On Wed, Sep 26, 2012 at 7:33 PM, Jinquan Zhong wrote: > PetscErrorCode KSPSetInitialGuessNonzero(KSP ksp,PetscBool flg)* > *** > > **** > > does not provide the option to input x0 though. > It causes the vector X that you pass to KSPSolve to be used as the initial guess. Did you read the man page? http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetInitialGuessNonzero.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From jzhong at scsolutions.com Wed Sep 26 19:43:14 2012 From: jzhong at scsolutions.com (Jinquan Zhong) Date: Thu, 27 Sep 2012 00:43:14 +0000 Subject: [petsc-users] Setting initial guess for KSP solver In-Reply-To: References: <7237384C7A8F2642A8854B751619FA316F2F16DA@exchange.scsolutions.com> <7237384C7A8F2642A8854B751619FA316F2F1717@exchange.scsolutions.com> Message-ID: <7237384C7A8F2642A8854B751619FA316F2F1731@exchange.scsolutions.com> Jed, So I need to assembled x before calling KSPSetInitialGuessNonzero(). KSPSolve will take the assembled x as its initial guess. Correct? That is 129: KSPSetFromOptions(ksp); 131: if (nonzeroguess) { 132: PetscScalar p = .5; 133: VecSet(x,p); 134: KSPSetInitialGuessNonzero(ksp,PETSC_TRUE); 135: } 136: 137: /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 138: Solve the linear system 139: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */ 140: /* 141: Solve linear system 142: */ 143: KSPSolve(ksp,b,x); I don?t think the manual specifies this. I only read ?Tells the iterative solver that the initial guess is nonzero; otherwise KSP assumes the initial guess is to be zero (and thus zeros it out before solving).? Thanks, Jinquan From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Jed Brown Sent: Wednesday, September 26, 2012 5:37 PM To: PETSc users list Subject: Re: [petsc-users] Setting initial guess for KSP solver On Wed, Sep 26, 2012 at 7:33 PM, Jinquan Zhong > wrote: PetscErrorCode KSPSetInitialGuessNonzero(KSP ksp,PetscBool flg) does not provide the option to input x0 though. It causes the vector X that you pass to KSPSolve to be used as the initial guess. Did you read the man page? http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetInitialGuessNonzero.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Wed Sep 26 19:45:56 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 26 Sep 2012 19:45:56 -0500 Subject: [petsc-users] Setting initial guess for KSP solver In-Reply-To: <7237384C7A8F2642A8854B751619FA316F2F1731@exchange.scsolutions.com> References: <7237384C7A8F2642A8854B751619FA316F2F16DA@exchange.scsolutions.com> <7237384C7A8F2642A8854B751619FA316F2F1717@exchange.scsolutions.com> <7237384C7A8F2642A8854B751619FA316F2F1731@exchange.scsolutions.com> Message-ID: On Wed, Sep 26, 2012 at 7:43 PM, Jinquan Zhong wrote: > Jed,**** > > ** ** > > So I need to assembled x before calling KSPSetInitialGuessNonzero(). KSPSolve > will take the assembled x as its initial guess. Correct? > Yes > That is**** > > ** ** > > 129: KSPSetFromOptions (ksp);**** > > ** ** > > 131: if (nonzeroguess) {**** > > 132: PetscScalar p = .5;**** > > 133: VecSet (x,p);**** > > 134: KSPSetInitialGuessNonzero (ksp,PETSC_TRUE );**** > > 135: }**** > > 136: **** > > 137: /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - **** > > 138: Solve the linear system**** > > 139: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */**** > > 140: /* **** > > 141: Solve linear system**** > > 142: */**** > > 143: KSPSolve (ksp,b,x);**** > > ** ** > > ** ** > > I don?t think the manual specifies this. I only read**** > > ** ** > > ?Tells the iterative solver that the initial guess is nonzero; otherwise > KSPassumes the initial guess is to be zero (and thus zeros it out before > solving).?**** > > ** ** > > Thanks,**** > > ** ** > > Jinquan**** > > ** ** > > From: petsc-users-bounces at mcs.anl.gov [mailto: > petsc-users-bounces at mcs.anl.gov] On Behalf Of Jed Brown > Sent: Wednesday, September 26, 2012 5:37 PM > To: PETSc users list > > *Subject:* Re: [petsc-users] Setting initial guess for KSP solver**** > > ** ** > > On Wed, Sep 26, 2012 at 7:33 PM, Jinquan Zhong > wrote:**** > > PetscErrorCode KSPSetInitialGuessNonzero(KSP ksp,PetscBool flg)* > *** > > **** > > does not provide the option to input x0 though.**** > > ** ** > > It causes the vector X that you pass to KSPSolve to be used as the initial > guess. Did you read the man page?**** > > ** ** > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetInitialGuessNonzero.html > **** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Thu Sep 27 02:49:49 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 27 Sep 2012 09:49:49 +0200 Subject: [petsc-users] Enquiry regarding log summary results Message-ID: <5064051D.1060903@gmail.com> Hi, I'm doing a log summary for my 3d cfd code. I have some questions: 1. if I'm solving 3 linear equations using ksp, is the result given in the log summary the total of the 3 linear eqns' performance? How can I get the performance for each individual eqn? 2. If I run my code for 10 time steps, does the log summary gives the total or avg performance/ratio? 3. Besides PETSc, I'm also using HYPRE's native geometric MG (Struct) to solve my Cartesian's grid CFD poisson eqn. Is there any way I can use PETSc's log summary to get HYPRE's performance? If I use boomerAMG thru PETSc, can I get its performance? 
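[Editor's sketch for question 1, anticipating the "logging stages" answer given further down the thread: wrapping each solve in its own stage makes -log_summary report the stages separately. The stage names and the trivial vector work are placeholders for the three real solves; for question 3, flops done inside HYPRE's own code would additionally have to be counted by hand with PetscLogFlops(), as noted further down as well.]

#include <petscvec.h>
int main(int argc, char **argv)
{
  Vec           x;
  PetscReal     nrm;
  PetscLogStage stage[3];
  PetscInt      i;

  PetscInitialize(&argc, &argv, NULL, NULL);
  PetscLogStageRegister("Solve eqn 1", &stage[0]);  /* names are arbitrary labels */
  PetscLogStageRegister("Solve eqn 2", &stage[1]);
  PetscLogStageRegister("Solve eqn 3", &stage[2]);

  VecCreateSeq(PETSC_COMM_SELF, 100000, &x);
  for (i = 0; i < 3; i++) {
    PetscLogStagePush(stage[i]);   /* everything until the matching Pop is      */
    VecSet(x, 1.0);                /* attributed to this stage in -log_summary; */
    VecNorm(x, NORM_2, &nrm);      /* in the real code this would be KSPSolve() */
    PetscLogStagePop();
  }
  VecDestroy(&x);
  PetscFinalize();                 /* run with -log_summary for the per-stage tables */
  return 0;
}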
-- Yours sincerely, TAY wee-beng From zonexo at gmail.com Thu Sep 27 03:31:59 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 27 Sep 2012 10:31:59 +0200 Subject: [petsc-users] Error when using PCDestroy In-Reply-To: References: <50636D87.2000109@gmail.com> Message-ID: <50640EFF.5010702@gmail.com> On 26/9/2012 11:04 PM, Barry Smith wrote: > On Sep 26, 2012, at 4:03 PM, TAY wee-beng wrote: > >> Hi, >> >> When I run a code in debug mode with -log_summary, there is no problem. However, in optimized mode, I encounter error "Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range" >> >> It happens at : >> >> call PCDestroy(pc_semi_x,ierr) >> >> Is there any reason why it works well in debug mode but not in optimized mode? > Memory corruption. Run first debug then optimized version under valgrind. I ran valgrind in debug mode and found: /==4977== Invalid read of size 4// //==4977== at 0x5D4A8E7: PetscCheckPointer (petscimpl.h:219)// //==4977== by 0x5D4997A: PCDestroy (precon.c:112)// //==4977== by 0x5B96AB9: pcdestroy_ (preconf.c:162)// //==4977== by 0x4A02E8: global_data_mp_de_ini_var_ (global.F90:1362)// //==4977== by 0xD2D2E0: MAIN__ (ibm3d_high_Re.F90:1140)// //==4977== by 0x4335CB: main (in /home/wtay/Results/10/a.out)// //==4977== Address 0xba21430 is 1,872 bytes inside a block of size 2,452 free'd// //==4977== at 0x4C263CF: free (vg_replace_malloc.c:427)// //==4977== by 0x4FC2B84: PetscFreeAlign (mal.c:75)// //==4977== by 0x4FC5BB1: PetscTrFreeDefault (mtr.c:322)// //==4977== by 0x5D4A059: PCDestroy (precon.c:121)// //==4977== by 0x5E30990: KSPDestroy (itfunc.c:786)// //==4977== by 0x5B98FCB: kspdestroy_ (itfuncf.c:236)// //==4977== by 0x4A020D: global_data_mp_de_ini_var_ (global.F90:1360)// //==4977== by 0xD2D2E0: MAIN__ (ibm3d_high_Re.F90:1140)// //==4977== by 0x4335CB: main (in /home/wtay/Results/10/a.out)// // //[0]PETSC ERROR: --------------------- Error Message ------------------------------------// //[0]PETSC ERROR: Invalid argument!// //[0]PETSC ERROR: Wrong type of object: Parameter # 1!// //[0]PETSC ERROR: ------------------------------------------------------------------------// //[0]PETSC ERROR: Petsc Development HG revision: 3675b37c376d9b5704273c07c9eacb10845dbbe9 HG Date: Thu Jul 05 23:16:22 2012 -0500// //[0]PETSC ERROR: See docs/changes/index.html for recent updates.// //[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.// //[0]PETSC ERROR: See docs/index.html for manual pages.// //[0]PETSC ERROR: ------------------------------------------------------------------------// //[0]PETSC ERROR: ./a.out on a petsc-3.3 named hpc12 by wtay Thu Sep 27 10:04:56 2012// //[0]PETSC ERROR: Libraries linked from /home/wtay/Lib/petsc-3.3-dev_shared_debug/lib// //[0]PETSC ERROR: Configure run at Fri Jul 6 16:18:04 2012// //[0]PETSC ERROR: Configure options --with-mpi-dir=/opt/openmpi-1.5.3/ --with-blas-lapack-dir=/opt/intelcpro-11.1.059/mkl/lib/em64t/ --with-debugging=1 --download-hypre=1 --prefix=/home/wtay/Lib/petsc-3.3-dev_shared_debug --known-mpi-shared=1 --with-shared-libraries// //[0]PETSC ERROR: ------------------------------------------------------------------------// //[0]PETSC ERROR: PCDestroy() line 112 in /home/wtay/Codes/petsc-dev/src/ksp/pc/interface/precon.c// //[0]PETSC ERROR: --------------------- Error Message ------------------------------------/ Does it mean that the pc_semi_x has already been freed when I call KSPDestroy(ksp_semi_x,ierr) earlier on? 
It's strange that I do not get this error if I just run the code in debug mode without valgrind. > >> -- >> Yours sincerely, >> >> TAY wee-beng >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From renzhengyong at gmail.com Thu Sep 27 04:16:21 2012 From: renzhengyong at gmail.com (RenZhengYong) Date: Thu, 27 Sep 2012 11:16:21 +0200 Subject: [petsc-users] out-of-core mumps via petsc In-Reply-To: References: <50637162.2050008@gmail.com> Message-ID: Hi, Hong, Thanks for your reply. Have a nice day Zhengyong On Thu, Sep 27, 2012 at 12:23 AM, Hong Zhang wrote: > renzhengyong : > >> Dear Petsc users and developers, >> >> Could you please inform me that how can I use the out-of-core version of >> MUMPS via PETSc to solve Ax=b ? >> > We do not have this support in Petsc/mumps interface. > > Hong > -- Zhengyong Ren AUG Group, Institute of Geophysics Department of Geosciences, ETH Zurich NO H 47 Sonneggstrasse 5 CH-8092, Z?rich, Switzerland Tel: +41 44 633 37561 e-mail: zhengyong.ren at aug.ig.erdw.ethz.ch Gmail: renzhengyong at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Thu Sep 27 04:48:18 2012 From: u.tabak at tudelft.nl (Umut Tabak) Date: Thu, 27 Sep 2012 11:48:18 +0200 Subject: [petsc-users] Fastest solver option after factorization Message-ID: <506420E2.1070306@tudelft.nl> Dear all, I am doing an investigation on the fastest 'Forward backward solver' among available sparse direct solvers. I am reading on comparison report, dated 2005, namely, A numerical evaluation of sparse direct solvers for the solution of large sparse, symmetric linear systems of equations N I M Gould, Y Hu, J A Scott And Pardiso seems to be the best among many solvers. Any further ideas on this topic? I guess this group is one the best lists to ask for opinions ;-) BR, Umut From knepley at gmail.com Thu Sep 27 06:39:50 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 27 Sep 2012 07:39:50 -0400 Subject: [petsc-users] Setting initial guess for KSP solver In-Reply-To: References: <7237384C7A8F2642A8854B751619FA316F2F16DA@exchange.scsolutions.com> <7237384C7A8F2642A8854B751619FA316F2F1717@exchange.scsolutions.com> <7237384C7A8F2642A8854B751619FA316F2F1731@exchange.scsolutions.com> Message-ID: On Wed, Sep 26, 2012 at 8:45 PM, Jed Brown wrote: > On Wed, Sep 26, 2012 at 7:43 PM, Jinquan Zhong wrote: > >> Jed,**** >> >> ** ** >> >> So I need to assembled x before calling KSPSetInitialGuessNonzero(). KSPSolve >> will take the assembled x as its initial guess. Correct? >> > No you do not need to "assemble" the vector before calling KSPSetInitialGuessNonzero(). You need to assemble it before calling KSPSolve(). Matt > Yes > > >> That is**** >> >> ** ** >> >> 129: KSPSetFromOptions (ksp);**** >> >> ** ** >> >> 131: if (nonzeroguess) {**** >> >> 132: PetscScalar p = .5;**** >> >> 133: VecSet (x,p);**** >> >> 134: KSPSetInitialGuessNonzero (ksp,PETSC_TRUE );**** >> >> 135: }**** >> >> 136: **** >> >> 137: /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - **** >> >> 138: Solve the linear system**** >> >> 139: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */**** >> >> 140: /* **** >> >> 141: Solve linear system**** >> >> 142: */**** >> >> 143: KSPSolve (ksp,b,x);**** >> >> ** ** >> >> ** ** >> >> I don?t think the manual specifies this. 
I only read**** >> >> ** ** >> >> ?Tells the iterative solver that the initial guess is nonzero; otherwise >> KSPassumes the initial guess is to be zero (and thus zeros it out before >> solving).?**** >> >> ** ** >> >> Thanks,**** >> >> ** ** >> >> Jinquan**** >> >> ** ** >> >> From: petsc-users-bounces at mcs.anl.gov [mailto: >> petsc-users-bounces at mcs.anl.gov] On Behalf Of Jed Brown >> Sent: Wednesday, September 26, 2012 5:37 PM >> To: PETSc users list >> >> *Subject:* Re: [petsc-users] Setting initial guess for KSP solver**** >> >> ** ** >> >> On Wed, Sep 26, 2012 at 7:33 PM, Jinquan Zhong >> wrote:**** >> >> PetscErrorCode KSPSetInitialGuessNonzero(KSP ksp,PetscBool flg) >> **** >> >> **** >> >> does not provide the option to input x0 though.**** >> >> ** ** >> >> It causes the vector X that you pass to KSPSolve to be used as the >> initial guess. Did you read the man page?**** >> >> ** ** >> >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetInitialGuessNonzero.html >> **** >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Sep 27 06:44:49 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 27 Sep 2012 07:44:49 -0400 Subject: [petsc-users] Enquiry regarding log summary results In-Reply-To: <5064051D.1060903@gmail.com> References: <5064051D.1060903@gmail.com> Message-ID: On Thu, Sep 27, 2012 at 3:49 AM, TAY wee-beng wrote: > Hi, > > I'm doing a log summary for my 3d cfd code. I have some questions: > > 1. if I'm solving 3 linear equations using ksp, is the result given in the > log summary the total of the 3 linear eqns' performance? How can I get the > performance for each individual eqn? > Use logging stages: http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Profiling/PetscLogStagePush.html > 2. If I run my code for 10 time steps, does the log summary gives the > total or avg performance/ratio? > Total. > 3. Besides PETSc, I'm also using HYPRE's native geometric MG (Struct) to > solve my Cartesian's grid CFD poisson eqn. Is there any way I can use > PETSc's log summary to get HYPRE's performance? If I use boomerAMG thru > PETSc, can I get its performance? If you mean flops, only if you count them yourself and tell PETSc using http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Profiling/PetscLogFlops.html This is the disadvantage of using packages that do not properly monitor things :) Matt > > -- > Yours sincerely, > > TAY wee-beng > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Sep 27 06:45:37 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 27 Sep 2012 07:45:37 -0400 Subject: [petsc-users] Error when using PCDestroy In-Reply-To: <50640EFF.5010702@gmail.com> References: <50636D87.2000109@gmail.com> <50640EFF.5010702@gmail.com> Message-ID: On Thu, Sep 27, 2012 at 4:31 AM, TAY wee-beng wrote: > On 26/9/2012 11:04 PM, Barry Smith wrote: > > On Sep 26, 2012, at 4:03 PM, TAY wee-beng wrote: > > > Hi, > > When I run a code in debug mode with -log_summary, there is no problem. 
However, in optimized mode, I encounter error "Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range" > > It happens at : > > call PCDestroy(pc_semi_x,ierr) > > Is there any reason why it works well in debug mode but not in optimized mode? > > Memory corruption. Run first debug then optimized version under valgrind. > > I ran valgrind in debug mode and found: > > *==4977== Invalid read of size 4** > **==4977== at 0x5D4A8E7: PetscCheckPointer (petscimpl.h:219)** > **==4977== by 0x5D4997A: PCDestroy (precon.c:112)** > **==4977== by 0x5B96AB9: pcdestroy_ (preconf.c:162)** > **==4977== by 0x4A02E8: global_data_mp_de_ini_var_ (global.F90:1362)** > **==4977== by 0xD2D2E0: MAIN__ (ibm3d_high_Re.F90:1140)** > **==4977== by 0x4335CB: main (in /home/wtay/Results/10/a.out)** > **==4977== Address 0xba21430 is 1,872 bytes inside a block of size 2,452 > free'd** > **==4977== at 0x4C263CF: free (vg_replace_malloc.c:427)** > **==4977== by 0x4FC2B84: PetscFreeAlign (mal.c:75)** > **==4977== by 0x4FC5BB1: PetscTrFreeDefault (mtr.c:322)** > **==4977== by 0x5D4A059: PCDestroy (precon.c:121)** > **==4977== by 0x5E30990: KSPDestroy (itfunc.c:786)** > **==4977== by 0x5B98FCB: kspdestroy_ (itfuncf.c:236)** > **==4977== by 0x4A020D: global_data_mp_de_ini_var_ (global.F90:1360)** > **==4977== by 0xD2D2E0: MAIN__ (ibm3d_high_Re.F90:1140)** > **==4977== by 0x4335CB: main (in /home/wtay/Results/10/a.out)** > ** > **[0]PETSC ERROR: --------------------- Error Message > ------------------------------------** > **[0]PETSC ERROR: Invalid argument!** > **[0]PETSC ERROR: Wrong type of object: Parameter # 1!** > **[0]PETSC ERROR: > ------------------------------------------------------------------------** > **[0]PETSC ERROR: Petsc Development HG revision: > 3675b37c376d9b5704273c07c9eacb10845dbbe9 HG Date: Thu Jul 05 23:16:22 2012 > -0500** > **[0]PETSC ERROR: See docs/changes/index.html for recent updates.** > **[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.** > **[0]PETSC ERROR: See docs/index.html for manual pages.** > **[0]PETSC ERROR: > ------------------------------------------------------------------------** > **[0]PETSC ERROR: ./a.out on a petsc-3.3 named hpc12 by wtay Thu Sep 27 > 10:04:56 2012** > **[0]PETSC ERROR: Libraries linked from > /home/wtay/Lib/petsc-3.3-dev_shared_debug/lib** > **[0]PETSC ERROR: Configure run at Fri Jul 6 16:18:04 2012** > **[0]PETSC ERROR: Configure options --with-mpi-dir=/opt/openmpi-1.5.3/ > --with-blas-lapack-dir=/opt/intelcpro-11.1.059/mkl/lib/em64t/ > --with-debugging=1 --download-hypre=1 > --prefix=/home/wtay/Lib/petsc-3.3-dev_shared_debug --known-mpi-shared=1 > --with-shared-libraries** > **[0]PETSC ERROR: > ------------------------------------------------------------------------** > **[0]PETSC ERROR: PCDestroy() line 112 in > /home/wtay/Codes/petsc-dev/src/ksp/pc/interface/precon.c** > **[0]PETSC ERROR: --------------------- Error Message > ------------------------------------* > > Does it mean that the pc_semi_x has already been freed when I call > KSPDestroy(ksp_semi_x,ierr) earlier on? It's strange that I do not get this > error if I just run the code in debug mode without valgrind. > Yes you do not separately destroy the KSP and PC. Matt > -- > Yours sincerely, > > TAY wee-beng > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
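[Editor's sketch of the ownership rule in the reply just above ("you do not separately destroy the KSP and PC"): a PC obtained with KSPGetPC() still belongs to its KSP, so only the KSP is destroyed. The names here are illustrative, not taken from the code under discussion.]

#include <petscksp.h>
int main(int argc, char **argv)
{
  KSP ksp;
  PC  pc;

  PetscInitialize(&argc, &argv, NULL, NULL);
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPGetPC(ksp, &pc);         /* borrowed reference: the KSP owns this PC       */
  PCSetType(pc, PCJACOBI);    /* configure it as usual                          */
  /* ... KSPSetOperators(), KSPSolve(), ...                                     */
  KSPDestroy(&ksp);           /* this frees the PC as well                      */
  /* a PCDestroy(&pc) here would read freed memory, which is what the valgrind
     trace above reports                                                        */
  PetscFinalize();
  return 0;
}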
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Thu Sep 27 06:46:05 2012 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Thu, 27 Sep 2012 11:46:05 +0000 Subject: [petsc-users] snes ex62.c header file Message-ID: If I understand correctly, the first step to running SNES ex62.c is to create the header file, however: $ $PETSC_DIR/bin/pythonscripts/PetscGenerateFEMQuadrature.py dim order dim 1 laplacian dim order 1 1 gradient $PETSC_DIR/src/snes/examples/tutorials/ex62.h Traceback (most recent call last): File "/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p3/bin/pythonscripts/PetscGenerateFEMQuadrature.py", line 12, in from FIAT.reference_element import default_simplex ImportError: No module named FIAT.reference_element dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From knepley at gmail.com Thu Sep 27 06:49:14 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 27 Sep 2012 07:49:14 -0400 Subject: [petsc-users] snes ex62.c header file In-Reply-To: References: Message-ID: On Thu, Sep 27, 2012 at 7:46 AM, Klaij, Christiaan wrote: > If I understand correctly, the first step to running SNES ex62.c is to > create the header file, however: > > $ $PETSC_DIR/bin/pythonscripts/PetscGenerateFEMQuadrature.py dim order dim > 1 laplacian dim order 1 1 gradient > $PETSC_DIR/src/snes/examples/tutorials/ex62.h > Traceback (most recent call last): > File > "/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p3/bin/pythonscripts/PetscGenerateFEMQuadrature.py", > line 12, in > from FIAT.reference_element import default_simplex > ImportError: No module named FIAT.reference_element In order to run the tests, you must configure with # For finite element support --download-scientificpython --download-fiat --download-generator # For unstructured mesh generation and partitioning --download-triangle --with-ctetgen --download-chaco and you can run the tests using the new Python build system python2.7 ./config/builder2.py check src/snes/examples/tutorials/ex62.c or do it manually as you are doing. Matt dr. ir. Christiaan Klaij > CFD Researcher > Research & Development > E mailto:C.Klaij at marin.nl > T +31 317 49 33 44 > > MARIN > 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands > T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From paul.cruise.paul at gmail.com Thu Sep 27 07:12:45 2012 From: paul.cruise.paul at gmail.com (Paul Cruise) Date: Thu, 27 Sep 2012 14:12:45 +0200 Subject: [petsc-users] Problem with Finding Eigenvalues using SLEPc In-Reply-To: References: Message-ID: Hello, I input a hessian matrix of size (5148 X 5148 size) into SLEPc to find it's eigenvalues (using EPS) as follows: *call MatCreate(PETSC_COMM_WORLD,A,ierr) call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,3*natoms,3*natoms,ierr) call MatSetFromOptions(A,ierr) call MatSetUp(A,ierr) do n=1,number_of_elements call MatSetValues(A,3*natoms,indices(n,1),3*natoms,indices(n,2),elements(n),INSERT_VALUES,ierr) end do call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr) call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)* But somehow, the eigenvalues do not seem to converge as I find the following output: * Number of iterations of the method: 1 Solution method: krylovschur Number of requested eigenvalues: 1 Number of iterations of the method: 1 Solution method: krylovschur Number of requested eigenvalues: 1 Stopping condition: tol=1.0000E-08, maxit= 648 Number of converged eigenpairs: 0 * Can someone please tell me what's the problem, because this matrix for sure has eigenvalues as I have obtained before without using SLEPc? Thanks & Regards, Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.cruise.paul at gmail.com Thu Sep 27 07:55:13 2012 From: paul.cruise.paul at gmail.com (Paul Cruise) Date: Thu, 27 Sep 2012 14:55:13 +0200 Subject: [petsc-users] Problem with Finding Eigenvalues using SLEPc In-Reply-To: References: Message-ID: The error messages that I have is, * [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Floating point exception! [0]PETSC ERROR: Vec entry at local location 0 is not-a-number or infinite at beginning of function: Parameter number 2! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 15:42:00 CDT 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: /gpfs/home/swayamjyoti_s/slepc-3.3/src/eps/examples/tutorials/Test/ex1f on a arch-linu named merlinc19 by swayamjyoti_s Thu Sep 27 14:20:49 2012 [0]PETSC ERROR: Libraries linked from /gpfs/home/swayamjyoti_s/petsc-3.3/arch-linux2-c-debug/lib [0]PETSC ERROR: Configure run at Tue Aug 21 15:47:21 2012 [0]PETSC ERROR: Configure options --with-fc=ifort --download-f-blas-lapack --download-mpich [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatMult() line 2081 in /gpfs/home/swayamjyoti_s/petsc-3.3/src/mat/interface/matrix.c [0]PETSC ERROR: STApply_Shift() line 41 in src/st/impls/shift/shift.c [0]PETSC ERROR: STApply() line 67 in src/st/interface/stsolve.c [0]PETSC ERROR: EPSFullLanczos() line 179 in src/eps/impls/krylov/krylov.c [0]PETSC ERROR: EPSSolve_KrylovSchur_Symm() line 58 in src/eps/impls/krylov/krylovschur/ks-symm.c [0]PETSC ERROR: EPSSolve() line 130 in src/eps/interface/solve.c* Could someone please help how to fix these? 
On Thu, Sep 27, 2012 at 2:12 PM, Paul Cruise wrote: > > Hello, > > I input a hessian matrix of size (5148 X 5148 size) into SLEPc to find > it's eigenvalues (using EPS) as follows: > > *call MatCreate(PETSC_COMM_WORLD,A,ierr) > call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,3*natoms,3*natoms,ierr) > call MatSetFromOptions(A,ierr) > call MatSetUp(A,ierr) > > do n=1,number_of_elements > call > MatSetValues(A,3*natoms,indices(n,1),3*natoms,indices(n,2),elements(n),INSERT_VALUES,ierr) > end do > > call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr) > call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)* > > But somehow, the eigenvalues do not seem to converge as I find the > following output: > * > Number of iterations of the method: 1 > Solution method: krylovschur > Number of requested eigenvalues: 1 > Number of iterations of the method: 1 > Solution method: krylovschur > Number of requested eigenvalues: 1 > Stopping condition: tol=1.0000E-08, maxit= 648 > Number of converged eigenpairs: 0 > * > Can someone please tell me what's the problem, because this matrix for > sure has eigenvalues as I have obtained before without using SLEPc? > > Thanks & Regards, > Paul > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Thu Sep 27 08:39:34 2012 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 27 Sep 2012 15:39:34 +0200 Subject: [petsc-users] Problem with Finding Eigenvalues using SLEPc In-Reply-To: References: Message-ID: Before using EPS, try something like this and see if you get the same error: call MatGetVecs(A,v,w,ierr) call MatMult(A,v,w,ierr) Jose El 27/09/2012, a las 14:55, Paul Cruise escribi?: > The error messages that I have is, > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Floating point exception! > [0]PETSC ERROR: Vec entry at local location 0 is not-a-number or infinite at beginning of function: Parameter number 2! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 15:42:00 CDT 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: /gpfs/home/swayamjyoti_s/slepc-3.3/src/eps/examples/tutorials/Test/ex1f on a arch-linu named merlinc19 by swayamjyoti_s Thu Sep 27 14:20:49 2012 > [0]PETSC ERROR: Libraries linked from /gpfs/home/swayamjyoti_s/petsc-3.3/arch-linux2-c-debug/lib > [0]PETSC ERROR: Configure run at Tue Aug 21 15:47:21 2012 > [0]PETSC ERROR: Configure options --with-fc=ifort --download-f-blas-lapack --download-mpich > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: MatMult() line 2081 in /gpfs/home/swayamjyoti_s/petsc-3.3/src/mat/interface/matrix.c > [0]PETSC ERROR: STApply_Shift() line 41 in src/st/impls/shift/shift.c > [0]PETSC ERROR: STApply() line 67 in src/st/interface/stsolve.c > [0]PETSC ERROR: EPSFullLanczos() line 179 in src/eps/impls/krylov/krylov.c > [0]PETSC ERROR: EPSSolve_KrylovSchur_Symm() line 58 in src/eps/impls/krylov/krylovschur/ks-symm.c > [0]PETSC ERROR: EPSSolve() line 130 in src/eps/interface/solve.c > > Could someone please help how to fix these? 
> > > On Thu, Sep 27, 2012 at 2:12 PM, Paul Cruise wrote: > > Hello, > > I input a hessian matrix of size (5148 X 5148 size) into SLEPc to find it's eigenvalues (using EPS) as follows: > > call MatCreate(PETSC_COMM_WORLD,A,ierr) > call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,3*natoms,3*natoms,ierr) > call MatSetFromOptions(A,ierr) > call MatSetUp(A,ierr) > > do n=1,number_of_elements > call MatSetValues(A,3*natoms,indices(n,1),3*natoms,indices(n,2),elements(n),INSERT_VALUES,ierr) > end do > > call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr) > call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr) > > But somehow, the eigenvalues do not seem to converge as I find the following output: > > Number of iterations of the method: 1 > Solution method: krylovschur > Number of requested eigenvalues: 1 > Number of iterations of the method: 1 > Solution method: krylovschur > Number of requested eigenvalues: 1 > Stopping condition: tol=1.0000E-08, maxit= 648 > Number of converged eigenpairs: 0 > > Can someone please tell me what's the problem, because this matrix for sure has eigenvalues as I have obtained before without using SLEPc? > > Thanks & Regards, > Paul > > > > > From hzhang at mcs.anl.gov Thu Sep 27 10:56:33 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Thu, 27 Sep 2012 10:56:33 -0500 Subject: [petsc-users] Fastest solver option after factorization In-Reply-To: <506420E2.1070306@tudelft.nl> References: <506420E2.1070306@tudelft.nl> Message-ID: Umut : We rewrote petsc 'Forward backward solver' in Petsc-3.1 base on @article{sz2010, author = "B. Smith and H. Zhang", title = "Sparse Triangular Solves for {ILU} Revisited: Data Layout Crucial to Better Performance", journal = "International Journal of High Performance Computing Applications", volume = 25, number = 4, pages = "386--391", year = 2011 } (note: this is for LU factorization, not symmetric Cholesky factorization). We've never done comparison with other triangular solves. It would be interesting to know. Hong Dear all, > > I am doing an investigation on the fastest 'Forward backward solver' among > available sparse direct solvers. I am reading on comparison report, dated > 2005, namely, > > A numerical evaluation of sparse direct solvers for the solution of large > sparse, symmetric linear systems of equations > > N I M Gould, Y Hu, J A Scott > > And Pardiso seems to be the best among many solvers. Any further ideas on > this topic? > > I guess this group is one the best lists to ask for opinions ;-) > > BR, > Umut > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jack.poulson at gmail.com Thu Sep 27 11:24:17 2012 From: jack.poulson at gmail.com (Jack Poulson) Date: Thu, 27 Sep 2012 11:24:17 -0500 Subject: [petsc-users] Fastest solver option after factorization In-Reply-To: <506420E2.1070306@tudelft.nl> References: <506420E2.1070306@tudelft.nl> Message-ID: Dear Umut, Since you are interested in the performance of the sparse triangular solves, I am assuming that you will be performing many such solves (otherwise the cost would be negligible relative to the factorization). There is a technique known as "selective inversion" which performs slightly more work in the factorization, essentially by directly inverting diagonal blocks, so that the triangular solves can be performed entirely through dense matrix-vector multiplications. The main reference for the technique is: P. Raghavan, "Efficient parallel sparse triangular solution using selective inversion", Parallel Processing Letters, 8 (1998), no. 
1, pp. 29-40. Here is a reference for the first code to implement the technique (DSCPACK): P. Raghavan, "Domain-Separator Codes for the parallel solution of sparse linear systems", Penn State, Technical Report, 2002, CSE-02-004. I have seen more than an order of magnitude of improvement in one of my algorithms due to implementing this approach. Best, Jack On Thu, Sep 27, 2012 at 4:48 AM, Umut Tabak wrote: > Dear all, > > I am doing an investigation on the fastest 'Forward backward solver' among > available sparse direct solvers. I am reading on comparison report, dated > 2005, namely, > > A numerical evaluation of sparse direct solvers for the solution of large > sparse, symmetric linear systems of equations > > N I M Gould, Y Hu, J A Scott > > And Pardiso seems to be the best among many solvers. Any further ideas on > this topic? > > I guess this group is one the best lists to ask for opinions ;-) > > BR, > Umut > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Thu Sep 27 13:01:42 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Thu, 27 Sep 2012 12:01:42 -0600 Subject: [petsc-users] question on PETSc option '-snes_mf_operator' Message-ID: Dear all, I have a question on the PETSc option '-snes_mf_operator'. I am reading the . On page 100, 2nd paragraph, it says: ============================================================================== "However, it allows us to check the analytic Jacobian we construct in FormJacobian() by passing the -snes_mf_operator flag. This causes PETSc to approximate the Jacobian using finite differencing of the function evaluation (discussed in section 5.6), and the analytic Jacobian becomes merely the preconditioner." ============================================================================== I wonder, if the '-snes_mf_operator' option is used, the Jacobian will always be calculated from the finite difference method while ignore whatever has been provided from user. One more question, if I pass '-snes' to PETSc, since it is direct Newton's method, I assume it will explicitly construct a Jacobian. Does this Jacobian come from user provided Jacobian or from the finite difference Jacobian? Thanks, Ling -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Sep 27 13:09:14 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 27 Sep 2012 14:09:14 -0400 Subject: [petsc-users] question on PETSc option '-snes_mf_operator' In-Reply-To: References: Message-ID: On Thu, Sep 27, 2012 at 2:01 PM, Zou (Non-US), Ling wrote: > Dear all, > > I have a question on the PETSc option '-snes_mf_operator'. > I am reading the . On page 100, 2nd > paragraph, it says: > > > ============================================================================== > "However, it allows us to check the analytic Jacobian we construct in > FormJacobian() by passing the -snes_mf_operator flag. This causes PETSc to > approximate the Jacobian using finite differencing of the function > evaluation (discussed in section 5.6), and the analytic Jacobian becomes > merely the preconditioner." > > ============================================================================== > > I wonder, if the '-snes_mf_operator' option is used, the Jacobian will > always be calculated from the finite difference method while > ignore whatever has been provided from user. > Yes, yhe action will be FD, but it will take the user provided operator to form a preconditioner from. 
Matt > One more question, if I pass '-snes' to PETSc, since it is direct Newton's > method, I assume it will explicitly construct a Jacobian. Does this > Jacobian come from user provided Jacobian or from the finite difference > Jacobian? > > > Thanks, > > > Ling > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Thu Sep 27 13:13:53 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Thu, 27 Sep 2012 12:13:53 -0600 Subject: [petsc-users] question on PETSc option '-snes_mf_operator' In-Reply-To: References: Message-ID: Thank you Matt. I've noticed that there is a different option '-snes_fd', and I guess PETSc will use finite difference method for both Jacobian (action) and Preconditioner? By the way, do you have any comments on my another question? ==================== One more question, if I pass '-snes' to PETSc, since it is direct Newton's method, I assume it will explicitly construct a Jacobian. Does this Jacobian come from user provided Jacobian or from the finite difference Jacobian? ==================== Best, Ling On Thu, Sep 27, 2012 at 12:09 PM, Matthew Knepley wrote: > On Thu, Sep 27, 2012 at 2:01 PM, Zou (Non-US), Ling wrote: > >> Dear all, >> >> I have a question on the PETSc option '-snes_mf_operator'. >> I am reading the . On page 100, 2nd >> paragraph, it says: >> >> >> ============================================================================== >> "However, it allows us to check the analytic Jacobian we construct in >> FormJacobian() by passing the -snes_mf_operator flag. This causes PETSc to >> approximate the Jacobian using finite differencing of the function >> evaluation (discussed in section 5.6), and the analytic Jacobian becomes >> merely the preconditioner." >> >> ============================================================================== >> >> I wonder, if the '-snes_mf_operator' option is used, the Jacobian will >> always be calculated from the finite difference method while >> ignore whatever has been provided from user. >> > Yes, yhe action will be FD, but it will take the user provided operator to > form a preconditioner from. > > Matt > > >> One more question, if I pass '-snes' to PETSc, since it is direct >> Newton's method, I assume it will explicitly construct a Jacobian. Does >> this Jacobian come from user provided Jacobian or from the finite >> difference Jacobian? >> >> >> Thanks, >> >> >> Ling >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Sep 27 13:22:20 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 27 Sep 2012 14:22:20 -0400 Subject: [petsc-users] question on PETSc option '-snes_mf_operator' In-Reply-To: References: Message-ID: On Thu, Sep 27, 2012 at 2:13 PM, Zou (Non-US), Ling wrote: > Thank you Matt. > > I've noticed that there is a different option '-snes_fd', and I guess > PETSc will use finite difference method for both Jacobian (action) and > Preconditioner? > -snes_fd uses FD to create the entire dense Jacobian. This is just for testing. -snes_mf uses FD to evaluate the action of the Jacobian on a vector. 
> By the way, do you have any comments on my another question? > ==================== > One more question, if I pass '-snes' to PETSc, since it is direct Newton's > method, I assume it will explicitly construct a Jacobian. Does this > Jacobian come from user provided Jacobian or from the finite difference > Jacobian? > ==================== > The option -snes does not do anything. I am not sure what you are asking here. If you mean, what is used when you pass -snes_mf or -snes_mf_operator, it is FD, not the user provided Jacobian routine if it exists. Matt > Best, > > Ling > > On Thu, Sep 27, 2012 at 12:09 PM, Matthew Knepley wrote: > >> On Thu, Sep 27, 2012 at 2:01 PM, Zou (Non-US), Ling wrote: >> >>> Dear all, >>> >>> I have a question on the PETSc option '-snes_mf_operator'. >>> I am reading the . On page 100, 2nd >>> paragraph, it says: >>> >>> >>> ============================================================================== >>> "However, it allows us to check the analytic Jacobian we construct in >>> FormJacobian() by passing the -snes_mf_operator flag. This causes PETSc to >>> approximate the Jacobian using finite differencing of the function >>> evaluation (discussed in section 5.6), and the analytic Jacobian >>> becomes merely the preconditioner." >>> >>> ============================================================================== >>> >>> I wonder, if the '-snes_mf_operator' option is used, the Jacobian will >>> always be calculated from the finite difference method while >>> ignore whatever has been provided from user. >>> >> Yes, yhe action will be FD, but it will take the user provided operator >> to form a preconditioner from. >> >> Matt >> >> >>> One more question, if I pass '-snes' to PETSc, since it is direct >>> Newton's method, I assume it will explicitly construct a Jacobian. Does >>> this Jacobian come from user provided Jacobian or from the finite >>> difference Jacobian? >>> >>> >>> Thanks, >>> >>> >>> Ling >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From derek.gaston at inl.gov Thu Sep 27 13:24:10 2012 From: derek.gaston at inl.gov (Gaston, Derek R) Date: Thu, 27 Sep 2012 12:24:10 -0600 Subject: [petsc-users] Now Hiring! Message-ID: Everyone, I run the MOOSE (Multiphysics Object Oriented Simulation Environment) computational framework team at Idaho National Laboratory (INL). MOOSE is built on libMesh (http://libmesh.sourceforge.net) and PETSc and provides a pluggable, modular interface facilitating the rapid development of complex multiphysics analysis tools. MOOSE is at the center of an evolving ecosystem of multiphysics analysis tools that do everything from microstructure modeling of nuclear fuel to groundwater migration and chemical transport analysis. This is a high-profile project... in fact I recently travelled to Washington, DC to receive the Presidential Early Career Award for Scientists and Engineers (PECASE http://en.wikipedia.org/wiki/PECASE) from President Obama for my work on MOOSE ( http://www.whitehouse.gov/the-press-office/2012/07/23/president-obama-honors-outstanding-early-career-scientists ). 
Our user-base has been rapidly increasing over the last couple of years and it's become apparent that it's time to add a few people to the team! We are a close-knit team working in a high-impact, fast-paced environment. We're looking for self-motivated people who are going to mesh well with the team. This position provides many opportunities including travel to conferences and plenty of time for writing research journal articles. This is a _staff_ position... _not_ a post-doc or temporary position. Familiarity with packages such as libMesh, PETSc and Trilinos is preferred. A good knowledge of C++ and object-oriented design is required. A background in finite-element methods and numerical methods in general is highly-preferred. INL is located in southeast Idaho: 2 hours from Yellowstone National Park and 1.5 hours from Jackson Hole Wyoming. It's a beautiful piece of country offering many opportunities for anyone interested in outdoor activities (including world-class skiing!). If you are interested, you can apply for the position by going to http://inlrecruiting.inl.gov and searching for the keyword "MOOSE" (also it is Job # 7285). Here are a couple of our publications in case you want more info about what we do: - D. Gaston, C. Newman, G. Hansen, and D. Lebrun-Grandie?. MOOSE: A parallel computational framework for coupled systems of nonlinear equations. Nucl. Eng. Design, 239:1768?1778, 2009. - R.L. Williamson, J.D. Hales, S.R. Novascone, M.R. Tonks, D.R. Gaston, C.J. Permann, D. Andrs, and R.C. Martineau. Multidimensional multiphysics simulation of nuclear fuel behavior. Journal of Nuclear Materials, 423:149?63, 2012. - J. D. Hales, S. R. Novascone, R. L. Williamson, D. R. Gaston, and M. R. Tonks. Solving nonlinear solid mechanics problems with the Jacobian-free Newton Krylov method. CMES: Comput. Model. Eng. Sci., 84(2):123?154, 2012. - M.R. Tonks, D. Gaston, P.C. Millett, D. Andrs, and P. Talbot. An object-oriented finite element framework for multiphysics phase field simulations. Comp. Mat. Sci., 51(1):20?29, 2012. Feel free to email me personally (derek.gaston at inl.gov) if you have any questions at all! Derek Gaston -------------- next part -------------- An HTML attachment was scrubbed... URL: From karpeev at mcs.anl.gov Thu Sep 27 13:30:41 2012 From: karpeev at mcs.anl.gov (Dmitry Karpeev) Date: Thu, 27 Sep 2012 13:30:41 -0500 Subject: [petsc-users] question on PETSc option '-snes_mf_operator' In-Reply-To: References: Message-ID: On Thu, Sep 27, 2012 at 1:22 PM, Matthew Knepley wrote: > On Thu, Sep 27, 2012 at 2:13 PM, Zou (Non-US), Ling wrote: > >> Thank you Matt. >> >> I've noticed that there is a different option '-snes_fd', and I guess >> PETSc will use finite difference method for both Jacobian (action) and >> Preconditioner? >> > > -snes_fd uses FD to create the entire dense Jacobian. This is just for > testing. -snes_mf uses FD to evaluate the action > of the Jacobian on a vector. > > >> By the way, do you have any comments on my another question? >> ==================== >> One more question, if I pass '-snes' to PETSc, since it is direct >> Newton's method, I assume it will explicitly construct a Jacobian. Does >> this Jacobian come from user provided Jacobian or from the finite >> difference Jacobian? >> ==================== >> > Ling, -snes isn't really a PETSc option. 
I think Moose recommends using it for "completeness" and in contrast to -snes_mf or -snes_mf_operator, so that users aren't confused about what's being used to compute the Jacobian when both -snes_mf and -snes_mf_operator are omitted. I'll copy this to to moose-users, in case it is useful there. -snes_mf will implement the action of the Jacobian approximately by differencing the residual. No preconditioner matrix will be used. -snes_mf_operator is like -snes_mf, except the user-provided preconditioner matrix will be used. -snes_fd will *assemble* both the Jacobian and the preconditioner matrix using the same residual-differencing algorithm as in -snes_mf. Dmitry. > The option -snes does not do anything. I am not sure what you are asking > here. If you mean, > what is used when you pass -snes_mf or -snes_mf_operator, it is FD, not > the user provided > Jacobian routine if it exists. > > Matt > > >> Best, >> >> Ling >> >> On Thu, Sep 27, 2012 at 12:09 PM, Matthew Knepley wrote: >> >>> On Thu, Sep 27, 2012 at 2:01 PM, Zou (Non-US), Ling wrote: >>> >>>> Dear all, >>>> >>>> I have a question on the PETSc option '-snes_mf_operator'. >>>> I am reading the . On page 100, 2nd >>>> paragraph, it says: >>>> >>>> >>>> ============================================================================== >>>> "However, it allows us to check the analytic Jacobian we construct in >>>> FormJacobian() by passing the -snes_mf_operator flag. This causes PETSc to >>>> approximate the Jacobian using finite differencing of the function >>>> evaluation (discussed in section 5.6), and the analytic Jacobian >>>> becomes merely the preconditioner." >>>> >>>> ============================================================================== >>>> >>>> I wonder, if the '-snes_mf_operator' option is used, the Jacobian will >>>> always be calculated from the finite difference method while >>>> ignore whatever has been provided from user. >>>> >>> Yes, yhe action will be FD, but it will take the user provided operator >>> to form a preconditioner from. >>> >>> Matt >>> >>> >>>> One more question, if I pass '-snes' to PETSc, since it is direct >>>> Newton's method, I assume it will explicitly construct a Jacobian. Does >>>> this Jacobian come from user provided Jacobian or from the finite >>>> difference Jacobian? >>>> >>>> >>>> Thanks, >>>> >>>> >>>> Ling >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Thu Sep 27 13:31:39 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Thu, 27 Sep 2012 12:31:39 -0600 Subject: [petsc-users] question on PETSc option '-snes_mf_operator' In-Reply-To: References: Message-ID: Thanks again, Matt. Your answers are really helpful to me. Best, Ling On Thu, Sep 27, 2012 at 12:22 PM, Matthew Knepley wrote: > On Thu, Sep 27, 2012 at 2:13 PM, Zou (Non-US), Ling wrote: > >> Thank you Matt. >> >> I've noticed that there is a different option '-snes_fd', and I guess >> PETSc will use finite difference method for both Jacobian (action) and >> Preconditioner? >> > > -snes_fd uses FD to create the entire dense Jacobian. 
This is just for > testing. -snes_mf uses FD to evaluate the action > of the Jacobian on a vector. > > >> By the way, do you have any comments on my another question? >> ==================== >> One more question, if I pass '-snes' to PETSc, since it is direct >> Newton's method, I assume it will explicitly construct a Jacobian. Does >> this Jacobian come from user provided Jacobian or from the finite >> difference Jacobian? >> ==================== >> > > The option -snes does not do anything. I am not sure what you are asking > here. If you mean, > what is used when you pass -snes_mf or -snes_mf_operator, it is FD, not > the user provided > Jacobian routine if it exists. > > Matt > > >> Best, >> >> Ling >> >> On Thu, Sep 27, 2012 at 12:09 PM, Matthew Knepley wrote: >> >>> On Thu, Sep 27, 2012 at 2:01 PM, Zou (Non-US), Ling wrote: >>> >>>> Dear all, >>>> >>>> I have a question on the PETSc option '-snes_mf_operator'. >>>> I am reading the . On page 100, 2nd >>>> paragraph, it says: >>>> >>>> >>>> ============================================================================== >>>> "However, it allows us to check the analytic Jacobian we construct in >>>> FormJacobian() by passing the -snes_mf_operator flag. This causes PETSc to >>>> approximate the Jacobian using finite differencing of the function >>>> evaluation (discussed in section 5.6), and the analytic Jacobian >>>> becomes merely the preconditioner." >>>> >>>> ============================================================================== >>>> >>>> I wonder, if the '-snes_mf_operator' option is used, the Jacobian will >>>> always be calculated from the finite difference method while >>>> ignore whatever has been provided from user. >>>> >>> Yes, yhe action will be FD, but it will take the user provided operator >>> to form a preconditioner from. >>> >>> Matt >>> >>> >>>> One more question, if I pass '-snes' to PETSc, since it is direct >>>> Newton's method, I assume it will explicitly construct a Jacobian. Does >>>> this Jacobian come from user provided Jacobian or from the finite >>>> difference Jacobian? >>>> >>>> >>>> Thanks, >>>> >>>> >>>> Ling >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Thu Sep 27 13:38:48 2012 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Thu, 27 Sep 2012 12:38:48 -0600 Subject: [petsc-users] question on PETSc option '-snes_mf_operator' In-Reply-To: References: Message-ID: Thank you Dmitry. Yes, I am now fully understanding what '-snes_mf' and '-snes_mf_operator' are doing. Got a bit confused by the '-snes' keyword used in Moose. As you mentioned, this is for input completeness sake, so I guess there are default options as this keyword is used. I will dig out what are those default options. Appreciate your answer. Ling On Thu, Sep 27, 2012 at 12:30 PM, Dmitry Karpeev wrote: > > > On Thu, Sep 27, 2012 at 1:22 PM, Matthew Knepley wrote: > >> On Thu, Sep 27, 2012 at 2:13 PM, Zou (Non-US), Ling wrote: >> >>> Thank you Matt. 
>>> >>> I've noticed that there is a different option '-snes_fd', and I guess >>> PETSc will use finite difference method for both Jacobian (action) and >>> Preconditioner? >>> >> >> -snes_fd uses FD to create the entire dense Jacobian. This is just for >> testing. -snes_mf uses FD to evaluate the action >> of the Jacobian on a vector. >> >> >>> By the way, do you have any comments on my another question? >>> ==================== >>> One more question, if I pass '-snes' to PETSc, since it is direct >>> Newton's method, I assume it will explicitly construct a Jacobian. Does >>> this Jacobian come from user provided Jacobian or from the finite >>> difference Jacobian? >>> ==================== >>> >> Ling, > -snes isn't really a PETSc option. I think Moose recommends using it for > "completeness" and in contrast to -snes_mf or -snes_mf_operator, > so that users aren't confused about what's being used to compute the > Jacobian when both -snes_mf and -snes_mf_operator are omitted. > I'll copy this to to moose-users, in case it is useful there. > > -snes_mf will implement the action of the Jacobian approximately by > differencing the residual. No preconditioner matrix will be used. > -snes_mf_operator is like -snes_mf, except the user-provided > preconditioner matrix will be used. > -snes_fd will *assemble* both the Jacobian and the preconditioner matrix > using the same residual-differencing algorithm as in -snes_mf. > > Dmitry. > > > >> The option -snes does not do anything. I am not sure what you are asking >> here. If you mean, >> what is used when you pass -snes_mf or -snes_mf_operator, it is FD, not >> the user provided >> Jacobian routine if it exists. >> >> Matt >> >> >>> Best, >>> >>> Ling >>> >>> On Thu, Sep 27, 2012 at 12:09 PM, Matthew Knepley wrote: >>> >>>> On Thu, Sep 27, 2012 at 2:01 PM, Zou (Non-US), Ling wrote: >>>> >>>>> Dear all, >>>>> >>>>> I have a question on the PETSc option '-snes_mf_operator'. >>>>> I am reading the . On page 100, >>>>> 2nd paragraph, it says: >>>>> >>>>> >>>>> ============================================================================== >>>>> "However, it allows us to check the analytic Jacobian we construct in >>>>> FormJacobian() by passing the -snes_mf_operator flag. This causes PETSc to >>>>> approximate the Jacobian using finite differencing of the function >>>>> evaluation (discussed in section 5.6), and the analytic Jacobian >>>>> becomes merely the preconditioner." >>>>> >>>>> ============================================================================== >>>>> >>>>> I wonder, if the '-snes_mf_operator' option is used, the Jacobian will >>>>> always be calculated from the finite difference method while >>>>> ignore whatever has been provided from user. >>>>> >>>> Yes, yhe action will be FD, but it will take the user provided operator >>>> to form a preconditioner from. >>>> >>>> Matt >>>> >>>> >>>>> One more question, if I pass '-snes' to PETSc, since it is direct >>>>> Newton's method, I assume it will explicitly construct a Jacobian. Does >>>>> this Jacobian come from user provided Jacobian or from the finite >>>>> difference Jacobian? >>>>> >>>>> >>>>> Thanks, >>>>> >>>>> >>>>> Ling >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. 
>>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Thu Sep 27 14:47:17 2012 From: zonexo at gmail.com (Wee-Beng Tay) Date: Thu, 27 Sep 2012 21:47:17 +0200 Subject: [petsc-users] Variable not given explicit type in latest petsc-dev Message-ID: Hi, During compile, I got the warning msg in Compaq visual fortran: C:\Libs\petsc-3.3-dev_win32_cvf/include\finclude/ftn-custom/petscdmcomplex.h90(175) : Warning: This name has not been given an explicit type. [BCFIELD] & numDof,numBC,bcField,bcPoints,section,ierr) Can you check? Tks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Sep 27 14:57:37 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 27 Sep 2012 15:57:37 -0400 Subject: [petsc-users] Variable not given explicit type in latest petsc-dev In-Reply-To: References: Message-ID: On Thu, Sep 27, 2012 at 3:47 PM, Wee-Beng Tay wrote: > Hi, > > During compile, I got the warning msg in Compaq visual fortran: > > C:\Libs\petsc-3.3-dev_win32_cvf/include\finclude/ftn-custom/petscdmcomplex.h90(175) > : Warning: This name has not been given an explicit type. [BCFIELD] > & numDof,numBC,bcField,bcPoints,section,ierr) > > Can you check? > Yes, that was a typo. Thanks for finding it. Will push a fix tonight. Matt > Tks! > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.cruise.paul at gmail.com Fri Sep 28 03:21:08 2012 From: paul.cruise.paul at gmail.com (Paul Cruise) Date: Fri, 28 Sep 2012 10:21:08 +0200 Subject: [petsc-users] Problem with Finding Eigenvalues using SLEPc In-Reply-To: References: Message-ID: Hello Jose, Thanks for your reply. Unfortunately, I still get the same error if I do what you suggested. Please see the code below; * call MatCreate(PETSC_COMM_WORLD,A,ierr) call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,3*natoms,3*natoms,ierr) call MatSetFromOptions(A,ierr) call MatSetUp(A,ierr) do n=1,number_of_elements call MatSetValues(A,3*natoms,indices(n,1),3*natoms,indices(n,2),elements(n),INSERT_VALUES,ierr) call MatSetValues(A,3*natoms,indices(n,2),3*natoms,indices(n,1),elements(n),ADD_VALUES,ierr) end do call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr) call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr) call MatGetVecs(A,v,w,ierr) call MatMult(A,v,w,ierr) ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ! Create the eigensolver and display info ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ! ** Create eigensolver context call EPSCreate(PETSC_COMM_WORLD,eps,ierr) ! ** Set operators. In this case, it is a standard eigenvalue problem call EPSSetOperators(eps,A,PETSC_NULL_OBJECT,ierr) call EPSSetProblemType(eps,EPS_HEP,ierr) ! ** Set solver parameters at runtime call EPSSetFromOptions(eps,ierr) ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ! Solve the eigensystem ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - call EPSSolve(eps,ierr)* Could you suggest a workaround? Thanks, Paul On Thu, Sep 27, 2012 at 3:39 PM, Jose E. 
Roman wrote: > Before using EPS, try something like this and see if you get the same > error: > > call MatGetVecs(A,v,w,ierr) > call MatMult(A,v,w,ierr) > > Jose > > El 27/09/2012, a las 14:55, Paul Cruise escribi?: > > > The error messages that I have is, > > > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > > [0]PETSC ERROR: Floating point exception! > > [0]PETSC ERROR: Vec entry at local location 0 is not-a-number or > infinite at beginning of function: Parameter number 2! > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 > 15:42:00 CDT 2012 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: > /gpfs/home/swayamjyoti_s/slepc-3.3/src/eps/examples/tutorials/Test/ex1f on > a arch-linu named merlinc19 by swayamjyoti_s Thu Sep 27 14:20:49 2012 > > [0]PETSC ERROR: Libraries linked from > /gpfs/home/swayamjyoti_s/petsc-3.3/arch-linux2-c-debug/lib > > [0]PETSC ERROR: Configure run at Tue Aug 21 15:47:21 2012 > > [0]PETSC ERROR: Configure options --with-fc=ifort > --download-f-blas-lapack --download-mpich > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: MatMult() line 2081 in > /gpfs/home/swayamjyoti_s/petsc-3.3/src/mat/interface/matrix.c > > [0]PETSC ERROR: STApply_Shift() line 41 in src/st/impls/shift/shift.c > > [0]PETSC ERROR: STApply() line 67 in src/st/interface/stsolve.c > > [0]PETSC ERROR: EPSFullLanczos() line 179 in > src/eps/impls/krylov/krylov.c > > [0]PETSC ERROR: EPSSolve_KrylovSchur_Symm() line 58 in > src/eps/impls/krylov/krylovschur/ks-symm.c > > [0]PETSC ERROR: EPSSolve() line 130 in src/eps/interface/solve.c > > > > Could someone please help how to fix these? > > > > > > On Thu, Sep 27, 2012 at 2:12 PM, Paul Cruise > wrote: > > > > Hello, > > > > I input a hessian matrix of size (5148 X 5148 size) into SLEPc to find > it's eigenvalues (using EPS) as follows: > > > > call MatCreate(PETSC_COMM_WORLD,A,ierr) > > call > MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,3*natoms,3*natoms,ierr) > > call MatSetFromOptions(A,ierr) > > call MatSetUp(A,ierr) > > > > do n=1,number_of_elements > > call > MatSetValues(A,3*natoms,indices(n,1),3*natoms,indices(n,2),elements(n),INSERT_VALUES,ierr) > > end do > > > > call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr) > > call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr) > > > > But somehow, the eigenvalues do not seem to converge as I find the > following output: > > > > Number of iterations of the method: 1 > > Solution method: krylovschur > > Number of requested eigenvalues: 1 > > Number of iterations of the method: 1 > > Solution method: krylovschur > > Number of requested eigenvalues: 1 > > Stopping condition: tol=1.0000E-08, maxit= 648 > > Number of converged eigenpairs: 0 > > > > Can someone please tell me what's the problem, because this matrix for > sure has eigenvalues as I have obtained before without using SLEPc? > > > > Thanks & Regards, > > Paul > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Fri Sep 28 05:03:19 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 28 Sep 2012 06:03:19 -0400 Subject: [petsc-users] Problem with Finding Eigenvalues using SLEPc In-Reply-To: References: Message-ID: On Fri, Sep 28, 2012 at 4:21 AM, Paul Cruise wrote: > Hello Jose, > > Thanks for your reply. Unfortunately, I still get the same error if I do > what you suggested. > > Please see the code below; > * > call MatCreate(PETSC_COMM_WORLD,A,ierr) > call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,3*natoms,3*natoms,ierr) > call MatSetFromOptions(A,ierr) > call MatSetUp(A,ierr) > > do n=1,number_of_elements > call > MatSetValues(A,3*natoms,indices(n,1),3*natoms,indices(n,2),elements(n),INSERT_VALUES,ierr) > call > MatSetValues(A,3*natoms,indices(n,2),3*natoms,indices(n,1),elements(n),ADD_VALUES,ierr) > end do > > call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr) > call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr) > > call MatGetVecs(A,v,w,ierr) > call MatMult(A,v,w,ierr) > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > ! Create the eigensolver and display info > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > > ! ** Create eigensolver context > call EPSCreate(PETSC_COMM_WORLD,eps,ierr) > > ! ** Set operators. In this case, it is a standard eigenvalue problem > call EPSSetOperators(eps,A,PETSC_NULL_OBJECT,ierr) > call EPSSetProblemType(eps,EPS_HEP,ierr) > > ! ** Set solver parameters at runtime > call EPSSetFromOptions(eps,ierr) > > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > ! Solve the eigensystem > ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > > call EPSSolve(eps,ierr)* > > Could you suggest a workaround? > Is it EXACTLY the same error, or an error in the MatMult you added? Please send the entire error output each time or we are just guessing blindly. Also, you can use the debugger to see exactly what the value is. Matt > Thanks, > Paul > > > On Thu, Sep 27, 2012 at 3:39 PM, Jose E. Roman wrote: > >> Before using EPS, try something like this and see if you get the same >> error: >> >> call MatGetVecs(A,v,w,ierr) >> call MatMult(A,v,w,ierr) >> >> Jose >> >> El 27/09/2012, a las 14:55, Paul Cruise escribi?: >> >> > The error messages that I have is, >> > >> > [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> > [0]PETSC ERROR: Floating point exception! >> > [0]PETSC ERROR: Vec entry at local location 0 is not-a-number or >> infinite at beginning of function: Parameter number 2! >> > [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 >> 15:42:00 CDT 2012 >> > [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> > [0]PETSC ERROR: See docs/index.html for manual pages. 
>> > [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> > [0]PETSC ERROR: >> /gpfs/home/swayamjyoti_s/slepc-3.3/src/eps/examples/tutorials/Test/ex1f on >> a arch-linu named merlinc19 by swayamjyoti_s Thu Sep 27 14:20:49 2012 >> > [0]PETSC ERROR: Libraries linked from >> /gpfs/home/swayamjyoti_s/petsc-3.3/arch-linux2-c-debug/lib >> > [0]PETSC ERROR: Configure run at Tue Aug 21 15:47:21 2012 >> > [0]PETSC ERROR: Configure options --with-fc=ifort >> --download-f-blas-lapack --download-mpich >> > [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> > [0]PETSC ERROR: MatMult() line 2081 in >> /gpfs/home/swayamjyoti_s/petsc-3.3/src/mat/interface/matrix.c >> > [0]PETSC ERROR: STApply_Shift() line 41 in src/st/impls/shift/shift.c >> > [0]PETSC ERROR: STApply() line 67 in src/st/interface/stsolve.c >> > [0]PETSC ERROR: EPSFullLanczos() line 179 in >> src/eps/impls/krylov/krylov.c >> > [0]PETSC ERROR: EPSSolve_KrylovSchur_Symm() line 58 in >> src/eps/impls/krylov/krylovschur/ks-symm.c >> > [0]PETSC ERROR: EPSSolve() line 130 in src/eps/interface/solve.c >> > >> > Could someone please help how to fix these? >> > >> > >> > On Thu, Sep 27, 2012 at 2:12 PM, Paul Cruise < >> paul.cruise.paul at gmail.com> wrote: >> > >> > Hello, >> > >> > I input a hessian matrix of size (5148 X 5148 size) into SLEPc to find >> it's eigenvalues (using EPS) as follows: >> > >> > call MatCreate(PETSC_COMM_WORLD,A,ierr) >> > call >> MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,3*natoms,3*natoms,ierr) >> > call MatSetFromOptions(A,ierr) >> > call MatSetUp(A,ierr) >> > >> > do n=1,number_of_elements >> > call >> MatSetValues(A,3*natoms,indices(n,1),3*natoms,indices(n,2),elements(n),INSERT_VALUES,ierr) >> > end do >> > >> > call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr) >> > call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr) >> > >> > But somehow, the eigenvalues do not seem to converge as I find the >> following output: >> > >> > Number of iterations of the method: 1 >> > Solution method: krylovschur >> > Number of requested eigenvalues: 1 >> > Number of iterations of the method: 1 >> > Solution method: krylovschur >> > Number of requested eigenvalues: 1 >> > Stopping condition: tol=1.0000E-08, maxit= 648 >> > Number of converged eigenpairs: 0 >> > >> > Can someone please tell me what's the problem, because this matrix for >> sure has eigenvalues as I have obtained before without using SLEPc? >> > >> > Thanks & Regards, >> > Paul >> > >> > >> > >> > >> > >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From siaeleni at hotmail.com Fri Sep 28 10:16:03 2012 From: siaeleni at hotmail.com (Eleni Siampli) Date: Fri, 28 Sep 2012 18:16:03 +0300 Subject: [petsc-users] PetscFinalize() Message-ID: Hey all, I would like to solve the eigenvalue problem. My code is working for 2 loops and give the right values, but after two loops it gives me the following error: [0]PETSC ERROR: PetscFinalize() line 1221 in src/sys/objects/C:\cygwin\home\liyi 0000\PETSC-~1.2-P\src\sys\objects\pinit.c [0]PETSC ERROR: SlepcFinalize() line 224 in src/sys/C:\cygwin\home\liyi0000\SLEP C-~1.2-P\src\sys\slepcinit.c Options have not been enabled. You might have forgotten to call PetscInitialize(). Do you have any idea what this error mean? 
Thank you in advance, Helen -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Sep 28 10:17:51 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 28 Sep 2012 11:17:51 -0400 Subject: [petsc-users] PetscFinalize() In-Reply-To: References: Message-ID: On Fri, Sep 28, 2012 at 11:16 AM, Eleni Siampli wrote: > Hey all, > > I would like to solve the eigenvalue problem. > My code is working for 2 loops and give the right values, but after two > loops it gives me the following error: > > [0]PETSC ERROR: PetscFinalize() line 1221 in > src/sys/objects/C:\cygwin\home\liyi > 0000\PETSC-~1.2-P\src\sys\objects\pinit.c > [0]PETSC ERROR: SlepcFinalize() line 224 in > src/sys/C:\cygwin\home\liyi0000\SLEP > C-~1.2-P\src\sys\slepcinit.c > Options have not been enabled. > You might have forgotten to call PetscInitialize(). > > Do you have any idea what this error mean? > You are only intended to call PetscFinalize() once, at the end of your program. Matt > > Thank you in advance, > Helen > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From siaeleni at hotmail.com Fri Sep 28 16:09:31 2012 From: siaeleni at hotmail.com (Eleni Siampli) Date: Sat, 29 Sep 2012 00:09:31 +0300 Subject: [petsc-users] PetscFinalize() In-Reply-To: References: , Message-ID: Thank you for the answer. Problem solved, but now I have another one: After a lot of iterations (around 246) It gives me the following error: [0]PETSC ERROR: Petsc has generated inconsistent data! [0]PETSC ERROR: No more room in array, limit 256 recompile src/sys/objects/destroy.c with larger value for MAXREGFIN Do you have any idea about this problem? Thanks, Helen Date: Fri, 28 Sep 2012 11:17:51 -0400 From: knepley at gmail.com To: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] PetscFinalize() On Fri, Sep 28, 2012 at 11:16 AM, Eleni Siampli wrote: Hey all, I would like to solve the eigenvalue problem. My code is working for 2 loops and give the right values, but after two loops it gives me the following error: [0]PETSC ERROR: PetscFinalize() line 1221 in src/sys/objects/C:\cygwin\home\liyi 0000\PETSC-~1.2-P\src\sys\objects\pinit.c [0]PETSC ERROR: SlepcFinalize() line 224 in src/sys/C:\cygwin\home\liyi0000\SLEP C-~1.2-P\src\sys\slepcinit.c Options have not been enabled. You might have forgotten to call PetscInitialize(). Do you have any idea what this error mean? You are only intended to call PetscFinalize() once, at the end of your program. Matt Thank you in advance, Helen -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Sep 28 19:23:20 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 28 Sep 2012 20:23:20 -0400 Subject: [petsc-users] PetscFinalize() In-Reply-To: References: Message-ID: On Fri, Sep 28, 2012 at 5:09 PM, Eleni Siampli wrote: > > Thank you for the answer. Problem solved, but now I have another one: > > After a lot of iterations (around 246) It gives me the following error: > > > [0]PETSC ERROR: Petsc has generated inconsistent data! 
> [0]PETSC ERROR: No more room in array, limit 256 > recompile src/sys/objects/destroy.c with larger value for MAXREGFIN > > Do you have any idea about this problem? > You are also calling PetscInitialize() a bunch of times. Its intended for just one. Matt > > Thanks, > Helen > ------------------------------ > Date: Fri, 28 Sep 2012 11:17:51 -0400 > From: knepley at gmail.com > To: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] PetscFinalize() > > On Fri, Sep 28, 2012 at 11:16 AM, Eleni Siampli wrote: > > Hey all, > > I would like to solve the eigenvalue problem. > My code is working for 2 loops and give the right values, but after two > loops it gives me the following error: > > [0]PETSC ERROR: PetscFinalize() line 1221 in > src/sys/objects/C:\cygwin\home\liyi > 0000\PETSC-~1.2-P\src\sys\objects\pinit.c > [0]PETSC ERROR: SlepcFinalize() line 224 in > src/sys/C:\cygwin\home\liyi0000\SLEP > C-~1.2-P\src\sys\slepcinit.c > Options have not been enabled. > You might have forgotten to call PetscInitialize(). > > Do you have any idea what this error mean? > > > You are only intended to call PetscFinalize() once, at the end of your > program. > > Matt > > > > Thank you in advance, > Helen > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From tibo at berkeley.edu Sat Sep 29 14:55:23 2012 From: tibo at berkeley.edu (tibo at berkeley.edu) Date: Sat, 29 Sep 2012 12:55:23 -0700 Subject: [petsc-users] Retrieve a partial solution vector from AX=B Message-ID: Hi everyone, I am using petsc to solve a system AX=B. I use the classical user approach with a KSP context. In particular I use a LU factorization once (using the external package MUMPS here, but it could be something else) and then use KSP solve to solve for multiple different right hand sides. However in my problem, I am only interested in a few elements (Let's say as an example only the first 10 elements out of 100000) of the solution vector. Is there a way in petsc to ask to only retrieve the first few elements of the solution vector, or do I have to perform the full vector calculation and then "manually" retrieve the elements I want ? Thank you for your help, Tibo From hzhang at mcs.anl.gov Sat Sep 29 16:31:32 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Sat, 29 Sep 2012 16:31:32 -0500 Subject: [petsc-users] Retrieve a partial solution vector from AX=B In-Reply-To: References: Message-ID: Tibo, Unless your matrix has special structure that allows special algorithm for computing only 1st few solution components, you have to compute entire solution entires. Hong On Sat, Sep 29, 2012 at 2:55 PM, wrote: > Hi everyone, > > I am using petsc to solve a system AX=B. I use the classical user approach > with a KSP context. In particular I use a LU factorization once (using the > external package MUMPS here, but it could be something else) and then use > KSP solve to solve for multiple different right hand sides. > > However in my problem, I am only interested in a few elements (Let's say > as an example only the first 10 elements out of 100000) of the solution > vector. 
> > Is there a way in petsc to ask to only retrieve the first few elements of > the solution vector, or do I have to perform the full vector calculation > and then "manually" retrieve the elements I want ? > > Thank you for your help, > > Tibo > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Lukasz.Kaczmarczyk at glasgow.ac.uk Sat Sep 29 20:21:30 2012 From: Lukasz.Kaczmarczyk at glasgow.ac.uk (Lukasz Kaczmarczyk) Date: Sun, 30 Sep 2012 02:21:30 +0100 Subject: [petsc-users] MatPartitioningSetVertexWeights Message-ID: <935732C9-1323-4E79-B91F-CA8A4965E6B4@glasgow.ac.uk> Hello, For which partitioners MatPartitioningSetVertexWeights is working. With parmetis its looks that setting vertices weight has no effect. Could you clarify usage this parameters? Thanks, Lukasz From gdiso at ustc.edu Sun Sep 30 01:14:03 2012 From: gdiso at ustc.edu (Gong Ding) Date: Sun, 30 Sep 2012 14:14:03 +0800 (CST) Subject: [petsc-users] How to set newton iteration as LU factor once and use LU result as PC Message-ID: <26448843.346251348985643568.JavaMail.coremail@mail.ustc.edu> Hi, I'd like to try following strategy: For a nonlinear solver, at the first Newton step, do a complete LU factorization, and use the factorized matrix as preconditioner matrix in the following iteration. How to implement this in petsc? Gong Ding From jedbrown at mcs.anl.gov Sun Sep 30 12:20:40 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 30 Sep 2012 12:20:40 -0500 Subject: [petsc-users] How to set newton iteration as LU factor once and use LU result as PC In-Reply-To: <26448843.346251348985643568.JavaMail.coremail@mail.ustc.edu> References: <26448843.346251348985643568.JavaMail.coremail@mail.ustc.edu> Message-ID: -snes_lag_preconditioner LAG where LAG is the number of iterations to lag the factorization. On Sun, Sep 30, 2012 at 1:14 AM, Gong Ding wrote: > Hi, > I'd like to try following strategy: > For a nonlinear solver, at the first Newton step, do a complete LU > factorization, and use the factorized matrix as preconditioner matrix in > the following iteration. > > How to implement this in petsc? > > Gong Ding > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Lukasz.Kaczmarczyk at glasgow.ac.uk Sun Sep 30 16:08:24 2012 From: Lukasz.Kaczmarczyk at glasgow.ac.uk (Lukasz Kaczmarczyk) Date: Sun, 30 Sep 2012 22:08:24 +0100 Subject: [petsc-users] MatPartitioningSetVertexWeights In-Reply-To: <935732C9-1323-4E79-B91F-CA8A4965E6B4@glasgow.ac.uk> References: <935732C9-1323-4E79-B91F-CA8A4965E6B4@glasgow.ac.uk> Message-ID: Hello, In petsc file pmetsi.c (petsc-3.3), the flag wgtflag=0 (line 60), what indicate that there are no weights (vwgt and adjwgt are both NULL). This makes use MatPartitioningSetVertexWeights without any effect. Kind regards, Lukasz On 29 Sep 2012, at 18:21, Lukasz Kaczmarczyk wrote: > Hello, > > For which partitioners MatPartitioningSetVertexWeights is working. With parmetis its looks that setting vertices weight has no effect. Could you clarify usage this parameters? > > Thanks, > Lukasz From zonexo at gmail.com Sun Sep 30 16:26:27 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Sun, 30 Sep 2012 23:26:27 +0200 Subject: [petsc-users] Enquiry regarding log summary results In-Reply-To: References: <5064051D.1060903@gmail.com> Message-ID: <5068B903.8060400@gmail.com> On 27/9/2012 1:44 PM, Matthew Knepley wrote: > On Thu, Sep 27, 2012 at 3:49 AM, TAY wee-beng > wrote: > > Hi, > > I'm doing a log summary for my 3d cfd code. 
I have some questions: > > 1. if I'm solving 3 linear equations using ksp, is the result > given in the log summary the total of the 3 linear eqns' > performance? How can I get the performance for each individual eqn? > > > Use logging stages: > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Profiling/PetscLogStagePush.html > > 2. If I run my code for 10 time steps, does the log summary gives > the total or avg performance/ratio? > > > Total. > > 3. Besides PETSc, I'm also using HYPRE's native geometric MG > (Struct) to solve my Cartesian's grid CFD poisson eqn. Is there > any way I can use PETSc's log summary to get HYPRE's performance? > If I use boomerAMG thru PETSc, can I get its performance? > > > If you mean flops, only if you count them yourself and tell PETSc > using > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Profiling/PetscLogFlops.html > > This is the disadvantage of using packages that do not properly > monitor things :) > > Matt So u mean if I use boomerAMG thru PETSc, there is no proper way of evaluating its performance, beside using PetscLogFlops? > > > -- > Yours sincerely, > > TAY wee-beng > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Sep 30 16:30:03 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 30 Sep 2012 16:30:03 -0500 Subject: [petsc-users] Enquiry regarding log summary results In-Reply-To: <5068B903.8060400@gmail.com> References: <5064051D.1060903@gmail.com> <5068B903.8060400@gmail.com> Message-ID: You can measure the time spent in Hypre via PCApply and PCSetUp, but you can't get finer grained integrated profiling because it was not set up that way. On Sep 30, 2012 3:26 PM, "TAY wee-beng" wrote: > On 27/9/2012 1:44 PM, Matthew Knepley wrote: > > On Thu, Sep 27, 2012 at 3:49 AM, TAY wee-beng wrote: > >> Hi, >> >> I'm doing a log summary for my 3d cfd code. I have some questions: >> >> 1. if I'm solving 3 linear equations using ksp, is the result given in >> the log summary the total of the 3 linear eqns' performance? How can I get >> the performance for each individual eqn? >> > > Use logging stages: > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Profiling/PetscLogStagePush.html > > >> 2. If I run my code for 10 time steps, does the log summary gives the >> total or avg performance/ratio? >> > > Total. > > >> 3. Besides PETSc, I'm also using HYPRE's native geometric MG (Struct) to >> solve my Cartesian's grid CFD poisson eqn. Is there any way I can use >> PETSc's log summary to get HYPRE's performance? If I use boomerAMG thru >> PETSc, can I get its performance? > > > If you mean flops, only if you count them yourself and tell PETSc using > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Profiling/PetscLogFlops.html > > This is the disadvantage of using packages that do not properly monitor > things :) > > Matt > > > So u mean if I use boomerAMG thru PETSc, there is no proper way of > evaluating its performance, beside using PetscLogFlops? > > >> -- >> Yours sincerely, >> >> TAY wee-beng >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Sep 30 22:26:56 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 30 Sep 2012 22:26:56 -0500 Subject: [petsc-users] How to set newton iteration as LU factor once and use LU result as PC In-Reply-To: References: <26448843.346251348985643568.JavaMail.coremail@mail.ustc.edu> Message-ID: <3A52C808-510F-419B-B4EA-7593D13AEE17@mcs.anl.gov> From the manual page SNESSetLagPreconditioner - Determines when the preconditioner is rebuilt in the nonlinear solve. Logically Collective on SNES Input Parameters: + snes - the SNES context - lag - -1 indicates NEVER rebuild, 1 means rebuild every time the Jacobian is computed within a single nonlinear solve, 2 means every second time the Jacobian is built etc. -2 indicates rebuild preconditioner at next chance but then never rebuild after that On Sep 30, 2012, at 12:20 PM, Jed Brown wrote: > -snes_lag_preconditioner LAG > > where LAG is the number of iterations to lag the factorization. > > On Sun, Sep 30, 2012 at 1:14 AM, Gong Ding wrote: > Hi, > I'd like to try following strategy: > For a nonlinear solver, at the first Newton step, do a complete LU factorization, and use the factorized matrix as preconditioner matrix in the following iteration. > > How to implement this in petsc? > > Gong Ding > > > > > >
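As a companion to the lag-preconditioner thread above, here is a minimal C sketch of the two ways to request that behaviour, using only calls named in the thread or standard PETSc API (SNESGetKSP, KSPGetPC, PCSetType, SNESSetLagPreconditioner); the function name SetupLaggedLU and the surrounding SNES setup are placeholders, not taken from Gong Ding's code.

#include <petscsnes.h>

/* Sketch: after the usual SNES setup (SNESSetFunction, SNESSetJacobian, ...),
   ask PETSc to factor once with LU and keep reusing that factorization as
   the preconditioner in later Newton steps. */
PetscErrorCode SetupLaggedLU(SNES snes)
{
  PetscErrorCode ierr;
  KSP            ksp;
  PC             pc;

  ierr = SNESGetKSP(snes,&ksp);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);                /* complete LU factorization */
  ierr = SNESSetLagPreconditioner(snes,-1);CHKERRQ(ierr); /* build once, never rebuild
                                                             (per the manual text quoted above;
                                                             -2 rebuilds once more, then never) */
  return 0;
}

The same effect is available purely at runtime with -pc_type lu -snes_lag_preconditioner -1, which is the option form Jed suggests above.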
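For the earlier -log_summary question about profiling three linear solves separately, a sketch of the logging-stage approach Matt links to; the stage names and the kspU/kspV/kspP handles and vectors are hypothetical placeholders, not taken from the poster's CFD code.

#include <petscksp.h>

/* Sketch: wrap each KSPSolve in its own logging stage so -log_summary
   reports the three equations in separate sections instead of one total. */
PetscErrorCode SolveWithStages(KSP kspU,KSP kspV,KSP kspP,
                               Vec bU,Vec xU,Vec bV,Vec xV,Vec bP,Vec xP)
{
  PetscErrorCode ierr;
  PetscLogStage  stageU,stageV,stageP;

  ierr = PetscLogStageRegister("Momentum U",&stageU);CHKERRQ(ierr);
  ierr = PetscLogStageRegister("Momentum V",&stageV);CHKERRQ(ierr);
  ierr = PetscLogStageRegister("Poisson",&stageP);CHKERRQ(ierr);

  ierr = PetscLogStagePush(stageU);CHKERRQ(ierr);
  ierr = KSPSolve(kspU,bU,xU);CHKERRQ(ierr);
  ierr = PetscLogStagePop();CHKERRQ(ierr);

  ierr = PetscLogStagePush(stageV);CHKERRQ(ierr);
  ierr = KSPSolve(kspV,bV,xV);CHKERRQ(ierr);
  ierr = PetscLogStagePop();CHKERRQ(ierr);

  ierr = PetscLogStagePush(stageP);CHKERRQ(ierr);
  ierr = KSPSolve(kspP,bP,xP);CHKERRQ(ierr);
  ierr = PetscLogStagePop();CHKERRQ(ierr);
  return 0;
}

Time spent inside Hypre still shows up only as PCSetUp/PCApply within whichever stage is active, as Jed notes above.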
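Finally, for the -snes_mf / -snes_mf_operator / -snes_fd thread at the top of this digest, a hedged sketch of the setup those options act on; FormFunction, FormJacobian, the matrix J, the residual vector r and the user context are generic placeholders, not Ling's code.

#include <petscsnes.h>

/* Sketch (PETSc 3.3-era interfaces): register an analytic Jacobian routine,
   then let runtime options decide what actually drives Newton. */
PetscErrorCode SetupNewton(SNES snes,Vec r,Mat J,void *user,
                           PetscErrorCode (*FormFunction)(SNES,Vec,Vec,void*),
                           PetscErrorCode (*FormJacobian)(SNES,Vec,Mat*,Mat*,MatStructure*,void*))
{
  PetscErrorCode ierr;

  ierr = SNESSetFunction(snes,r,FormFunction,user);CHKERRQ(ierr);
  ierr = SNESSetJacobian(snes,J,J,FormJacobian,user);CHKERRQ(ierr);
  ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);
  /* no option        : analytic J is used as both operator and preconditioner matrix
     -snes_mf_operator: Jacobian action by finite differencing of the residual,
                        analytic J used only to build the preconditioner
     -snes_mf         : FD action, no preconditioner matrix at all
     -snes_fd         : assemble the whole Jacobian by FD (testing only, expensive) */
  return 0;
}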