From Stephen.R.Ball at awe.co.uk  Fri Aug  1 05:40:53 2008
From: Stephen.R.Ball at awe.co.uk (Stephen R Ball)
Date: Fri, 1 Aug 2008 11:40:53 +0100
Subject: MatGetRowUpperTriangular() and MatRestoreRowUpperTriangular() via the Fortran interface?
Message-ID: <881BeK025110@awe.co.uk>

Hi

I am trying to use PETSc routines MatGetRowUpperTriangular() and
MatRestoreRowUpperTriangular() via the Fortran interface but am getting
undefined references to these routines at link time. Can you tell me if
these routines are supported via Fortran?

Regards

Stephen

From knepley at gmail.com  Fri Aug  1 07:12:57 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 1 Aug 2008 07:12:57 -0500
Subject: MatGetRowUpperTriangular() and MatRestoreRowUpperTriangular() via the Fortran interface?
In-Reply-To: <881BeK025110@awe.co.uk>
References: <881BeK025110@awe.co.uk>
Message-ID: 

They are not currently. I will turn that on in petsc-dev.

   Matt

On Fri, Aug 1, 2008 at 5:40 AM, Stephen R Ball wrote:
> Hi
>
> I am trying to use PETSc routines MatGetRowUpperTriangular() and
> MatRestoreRowUpperTriangular() via the Fortran interface but am getting
> undefined references to these routines at link time. Can you tell me if
> these routines are supported via Fortran?
>
> Regards
>
> Stephen

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
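For reference, the calling sequence these two routines support looks like
the following from C (a minimal sketch only; A, rstart and rend are
illustrative names, and as I understand it the bracketing calls matter for
SBAIJ-format matrices, where MatGetRow() is otherwise unavailable -- check
the signatures against your PETSc version):

    PetscErrorCode    ierr;
    PetscInt          row, rstart, rend, ncols;
    const PetscInt    *cols;
    const PetscScalar *vals;

    ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
    /* permit MatGetRow() on a symmetric (SBAIJ) matrix; each row then
       returns its upper-triangular entries */
    ierr = MatGetRowUpperTriangular(A);CHKERRQ(ierr);
    for (row = rstart; row < rend; row++) {
      ierr = MatGetRow(A, row, &ncols, &cols, &vals);CHKERRQ(ierr);
      /* ... examine the entries of this row ... */
      ierr = MatRestoreRow(A, row, &ncols, &cols, &vals);CHKERRQ(ierr);
    }
    ierr = MatRestoreRowUpperTriangular(A);CHKERRQ(ierr);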
From hzhang at mcs.anl.gov  Fri Aug  1 09:30:07 2008
From: hzhang at mcs.anl.gov (Hong Zhang)
Date: Fri, 1 Aug 2008 09:30:07 -0500 (CDT)
Subject: Question on using MUMPS in PETSC
In-Reply-To: <4892976C.1000605@gmail.com>
References: <48923016.2000401@gmail.com> <48924304.7070107@gmail.com>
	<4892976C.1000605@gmail.com>
Message-ID: 

Randy,

The PETSc interface does not create much extra memory.
The analysis phase of the MUMPS solver is sequential, which might cause
one process to blow up in memory.
I'm forwarding this email to the mumps developer for their input.

Jean-Yves,
What do you think about the reported problem (see attached below)?

Thanks,

Hong

On Thu, 31 Jul 2008, Randall Mackie wrote:

> Barry,
>
> I don't think it's the matrix - I saw the same behavior when I ran your
> ex2.c program and set m=n=5000.
>
> Randy
>
> Barry Smith wrote:
>>
>> If m and n are the number of rows and columns of the sparse matrix
>> (i.e. it is a tiny problem) then please
>> send us the matrix so we can experiment with it, to petsc-maint at mcs.anl.gov
>>
>> You can send us the matrix by simply running with -ksp_view_binary and
>> sending us the file binaryoutput.
>>
>> Barry
>>
>> On Jul 31, 2008, at 5:56 PM, Randall Mackie wrote:
>>
>>> When m = n = small (like 50), it works fine. When I set m=n=5000, I see
>>> the same thing, where one process on the localhost is taking >4 G of RAM,
>>> while all other processes are taking 137 M.
>>>
>>> Is this the standard behavior for MUMPS? It seems strange to me.
>>>
>>> Randy
>>>
>>> Matthew Knepley wrote:
>>>> Does it work on KSP ex2?
>>>>    Matt
>>>> On Thu, Jul 31, 2008 at 4:35 PM, Randall Mackie wrote:
>>>>> I've compiled PETSc with MUMPS support, and I'm trying to run a small
>>>>> test problem, but I'm having some problems. It seems to begin just
>>>>> fine, but what I notice is that on one process (out of 64), the memory
>>>>> just keeps going up and up and up until it crashes, while on the other
>>>>> processes, the memory usage is reasonable. I'm wondering if anyone
>>>>> might have any idea why? By the way, my command file is like this:
>>>>>
>>>>> -ksp_type preonly
>>>>> -pc_type lu
>>>>> -mat_type aijmumps
>>>>> -mat_mumps_cntl_4 3
>>>>> -mat_mumps_cntl_9 1
>>>>>
>>>>> Randy
>>>>>
>>>>> ps. This happens after the analysis stage and in the factorization
>>>>> stage.

From rlmackie862 at gmail.com  Fri Aug  1 10:21:34 2008
From: rlmackie862 at gmail.com (Randall Mackie)
Date: Fri, 01 Aug 2008 08:21:34 -0700
Subject: Question on using MUMPS in PETSC
In-Reply-To: 
References: <48923016.2000401@gmail.com> <48924304.7070107@gmail.com>
	<4892976C.1000605@gmail.com>
Message-ID: <489329FE.4060700@gmail.com>

Hi Hong,

Thanks for the email - this appears to be happening in the factorization
stage, and one process continues to just eat up memory. I tried running
your ex2.c with m=n=5000, and I saw the same behavior. I was wondering if
there was some setting I was supposed to toggle, but it sounds like this
behavior is not correct.

I have another program from years ago that called the MUMPS routines
directly. I might try that and see what happens.

Randy

Hong Zhang wrote:
>
> Randy,
>
> The PETSc interface does not create much extra memory.
> The analysis phase of the MUMPS solver is sequential, which might cause
> one process to blow up in memory.
> I'm forwarding this email to the mumps developer for their input.
>
> Jean-Yves,
> What do you think about the reported problem (see attached below)?
>
> Thanks,
>
> Hong
>
> On Thu, 31 Jul 2008, Randall Mackie wrote:
>
>> Barry,
>>
>> I don't think it's the matrix - I saw the same behavior when I ran your
>> ex2.c program and set m=n=5000.
>>
>> Randy
>>
>> Barry Smith wrote:
>>>
>>> If m and n are the number of rows and columns of the sparse matrix
>>> (i.e. it is a tiny problem) then please
>>> send us the matrix so we can experiment with it, to petsc-maint at mcs.anl.gov
>>>
>>> You can send us the matrix by simply running with -ksp_view_binary and
>>> sending us the file binaryoutput.
>>>
>>> Barry
>>>
>>> On Jul 31, 2008, at 5:56 PM, Randall Mackie wrote:
>>>
>>>> When m = n = small (like 50), it works fine. When I set m=n=5000, I see
>>>> the same thing, where one process on the localhost is taking >4 G of RAM,
>>>> while all other processes are taking 137 M.
>>>>
>>>> Is this the standard behavior for MUMPS? It seems strange to me.
>>>>
>>>> Randy
>>>>
>>>> Matthew Knepley wrote:
>>>>> Does it work on KSP ex2?
>>>>>    Matt
>>>>> On Thu, Jul 31, 2008 at 4:35 PM, Randall Mackie wrote:
>>>>>> I've compiled PETSc with MUMPS support, and I'm trying to run a small
>>>>>> test problem, but I'm having some problems. It seems to begin just
>>>>>> fine, but what I notice is that on one process (out of 64), the memory
>>>>>> just keeps going up and up and up until it crashes, while on the other
>>>>>> processes, the memory usage is reasonable. I'm wondering if anyone
>>>>>> might have any idea why? By the way, my command file is like this:
>>>>>>
>>>>>> -ksp_type preonly
>>>>>> -pc_type lu
>>>>>> -mat_type aijmumps
>>>>>> -mat_mumps_cntl_4 3
>>>>>> -mat_mumps_cntl_9 1
>>>>>>
>>>>>> Randy
>>>>>>
>>>>>> ps. This happens after the analysis stage and in the factorization
>>>>>> stage.
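For reference, a sketch of what that option file corresponds to when the
solver is configured in code rather than through the options database
(assuming the petsc-2.3-era interface, in which MUMPS is reached through
the "aijmumps" matrix type set before assembly; A, b, x and N are
illustrative names, and the two -mat_mumps options are left to the
options database):

    Mat A;
    KSP ksp;
    PC  pc;

    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
    ierr = MatSetType(A, "aijmumps");CHKERRQ(ierr);   /* must precede assembly */
    /* ... MatSetValues() / MatAssemblyBegin() / MatAssemblyEnd() ... */

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr); /* factor-and-solve only */
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);         /* direct LU, here via MUMPS */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);      /* picks up -mat_mumps_* etc. */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);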
From Hung.V.Nguyen at usace.army.mil  Fri Aug  1 11:18:31 2008
From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS)
Date: Fri, 1 Aug 2008 11:18:31 -0500
Subject: Petsc with hypre
In-Reply-To: <4C3C67D6-9EDC-4CB3-9185-C33FD7D7AAFE@mcs.anl.gov>
References: <44dbb0dd0807300108q1dc0ad76g3295de107f12c7cf@mail.gmail.com>
	<925346A443D4E340BEB20248BAFCDBDF06A5F685@CFEVS1-IP.americas.cray.com>
	<4C3C67D6-9EDC-4CB3-9185-C33FD7D7AAFE@mcs.anl.gov>
Message-ID: 

I am able to install hypre, but fail to install petsc with including hypre.

1. I have been trying to get PETSc to download and install its own version
of HYPRE -- and failed with the error

*********************************************************************************
         UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
---------------------------------------------------------------------------------
Error running configure on HYPRE: Could not execute 'cd
/work/hvnguyen/work_project/petsc-2.3.3-p3/externalpackages/hypre-2.0.0/src;make
distclean;./configure
--prefix=/work/hvnguyen/work_project/petsc-2.3.3-p3/externalpackages/hypre-2.0.0/CrayXT3-O
CC="cc -fPIC -fastsse -O3 -Munroll=c:4 -tp k8-64 " CXX="CC -O -fPIC "
F77="ftn -fPIC -fastsse -O3 -Munroll=c:4 -tp k8-64 "
--with-MPI-include="/opt/xt-mpt/default/mpich2-64/P2/include"
--with-MPI-lib-dirs="" --with-MPI-libs="" --with-blas-libs=
--with-blas-lib-dir= --with-lapack-libs= --with-lapack-lib-dir=
--with-blas=yes --with-lapack=yes --without-babel --without-mli
--without-fei --without-superlu':
Dist-cleaning utilities ...
make[1]: Entering directory
`/work/hvnguyen/work_project/petsc-2.3.3-p3/externalpackages/hypre-2.0.0/src/utilities'
rm -f *.o libHYPRE* f2c.h *blas.h *lapack.h

2. I tried to install hypre and it works (I tested their example codes).
Then, I try to install petsc with the config options below, but with no
success; see the error.

hvnguyen:sapphire01% config/configure.py --with-batch=1 --with-mpi-shared=0
--with-memcmp-ok --sizeof_char=1 --sizeof_void_p=8 --sizeof_short=2
--sizeof_int=4 --sizeof_long=8 --sizeof_long_long=8 --sizeof_float=4
--sizeof_double=8 --bits_per_byte=8 --sizeof_MPI_Comm=4 --sizeof_MPI_Fint=4
--with-fc=ftn --with-cc=cc --with-cxx=CC -PETSC_ARCH=CrayXT3-O
--with-debugging=0 --with-error-checking=0 COPTFLAGS="-fastsse -O3
-Munroll=c:4 -tp k8-64" FOPTFLAGS="-fastsse -O3 -Munroll=c:4 -tp k8-64"
--with-x=0 --with-mpi-dir=/opt/xt-mpt/default/mpich2-64/P2 --with-shared=0
--with-hypre-include=/work/hvnguyen/work_project/hypre-2.0.0/small/fastsse/include
--with-hypre-lib=[/work/hvnguyen/work_project/hypre-2.0.0/small/fastsse/lib/libHYPRE.a,-lHYPRE_DistributedMatrix,-lHYPRE_DistributedMatrixPilutSolver,-lHYPRE_Euclid,-lHYPRE_FEI,-lHYPRE_IJ_mv,-lHYPRE_LSI,-lHYPRE_MatrixMatrix,-lHYPRE_ParaSails,-lHYPRE_ParaSails,-lHYPRE_mli,-lHYPRE_multivector,-lHYPRE_parcsr_block_mv,-lHYPRE_parcsr_ls,-lHYPRE_parcsr_mv,-lHYPRE_seq_mv,-lHYPRE_sstruct_ls,-lHYPRE_sstruct_mv,-lHYPRE_sstruct_mv,-lHYPRE_utilities

-- error:
TESTING: check from
config.libraries(/work/hvnguyen/work_project/petsc-2.3.3-p3/python/BuildSystem/config/libraries.py:108)
*********************************************************************************
         UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
---------------------------------------------------------------------------------
--with-hypre-lib=['[/work/hvnguyen/work_project/hypre-2.0.0/small/fastsse/lib/libHYPRE.a,-lHYPRE_DistributedMatrix,-lHYPRE_DistributedMatrixPilutSolver,-lHYPRE_Euclid,-lHYPRE_FEI,-lHYPRE_IJ_mv,-lHYPRE_LSI,-lHYPRE_MatrixMatrix,-lHYPRE_ParaSails,-lHYPRE_ParaSails,-lHYPRE_mli,-lHYPRE_multivector,-lHYPRE_parcsr_block_mv,-lHYPRE_parcsr_ls,-lHYPRE_parcsr_mv,-lHYPRE_seq_mv,-lHYPRE_sstruct_ls,-lHYPRE_sstruct_mv,-lHYPRE_sstruct_mv,-lHYPRE_utilities'] and
--with-hypre-include=/work/hvnguyen/work_project/hypre-2.0.0/small/fastsse/include
did not work
*********************************************************************************

Thanks for your help.

-Hung

-----Original Message-----
From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov]
On Behalf Of Barry Smith
Sent: Wednesday, July 30, 2008 10:28 AM
To: petsc-users at mcs.anl.gov
Cc: Keita Teranishi
Subject: Re: Petsc with hypre

   If PETSc is built from source then it, by default, does not include
hypre and other external packages. You should install PETSc yourself and
include the additional config/configure.py option --download-hypre

   Barry

On Jul 30, 2008, at 9:18 AM, Nguyen, Hung V ERDC-ITL-MS wrote:
>
> Hello Keita,
>
>> Did this error happen with Cray's PETSc?
>
> I am running my code in Cray XT3 (at ERDC). Please find the current
> and available modules in the system below.
>
> I think I am not using Cray's PETSC since I am using the one from the
> location /usr/local/usp/PETtools/CE/MATH/petsc-2.3.3-p3.
>
> How do I use Cray's PETSC? Which module do I need to load?
> > Thanks, > > -Hung > > --- current module: > hvnguyen:sapphire12% module list > Currently Loaded Modulefiles: > 1) modules/3.1.6 5) xt-libsci/1.5.52 9) xt-pbs/5.3.5 > 13) xt-catamount/1.5.52 17) Base-opts/1.5.52 > 2) MySQL/4.0.27 6) xt-mpt/1.5.52 10) xt-service/ > 1.5.52 > 14) xt-boot/1.5.52 > 3) acml/3.6 7) xt-pe/1.5.52 11) xt-libc/1.5.52 > 15) xt-crms/1.5.52 > 4) pgi/7.0.6 8) PrgEnv-pgi/1.5.52 12) xt-os/1.5.52 > 16) xt-lustre-ss/1.5.52 > hvnguyen:sapphire12% module avai > > --------------------------------------------------------------- > /opt/modulefiles > ---------------------------------------------------------------Base- > opts/1.4. > 10 craypat/3.2.3 xt-catamount/1.4.43 > xt-mpt/1.4.10 > Base-opts/1.4.38 dwarf/6.10.0 xt- > catamount/1.5.16 > xt-mpt/1.4.38 > Base-opts/1.4.43 elf/0.8.6(default) xt- > catamount/1.5.27 > xt-mpt/1.4.43 > Base-opts/1.5.16 fftw/2.1.5(default) xt- > catamount/1.5.39 > xt-mpt/1.5.16 > Base-opts/1.5.27 fftw/3.1.1 > xt-catamount/1.5.39.nic10 xt-mpt/1.5.27 > Base-opts/1.5.39 gcc/3.2.3 xt- > catamount/1.5.52 > xt-mpt/1.5.39 > Base-opts/1.5.52(default) gcc/4.1.1(default) > xt-craypat/4.0(default) xt-mpt/1.5.52 > MySQL/4.0.27 glib/2.4.2 xt-crms/ > 1.4.10 > xt-mpt-gnu/1.4.10 > PrgEnv/1.4.10 gmalloc xt-crms/ > 1.4.38 > xt-mpt-gnu/1.4.38 > PrgEnv/1.4.38 gnet/2.0.5 xt-crms/ > 1.4.43 > xt-mpt-gnu/1.4.43 > PrgEnv/1.4.43 iobuf/1.0.2 xt-crms/ > 1.5.16 > xt-mpt-gnu/1.5.16 > PrgEnv-gnu/1.4.10 iobuf/1.0.5(default) xt-crms/ > 1.5.27 > xt-mpt-gnu/1.5.27 > PrgEnv-gnu/1.4.38 iobuf/1.0.6 xt-crms/ > 1.5.39 > xt-mpt-gnu/1.5.39 > PrgEnv-gnu/1.4.43 libscifft-pgi/1.0.0(default) xt-crms/ > 1.5.52 > xt-mpt-gnu/1.5.52 > PrgEnv-gnu/1.5.16 modules/3.1.6 xt-libc/ > 1.4.10 > xt-mpt-pathscale/1.5.39 > PrgEnv-gnu/1.5.27 papi/3.2.1(default) xt-libc/ > 1.4.38 > xt-mpt-pathscale/1.5.52 > PrgEnv-gnu/1.5.39 papi/3.5.0C.1 xt-libc/ > 1.4.43 > xt-os/1.4.10 > PrgEnv-gnu/1.5.52(default) pathscale/2.5 xt-libc/ > 1.5.16 > xt-os/1.4.38 > PrgEnv-pathscale/1.5.39 pathscale/3.0(default) xt-libc/ > 1.5.27 > xt-os/1.4.43 > PrgEnv-pathscale/1.5.52 pgi/6.2.5 xt-libc/ > 1.5.39 > xt-os/1.5.16 > PrgEnv-pgi/1.4.10 pgi/7.0.2 xt-libc/ > 1.5.52 > xt-os/1.5.27 > PrgEnv-pgi/1.4.38 pgi/7.0.3 xt-libsci/ > 1.4.10 > xt-os/1.5.39 > PrgEnv-pgi/1.4.43 pgi/7.0.4 xt-libsci/ > 1.4.38 > xt-os/1.5.52 > PrgEnv-pgi/1.5.16 pgi/7.0.5 xt-libsci/ > 1.4.43 > xt-papi/3.5.99a(default) > PrgEnv-pgi/1.5.27 pgi/7.0.6(default) xt-libsci/ > 1.5.16 > xt-pbs/5.3.5 > PrgEnv-pgi/1.5.39 pgi/7.0.7 xt-libsci/ > 1.5.27 > xt-pe/1.4.38 > PrgEnv-pgi/1.5.52(default) pgi/7.1.2 xt-libsci/ > 1.5.39 > xt-pe/1.4.43 > acml/3.0 pgi/7.1.4 > xt-libsci/1.5.52(default) xt-pe/1.5.16 > acml/3.6(default) pgi32/6.1.1 xt-libsci/ > 10.2.0 > xt-pe/1.5.27 > acml/4.0.1a pkg-config/0.15.0 xt-lsfhpc/ > 6.1 > xt-pe/1.5.39 > acml-gnu/3.0 xt-boot/1.4.10 xt-lustre- > ss/1.4.10 > xt-pe/1.5.52 > acml-large_arrays/3.0 xt-boot/1.4.38 xt-lustre- > ss/1.4.38 > xt-service/1.4.10 > acml-mp/3.0 xt-boot/1.4.43 xt-lustre- > ss/1.4.43 > xt-service/1.4.38 > apprentice2/3.0.1(default) xt-boot/1.5.16 xt-lustre- > ss/1.5.16 > xt-service/1.4.43 > apprentice2/3.2.3 xt-boot/1.5.27 xt-lustre- > ss/1.5.27 > xt-service/1.5.16 > apprentice2/4.0 xt-boot/1.5.39 xt-lustre- > ss/1.5.39 > xt-service/1.5.27 > craypat/3.0.1 xt-boot/1.5.52 > xt-lustre-ss/1.5.39.bogl1 xt-service/1.5.39 > craypat/3.1.1 xt-catamount/1.4.10 > xt-lustre-ss/1.5.39.nic10 xt-service/1.5.52 > craypat/3.1.2(default) xt-catamount/1.4.38 xt-lustre- > ss/1.5.52 > > > > -----Original Message----- > From: Keita Teranishi [mailto:keita at 
cray.com] > Sent: Wednesday, July 30, 2008 9:09 AM > To: petsc-users at mcs.anl.gov > Cc: Nguyen, Hung V ERDC-ITL-MS > Subject: RE: Petsc with hypre > > Hi Hung, > > Did this error happen with Cray's PETSc? Our PETSc supports both > hypre and ParMetis. > Please let me know more details with your error (such as module > environment). > > > Thanks, > > ================================ > Keita Teranishi > Math Software Group > Cray, Inc. > keita at cray.com > ================================ > > > -----Original Message----- > From: owner-petsc-users at mcs.anl.gov > [mailto:owner-petsc-users at mcs.anl.gov > ] On > Behalf Of Nguyen, Hung V ERDC-ITL-MS > Sent: Wednesday, July 30, 2008 8:57 AM > To: petsc-users at mcs.anl.gov > Subject: Petsc with hypre > > > Dear, > > I tried to run PETSC with hypre on CrayXT3 system and got the error > message below. I didn't install PETSC so I don't know whether it was > installed with hypre option or not. So if I need to install PETSC with > using hypre option, then please send me the instruction for > installation (and also using parMeTiS). > > I appreciate your help, > > Regards, > > -hung > > hvnguyen:sapphire09% yod -np 16 ./fw -ksp_type richardson -pc_type > hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_max_iter 4 > -pc_hypre_boomeramg_tol 1.0e-15 > > > [10]PETSC ERROR: [9]PETSC ERROR: [13]PETSC ERROR: [11]PETSC ERROR: > [3]PETSC > ERROR: [2]PETSC ERROR: [8]PETSC ERROR: [7]PETSC ERROR: [14]PETSC > ERROR: > [0]PETSC ERROR: [5]PETSC ERROR: [12]PETSC ERROR: [15]PETSC ERROR: > [6]PETSC > ERROR: [1]PETSC ERROR: [4]PETSC ERROR: --------------------- Error > Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > [10]PETSC ERROR: [9]PETSC ERROR: [13]PETSC ERROR: [11]PETSC ERROR: > [3]PETSC > ERROR: [2]PETSC ERROR: [8]PETSC ERROR: [7]PETSC ERROR: [14]PETSC > ERROR: > [0]PETSC ERROR: [5]PETSC ERROR: [12]PETSC ERROR: [15]PETSC ERROR: > [6]PETSC > ERROR: [1]PETSC ERROR: [4]PETSC ERROR: Unknown type. Check for miss- > spelling > or missing external package needed for type! > Unknown type. Check for miss-spelling or missing external package > needed for > type! > Unknown type. Check for miss-spelling or missing external package > needed for > type! > > Unknown type. Check for miss-spelling or missing external package > needed for > type! 
> [10]PETSC ERROR: [9]PETSC ERROR: [13]PETSC ERROR: [11]PETSC ERROR: > [3]PETSC > ERROR: [2]PETSC ERROR: [8]PETSC ERROR: [7]PETSC ERROR: [14]PETSC > ERROR: > [0]PETSC ERROR: [5]PETSC ERROR: [12]PETSC ERROR: [15]PETSC ERROR: > [6]PETSC > ERROR: [1]PETSC ERROR: [4]PETSC ERROR: Unable to find requested PC > type > hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > [10]PETSC ERROR: [9]PETSC ERROR: [13]PETSC ERROR: [11]PETSC ERROR: > [3]PETSC > ERROR: [2]PETSC ERROR: [8]PETSC ERROR: [7]PETSC ERROR: [14]PETSC > ERROR: > [0]PETSC ERROR: [5]PETSC ERROR: [12]PETSC ERROR: [15]PETSC ERROR: > [6]PETSC > ERROR: [1]PETSC ERROR: [4]PETSC ERROR: > ------------------------------------------------------------------------ > From knepley at gmail.com Fri Aug 1 11:38:55 2008 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 1 Aug 2008 11:38:55 -0500 Subject: Petsc with hypre In-Reply-To: References: <44dbb0dd0807300108q1dc0ad76g3295de107f12c7cf@mail.gmail.com> <925346A443D4E340BEB20248BAFCDBDF06A5F685@CFEVS1-IP.americas.cray.com> <4C3C67D6-9EDC-4CB3-9185-C33FD7D7AAFE@mcs.anl.gov> Message-ID: On Fri, Aug 1, 2008 at 11:18 AM, Nguyen, Hung V ERDC-ITL-MS wrote: > > I am able to install hypre, but fail to install petsc with including hypre. If you do not send configure.log, we have no hope of determining what went wrong. Thus I suggest mailing petsc-maint at mcs.anl.gov with the log Matt > 1. I have been trying to get PETSc to download and install its own version of > HYPRE -- and failed with the error > > > ***************************************************************************** > **** > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ----------------------------------------------------------------------------- > ---------- > Error running configure on HYPRE: Could not execute 'cd > /work/hvnguyen/work_project/petsc-2.3.3-p3/externalpackages/hypre-2.0.0/src;m > ake distclean;./configure > --prefix=/work/hvnguyen/work_project/petsc-2.3.3-p3/externalpackages/hypre-2. > 0.0/CrayXT3-O CC="cc -fPIC -fastsse -O3 -Munroll=c:4 -tp k8-64 " CXX="CC -O > -fPIC " F77="ftn -fPIC -fastsse -O3 -Munroll=c:4 -tp k8-64 " > --with-MPI-include="/opt/xt-mpt/default/mpich2-64/P2/include" > --with-MPI-lib-dirs="" --with-MPI-libs="" --with-blas-libs= > --with-blas-lib-dir= --with-lapack-libs= --with-lapack-lib-dir= > --with-blas=yes --with-lapack=yes --without-babel --without-mli --without-fei > --without-superlu': > Dist-cleaning utilities ... > make[1]: Entering directory > `/work/hvnguyen/work_project/petsc-2.3.3-p3/externalpackages/hypre-2.0.0/src/ > utilities' > rm -f *.o libHYPRE* f2c.h *blas.h *lapack.h > > > 2. I tried to install hypre and it works (I tested their example codes). 
The, > I try to install petsc with config option below, but -no success, see error > > hvnguyen:sapphire01% config/configure.py --with-batch=1 --with-mpi-shared=0 > --with-memcmp-ok --sizeof_char=1 --sizeof_void_p=8 --sizeof_short=2 > --sizeof_int=4 --sizeof_long=8 --sizeof_long_long=8 --sizeof_float=4 > --sizeof_double=8 --bits_per_byte=8 --sizeof_MPI_Comm=4 --sizeof_MPI_Fint=4 > --with-fc=ftn --with-cc=cc --with-cxx=CC -PETSC_ARCH=CrayXT3-O > --with-debugging=0 --with-error-checking=0 COPTFLAGS="-fastsse -O3 > -Munroll=c:4 -tp k8-64" FOPTFLAGS="-fastsse -O3 -Munroll=c:4 -tp k8-64" > --with-x=0 --with-mpi-dir=/opt/xt-mpt/default/mpich2-64/P2 --with-shared=0 > --with-hypre-include=/work/hvnguyen/work_project/hypre-2.0.0/small/fastsse/in > clude > --with-hypre-lib=[/work/hvnguyen/work_project/hypre-2.0.0/small/fastsse/lib/l > ibHYPRE.a,-lHYPRE_DistributedMatrix,-lHYPRE_DistributedMatrixPilutSolver,-lHY > PRE_Euclid,-lHYPRE_FEI,-lHYPRE_IJ_mv,-lHYPRE_LSI,-lHYPRE_MatrixMatrix,-lHYPRE > _ParaSails,-lHYPRE_ParaSails,-lHYPRE_mli,-lHYPRE_multivector,-lHYPRE_parcsr_b > lock_mv,-lHYPRE_parcsr_ls,-lHYPRE_parcsr_mv,-lHYPRE_seq_mv,-lHYPRE_sstruct_ls > ,-lHYPRE_sstruct_mv,-lHYPRE_sstruct_mv,-lHYPRE_utilities > > > -- error: > TESTING: check from > config.libraries(/work/hvnguyen/work_project/petsc-2.3.3-p3/python/BuildSyste > m/config/libraries.py:108) > ***************************************************************************** > **** > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ----------------------------------------------------------------------------- > ---------- > --with-hypre-lib=['[/work/hvnguyen/work_project/hypre-2.0.0/small/fastsse/lib > /libHYPRE.a,-lHYPRE_DistributedMatrix,-lHYPRE_DistributedMatrixPilutSolver,-l > HYPRE_Euclid,-lHYPRE_FEI,-lHYPRE_IJ_mv,-lHYPRE_LSI,-lHYPRE_MatrixMatrix,-lHYP > RE_ParaSails,-lHYPRE_ParaSails,-lHYPRE_mli,-lHYPRE_multivector,-lHYPRE_parcsr > _block_mv,-lHYPRE_parcsr_ls,-lHYPRE_parcsr_mv,-lHYPRE_seq_mv,-lHYPRE_sstruct_ > ls,-lHYPRE_sstruct_mv,-lHYPRE_sstruct_mv,-lHYPRE_utilities'] and > --with-hypre-include=/work/hvnguyen/work_project/hypre-2.0.0/small/fastsse/in > clude did not work > ***************************************************************************** > **** > > > Thanks for your help. > > -Hung -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From adrian at cray.com Fri Aug 1 13:29:02 2008 From: adrian at cray.com (Adrian Tate) Date: Fri, 1 Aug 2008 13:29:02 -0500 Subject: Petsc with hypre Message-ID: <925346A443D4E340BEB20248BAFCDBDF6CB5BA@CFEVS1-IP.americas.cray.com> Dear Hung We have asked your site staff to install the Cray Petsc package on sapphire, so you also now have the option of using that. If you have problems using that package please email Keita Teranishi and myself. Adrian Tate --- Math SW Group Lead Cray Inc. (206) 349 5868 ----- Original Message ----- From: owner-petsc-users at mcs.anl.gov To: petsc-users at mcs.anl.gov Cc: Keita Teranishi Sent: Fri Aug 01 11:18:31 2008 Subject: RE: Petsc with hypre I am able to install hypre, but fail to install petsc with including hypre. 1. 
I have been trying to get PETSc to download and install its own version of HYPRE -- and failed with the error ***************************************************************************** **** UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ----------------------------------------------------------------------------- ---------- Error running configure on HYPRE: Could not execute 'cd /work/hvnguyen/work_project/petsc-2.3.3-p3/externalpackages/hypre-2.0.0/src;m ake distclean;./configure --prefix=/work/hvnguyen/work_project/petsc-2.3.3-p3/externalpackages/hypre-2. 0.0/CrayXT3-O CC="cc -fPIC -fastsse -O3 -Munroll=c:4 -tp k8-64 " CXX="CC -O -fPIC " F77="ftn -fPIC -fastsse -O3 -Munroll=c:4 -tp k8-64 " --with-MPI-include="/opt/xt-mpt/default/mpich2-64/P2/include" --with-MPI-lib-dirs="" --with-MPI-libs="" --with-blas-libs= --with-blas-lib-dir= --with-lapack-libs= --with-lapack-lib-dir= --with-blas=yes --with-lapack=yes --without-babel --without-mli --without-fei --without-superlu': Dist-cleaning utilities ... make[1]: Entering directory `/work/hvnguyen/work_project/petsc-2.3.3-p3/externalpackages/hypre-2.0.0/src/ utilities' rm -f *.o libHYPRE* f2c.h *blas.h *lapack.h 2. I tried to install hypre and it works (I tested their example codes). The, I try to install petsc with config option below, but -no success, see error hvnguyen:sapphire01% config/configure.py --with-batch=1 --with-mpi-shared=0 --with-memcmp-ok --sizeof_char=1 --sizeof_void_p=8 --sizeof_short=2 --sizeof_int=4 --sizeof_long=8 --sizeof_long_long=8 --sizeof_float=4 --sizeof_double=8 --bits_per_byte=8 --sizeof_MPI_Comm=4 --sizeof_MPI_Fint=4 --with-fc=ftn --with-cc=cc --with-cxx=CC -PETSC_ARCH=CrayXT3-O --with-debugging=0 --with-error-checking=0 COPTFLAGS="-fastsse -O3 -Munroll=c:4 -tp k8-64" FOPTFLAGS="-fastsse -O3 -Munroll=c:4 -tp k8-64" --with-x=0 --with-mpi-dir=/opt/xt-mpt/default/mpich2-64/P2 --with-shared=0 --with-hypre-include=/work/hvnguyen/work_project/hypre-2.0.0/small/fastsse/in clude --with-hypre-lib=[/work/hvnguyen/work_project/hypre-2.0.0/small/fastsse/lib/l ibHYPRE.a,-lHYPRE_DistributedMatrix,-lHYPRE_DistributedMatrixPilutSolver,-lHY PRE_Euclid,-lHYPRE_FEI,-lHYPRE_IJ_mv,-lHYPRE_LSI,-lHYPRE_MatrixMatrix,-lHYPRE _ParaSails,-lHYPRE_ParaSails,-lHYPRE_mli,-lHYPRE_multivector,-lHYPRE_parcsr_b lock_mv,-lHYPRE_parcsr_ls,-lHYPRE_parcsr_mv,-lHYPRE_seq_mv,-lHYPRE_sstruct_ls ,-lHYPRE_sstruct_mv,-lHYPRE_sstruct_mv,-lHYPRE_utilities -- error: TESTING: check from config.libraries(/work/hvnguyen/work_project/petsc-2.3.3-p3/python/BuildSyste m/config/libraries.py:108) ***************************************************************************** **** UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ----------------------------------------------------------------------------- ---------- --with-hypre-lib=['[/work/hvnguyen/work_project/hypre-2.0.0/small/fastsse/lib /libHYPRE.a,-lHYPRE_DistributedMatrix,-lHYPRE_DistributedMatrixPilutSolver,-l HYPRE_Euclid,-lHYPRE_FEI,-lHYPRE_IJ_mv,-lHYPRE_LSI,-lHYPRE_MatrixMatrix,-lHYP RE_ParaSails,-lHYPRE_ParaSails,-lHYPRE_mli,-lHYPRE_multivector,-lHYPRE_parcsr _block_mv,-lHYPRE_parcsr_ls,-lHYPRE_parcsr_mv,-lHYPRE_seq_mv,-lHYPRE_sstruct_ ls,-lHYPRE_sstruct_mv,-lHYPRE_sstruct_mv,-lHYPRE_utilities'] and --with-hypre-include=/work/hvnguyen/work_project/hypre-2.0.0/small/fastsse/in clude did not work ***************************************************************************** **** Thanks for your help. 
-Hung -----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Barry Smith Sent: Wednesday, July 30, 2008 10:28 AM To: petsc-users at mcs.anl.gov Cc: Keita Teranishi Subject: Re: Petsc with hypre If PETSc is built from source then it, by default, does not include hypre and other external packages. You should install PETSc yourself and include the additional config/configure.py option --download-hypre Barry On Jul 30, 2008, at 9:18 AM, Nguyen, Hung V ERDC-ITL-MS wrote: > > Hello Keita, > >> Did this error happen with Cray's PETSc? > > I am running my code in Cray XT3 (at ERDC). Please find the current > and available modules in the system below. > > I think I amn't using Cray's PETSC since I am using the one from the > location /usr/local/usp/PETtools/CE/MATH/petsc-2.3.3-p3. > > How do I use Cray's PETSC? Which module I need to load? > > Thanks, > > -Hung > > --- current module: > hvnguyen:sapphire12% module list > Currently Loaded Modulefiles: > 1) modules/3.1.6 5) xt-libsci/1.5.52 9) xt-pbs/5.3.5 > 13) xt-catamount/1.5.52 17) Base-opts/1.5.52 > 2) MySQL/4.0.27 6) xt-mpt/1.5.52 10) xt-service/ > 1.5.52 > 14) xt-boot/1.5.52 > 3) acml/3.6 7) xt-pe/1.5.52 11) xt-libc/1.5.52 > 15) xt-crms/1.5.52 > 4) pgi/7.0.6 8) PrgEnv-pgi/1.5.52 12) xt-os/1.5.52 > 16) xt-lustre-ss/1.5.52 > hvnguyen:sapphire12% module avai > > --------------------------------------------------------------- > /opt/modulefiles > ---------------------------------------------------------------Base- > opts/1.4. > 10 craypat/3.2.3 xt-catamount/1.4.43 > xt-mpt/1.4.10 > Base-opts/1.4.38 dwarf/6.10.0 xt- > catamount/1.5.16 > xt-mpt/1.4.38 > Base-opts/1.4.43 elf/0.8.6(default) xt- > catamount/1.5.27 > xt-mpt/1.4.43 > Base-opts/1.5.16 fftw/2.1.5(default) xt- > catamount/1.5.39 > xt-mpt/1.5.16 > Base-opts/1.5.27 fftw/3.1.1 > xt-catamount/1.5.39.nic10 xt-mpt/1.5.27 > Base-opts/1.5.39 gcc/3.2.3 xt- > catamount/1.5.52 > xt-mpt/1.5.39 > Base-opts/1.5.52(default) gcc/4.1.1(default) > xt-craypat/4.0(default) xt-mpt/1.5.52 > MySQL/4.0.27 glib/2.4.2 xt-crms/ > 1.4.10 > xt-mpt-gnu/1.4.10 > PrgEnv/1.4.10 gmalloc xt-crms/ > 1.4.38 > xt-mpt-gnu/1.4.38 > PrgEnv/1.4.38 gnet/2.0.5 xt-crms/ > 1.4.43 > xt-mpt-gnu/1.4.43 > PrgEnv/1.4.43 iobuf/1.0.2 xt-crms/ > 1.5.16 > xt-mpt-gnu/1.5.16 > PrgEnv-gnu/1.4.10 iobuf/1.0.5(default) xt-crms/ > 1.5.27 > xt-mpt-gnu/1.5.27 > PrgEnv-gnu/1.4.38 iobuf/1.0.6 xt-crms/ > 1.5.39 > xt-mpt-gnu/1.5.39 > PrgEnv-gnu/1.4.43 libscifft-pgi/1.0.0(default) xt-crms/ > 1.5.52 > xt-mpt-gnu/1.5.52 > PrgEnv-gnu/1.5.16 modules/3.1.6 xt-libc/ > 1.4.10 > xt-mpt-pathscale/1.5.39 > PrgEnv-gnu/1.5.27 papi/3.2.1(default) xt-libc/ > 1.4.38 > xt-mpt-pathscale/1.5.52 > PrgEnv-gnu/1.5.39 papi/3.5.0C.1 xt-libc/ > 1.4.43 > xt-os/1.4.10 > PrgEnv-gnu/1.5.52(default) pathscale/2.5 xt-libc/ > 1.5.16 > xt-os/1.4.38 > PrgEnv-pathscale/1.5.39 pathscale/3.0(default) xt-libc/ > 1.5.27 > xt-os/1.4.43 > PrgEnv-pathscale/1.5.52 pgi/6.2.5 xt-libc/ > 1.5.39 > xt-os/1.5.16 > PrgEnv-pgi/1.4.10 pgi/7.0.2 xt-libc/ > 1.5.52 > xt-os/1.5.27 > PrgEnv-pgi/1.4.38 pgi/7.0.3 xt-libsci/ > 1.4.10 > xt-os/1.5.39 > PrgEnv-pgi/1.4.43 pgi/7.0.4 xt-libsci/ > 1.4.38 > xt-os/1.5.52 > PrgEnv-pgi/1.5.16 pgi/7.0.5 xt-libsci/ > 1.4.43 > xt-papi/3.5.99a(default) > PrgEnv-pgi/1.5.27 pgi/7.0.6(default) xt-libsci/ > 1.5.16 > xt-pbs/5.3.5 > PrgEnv-pgi/1.5.39 pgi/7.0.7 xt-libsci/ > 1.5.27 > xt-pe/1.4.38 > PrgEnv-pgi/1.5.52(default) pgi/7.1.2 xt-libsci/ > 1.5.39 > xt-pe/1.4.43 > acml/3.0 pgi/7.1.4 > xt-libsci/1.5.52(default) 
xt-pe/1.5.16 > acml/3.6(default) pgi32/6.1.1 xt-libsci/ > 10.2.0 > xt-pe/1.5.27 > acml/4.0.1a pkg-config/0.15.0 xt-lsfhpc/ > 6.1 > xt-pe/1.5.39 > acml-gnu/3.0 xt-boot/1.4.10 xt-lustre- > ss/1.4.10 > xt-pe/1.5.52 > acml-large_arrays/3.0 xt-boot/1.4.38 xt-lustre- > ss/1.4.38 > xt-service/1.4.10 > acml-mp/3.0 xt-boot/1.4.43 xt-lustre- > ss/1.4.43 > xt-service/1.4.38 > apprentice2/3.0.1(default) xt-boot/1.5.16 xt-lustre- > ss/1.5.16 > xt-service/1.4.43 > apprentice2/3.2.3 xt-boot/1.5.27 xt-lustre- > ss/1.5.27 > xt-service/1.5.16 > apprentice2/4.0 xt-boot/1.5.39 xt-lustre- > ss/1.5.39 > xt-service/1.5.27 > craypat/3.0.1 xt-boot/1.5.52 > xt-lustre-ss/1.5.39.bogl1 xt-service/1.5.39 > craypat/3.1.1 xt-catamount/1.4.10 > xt-lustre-ss/1.5.39.nic10 xt-service/1.5.52 > craypat/3.1.2(default) xt-catamount/1.4.38 xt-lustre- > ss/1.5.52 > > > > -----Original Message----- > From: Keita Teranishi [mailto:keita at cray.com] > Sent: Wednesday, July 30, 2008 9:09 AM > To: petsc-users at mcs.anl.gov > Cc: Nguyen, Hung V ERDC-ITL-MS > Subject: RE: Petsc with hypre > > Hi Hung, > > Did this error happen with Cray's PETSc? Our PETSc supports both > hypre and ParMetis. > Please let me know more details with your error (such as module > environment). > > > Thanks, > > ================================ > Keita Teranishi > Math Software Group > Cray, Inc. > keita at cray.com > ================================ > > > -----Original Message----- > From: owner-petsc-users at mcs.anl.gov > [mailto:owner-petsc-users at mcs.anl.gov > ] On > Behalf Of Nguyen, Hung V ERDC-ITL-MS > Sent: Wednesday, July 30, 2008 8:57 AM > To: petsc-users at mcs.anl.gov > Subject: Petsc with hypre > > > Dear, > > I tried to run PETSC with hypre on CrayXT3 system and got the error > message below. I didn't install PETSC so I don't know whether it was > installed with hypre option or not. So if I need to install PETSC with > using hypre option, then please send me the instruction for > installation (and also using parMeTiS). 
> > I appreciate your help, > > Regards, > > -hung > > hvnguyen:sapphire09% yod -np 16 ./fw -ksp_type richardson -pc_type > hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_max_iter 4 > -pc_hypre_boomeramg_tol 1.0e-15 > > > [10]PETSC ERROR: [9]PETSC ERROR: [13]PETSC ERROR: [11]PETSC ERROR: > [3]PETSC > ERROR: [2]PETSC ERROR: [8]PETSC ERROR: [7]PETSC ERROR: [14]PETSC > ERROR: > [0]PETSC ERROR: [5]PETSC ERROR: [12]PETSC ERROR: [15]PETSC ERROR: > [6]PETSC > ERROR: [1]PETSC ERROR: [4]PETSC ERROR: --------------------- Error > Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > [10]PETSC ERROR: [9]PETSC ERROR: [13]PETSC ERROR: [11]PETSC ERROR: > [3]PETSC > ERROR: [2]PETSC ERROR: [8]PETSC ERROR: [7]PETSC ERROR: [14]PETSC > ERROR: > [0]PETSC ERROR: [5]PETSC ERROR: [12]PETSC ERROR: [15]PETSC ERROR: > [6]PETSC > ERROR: [1]PETSC ERROR: [4]PETSC ERROR: Unknown type. Check for miss- > spelling > or missing external package needed for type! > Unknown type. Check for miss-spelling or missing external package > needed for > type! > Unknown type. Check for miss-spelling or missing external package > needed for > type! > > Unknown type. Check for miss-spelling or missing external package > needed for > type! > [10]PETSC ERROR: [9]PETSC ERROR: [13]PETSC ERROR: [11]PETSC ERROR: > [3]PETSC > ERROR: [2]PETSC ERROR: [8]PETSC ERROR: [7]PETSC ERROR: [14]PETSC > ERROR: > [0]PETSC ERROR: [5]PETSC ERROR: [12]PETSC ERROR: [15]PETSC ERROR: > [6]PETSC > ERROR: [1]PETSC ERROR: [4]PETSC ERROR: Unable to find requested PC > type > hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! > Unable to find requested PC type hypre! 
> [10]PETSC ERROR: [9]PETSC ERROR: [13]PETSC ERROR: [11]PETSC ERROR: [3]PETSC
> ERROR: [2]PETSC ERROR: [8]PETSC ERROR: [7]PETSC ERROR: [14]PETSC ERROR:
> [0]PETSC ERROR: [5]PETSC ERROR: [12]PETSC ERROR: [15]PETSC ERROR: [6]PETSC
> ERROR: [1]PETSC ERROR: [4]PETSC ERROR:
> ------------------------------------------------------------------------

From bsmith at mcs.anl.gov  Fri Aug  1 15:26:22 2008
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Fri, 1 Aug 2008 15:26:22 -0500
Subject: a new release soon?
In-Reply-To: <48af9a710807310408m1ac365a6kb6760b91efbfbe31@mail.gmail.com>
References: <48af9a710807310408m1ac365a6kb6760b91efbfbe31@mail.gmail.com>
Message-ID: 

   You should start tracking the head.

   Barry

On Jul 31, 2008, at 6:08 AM, Chen Shen wrote:

> Hi all,
>
> I remember reading in a thread in the list, talking about an upcoming
> release. Is it coming soon? If that is the case, we shall start tracking
> the tip of the petsc-devel tree.
> Thank you very much.
>
> regards,
> shenchen

From Jean-Yves.L.Excellent at ens-lyon.fr  Fri Aug  1 15:51:03 2008
From: Jean-Yves.L.Excellent at ens-lyon.fr (Jean-Yves L Excellent)
Date: Fri, 01 Aug 2008 22:51:03 +0200
Subject: [mumps-dev] Re: Question on using MUMPS in PETSC
In-Reply-To: 
References: <48923016.2000401@gmail.com> <48924304.7070107@gmail.com>
	<4892976C.1000605@gmail.com>
Message-ID: <48937737.7000405@ens-lyon.fr>

Hi,

Clearly in MUMPS processor 0 uses more memory during the analysis step
because the analysis is sequential. So until we provide a parallel
analysis, processor 0 is gathering the graph of the matrix from all
other processors to perform the analysis. But that memory is freed at
the end of the analysis, so it should not affect the factorization.

Thanks for letting us know if you have more information.

Regards,
Jean-Yves

Hong Zhang wrote:
>
> Randy,
>
> The PETSc interface does not create much extra memory.
> The analysis phase of the MUMPS solver is sequential, which might cause
> one process to blow up in memory.
> I'm forwarding this email to the mumps developer for their input.
>
> Jean-Yves,
> What do you think about the reported problem (see attached below)?
>
> Thanks,
>
> Hong
>
> On Thu, 31 Jul 2008, Randall Mackie wrote:
>
>> Barry,
>>
>> I don't think it's the matrix - I saw the same behavior when I ran your
>> ex2.c program and set m=n=5000.
>>
>> Randy
>>
>> Barry Smith wrote:
>>>
>>> If m and n are the number of rows and columns of the sparse matrix
>>> (i.e. it is a tiny problem) then please
>>> send us the matrix so we can experiment with it, to petsc-maint at mcs.anl.gov
>>>
>>> You can send us the matrix by simply running with -ksp_view_binary and
>>> sending us the file binaryoutput.
>>>
>>> Barry
>>>
>>> On Jul 31, 2008, at 5:56 PM, Randall Mackie wrote:
>>>
>>>> When m = n = small (like 50), it works fine. When I set m=n=5000, I see
>>>> the same thing, where one process on the localhost is taking >4 G of RAM,
>>>> while all other processes are taking 137 M.
>>>>
>>>> Is this the standard behavior for MUMPS? It seems strange to me.
>>>>
>>>> Randy
>>>>
>>>> Matthew Knepley wrote:
>>>>> Does it work on KSP ex2?
>>>>>    Matt
>>>>> On Thu, Jul 31, 2008 at 4:35 PM, Randall Mackie wrote:
>>>>>> I've compiled PETSc with MUMPS support, and I'm trying to run a
>>>>>> small test
>>>>>> problem, but I'm having some problems.
It seems to begin just >>>>>> fine, but >>>>>> what I notice is that on one process (out of 64), the memory just >>>>>> keeps >>>>>> going up and up and up until it crashes, while on the other >>>>>> processes, >>>>>> the memory usage is reasonable. I'm wondering if anyone might have >>>>>> any idea >>>>>> why? By the way, my command file is like this: >>>>>> >>>>>> -ksp_type preonly >>>>>> -pc_type lu >>>>>> -mat_type aijmumps >>>>>> -mat_mumps_cntl_4 3 >>>>>> -mat_mumps_cntl_9 1 >>>>>> >>>>>> >>>>>> Randy >>>>>> >>>>>> ps. This happens after the analysis stage and in the factorization >>>>>> stage. >>>>>> >>>>>> >>>> >>> From rlmackie862 at gmail.com Fri Aug 1 17:49:53 2008 From: rlmackie862 at gmail.com (Randall Mackie) Date: Fri, 01 Aug 2008 15:49:53 -0700 Subject: [mumps-dev] Re: Question on using MUMPS in PETSC In-Reply-To: <48937737.7000405@ens-lyon.fr> References: <48923016.2000401@gmail.com> <48924304.7070107@gmail.com> <4892976C.1000605@gmail.com> <48937737.7000405@ens-lyon.fr> Message-ID: <48939311.8000402@gmail.com> In fact, during the Analysis step, the max amount of memory 300 Mbytes is used by one process. However, during the Factorization stage, that same process then starts to increase in memory, with all the other processes staying the same. I've re-run this several times using different numbers of processors, and I keep getting the same behavior. Randy Jean-Yves L Excellent wrote: > Hi, > > Clearly in MUMPS processor 0 uses more memory during > the analysis step because the analysis is sequential. > So until we provide a parallel analysis, processor 0 > is gathering the graph of the matrix from all other > processors to perform the analysis. But that memory > is freed at the end of the analysis so it should > not affect the factorization. > > Thanks for letting us know if you have more information. > > Regards, > Jean-Yves > > Hong Zhang wrote: >> >> Randy, >> The petsc interface does not create much of extra >> memories. >> The analysis phase of MUMPS solver is sequential - which might causes >> one process blow up with memory. >> I'm forwarding this email to the mumps developer >> for their input. >> >> Jean-Yves, >> What do you think about the reported problem >> (see attached below)? >> >> Thanks, >> >> Hong >> >> On Thu, 31 Jul 2008, Randall Mackie wrote: >> >>> Barry, >>> >>> I don't think it's the matrix - I saw the same behavior when I ran your >>> ex2.c program and set m=n=5000. >>> >>> Randy >>> >>> >>> Barry Smith wrote: >>>> >>>> If m and n are the number of rows and columns of the sparse >>>> matrix (i.e. it is >>>> tiny problem) then please >>>> send us matrix so we can experiment with it to petsc-maint at mcs.anl.log >>>> >>>> You can send us the matrix by simply running with -ksp_view_binary >>>> and >>>> sending us the file binaryoutput. >>>> >>>> Barry >>>> >>>> On Jul 31, 2008, at 5:56 PM, Randall Mackie wrote: >>>> >>>>> When m = n = small (like 50), it works fine. When I set m=n=5000, I >>>>> see >>>>> the same thing, where one process on the localhost is taking >4 G >>>>> of RAM, >>>>> while all other processes are taking 137 M. >>>>> >>>>> Is this the standard behavior for MUMPS? It seems strange to me. >>>>> >>>>> Randy >>>>> >>>>> >>>>> Matthew Knepley wrote: >>>>>> Does it work on KSP ex2? >>>>>> Matt >>>>>> On Thu, Jul 31, 2008 at 4:35 PM, Randall Mackie >>>>>> wrote: >>>>>>> I've compiled PETSc with MUMPS support, and I'm trying to run a >>>>>>> small test >>>>>>> problem, but I'm having some problems. 
It seems to begin just >>>>>>> fine, but >>>>>>> what I notice is that on one process (out of 64), the memory just >>>>>>> keeps >>>>>>> going up and up and up until it crashes, while on the other >>>>>>> processes, >>>>>>> the memory usage is reasonable. I'm wondering if anyone might >>>>>>> have any idea >>>>>>> why? By the way, my command file is like this: >>>>>>> >>>>>>> -ksp_type preonly >>>>>>> -pc_type lu >>>>>>> -mat_type aijmumps >>>>>>> -mat_mumps_cntl_4 3 >>>>>>> -mat_mumps_cntl_9 1 >>>>>>> >>>>>>> >>>>>>> Randy >>>>>>> >>>>>>> ps. This happens after the analysis stage and in the >>>>>>> factorization stage. >>>>>>> >>>>>>> >>>>> >>>> > > > From bsmith at mcs.anl.gov Fri Aug 1 19:44:15 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 1 Aug 2008 19:44:15 -0500 Subject: [mumps-dev] Re: Question on using MUMPS in PETSC In-Reply-To: <48939311.8000402@gmail.com> References: <48923016.2000401@gmail.com> <48924304.7070107@gmail.com> <4892976C.1000605@gmail.com> <48937737.7000405@ens-lyon.fr> <48939311.8000402@gmail.com> Message-ID: <88AD5530-616B-4852-90F0-F8FA6C6BDE12@mcs.anl.gov> Are you sure you are not constructing the original matrix with all its rows and columns on the first process? Barry On Aug 1, 2008, at 5:49 PM, Randall Mackie wrote: > In fact, during the Analysis step, the max amount of memory 300 > Mbytes is > used by one process. However, during the Factorization stage, that > same process > then starts to increase in memory, with all the other processes > staying the same. > > I've re-run this several times using different numbers of > processors, and I > keep getting the same behavior. > > > > Randy > > > Jean-Yves L Excellent wrote: >> Hi, >> Clearly in MUMPS processor 0 uses more memory during >> the analysis step because the analysis is sequential. >> So until we provide a parallel analysis, processor 0 >> is gathering the graph of the matrix from all other >> processors to perform the analysis. But that memory >> is freed at the end of the analysis so it should >> not affect the factorization. >> Thanks for letting us know if you have more information. >> Regards, >> Jean-Yves >> Hong Zhang wrote: >>> >>> Randy, >>> The petsc interface does not create much of extra >>> memories. >>> The analysis phase of MUMPS solver is sequential - which might >>> causes one process blow up with memory. >>> I'm forwarding this email to the mumps developer >>> for their input. >>> >>> Jean-Yves, >>> What do you think about the reported problem >>> (see attached below)? >>> >>> Thanks, >>> >>> Hong >>> >>> On Thu, 31 Jul 2008, Randall Mackie wrote: >>> >>>> Barry, >>>> >>>> I don't think it's the matrix - I saw the same behavior when I >>>> ran your >>>> ex2.c program and set m=n=5000. >>>> >>>> Randy >>>> >>>> >>>> Barry Smith wrote: >>>>> >>>>> If m and n are the number of rows and columns of the sparse >>>>> matrix (i.e. it is >>>>> tiny problem) then please >>>>> send us matrix so we can experiment with it to petsc-maint at mcs.anl.log >>>>> >>>>> You can send us the matrix by simply running with - >>>>> ksp_view_binary and >>>>> sending us the file binaryoutput. >>>>> >>>>> Barry >>>>> >>>>> On Jul 31, 2008, at 5:56 PM, Randall Mackie wrote: >>>>> >>>>>> When m = n = small (like 50), it works fine. When I set >>>>>> m=n=5000, I see >>>>>> the same thing, where one process on the localhost is taking >4 >>>>>> G of RAM, >>>>>> while all other processes are taking 137 M. >>>>>> >>>>>> Is this the standard behavior for MUMPS? It seems strange to me. 
>>>>>>
>>>>>> Randy
>>>>>>
>>>>>> Matthew Knepley wrote:
>>>>>>> Does it work on KSP ex2?
>>>>>>>    Matt
>>>>>>> On Thu, Jul 31, 2008 at 4:35 PM, Randall Mackie wrote:
>>>>>>>> I've compiled PETSc with MUMPS support, and I'm trying to run a
>>>>>>>> small test problem, but I'm having some problems. It seems to
>>>>>>>> begin just fine, but what I notice is that on one process (out
>>>>>>>> of 64), the memory just keeps going up and up and up until it
>>>>>>>> crashes, while on the other processes, the memory usage is
>>>>>>>> reasonable. I'm wondering if anyone might have any idea why?
>>>>>>>> By the way, my command file is like this:
>>>>>>>>
>>>>>>>> -ksp_type preonly
>>>>>>>> -pc_type lu
>>>>>>>> -mat_type aijmumps
>>>>>>>> -mat_mumps_cntl_4 3
>>>>>>>> -mat_mumps_cntl_9 1
>>>>>>>>
>>>>>>>> Randy
>>>>>>>>
>>>>>>>> ps. This happens after the analysis stage and in the
>>>>>>>> factorization stage.

From rlmackie862 at gmail.com  Fri Aug  1 19:49:37 2008
From: rlmackie862 at gmail.com (Randall Mackie)
Date: Fri, 01 Aug 2008 17:49:37 -0700
Subject: [mumps-dev] Re: Question on using MUMPS in PETSC
In-Reply-To: <88AD5530-616B-4852-90F0-F8FA6C6BDE12@mcs.anl.gov>
References: <48923016.2000401@gmail.com> <48924304.7070107@gmail.com>
	<4892976C.1000605@gmail.com> <48937737.7000405@ens-lyon.fr>
	<48939311.8000402@gmail.com> <88AD5530-616B-4852-90F0-F8FA6C6BDE12@mcs.anl.gov>
Message-ID: <4893AF21.8050508@gmail.com>

Barry,

No, this is the same program I've used quite successfully using iterative
methods within PETSc for years. Each processor's portion of the matrix is
constructed on the individual processors.

In fact, I downloaded and recompiled PETSc to use SUPERLU, and using the
exact same program, but changing the matrix type from aijmumps to
superlu_dist, it worked just fine.

So, I'm not sure why MUMPS is not working.

Randy

Barry Smith wrote:
>
> Are you sure you are not constructing the original matrix with all its
> rows and columns on the first process?
>
> Barry
>
> On Aug 1, 2008, at 5:49 PM, Randall Mackie wrote:
>
>> In fact, during the Analysis step, the maximum amount of memory, 300
>> Mbytes, is used by one process. However, during the Factorization stage,
>> that same process then starts to increase in memory, with all the other
>> processes staying the same.
>>
>> I've re-run this several times using different numbers of processors,
>> and I keep getting the same behavior.
>>
>> Randy
>>
>> Jean-Yves L Excellent wrote:
>>> Hi,
>>>
>>> Clearly in MUMPS processor 0 uses more memory during the analysis step
>>> because the analysis is sequential. So until we provide a parallel
>>> analysis, processor 0 is gathering the graph of the matrix from all
>>> other processors to perform the analysis. But that memory is freed at
>>> the end of the analysis, so it should not affect the factorization.
>>>
>>> Thanks for letting us know if you have more information.
>>>
>>> Regards,
>>> Jean-Yves
>>>
>>> Hong Zhang wrote:
>>>>
>>>> Randy,
>>>>
>>>> The PETSc interface does not create much extra memory.
>>>> The analysis phase of the MUMPS solver is sequential, which might
>>>> cause one process to blow up in memory.
>>>> I'm forwarding this email to the mumps developer for their input.
>>>>
>>>> Jean-Yves,
>>>> What do you think about the reported problem (see attached below)?
>>>>
>>>> Thanks,
>>>>
>>>> Hong
>>>>
>>>> On Thu, 31 Jul 2008, Randall Mackie wrote:
>>>>
>>>>> Barry,
>>>>>
>>>>> I don't think it's the matrix - I saw the same behavior when I ran
>>>>> your ex2.c program and set m=n=5000.
>>>>>
>>>>> Randy
>>>>>
>>>>> Barry Smith wrote:
>>>>>>
>>>>>> If m and n are the number of rows and columns of the sparse matrix
>>>>>> (i.e. it is a tiny problem) then please
>>>>>> send us the matrix so we can experiment with it, to
>>>>>> petsc-maint at mcs.anl.gov
>>>>>>
>>>>>> You can send us the matrix by simply running with -ksp_view_binary
>>>>>> and sending us the file binaryoutput.
>>>>>>
>>>>>> Barry
>>>>>>
>>>>>> On Jul 31, 2008, at 5:56 PM, Randall Mackie wrote:
>>>>>>
>>>>>>> When m = n = small (like 50), it works fine. When I set m=n=5000,
>>>>>>> I see the same thing, where one process on the localhost is taking
>>>>>>> >4 G of RAM, while all other processes are taking 137 M.
>>>>>>>
>>>>>>> Is this the standard behavior for MUMPS? It seems strange to me.
>>>>>>>
>>>>>>> Randy
>>>>>>>
>>>>>>> Matthew Knepley wrote:
>>>>>>>> Does it work on KSP ex2?
>>>>>>>>    Matt
>>>>>>>> On Thu, Jul 31, 2008 at 4:35 PM, Randall Mackie wrote:
>>>>>>>>> I've compiled PETSc with MUMPS support, and I'm trying to run a
>>>>>>>>> small test problem, but I'm having some problems. It seems to
>>>>>>>>> begin just fine, but what I notice is that on one process (out
>>>>>>>>> of 64), the memory just keeps going up and up and up until it
>>>>>>>>> crashes, while on the other processes, the memory usage is
>>>>>>>>> reasonable. I'm wondering if anyone might have any idea why?
>>>>>>>>> By the way, my command file is like this:
>>>>>>>>>
>>>>>>>>> -ksp_type preonly
>>>>>>>>> -pc_type lu
>>>>>>>>> -mat_type aijmumps
>>>>>>>>> -mat_mumps_cntl_4 3
>>>>>>>>> -mat_mumps_cntl_9 1
>>>>>>>>>
>>>>>>>>> Randy
>>>>>>>>>
>>>>>>>>> ps. This happens after the analysis stage and in the
>>>>>>>>> factorization stage.

From schuang at ats.ucla.edu  Fri Aug  1 19:52:00 2008
From: schuang at ats.ucla.edu (Shao-Ching Huang)
Date: Fri, 1 Aug 2008 17:52:00 -0700
Subject: KSPSetNullSpace and CG for singular systems
Message-ID: <20080802005200.GA27199@ats.ucla.edu>

Hi,

I am trying to use CG to solve a singular system (i.e. a Poisson
equation with periodic conditions on all boundaries).

The code works when I use GMRES, but it diverges when I switch to CG.

Since the null space is a constant vector, in the code I have:

   KSP ksp;
   MatNullSpace nullspace;
   ...
   MatNullSpaceCreate(MPI_COMM_WORLD, PETSC_TRUE, 0, PETSC_NULL, &nullspace);
   KSPSetNullSpace(ksp, nullspace);
   MatNullSpaceRemove(nullspace, f->p_rhs, PETSC_NULL);

   KSPSetFromOptions(ksp);
   KSPSetUp(ksp);
   KSPSolve(ksp, f->p_rhs, f->phi);

[f->p_rhs is the RHS vector. f->phi is the solution vector.]

When I use "-ksp_type gmres", it converges (shown by -ksp_monitor).

However, when I switch to "-ksp_type cg", it diverges.
In this case (cg), "-ksp_converged_reason" says:

   Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 1

I also tried adding "-sub_pc_factor_shift_nonzero 0.0000000001" but the
CG case still fails.

Am I missing some step in handling the null space when using CG?

Thanks,

Shao-Ching Huang
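A self-contained version of the calling sequence in that message, for
reference (a sketch only, using the petsc-2.3-era API; A, b and x are
illustrative names for the assembled operator, right-hand side and
solution):

    KSP          ksp;
    MatNullSpace nullspace;

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);

    /* the null space of the periodic Poisson operator is the constant
       vector, so has_cnst = PETSC_TRUE and no extra basis vectors */
    ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, PETSC_NULL,
                              &nullspace);CHKERRQ(ierr);
    ierr = KSPSetNullSpace(ksp, nullspace);CHKERRQ(ierr);
    /* project the right-hand side into the range of the operator */
    ierr = MatNullSpaceRemove(nullspace, b, PETSC_NULL);CHKERRQ(ierr);

    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

    ierr = MatNullSpaceDestroy(nullspace);CHKERRQ(ierr);
    ierr = KSPDestroy(ksp);CHKERRQ(ierr);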
From bsmith at mcs.anl.gov  Fri Aug  1 20:56:23 2008
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Fri, 1 Aug 2008 20:56:23 -0500
Subject: KSPSetNullSpace and CG for singular systems
In-Reply-To: <20080802005200.GA27199@ats.ucla.edu>
References: <20080802005200.GA27199@ats.ucla.edu>
Message-ID: <07AC88D4-67E4-45A7-AA21-4FA88A7F1EB7@mcs.anl.gov>

   It is not the null space that is the problem.

   You are right to try the shift, but
1) first run on one process using -pc_type icc -ksp_type cg
-pc_factor_shift_positive_definite
this will keep increasing the shift until it produces a positive-definite
preconditioner.

Run also with -ksp_view to confirm that it is using all the options you
provided. (you can run with -help to see the option names)

2) If you have the sequential case converging, then use -pc_type bjacobi
-sub_pc_type icc -pc_factor_shift_positive_definite

   Barry

On Aug 1, 2008, at 7:52 PM, Shao-Ching Huang wrote:

> Hi,
>
> I am trying to use CG to solve a singular system (i.e. a Poisson
> equation with periodic conditions on all boundaries).
>
> The code works when I use GMRES, but it diverges when I switch to CG.
>
> Since the null space is a constant vector, in the code I have:
>
> KSP ksp;
> MatNullSpace nullspace;
> ...
> MatNullSpaceCreate(MPI_COMM_WORLD, PETSC_TRUE, 0, PETSC_NULL, &nullspace);
> KSPSetNullSpace(ksp, nullspace);
> MatNullSpaceRemove(nullspace, f->p_rhs, PETSC_NULL);
>
> KSPSetFromOptions(ksp);
> KSPSetUp(ksp);
> KSPSolve(ksp, f->p_rhs, f->phi);
>
> [f->p_rhs is the RHS vector. f->phi is the solution vector.]
>
> When I use "-ksp_type gmres", it converges (shown by -ksp_monitor).
>
> However, when I switch to "-ksp_type cg", it diverges.
> In this case (cg), "-ksp_converged_reason" says:
>
> Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 1
>
> I also tried adding "-sub_pc_factor_shift_nonzero 0.0000000001" but the
> CG case still fails.
>
> Am I missing some step in handling the null space when using CG?
>
> Thanks,
>
> Shao-Ching Huang

From stephan.kramer at imperial.ac.uk  Sat Aug  2 05:04:06 2008
From: stephan.kramer at imperial.ac.uk (Stephan Kramer)
Date: Sat, 02 Aug 2008 11:04:06 +0100
Subject: KSPSetNullSpace and CG for singular systems
In-Reply-To: <20080802005200.GA27199@ats.ucla.edu>
References: <20080802005200.GA27199@ats.ucla.edu>
Message-ID: <48943116.1080006@imperial.ac.uk>

Shao-Ching Huang wrote:
> Hi,
>
> I am trying to use CG to solve a singular system (i.e. a Poisson
> equation with periodic conditions on all boundaries).
>
> The code works when I use GMRES, but it diverges when I switch to CG.
>
> Since the null space is a constant vector, in the code I have:
>
I might be missing something here, but aren't linearly varying vectors
in your null space as well?

Cheers
Stephan

> KSP ksp;
> MatNullSpace nullspace;
> ...
> MatNullSpaceCreate(MPI_COMM_WORLD, PETSC_TRUE, 0, PETSC_NULL, &nullspace);
> KSPSetNullSpace(ksp, nullspace);
> MatNullSpaceRemove(nullspace, f->p_rhs, PETSC_NULL);
>
> KSPSetFromOptions(ksp);
> KSPSetUp(ksp);
> KSPSolve(ksp, f->p_rhs, f->phi);
>
> [f->p_rhs is the RHS vector. f->phi is the solution vector.]
>
> When I use "-ksp_type gmres", it converges (shown by -ksp_monitor).
>
> However, when I switch to "-ksp_type cg", it diverges.
> In this case (cg), "-ksp_converged_reason" says:
>
> Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 1
>
> I also tried adding "-sub_pc_factor_shift_nonzero 0.0000000001" but the
> CG case still fails.
>
> Am I missing some step in handling the null space when using CG?
>
> Thanks,
>
> Shao-Ching Huang
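Returning to Barry's suggestion two messages up: concretely, on the
command line it would look something like this (the executable name
./mysolver is illustrative, and the sub_ prefix on the shift option in
the block-Jacobi case is my assumption, matching the
-sub_pc_factor_shift_nonzero attempt above):

    mpiexec -n 1 ./mysolver -ksp_type cg -pc_type icc \
        -pc_factor_shift_positive_definite -ksp_monitor -ksp_view

    mpiexec -n 8 ./mysolver -ksp_type cg -pc_type bjacobi -sub_pc_type icc \
        -sub_pc_factor_shift_positive_definite -ksp_monitor -ksp_view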
> > Am I missing some step in handling the null space when using CG? > > Thanks, > > Shao-Ching Huang > > > > From hzhang at mcs.anl.gov Sat Aug 2 09:33:29 2008 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Sat, 2 Aug 2008 09:33:29 -0500 (CDT) Subject: [mumps-dev] Re: Question on using MUMPS in PETSC In-Reply-To: <4893AF21.8050508@gmail.com> References: <48923016.2000401@gmail.com> <48924304.7070107@gmail.com> <4892976C.1000605@gmail.com> <48937737.7000405@ens-lyon.fr> <48939311.8000402@gmail.com> <88AD5530-616B-4852-90F0-F8FA6C6BDE12@mcs.anl.gov> <4893AF21.8050508@gmail.com> Message-ID: Randy, I'll check it. Did you use /src/ksp/ksp/examples/tutorials/ex2.c ? Can you give me the petsc version, num of processors, and the runtime options used. Thanks, Hong On Fri, 1 Aug 2008, Randall Mackie wrote: > Barry, > > No, this is the same program I've used quite successfully using iterative > methods within PETSc for years. Each processors portion of the matrix > is constructed on the individual processors. > > In fact, I downloaded and recompiled PETSc to use SUPERLU, and using the > exact > same program, but changing the matrix type from aijmumps to superlu_dist, > and it worked just fine. > > So, I'm not sure why MUMPS is not working. > > Randy > > > Barry Smith wrote: >> >> Are you sure you are not constructing the original matrix with all its >> rows and columns >> on the first process? >> >> Barry >> On Aug 1, 2008, at 5:49 PM, Randall Mackie wrote: >> >>> In fact, during the Analysis step, the max amount of memory 300 Mbytes is >>> used by one process. However, during the Factorization stage, that same >>> process >>> then starts to increase in memory, with all the other processes staying >>> the same. >>> >>> I've re-run this several times using different numbers of processors, and >>> I >>> keep getting the same behavior. >>> >>> >>> >>> Randy >>> >>> >>> Jean-Yves L Excellent wrote: >>>> Hi, >>>> Clearly in MUMPS processor 0 uses more memory during >>>> the analysis step because the analysis is sequential. >>>> So until we provide a parallel analysis, processor 0 >>>> is gathering the graph of the matrix from all other >>>> processors to perform the analysis. But that memory >>>> is freed at the end of the analysis so it should >>>> not affect the factorization. >>>> Thanks for letting us know if you have more information. >>>> Regards, >>>> Jean-Yves >>>> Hong Zhang wrote: >>>>> >>>>> Randy, >>>>> The petsc interface does not create much of extra >>>>> memories. >>>>> The analysis phase of MUMPS solver is sequential - which might causes >>>>> one process blow up with memory. >>>>> I'm forwarding this email to the mumps developer >>>>> for their input. >>>>> >>>>> Jean-Yves, >>>>> What do you think about the reported problem >>>>> (see attached below)? >>>>> >>>>> Thanks, >>>>> >>>>> Hong >>>>> >>>>> On Thu, 31 Jul 2008, Randall Mackie wrote: >>>>> >>>>>> Barry, >>>>>> >>>>>> I don't think it's the matrix - I saw the same behavior when I ran your >>>>>> ex2.c program and set m=n=5000. >>>>>> >>>>>> Randy >>>>>> >>>>>> >>>>>> Barry Smith wrote: >>>>>>> >>>>>>> If m and n are the number of rows and columns of the sparse matrix >>>>>>> (i.e. it is >>>>>>> tiny problem) then please >>>>>>> send us matrix so we can experiment with it to petsc-maint at mcs.anl.log >>>>>>> >>>>>>> You can send us the matrix by simply running with -ksp_view_binary >>>>>>> and >>>>>>> sending us the file binaryoutput. 
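A note on the -ksp_view_binary suggestion quoted above: the resulting file
"binaryoutput" can be read back into any PETSc program with the binary
viewer. A minimal sketch, assuming the PETSc 2.3.x-era MatLoad() calling
sequence (viewer first, new Mat returned); the MATMPIAIJ type argument is
one plausible choice, not something prescribed in the thread.

#include "petscmat.h"

/* Read a matrix dumped by -ksp_view_binary from the file "binaryoutput". */
int main(int argc,char **argv)
{
  Mat            A;
  PetscViewer    fd;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);CHKERRQ(ierr);
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"binaryoutput",FILE_MODE_READ,&fd);CHKERRQ(ierr);
  ierr = MatLoad(fd,MATMPIAIJ,&A);CHKERRQ(ierr);  /* rows distributed across processes */
  ierr = MatView(A,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  ierr = MatDestroy(A);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(fd);CHKERRQ(ierr);
  ierr = PetscFinalize();CHKERRQ(ierr);
  return 0;
}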
>>>>>>> >>>>>>> Barry >>>>>>> >>>>>>> On Jul 31, 2008, at 5:56 PM, Randall Mackie wrote: >>>>>>> >>>>>>>> When m = n = small (like 50), it works fine. When I set m=n=5000, I >>>>>>>> see >>>>>>>> the same thing, where one process on the localhost is taking >4 G of >>>>>>>> RAM, >>>>>>>> while all other processes are taking 137 M. >>>>>>>> >>>>>>>> Is this the standard behavior for MUMPS? It seems strange to me. >>>>>>>> >>>>>>>> Randy >>>>>>>> >>>>>>>> >>>>>>>> Matthew Knepley wrote: >>>>>>>>> Does it work on KSP ex2? >>>>>>>>> Matt >>>>>>>>> On Thu, Jul 31, 2008 at 4:35 PM, Randall Mackie >>>>>>>>> wrote: >>>>>>>>>> I've compiled PETSc with MUMPS support, and I'm trying to run a >>>>>>>>>> small test >>>>>>>>>> problem, but I'm having some problems. It seems to begin just fine, >>>>>>>>>> but >>>>>>>>>> what I notice is that on one process (out of 64), the memory just >>>>>>>>>> keeps >>>>>>>>>> going up and up and up until it crashes, while on the other >>>>>>>>>> processes, >>>>>>>>>> the memory usage is reasonable. I'm wondering if anyone might have >>>>>>>>>> any idea >>>>>>>>>> why? By the way, my command file is like this: >>>>>>>>>> >>>>>>>>>> -ksp_type preonly >>>>>>>>>> -pc_type lu >>>>>>>>>> -mat_type aijmumps >>>>>>>>>> -mat_mumps_cntl_4 3 >>>>>>>>>> -mat_mumps_cntl_9 1 >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Randy >>>>>>>>>> >>>>>>>>>> ps. This happens after the analysis stage and in the factorization >>>>>>>>>> stage. >>>>>>>>>> >>>>>>>>>> >>>>>>>> >>>>>>> >>> >> > > From schuang at ats.ucla.edu Sat Aug 2 19:54:08 2008 From: schuang at ats.ucla.edu (Shao-Ching Huang) Date: Sat, 2 Aug 2008 17:54:08 -0700 Subject: KSPSetNullSpace and CG for singular systems In-Reply-To: <48943116.1080006@imperial.ac.uk> References: <20080802005200.GA27199@ats.ucla.edu> <48943116.1080006@imperial.ac.uk> Message-ID: <20080803005408.GC30633@ats.ucla.edu> On Sat, Aug 02, 2008 at 11:04:06AM +0100, Stephan Kramer wrote: >> I am trying to use CG to solve a singular systems (i.e. Poisson >> equation with periodic conditions on all boundaries). >> >> The code works when I use GMRES, but it diverges when I switch CG. >> >> Since the null space is a constant vector, in the code I have: >> > I might be missing something here, but aren't linearly varying vectors > in your null space as well? By 'constant vector', I meant that the null space vector has the form: c * [ 1 1 ... 1 ]^T where c is an arbitray real number (^T = transpose). Linearly varying (in space) vectors do not satisfy the periodic boundary condition -- so they are not part of the solutions. Thanks, Shao-Ching From schuang at ats.ucla.edu Sun Aug 3 20:23:47 2008 From: schuang at ats.ucla.edu (Shao-Ching Huang) Date: Sun, 3 Aug 2008 18:23:47 -0700 Subject: KSPSetNullSpace and CG for singular systems In-Reply-To: <07AC88D4-67E4-45A7-AA21-4FA88A7F1EB7@mcs.anl.gov> References: <20080802005200.GA27199@ats.ucla.edu> <07AC88D4-67E4-45A7-AA21-4FA88A7F1EB7@mcs.anl.gov> Message-ID: <20080804012347.GA2618@ats.ucla.edu> Hi Barry: Thanks you for your suggestions. Now both sequential and parallel cases work when I use CG. Shao-Ching On Fri, Aug 01, 2008 at 08:56:23PM -0500, Barry Smith wrote: > > It is not the null space that is the problem. > > You are right to try the shift but > 1) first run on one process using -pc_type icc -ksp_type cg - > pc_factor_shift_positive_definite > this will keep increasing the shift until it produces a positive- > definite preconditioner. > > Run also with -ksp_view to confirm that it is using all the options you > provided. 
(you can run with -help to > see the option names) > > 2) If you have the sequential converging then use -pc_type bjacobi - > sub_pc_type icc -pc_factor_shift_positive_definite > > Barry > > On Aug 1, 2008, at 7:52 PM, Shao-Ching Huang wrote: > >> Hi, >> >> I am trying to use CG to solve a singular systems (i.e. Poisson >> equation with periodic conditions on all boundaries). >> >> The code works when I use GMRES, but it diverges when I switch CG. >> >> Since the null space is a constant vector, in the code I have: >> >> KSP ksp; >> MatNullSpace nullspace; >> ... >> MatNullSpaceCreate(MPI_COMM_WORLD, PETSC_TRUE, 0, PETSC_NULL, >> &nullspace); >> KSPSetNullSpace(ksp, nullspace); >> MatNullSpaceRemove(nullspace, f->p_rhs, PETSC_NULL); >> >> KSPSetFromOptions(ksp); >> KSPSetUp(ksp); >> KSPSolve(ksp, f->p_rhs, f->phi); >> >> [f->p_rhs is the RHS vector. f->phi is the solution vector.] >> >> When I use "-ksp_type gmres", it converges (shown by -ksp_monitor). >> >> However, when I switch to "-ksp_type cg", it diverges. >> In this case (cg), "-ksp_converged_reason" says: >> >> Linear solve did not converge due to DIVERGED_INDEFINITE_PC >> iterations 1 >> >> I also try adding "-sub_pc_factor_shift_nonzero 0.0000000001" but the >> CG case still fails. >> >> Am I missing some step in handling the null space when using CG? >> >> Thanks, >> >> Shao-Ching Huang >> >> From lua.byhh at gmail.com Sun Aug 3 22:09:48 2008 From: lua.byhh at gmail.com (berry) Date: Mon, 4 Aug 2008 11:09:48 +0800 Subject: KSPSetNullSpace and CG for singular systems In-Reply-To: <20080804012347.GA2618@ats.ucla.edu> References: <20080802005200.GA27199@ats.ucla.edu> <07AC88D4-67E4-45A7-AA21-4FA88A7F1EB7@mcs.anl.gov> <20080804012347.GA2618@ats.ucla.edu> Message-ID: hi, shao-ching Why not try to change the singular matrix to be non-singular ? I mean set the last column and row of the matrix to be zero except diagonal element( which is set to 1). On Mon, Aug 4, 2008 at 9:23 AM, Shao-Ching Huang wrote: > Hi Barry: > > Thanks you for your suggestions. Now both sequential and parallel > cases work when I use CG. > > Shao-Ching > > > On Fri, Aug 01, 2008 at 08:56:23PM -0500, Barry Smith wrote: > > > > It is not the null space that is the problem. > > > > You are right to try the shift but > > 1) first run on one process using -pc_type icc -ksp_type cg - > > pc_factor_shift_positive_definite > > this will keep increasing the shift until it produces a positive- > > definite preconditioner. > > > > Run also with -ksp_view to confirm that it is using all the options you > > provided. (you can run with -help to > > see the option names) > > > > 2) If you have the sequential converging then use -pc_type bjacobi - > > sub_pc_type icc -pc_factor_shift_positive_definite > > > > Barry > > > > On Aug 1, 2008, at 7:52 PM, Shao-Ching Huang wrote: > > > >> Hi, > >> > >> I am trying to use CG to solve a singular systems (i.e. Poisson > >> equation with periodic conditions on all boundaries). > >> > >> The code works when I use GMRES, but it diverges when I switch CG. > >> > >> Since the null space is a constant vector, in the code I have: > >> > >> KSP ksp; > >> MatNullSpace nullspace; > >> ... > >> MatNullSpaceCreate(MPI_COMM_WORLD, PETSC_TRUE, 0, PETSC_NULL, > >> &nullspace); > >> KSPSetNullSpace(ksp, nullspace); > >> MatNullSpaceRemove(nullspace, f->p_rhs, PETSC_NULL); > >> > >> KSPSetFromOptions(ksp); > >> KSPSetUp(ksp); > >> KSPSolve(ksp, f->p_rhs, f->phi); > >> > >> [f->p_rhs is the RHS vector. f->phi is the solution vector.] 
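Since the snippet above relies on KSPSetFromOptions() to pick up the solver
configuration, the options Barry recommends can also be hard-wired in the
code. A minimal sketch, assuming the 2.3.x-era PetscOptionsSetValue()
signature and the option names used in this thread:

/* Set the suggested options programmatically; this must happen before
   KSPSetFromOptions() reads the options database. */
PetscOptionsSetValue("-ksp_type","cg");
PetscOptionsSetValue("-pc_type","icc");
PetscOptionsSetValue("-pc_factor_shift_positive_definite",PETSC_NULL); /* flag option, no value */
KSPSetFromOptions(ksp);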
> >> > >> When I use "-ksp_type gmres", it converges (shown by -ksp_monitor). > >> > >> However, when I switch to "-ksp_type cg", it diverges. > >> In this case (cg), "-ksp_converged_reason" says: > >> > >> Linear solve did not converge due to DIVERGED_INDEFINITE_PC > >> iterations 1 > >> > >> I also try adding "-sub_pc_factor_shift_nonzero 0.0000000001" but the > >> CG case still fails. > >> > >> Am I missing some step in handling the null space when using CG? > >> > >> Thanks, > >> > >> Shao-Ching Huang > >> > >> > > -- Pang Shengyong Solidification Simulation Lab, State Key Lab of Mould & Die Technology, Huazhong Univ. of Sci. & Tech. China -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Aug 3 22:49:22 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 3 Aug 2008 22:49:22 -0500 Subject: KSPSetNullSpace and CG for singular systems In-Reply-To: References: <20080802005200.GA27199@ats.ucla.edu> <07AC88D4-67E4-45A7-AA21-4FA88A7F1EB7@mcs.anl.gov> <20080804012347.GA2618@ats.ucla.edu> Message-ID: One reason not to use this approach is that it can result in a very ill-conditioned linear system that may be difficult for an iterative solver. Barry On Aug 3, 2008, at 10:09 PM, berry wrote: > hi, shao-ching > > Why not try to change the singular matrix to be non-singular ? > I mean set the last column and row of the matrix to be zero except > diagonal element( which is set to 1). > > > > On Mon, Aug 4, 2008 at 9:23 AM, Shao-Ching Huang > wrote: > Hi Barry: > > Thanks you for your suggestions. Now both sequential and parallel > cases work when I use CG. > > Shao-Ching > > > On Fri, Aug 01, 2008 at 08:56:23PM -0500, Barry Smith wrote: > > > > It is not the null space that is the problem. > > > > You are right to try the shift but > > 1) first run on one process using -pc_type icc -ksp_type cg - > > pc_factor_shift_positive_definite > > this will keep increasing the shift until it produces a positive- > > definite preconditioner. > > > > Run also with -ksp_view to confirm that it is using all the > options you > > provided. (you can run with -help to > > see the option names) > > > > 2) If you have the sequential converging then use -pc_type bjacobi - > > sub_pc_type icc -pc_factor_shift_positive_definite > > > > Barry > > > > On Aug 1, 2008, at 7:52 PM, Shao-Ching Huang wrote: > > > >> Hi, > >> > >> I am trying to use CG to solve a singular systems (i.e. Poisson > >> equation with periodic conditions on all boundaries). > >> > >> The code works when I use GMRES, but it diverges when I switch CG. > >> > >> Since the null space is a constant vector, in the code I have: > >> > >> KSP ksp; > >> MatNullSpace nullspace; > >> ... > >> MatNullSpaceCreate(MPI_COMM_WORLD, PETSC_TRUE, 0, PETSC_NULL, > >> &nullspace); > >> KSPSetNullSpace(ksp, nullspace); > >> MatNullSpaceRemove(nullspace, f->p_rhs, PETSC_NULL); > >> > >> KSPSetFromOptions(ksp); > >> KSPSetUp(ksp); > >> KSPSolve(ksp, f->p_rhs, f->phi); > >> > >> [f->p_rhs is the RHS vector. f->phi is the solution vector.] > >> > >> When I use "-ksp_type gmres", it converges (shown by -ksp_monitor). > >> > >> However, when I switch to "-ksp_type cg", it diverges. > >> In this case (cg), "-ksp_converged_reason" says: > >> > >> Linear solve did not converge due to DIVERGED_INDEFINITE_PC > >> iterations 1 > >> > >> I also try adding "-sub_pc_factor_shift_nonzero 0.0000000001" but > the > >> CG case still fails. > >> > >> Am I missing some step in handling the null space when using CG? 
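For completeness, berry's "pin one unknown" workaround would look roughly
like the sketch below; it is not from the original posts, and it is written
for a single process (in parallel each process may pass only the rows it
owns to MatZeroRows()). Note that MatZeroRows() zeros only the row, not the
column, so the symmetry CG needs is lost unless the column is treated as
well; and, as Barry points out above, the pinned system can be badly
conditioned. The PETSc 2.3.x-era MatZeroRows() signature is assumed.

PetscInt row = N - 1;                  /* global index of the pinned unknown */
MatZeroRows(A,1,&row,1.0);             /* zero the row, put 1.0 on the diagonal */
VecSetValue(b,row,0.0,INSERT_VALUES);  /* prescribe the pinned value */
VecAssemblyBegin(b);
VecAssemblyEnd(b);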
> >> > >> Thanks, > >> > >> Shao-Ching Huang > >> > >> > > > > > -- > Pang Shengyong > Solidification Simulation Lab, > State Key Lab of Mould & Die Technology, > Huazhong Univ. of Sci. & Tech. China From rlmackie862 at gmail.com Mon Aug 4 12:56:40 2008 From: rlmackie862 at gmail.com (Randall Mackie) Date: Mon, 04 Aug 2008 10:56:40 -0700 Subject: [mumps-dev] Re: Question on using MUMPS in PETSC In-Reply-To: References: <48923016.2000401@gmail.com> <48924304.7070107@gmail.com> <4892976C.1000605@gmail.com> <48937737.7000405@ens-lyon.fr> <48939311.8000402@gmail.com> <88AD5530-616B-4852-90F0-F8FA6C6BDE12@mcs.anl.gov> <4893AF21.8050508@gmail.com> Message-ID: <489742D8.3010306@gmail.com> Hi Hong, I am using PETSc 2.3.3-p11. I was running on 64 processors (8 8-core Intel CPUS). I was using options: -ksp_type preonly -pc_type lu -mat_type aijmumps -mat_mumps_sym 0 -mat_mumps_icntl_4 3 -mat_mumps_icntl_9 1 With Mumps, both my code and ex2.c (with m=n=5000) would just keep allocating memory to one process until it ran out of memory. SuperLU worked fine on my problem (I didn't try it with ex2.c), taking only 1 Gbyte per process, and the results were exactly right. Randy Hong Zhang wrote: > > Randy, > > I'll check it. > Did you use > /src/ksp/ksp/examples/tutorials/ex2.c ? > > Can you give me the petsc version, num of processors, and the runtime > options used. > > Thanks, > > Hong > > On Fri, 1 Aug 2008, Randall Mackie wrote: > >> Barry, >> >> No, this is the same program I've used quite successfully using iterative >> methods within PETSc for years. Each processors portion of the matrix >> is constructed on the individual processors. >> >> In fact, I downloaded and recompiled PETSc to use SUPERLU, and using >> the exact >> same program, but changing the matrix type from aijmumps to superlu_dist, >> and it worked just fine. >> >> So, I'm not sure why MUMPS is not working. >> >> Randy >> >> >> Barry Smith wrote: >>> >>> Are you sure you are not constructing the original matrix with all >>> its rows and columns >>> on the first process? >>> >>> Barry >>> On Aug 1, 2008, at 5:49 PM, Randall Mackie wrote: >>> >>>> In fact, during the Analysis step, the max amount of memory 300 >>>> Mbytes is >>>> used by one process. However, during the Factorization stage, that >>>> same process >>>> then starts to increase in memory, with all the other processes >>>> staying the same. >>>> >>>> I've re-run this several times using different numbers of >>>> processors, and I >>>> keep getting the same behavior. >>>> >>>> >>>> >>>> Randy >>>> >>>> >>>> Jean-Yves L Excellent wrote: >>>>> Hi, >>>>> Clearly in MUMPS processor 0 uses more memory during >>>>> the analysis step because the analysis is sequential. >>>>> So until we provide a parallel analysis, processor 0 >>>>> is gathering the graph of the matrix from all other >>>>> processors to perform the analysis. But that memory >>>>> is freed at the end of the analysis so it should >>>>> not affect the factorization. >>>>> Thanks for letting us know if you have more information. >>>>> Regards, >>>>> Jean-Yves >>>>> Hong Zhang wrote: >>>>>> >>>>>> Randy, >>>>>> The petsc interface does not create much of extra >>>>>> memories. >>>>>> The analysis phase of MUMPS solver is sequential - which might >>>>>> causes one process blow up with memory. >>>>>> I'm forwarding this email to the mumps developer >>>>>> for their input. >>>>>> >>>>>> Jean-Yves, >>>>>> What do you think about the reported problem >>>>>> (see attached below)? 
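Two notes for readers comparing the option lists in this thread. First, the
earlier posts' -mat_mumps_cntl_4 and -mat_mumps_cntl_9 are presumably typos
for the integer controls -mat_mumps_icntl_4 and -mat_mumps_icntl_9 used in
Randy's list here: MUMPS ICNTL(4) is the print level and ICNTL(9)=1 selects
solving with A rather than its transpose, while the CNTL(*) parameters are
the real-valued controls. Second, the SuperLU_DIST run Randy describes
amounts to the same command file with only the matrix type changed, roughly:

-ksp_type preonly
-pc_type lu
-mat_type superlu_dist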
>>>>>> >>>>>> Thanks, >>>>>> >>>>>> Hong >>>>>> >>>>>> On Thu, 31 Jul 2008, Randall Mackie wrote: >>>>>> >>>>>>> Barry, >>>>>>> >>>>>>> I don't think it's the matrix - I saw the same behavior when I >>>>>>> ran your >>>>>>> ex2.c program and set m=n=5000. >>>>>>> >>>>>>> Randy >>>>>>> >>>>>>> >>>>>>> Barry Smith wrote: >>>>>>>> >>>>>>>> If m and n are the number of rows and columns of the sparse >>>>>>>> matrix (i.e. it is >>>>>>>> tiny problem) then please >>>>>>>> send us matrix so we can experiment with it to >>>>>>>> petsc-maint at mcs.anl.log >>>>>>>> >>>>>>>> You can send us the matrix by simply running with >>>>>>>> -ksp_view_binary and >>>>>>>> sending us the file binaryoutput. >>>>>>>> >>>>>>>> Barry >>>>>>>> >>>>>>>> On Jul 31, 2008, at 5:56 PM, Randall Mackie wrote: >>>>>>>> >>>>>>>>> When m = n = small (like 50), it works fine. When I set >>>>>>>>> m=n=5000, I see >>>>>>>>> the same thing, where one process on the localhost is taking >4 >>>>>>>>> G of RAM, >>>>>>>>> while all other processes are taking 137 M. >>>>>>>>> >>>>>>>>> Is this the standard behavior for MUMPS? It seems strange to me. >>>>>>>>> >>>>>>>>> Randy >>>>>>>>> >>>>>>>>> >>>>>>>>> Matthew Knepley wrote: >>>>>>>>>> Does it work on KSP ex2? >>>>>>>>>> Matt >>>>>>>>>> On Thu, Jul 31, 2008 at 4:35 PM, Randall Mackie >>>>>>>>>> wrote: >>>>>>>>>>> I've compiled PETSc with MUMPS support, and I'm trying to run >>>>>>>>>>> a small test >>>>>>>>>>> problem, but I'm having some problems. It seems to begin just >>>>>>>>>>> fine, but >>>>>>>>>>> what I notice is that on one process (out of 64), the memory >>>>>>>>>>> just keeps >>>>>>>>>>> going up and up and up until it crashes, while on the other >>>>>>>>>>> processes, >>>>>>>>>>> the memory usage is reasonable. I'm wondering if anyone might >>>>>>>>>>> have any idea >>>>>>>>>>> why? By the way, my command file is like this: >>>>>>>>>>> >>>>>>>>>>> -ksp_type preonly >>>>>>>>>>> -pc_type lu >>>>>>>>>>> -mat_type aijmumps >>>>>>>>>>> -mat_mumps_cntl_4 3 >>>>>>>>>>> -mat_mumps_cntl_9 1 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Randy >>>>>>>>>>> >>>>>>>>>>> ps. This happens after the analysis stage and in the >>>>>>>>>>> factorization stage. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>> >>>>>>>> >>>> >>> >> >> > From hartonoa at cse.ohio-state.edu Tue Aug 5 17:24:07 2008 From: hartonoa at cse.ohio-state.edu (Albert Hartono) Date: Tue, 5 Aug 2008 18:24:07 -0400 (EDT) Subject: question -- configure PETSC to exploit multicore machine Message-ID: Hello, I'm new to PETSC and I'm running PETSC on a single node machine that has eight cores. And executing PETSC programs is done interactively (not via batch requests). Is there an easy way to configure/adjust PETSC so that it can run eight MPI processes on that machine (i.e. one MPI process/core)? Thanks, -Albert From balay at mcs.anl.gov Tue Aug 5 17:33:27 2008 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 5 Aug 2008 17:33:27 -0500 (CDT) Subject: question -- configure PETSC to exploit multicore machine In-Reply-To: References: Message-ID: On Tue, 5 Aug 2008, Albert Hartono wrote: > Hello, > > I'm new to PETSC and I'm running PETSC on a single node machine that has > eight cores. And executing PETSC programs is done interactively (not via > batch requests). > > Is there an easy way to configure/adjust PETSC so that it can run eight > MPI processes on that machine (i.e. one MPI process/core)? 
Just install PETSc with MPI - e.g., with the configure option
'--download-mpich=1'.

Now when you run with 'mpiexec -np 8 binary', the 8 MPI processes will be
spawned on this machine - and the OS will make sure that each of the 8 MPI
processes is allocated one of the 8 cores.

Satish

From dimitri.lecas at c-s.fr  Wed Aug  6 03:54:07 2008
From: dimitri.lecas at c-s.fr (LECAS Dimitri)
Date: Wed, 6 Aug 2008 10:54:07 +0200
Subject: MAT_FLUSH_ASSEMBLY
In-Reply-To:
References: <20080723000954.xpd7wlsjuoswc8cc@messagerie.si.c-s.fr>
Message-ID: <20080806105407.cslrflstogwsows4@messagerie.si.c-s.fr>

Quoting Matthew Knepley :

> 2008/7/22 LECAS Dimitri :
>> Hi
>>
>> I am building a matrix (MPISBAIJ) in two steps: in the first I add many
>> contributions with MatSetValue and the ADD_VALUES flag, and in the last
>> step I put in the boundary conditions using the INSERT_VALUES flag.
>>
>> I want to understand how to use MatAssemblyBegin and MatAssemblyEnd. Do I
>> have to call MatAssemblyBegin and MatAssemblyEnd with the
>> MAT_FLUSH_ASSEMBLY flag between the two steps, and then call
>> MatAssemblyBegin and MatAssemblyEnd with the MAT_FINAL_ASSEMBLY flag
>> after the last step?
>
> 1) FLUSH allows you to change the insertion mode, and flushes the
> communication buffers. FINAL does the same and compresses the matrix,
> etc. What you propose should work.

Yes, it works.

>
> 2) If you are just setting some rows to the identity, you might
> consider MatZeroRows().
>

Yes, I am just setting one on the diagonal and zero for the other
off-diagonal values. I tried to use MatZeroRows, but I got an error saying
that it is not possible to call it with an MPISBAIJ matrix. I used
MatSetValue to do this instead, but it adds all the zero values to the
matrix structure. So now I have to build my matrix in CSR format and give
it to PETSc via a MatSetValues call.

--
Dimitri Lecas
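The two-phase assembly discussed in this exchange looks, in outline, like
the sketch below; i, j, v and one stand in for real loop indices and
values, and the flush is what permits switching from ADD_VALUES to
INSERT_VALUES.

PetscInt    i,j;
PetscScalar v,one = 1.0;

/* Phase 1: accumulate element/stencil contributions. */
MatSetValues(A,1,&i,1,&j,&v,ADD_VALUES);
/* ... more ADD_VALUES calls ... */

/* Flush: completes pending communication and allows the insert mode to change. */
MatAssemblyBegin(A,MAT_FLUSH_ASSEMBLY);
MatAssemblyEnd(A,MAT_FLUSH_ASSEMBLY);

/* Phase 2: boundary-condition rows overwrite what is there. */
MatSetValues(A,1,&i,1,&i,&one,INSERT_VALUES);

/* Final assembly: compresses the matrix so it is ready for use. */
MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);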
From tyoung at ippt.gov.pl  Wed Aug  6 09:44:20 2008
From: tyoung at ippt.gov.pl (Toby D. Young)
Date: Wed, 6 Aug 2008 16:44:20 +0200 (CEST)
Subject: Self-consistent problems
In-Reply-To:
References:
Message-ID:

Hello,

Is anyone out there using PETSc/SLEPc to solve self-consistent problems? I
am interested to know an efficient way to save and load vector solutions
for input to the next self-consistent iteration. My problem is to solve
the Schrodinger equation iterated with the Poisson equation.

I am guessing that a binary save, destroy matrix, binary load, then solve
is the best way forward. Any suggestions are very welcome.

Best,
Toby

-----
Toby D. Young - Adiunkt (Assistant Professor)
Department of Computational Science
Institute of Fundamental Technological Research
Polish Academy of Sciences
Room 206, ul. Swietokrzyska 21
00-049 Warszawa, Polska

+48 22 826 12 81 ext. 184
http://rav.ippt.gov.pl/~tyoung

From lua.byhh at gmail.com  Thu Aug  7 08:37:38 2008
From: lua.byhh at gmail.com (berry)
Date: Thu, 7 Aug 2008 21:37:38 +0800
Subject: KSPSetNullSpace and CG for singular systems
In-Reply-To:
References: <20080802005200.GA27199@ats.ucla.edu> <07AC88D4-67E4-45A7-AA21-4FA88A7F1EB7@mcs.anl.gov> <20080804012347.GA2618@ats.ucla.edu>
Message-ID:

Thanks... Shao-Ching, have you solved this problem?

On Mon, Aug 4, 2008 at 11:49 AM, Barry Smith wrote:

> One reason not to use this approach is that it can result in a very
> ill-conditioned linear system that may be difficult for an iterative
> solver.
>
> Barry
>
> [...]
--
Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China

From schuang at ats.ucla.edu  Thu Aug  7 09:15:59 2008
From: schuang at ats.ucla.edu (Shao-Ching Huang)
Date: Thu, 7 Aug 2008 07:15:59 -0700
Subject: KSPSetNullSpace and CG for singular systems
In-Reply-To:
References: <20080802005200.GA27199@ats.ucla.edu> <07AC88D4-67E4-45A7-AA21-4FA88A7F1EB7@mcs.anl.gov> <20080804012347.GA2618@ats.ucla.edu>
Message-ID: <20080807141559.GB22667@ats.ucla.edu>

Yes, thanks.

Shao-Ching

On Thu, Aug 07, 2008 at 09:37:38PM +0800, berry wrote:
> Thanks...
>
> Shao-Ching, have you solved this problem?
>
> [...]
From rxk at cfdrc.com  Thu Aug  7 18:33:07 2008
From: rxk at cfdrc.com (Ravi)
Date: Thu, 7 Aug 2008 18:33:07 -0500
Subject: collaboration with an academic team
Message-ID:

Dear All,

This is Ravi Kannan, Research Engineer at CFD Research Corporation,
Huntsville. We have been using PETSc for quite a while and have had a
great deal of success.

We are looking to "Test and verify" numerical toolkits for linear algebra
for large-scale problems using PETSc. Some of the benchmark cases include:

1. Handling large dense unsymmetrical matrices; for instance, like those
   encountered in radiation surface-2-surface modeling and in boundary
   element methods.
2. Benchmarks of implicit algorithms in thermal and structural analysis.
3. Scalability, precision and round-off studies on serial and parallel
   machines.
4. Benchmarks on indefinite matrices.
Etc.

We are looking for an academic team (with experience in PETSc usage) who
would be interested in collaborating with us.

Feel free to contact us.

Regards
Ravi

Ravi Kannan
Research Engineer
CFD Research Corporation
215 Wynn Drive, Suite 501
Huntsville, AL 35805
(256)726-4851
rxk at cfdrc.com

From jinzishuai at yahoo.com  Fri Aug  8 00:17:10 2008
From: jinzishuai at yahoo.com (Shi Jin)
Date: Thu, 7 Aug 2008 22:17:10 -0700 (PDT)
Subject: collaboration with an academic team
Message-ID: <225474.13943.qm@web36202.mail.mud.yahoo.com>

Dear Mr. Kannan,

This is Shi Jin, a postdoc at the University of Alberta, Edmonton, Canada.
We have been using PETSc for years for our finite element CFD code.
Currently I am developing a new code with petsc-dev to enable parallel
computing on unstructured grids.
I am interested in the kind of collaborations you are proposing. Do you have more details? Thank you very much. -- Shi Jin, PhD http://www.ualberta.ca/~sjin1/ PS. Do you know Sarma Rani at your company? He was a postdoc in my PhD group at Cornell. If so, please delivery my greetings. ----- Original Message ---- > From: Ravi > To: petsc-users at mcs.anl.gov > Cc: ajp at cfdrc.com > Sent: Thursday, August 7, 2008 5:33:07 PM > Subject: colloboration with an academic team > > Dear All, > This is Ravi Kannan, Research Engineer at CFD Research > Corporation, Huntsville. We have been using PETSc for quite a while and have > had great deal of success. > > We are looking to ?Test and verify? numerical toolkits for linear > algebra for large-scale problems using PETSc. Some of the benchmark cases > include: > > 1. Handling large dense unsymmetrical matrices; for instance, like those > encountered in radiation surface-2-surface modeling and in boundary element > methods > 2. Benchmark of implicit algorithms in thermal and structural analysis. > 3. Scalability, precision and round-off studies on serial and parallel > machines. > 4. Benchmark on indefinite matrices > Etc > > We are looking for an academic team, (with experience in PETSc > usage) who would be interested in collaborating with us. > > Feel free to contact us > > Regards > Ravi > > > Ravi Kannan > Research Engineer > CFD Research Corporation > 215 Wynn Drive, Suite 501 > Huntsville, AL 35805 > (256)726-4851 > rxk at cfdrc.com From fernandez858 at gmail.com Fri Aug 8 11:18:42 2008 From: fernandez858 at gmail.com (Michel Cancelliere) Date: Fri, 8 Aug 2008 18:18:42 +0200 Subject: Matlab and PETSc communication In-Reply-To: References: <7f18de3b0807211757g4be0dce6vc25e7c39add03c67@mail.gmail.com> <7C3139A5-983E-4774-A943-DE1C926C7634@mcs.anl.gov> <37604ab40807221156i48fa2832rcd5d6ab94371559f@mail.gmail.com> <7f18de3b0807281033h2af92238ha1561944fc12eedc@mail.gmail.com> Message-ID: <7f18de3b0808080918n2341f1baib9025bd744b7aa77@mail.gmail.com> Hi, I am trying to use the function sread() but when I run the matlab program it give me this error: -One or more output arguments not assigned during call to "sread". and nothing happen... I also tried to use sreader but matlab didn't respond, am I doing something wrong? I am using Matlab R2008a, which matlab version is the most compatible with Petsc? Thank you, Michel On Mon, Jul 28, 2008 at 8:45 PM, Barry Smith wrote: > > On Jul 28, 2008, at 12:33 PM, Michel Cancelliere wrote: > > Hi, >> Thank you for your response, I think the socket communication between >> Matlab and PETSc is what I need, I took a look to the example ex12.c and >> ex12.m (Simple example to show how to PETSc programs can be run from Matlab) >> but in this case PETSc send the Vector solution that matlab "receive", is it >> possible to make the inverse? Send the variable from matlab to PETSc using >> socket communication? can PetscMatlabEngineGet do that ? How it works? >> > > You don't really want to use the Matlab Engine code, because with this > approach the PETSc program is "in charge" and just sends work requests > to Matlab. Instead you want Matlab in charge and to send work requests to > PETSc. > > The basic idea is that both Matlab and PETSc open a socket connection to > each other. On the PETSc side use PetscViewerSocketOpen() on the Matlab side > use the bin/matlab/@sreader/sreader.m class to create a (Matlab) socket > object. 
Now on the PETSc side you can make calls to VecView() to send > vectors to > Matlab and VecLoad() and MatLoad() to read vectors and matrices from > Matlab. On the Matlab side use PetscBinaryRead() and PetscBinaryWrite() to > read vectors from PETSc and to write vectors and matrices to PETSc. Your > PETSc program may end up looking something like > > PetscViewerSocketOpen(..... &socket); > > do forever: > VecLoad(socket, &b) > MatLoad(socket,....,&A) > > KSPSetOperators(ksp,A,A,....) > KSPSolve(ksp,b,x) > VecView(socket,x) > > You Matlab program could look something like > > socket = sread() > > do forever: > > Create your vector and matrix > PetscBinaryWrite(socket,b) > PetscBinaryWrite(socket,A) > x = PetscBinaryRead(socket) > > The details and correct syntax are obviously not given. > > Good luck, > > Barry > > > >> >> Aron, I was studying your recommendation but it means that I have to write >> all the function from my Matlab Code to C?, because it should take me a lot >> of time and this work is part of my MSc Thesis and maybe I need a less time >> consuming solution. >> >> Thank you in advance, >> >> Michel >> >> On Tue, Jul 22, 2008 at 8:56 PM, Aron Ahmadia >> wrote: >> Michel, >> >> I would recommend investing the time to write your C/C++ wrapper code a >> little higher up around the Newton iteration, since PETSc provides a great >> abstraction interface for it. Then you could write code to build the matrix >> (or assemble a matrix-free routine!) in C/C++, and pass the parameters in >> from there. >> >> ~Aron >> >> >> On Tue, Jul 22, 2008 at 12:12 AM, Matthew Knepley >> wrote: >> On Mon, Jul 21, 2008 at 10:14 PM, Barry Smith wrote: >> > >> > On Jul 21, 2008, at 7:57 PM, Michel Cancelliere wrote: >> > >> >> Hi, I am a new user of PETSc. I am working in Reservoir Simulation and >> I >> >> have been developing the simulator inside Matlab. I have some question >> in >> >> order to understand better my possibilities of success in what I want >> to do: >> >> >> >> ? I want to solve the linear system obtained from the inner >> >> iterations in the newton method using PETSc, is it possible to >> communicate >> >> in an efficient way PETSc with Matlab to do that? I now that I can >> write >> >> binary files and then read with PETSc but due the size of the matrix it >> is a >> >> very time-expensive way. Where i can find some examples? I look at the >> >> examples within the package but I could not find it. \ >> >> ? It is possible to call PETSc library inside Matlab? Using the >> Mex >> >> files and Matlab compiler? >> > >> > There is no code to do this. It is possible but pretty complicated to >> > write the appropriate Mex code. (Because >> > each Mex function is a separate shared library you cannot just write a >> Mex >> > function for each PETSc function since they >> > would not share the PETSc global variables. You would have to write one >> Mex >> > function that is a "gatekeeper" and calls >> > the requested PETSc function underneath. I've monkeyed with this a few >> times >> > but did not have the time/energy/intellect >> > to write code to automate this process. Give me 300,000 dollars and we >> could >> > hire someone to write this :-) >> > >> > You might look at the "newly improved" socket interface in petsc-dev >> > (http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html). >> > With this you write a stand alone C PETSc program that waits at a >> socket, >> > receive the matrix and right hand side and then >> > sends back the solution. 
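For concreteness, a single-shot version of the PETSc-side "server" Barry
sketches above, written against the petsc-dev calling sequences of that
time (VecLoad/MatLoad take the viewer first and create the object; the
default socket port is used). The matrix type and variable names are
illustrative assumptions, not from the original posts.

#include "petscksp.h"

/* Receive b and A from Matlab over a socket, solve, and send x back. */
int main(int argc,char **argv)
{
  KSP            ksp;
  Mat            A;
  Vec            b,x;
  PetscViewer    sock;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);CHKERRQ(ierr);
  ierr = PetscViewerSocketOpen(PETSC_COMM_WORLD,PETSC_NULL,PETSC_DEFAULT,&sock);CHKERRQ(ierr);
  ierr = VecLoad(sock,PETSC_NULL,&b);CHKERRQ(ierr);   /* sent by PetscBinaryWrite in Matlab */
  ierr = MatLoad(sock,MATMPIAIJ,&A);CHKERRQ(ierr);
  ierr = VecDuplicate(b,&x);CHKERRQ(ierr);
  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = VecView(x,sock);CHKERRQ(ierr);               /* read with PetscBinaryRead in Matlab */
  ierr = KSPDestroy(ksp);CHKERRQ(ierr);
  ierr = VecDestroy(b);CHKERRQ(ierr);
  ierr = VecDestroy(x);CHKERRQ(ierr);
  ierr = MatDestroy(A);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(sock);CHKERRQ(ierr);
  ierr = PetscFinalize();CHKERRQ(ierr);
  return 0;
}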
The code for marshalling the matrices and >> vector is >> > common between the sockets and binary files. >> > On the Matlab side you create a "file" that is actually a socket >> connection. >> > See src/sys/viewer/impls/socket/matlab This may >> > take a little poking around and you asking us a couple of questions to >> get >> > it. >> > Note there is no inherent support for parallelism on the PETSc side with >> > this setup but I think it is possible. >> >> I personally think this would be much easier in Sage than in Matlab >> proper. In fact, >> with Sage you could use petsc4py directly, and directly access the >> data structures >> as numpy arrays if necessary. >> >> Matt >> >> > Barry >> > >> > >> >> Thank you very much for your time, >> >> >> >> Michel Cancelliere >> > >> > >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Aug 8 13:03:48 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 8 Aug 2008 13:03:48 -0500 Subject: Matlab and PETSc communication In-Reply-To: <7f18de3b0808080918n2341f1baib9025bd744b7aa77@mail.gmail.com> References: <7f18de3b0807211757g4be0dce6vc25e7c39add03c67@mail.gmail.com> <7C3139A5-983E-4774-A943-DE1C926C7634@mcs.anl.gov> <37604ab40807221156i48fa2832rcd5d6ab94371559f@mail.gmail.com> <7f18de3b0807281033h2af92238ha1561944fc12eedc@mail.gmail.com> <7f18de3b0808080918n2341f1baib9025bd744b7aa77@mail.gmail.com> Message-ID: This is the sread code: void mexFunction(int nlhs,mxArray *plhs[],int nrhs,const mxArray *prhs[]) { int fd,cnt,dt; PetscErrorCode ierr; /* check output parameters */ if (nlhs != 1) PETSC_MEX_ERROR("Receive requires one output argument."); if (nrhs != 3) PETSC_MEX_ERROR("Receive requires three input arguments."); fd = (int) mxGetScalar(prhs[0]); cnt = (int) mxGetScalar(prhs[1]); dt = (PetscDataType) mxGetScalar(prhs[2]); if (dt == PETSC_DOUBLE) { plhs[0] = mxCreateDoubleMatrix(1,cnt,mxREAL); ierr = PetscBinaryRead(fd,mxGetPr(plhs[0]),cnt,dt);if (ierr) PETSC_MEX_ERROR("Unable to receive double items."); } else if (dt == PETSC_INT) { plhs[0] = mxCreateNumericMatrix(1,cnt,mxINT32_CLASS,mxREAL); ierr = PetscBinaryRead(fd,mxGetPr(plhs[0]),cnt,dt);if (ierr) PETSC_MEX_ERROR("Unable to receive int items."); } else if (dt == PETSC_CHAR) { char *tmp = (char*) mxMalloc(cnt*sizeof(char)); ierr = PetscBinaryRead(fd,tmp,cnt,dt);if (ierr) PETSC_MEX_ERROR("Unable to receive char items."); plhs[0] = mxCreateStringFromNChars(tmp,cnt); mxFree(tmp); } else { PETSC_MEX_ERROR("Unknown datatype."); } return; } It seems impossible that the one output variable plhs[0] is not assigned? I've used this only with 2007 Matlab. Barry On Aug 8, 2008, at 11:18 AM, Michel Cancelliere wrote: > Hi, > > I am trying to use the function sread() but when I run the matlab > program it give me this error: > > -One or more output arguments not assigned during call to "sread". > > and nothing happen... I also tried to use sreader but matlab didn't > respond, am I doing something wrong? I am using Matlab R2008a, which > matlab version is the most compatible with Petsc? 
> > Thank you, > > Michel > > On Mon, Jul 28, 2008 at 8:45 PM, Barry Smith > wrote: > > On Jul 28, 2008, at 12:33 PM, Michel Cancelliere wrote: > > Hi, > > Thank you for your response, I think the socket communication > between Matlab and PETSc is what I need, I took a look to the > example ex12.c and ex12.m (Simple example to show how to PETSc > programs can be run from Matlab) but in this case PETSc send the > Vector solution that matlab "receive", is it possible to make the > inverse? Send the variable from matlab to PETSc using socket > communication? can PetscMatlabEngineGet do that ? How it works? > > You don't really want to use the Matlab Engine code, because with > this approach the PETSc program is "in charge" and just sends work > requests > to Matlab. Instead you want Matlab in charge and to send work > requests to PETSc. > > The basic idea is that both Matlab and PETSc open a socket > connection to each other. On the PETSc side use > PetscViewerSocketOpen() on the Matlab side > use the bin/matlab/@sreader/sreader.m class to create a (Matlab) > socket object. Now on the PETSc side you can make calls to VecView() > to send vectors to > Matlab and VecLoad() and MatLoad() to read vectors and matrices from > Matlab. On the Matlab side use PetscBinaryRead() and > PetscBinaryWrite() to > read vectors from PETSc and to write vectors and matrices to PETSc. > Your PETSc program may end up looking something like > > PetscViewerSocketOpen(..... &socket); > > do forever: > VecLoad(socket, &b) > MatLoad(socket,....,&A) > > KSPSetOperators(ksp,A,A,....) > KSPSolve(ksp,b,x) > VecView(socket,x) > > You Matlab program could look something like > > socket = sread() > > do forever: > > Create your vector and matrix > PetscBinaryWrite(socket,b) > PetscBinaryWrite(socket,A) > x = PetscBinaryRead(socket) > > The details and correct syntax are obviously not given. > > Good luck, > > Barry > > > > > Aron, I was studying your recommendation but it means that I have to > write all the function from my Matlab Code to C?, because it should > take me a lot of time and this work is part of my MSc Thesis and > maybe I need a less time consuming solution. > > Thank you in advance, > > Michel > > On Tue, Jul 22, 2008 at 8:56 PM, Aron Ahmadia > wrote: > Michel, > > I would recommend investing the time to write your C/C++ wrapper > code a little higher up around the Newton iteration, since PETSc > provides a great abstraction interface for it. Then you could write > code to build the matrix (or assemble a matrix-free routine!) in C/C+ > +, and pass the parameters in from there. > > ~Aron > > > On Tue, Jul 22, 2008 at 12:12 AM, Matthew Knepley > wrote: > On Mon, Jul 21, 2008 at 10:14 PM, Barry Smith > wrote: > > > > On Jul 21, 2008, at 7:57 PM, Michel Cancelliere wrote: > > > >> Hi, I am a new user of PETSc. I am working in Reservoir > Simulation and I > >> have been developing the simulator inside Matlab. I have some > question in > >> order to understand better my possibilities of success in what I > want to do: > >> > >> ? I want to solve the linear system obtained from the inner > >> iterations in the newton method using PETSc, is it possible to > communicate > >> in an efficient way PETSc with Matlab to do that? I now that I > can write > >> binary files and then read with PETSc but due the size of the > matrix it is a > >> very time-expensive way. Where i can find some examples? I look > at the > >> examples within the package but I could not find it. \ > >> ? 
It is possible to call PETSc library inside Matlab? > Using the Mex > >> files and Matlab compiler? > > > > There is no code to do this. It is possible but pretty > complicated to > > write the appropriate Mex code. (Because > > each Mex function is a separate shared library you cannot just > write a Mex > > function for each PETSc function since they > > would not share the PETSc global variables. You would have to > write one Mex > > function that is a "gatekeeper" and calls > > the requested PETSc function underneath. I've monkeyed with this a > few times > > but did not have the time/energy/intellect > > to write code to automate this process. Give me 300,000 dollars > and we could > > hire someone to write this :-) > > > > You might look at the "newly improved" socket interface in petsc- > dev > > (http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html). > > With this you write a stand alone C PETSc program that waits at a > socket, > > receive the matrix and right hand side and then > > sends back the solution. The code for marshalling the matrices and > vector is > > common between the sockets and binary files. > > On the Matlab side you create a "file" that is actually a socket > connection. > > See src/sys/viewer/impls/socket/matlab This may > > take a little poking around and you asking us a couple of > questions to get > > it. > > Note there is no inherent support for parallelism on the PETSc > side with > > this setup but I think it is possible. > > I personally think this would be much easier in Sage than in Matlab > proper. In fact, > with Sage you could use petsc4py directly, and directly access the > data structures > as numpy arrays if necessary. > > Matt > > > Barry > > > > > >> Thank you very much for your time, > >> > >> Michel Cancelliere > > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > > > > From bsmith at mcs.anl.gov Sat Aug 9 19:23:51 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 9 Aug 2008 19:23:51 -0500 Subject: Matlab and PETSc communication In-Reply-To: <7f18de3b0808080918n2341f1baib9025bd744b7aa77@mail.gmail.com> References: <7f18de3b0807211757g4be0dce6vc25e7c39add03c67@mail.gmail.com> <7C3139A5-983E-4774-A943-DE1C926C7634@mcs.anl.gov> <37604ab40807221156i48fa2832rcd5d6ab94371559f@mail.gmail.com> <7f18de3b0807281033h2af92238ha1561944fc12eedc@mail.gmail.com> <7f18de3b0808080918n2341f1baib9025bd744b7aa77@mail.gmail.com> Message-ID: I just installed Matlab R2008a and everything seems to run fine. You open the socket connection on the Matlab end with a = sreader; then use b = PetscBinaryRead(a) to read a PETSc object, for example, sent from ex12.c (At the same time as calling sreader; you need to run ex12 or some other PETSc code that uses the socket viewer) I see that ex12.m is actually for an older version of PETSc and so has outdated commands. I've attached a replacement that should work. -------------- next part -------------- A non-text attachment was scrubbed... Name: ex12.m Type: application/octet-stream Size: 511 bytes Desc: not available URL: -------------- next part -------------- Regarding sending a Matlab object to PETSc I'll see if I can get an example going for you. 
Barry On Aug 8, 2008, at 11:18 AM, Michel Cancelliere wrote: > Hi, > > I am trying to use the function sread() but when I run the matlab > program it give me this error: > > -One or more output arguments not assigned during call to "sread". > > and nothing happen... I also tried to use sreader but matlab didn't > respond, am I doing something wrong? I am using Matlab R2008a, which > matlab version is the most compatible with Petsc? > > Thank you, > > Michel > > On Mon, Jul 28, 2008 at 8:45 PM, Barry Smith > wrote: > > On Jul 28, 2008, at 12:33 PM, Michel Cancelliere wrote: > > Hi, > > Thank you for your response, I think the socket communication > between Matlab and PETSc is what I need, I took a look to the > example ex12.c and ex12.m (Simple example to show how to PETSc > programs can be run from Matlab) but in this case PETSc send the > Vector solution that matlab "receive", is it possible to make the > inverse? Send the variable from matlab to PETSc using socket > communication? can PetscMatlabEngineGet do that ? How it works? > > You don't really want to use the Matlab Engine code, because with > this approach the PETSc program is "in charge" and just sends work > requests > to Matlab. Instead you want Matlab in charge and to send work > requests to PETSc. > > The basic idea is that both Matlab and PETSc open a socket > connection to each other. On the PETSc side use > PetscViewerSocketOpen() on the Matlab side > use the bin/matlab/@sreader/sreader.m class to create a (Matlab) > socket object. Now on the PETSc side you can make calls to VecView() > to send vectors to > Matlab and VecLoad() and MatLoad() to read vectors and matrices from > Matlab. On the Matlab side use PetscBinaryRead() and > PetscBinaryWrite() to > read vectors from PETSc and to write vectors and matrices to PETSc. > Your PETSc program may end up looking something like > > PetscViewerSocketOpen(..... &socket); > > do forever: > VecLoad(socket, &b) > MatLoad(socket,....,&A) > > KSPSetOperators(ksp,A,A,....) > KSPSolve(ksp,b,x) > VecView(socket,x) > > You Matlab program could look something like > > socket = sread() > > do forever: > > Create your vector and matrix > PetscBinaryWrite(socket,b) > PetscBinaryWrite(socket,A) > x = PetscBinaryRead(socket) > > The details and correct syntax are obviously not given. > > Good luck, > > Barry > > > > > Aron, I was studying your recommendation but it means that I have to > write all the function from my Matlab Code to C?, because it should > take me a lot of time and this work is part of my MSc Thesis and > maybe I need a less time consuming solution. > > Thank you in advance, > > Michel > > On Tue, Jul 22, 2008 at 8:56 PM, Aron Ahmadia > wrote: > Michel, > > I would recommend investing the time to write your C/C++ wrapper > code a little higher up around the Newton iteration, since PETSc > provides a great abstraction interface for it. Then you could write > code to build the matrix (or assemble a matrix-free routine!) in C/C+ > +, and pass the parameters in from there. > > ~Aron > > > On Tue, Jul 22, 2008 at 12:12 AM, Matthew Knepley > wrote: > On Mon, Jul 21, 2008 at 10:14 PM, Barry Smith > wrote: > > > > On Jul 21, 2008, at 7:57 PM, Michel Cancelliere wrote: > > > >> Hi, I am a new user of PETSc. I am working in Reservoir > Simulation and I > >> have been developing the simulator inside Matlab. I have some > question in > >> order to understand better my possibilities of success in what I > want to do: > >> > >> ? 
I want to solve the linear system obtained from the inner > >> iterations in the newton method using PETSc, is it possible to > communicate > >> in an efficient way PETSc with Matlab to do that? I now that I > can write > >> binary files and then read with PETSc but due the size of the > matrix it is a > >> very time-expensive way. Where i can find some examples? I look > at the > >> examples within the package but I could not find it. \ > >> ? It is possible to call PETSc library inside Matlab? > Using the Mex > >> files and Matlab compiler? > > > > There is no code to do this. It is possible but pretty > complicated to > > write the appropriate Mex code. (Because > > each Mex function is a separate shared library you cannot just > write a Mex > > function for each PETSc function since they > > would not share the PETSc global variables. You would have to > write one Mex > > function that is a "gatekeeper" and calls > > the requested PETSc function underneath. I've monkeyed with this a > few times > > but did not have the time/energy/intellect > > to write code to automate this process. Give me 300,000 dollars > and we could > > hire someone to write this :-) > > > > You might look at the "newly improved" socket interface in petsc- > dev > > (http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html). > > With this you write a stand alone C PETSc program that waits at a > socket, > > receive the matrix and right hand side and then > > sends back the solution. The code for marshalling the matrices and > vector is > > common between the sockets and binary files. > > On the Matlab side you create a "file" that is actually a socket > connection. > > See src/sys/viewer/impls/socket/matlab This may > > take a little poking around and you asking us a couple of > questions to get > > it. > > Note there is no inherent support for parallelism on the PETSc > side with > > this setup but I think it is possible. > > I personally think this would be much easier in Sage than in Matlab > proper. In fact, > with Sage you could use petsc4py directly, and directly access the > data structures > as numpy arrays if necessary. > > Matt > > > Barry > > > > > >> Thank you very much for your time, > >> > >> Michel Cancelliere > > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > > > > From ctibirna at giref.ulaval.ca Mon Aug 11 15:07:17 2008 From: ctibirna at giref.ulaval.ca (Cristian Tibirna) Date: Mon, 11 Aug 2008 16:07:17 -0400 Subject: searching the archives Message-ID: <200808111607.22010.ctibirna@giref.ulaval.ca> Hello I'm new on this list (but not so new to PETSc ;-). Is there a way to search the archives of the list? Even google seems to be unable to do it correctly. Thank you. -- Cristian Tibirna (1-418-) 656-2131 / 4340 Laval University - Quebec, CAN ... http://www.giref.ulaval.ca/~ctibirna Research professional at GIREF ... ctibirna at giref.ulaval.ca From bsmith at mcs.anl.gov Mon Aug 11 18:03:19 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 11 Aug 2008 18:03:19 -0500 Subject: Fwd: Matlab and PETSc communication References: Message-ID: <17365385-1D5A-4543-A3D9-7296913D6B69@mcs.anl.gov> Michel, I have added an example where Matlab code sends a matrix and vector to PETSc via a socket and receives back the result. 
From ctibirna at giref.ulaval.ca Mon Aug 11 15:07:17 2008
From: ctibirna at giref.ulaval.ca (Cristian Tibirna)
Date: Mon, 11 Aug 2008 16:07:17 -0400
Subject: searching the archives
Message-ID: <200808111607.22010.ctibirna@giref.ulaval.ca>

Hello

I'm new on this list (but not so new to PETSc ;-). Is there a way to
search the archives of the list? Even Google seems unable to do it
correctly.

Thank you.

--
Cristian Tibirna  (1-418-) 656-2131 / 4340
Laval University - Quebec, CAN ... http://www.giref.ulaval.ca/~ctibirna
Research professional at GIREF ... ctibirna at giref.ulaval.ca

From bsmith at mcs.anl.gov Mon Aug 11 18:03:19 2008
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Mon, 11 Aug 2008 18:03:19 -0500
Subject: Fwd: Matlab and PETSc communication
Message-ID: <17365385-1D5A-4543-A3D9-7296913D6B69@mcs.anl.gov>

   Michel,

   I have added an example where Matlab code sends a matrix and vector
to PETSc via a socket and receives back the result. It is in
src/ksp/ksp/examples/tutorials/ex41.m and ex41.c. I have also
reorganized and clarified the code a bit, with better names. You will
have to use petsc-dev to access this:
http://www-unix.mcs.anl.gov/petsc/petsc-2/developers/index.html

   In theory you can run the PETSc solver in parallel; check the
comment in bin/matlab/launch.m to see how to change it to run in
parallel. Please let us know at petsc-maint at mcs.anl.gov if you
have any difficulties.

    Barry

Begin forwarded message:

> From: Barry Smith
> Date: August 9, 2008 7:23:51 PM CDT
> To: petsc-users at mcs.anl.gov
> Subject: Re: Matlab and PETSc communication
> Reply-To: petsc-users at mcs.anl.gov
>
>    I just installed Matlab R2008a and everything seems to run fine.
>
>    You open the socket connection on the Matlab end with a = sreader;
> then use b = PetscBinaryRead(a) to read a PETSc object, for example,
> sent from ex12.c. (At the same time as calling sreader you need to
> run ex12, or some other PETSc code that uses the socket viewer.)
>
>    I see that ex12.m is actually for an older version of PETSc and so
> has outdated commands. I've attached a replacement that should work.

[attachment: ex12.m, application/octet-stream, 511 bytes]

>    Regarding sending a Matlab object to PETSc, I'll see if I can get
> an example going for you.
>
>    Barry
>
> On Aug 8, 2008, at 11:18 AM, Michel Cancelliere wrote:
>
>> I am trying to use the function sread() but when I run the Matlab
>> program it gives me this error: [...]

From balay at mcs.anl.gov Tue Aug 12 02:50:33 2008
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 12 Aug 2008 02:50:33 -0500 (CDT)
Subject: searching the archives
In-Reply-To: <200808111607.22010.ctibirna@giref.ulaval.ca>
References: <200808111607.22010.ctibirna@giref.ulaval.ca>

On Mon, 11 Aug 2008, Cristian Tibirna wrote:

> I'm new on this list (but not so new to PETSc ;-). Is there a way to
> search the archives of the list? Even Google seems unable to do it
> correctly.

I guess we don't have a search feature for our mailing-list archives.
We should be moving from majordomo to something else pretty soon, but
I don't know if the new one will have this feature.

Satish

From lua.byhh at gmail.com Tue Aug 12 05:18:45 2008
From: lua.byhh at gmail.com (berry)
Date: Tue, 12 Aug 2008 18:18:45 +0800
Subject: problem in compiling petsc for visual c++ 2005

Hi,

I am trying to compile PETSc in cygwin for Visual C++ 2005. I followed
the guide exactly from here:
http://www-unix.mcs.anl.gov/petsc/petsc-2/documentation/installation.html

and used the command below:

/usr/bin/python ./config/configure.py --with-cc='win32fe cl --nodetect'
--with-fc='win32fe ifort --nodetect' --with-mpi=1 --with-f-blas-lapack=1
--with-hypre=1

The PETSc installation script really does start to run. However, it
fails with the errors below:

=================================================================
 Configuring PETSc to compile on your system
=================================================================
*****************************************************************
 UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
-----------------------------------------------------------------
Unable to determine host type using
/cygdrive/c/cygwin/home/pangshengyong/soft/petsc-2.3.3-p13/python/BuildSystem/config/packages/config.sub:
Could not execute '/bin/sh
/cygdrive/c/cygwin/home/pangshengyong/soft/petsc-2.3.3-p13/python/BuildSystem/config/packages/config.guess':
/cygdrive/c/cygwin/home/pangshengyong/soft/petsc-2.3.3-p13/python/BuildSystem/config/packages/config.guess: line 38: sed: command not found
/cygdrive/c/cygwin/home/pangshengyong/soft/petsc-2.3.3-p13/python/BuildSystem/config/packages/config.guess: line 1272: mkdir: command not found
/cygdrive/c/cygwin/home/pangshengyong/soft/petsc-2.3.3-p13/python/BuildSystem/config/packages/config.guess: line 1272: mkdir: command not found
: cannot create a temporary directory in /tmp
*****************************************************************

It seems that the configure script fails to find the correct path.
Could anyone point out how to set the correct path for the configure
script?

Btw: I have successfully compiled the PETSc library on cygwin for gcc.

Thanks.
--
Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China

From knepley at gmail.com Tue Aug 12 06:34:52 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 12 Aug 2008 06:34:52 -0500
Subject: problem in compiling petsc for visual c++ 2005

On Tue, Aug 12, 2008 at 5:18 AM, berry wrote:
> I am trying to compile PETSc in cygwin for Visual C++ 2005.
> [...]
> It seems that the configure script fails to find the correct path.

I am not sure how you built PETSc on cygwin before. Perhaps a
different copy? This error seems to occur because your cygwin is
missing the command-line utilities 'sed' and 'mkdir'.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

From lua.byhh at gmail.com Tue Aug 12 08:03:46 2008
From: lua.byhh at gmail.com (berry)
Date: Tue, 12 Aug 2008 21:03:46 +0800
Subject: problem in compiling petsc for visual c++ 2005

Hi, Matt

Previously I installed PETSc on cygwin following the official guide
from the PETSc website. Because I started from the cygwin shell, both
the 'mkdir' and 'sed' commands were visible to the PETSc configure
script.

But for the VC 2005 version, I start from the VC 2005 command-line
prompt and type c:\cygwin\bin\bash to get to the bash shell. I am not
sure why, in this way, the installation script cannot see 'sed' and
'mkdir'. By the way, it also misses python, so I gave an absolute path
for executing configure.py. Although configure.py starts to run, it
misses some command-line utilities in the sub-installation scripts.

thanks

On Tue, Aug 12, 2008 at 7:34 PM, Matthew Knepley wrote:
> I am not sure how you built PETSc on cygwin before. Perhaps a
> different copy? This error seems to occur because your cygwin is
> missing the command-line utilities 'sed' and 'mkdir'.
> [...]

--
Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China

From knepley at gmail.com Tue Aug 12 08:23:56 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 12 Aug 2008 08:23:56 -0500
Subject: problem in compiling petsc for visual c++ 2005

On Tue, Aug 12, 2008 at 8:03 AM, berry wrote:
> But for the VC 2005 version, I start from the VC 2005 command-line
> prompt and type c:\cygwin\bin\bash to get to the bash shell.
> [...]

Sounds like your path is messed up.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

From balay at mcs.anl.gov Tue Aug 12 09:21:10 2008
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 12 Aug 2008 09:21:10 -0500 (CDT)
Subject: problem in compiling petsc for visual c++ 2005

On Tue, 12 Aug 2008, berry wrote:

> But for the VC 2005 version, I start from the VC 2005 command-line
> prompt and type c:\cygwin\bin\bash to get to the bash shell.
> [...]

try

  c:\cygwin\bin\bash --login

and then verify that 'mkdir', 'sed' etc. are in your PATH:

  which sed
  which mkdir

Satish

From lua.byhh at gmail.com Tue Aug 12 09:26:23 2008
From: lua.byhh at gmail.com (berry)
Date: Tue, 12 Aug 2008 22:26:23 +0800
Subject: problem in compiling petsc for visual c++ 2005

Thanks, I have already logged into cygwin from the VC command prompt.

On Tue, Aug 12, 2008 at 10:21 PM, Satish Balay wrote:
> try
>
>   c:\cygwin\bin\bash --login
> [...]

--
Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China

From balay at mcs.anl.gov Tue Aug 12 09:43:11 2008
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 12 Aug 2008 09:43:11 -0500 (CDT)
Subject: problem in compiling petsc for visual c++ 2005

On Tue, 12 Aug 2008, berry wrote:

> Thanks, I have already logged into cygwin from the VC command prompt.

What does this mean? Do you now have mkdir and sed in your path?

If not - redo the VC cmd prompt and 'bash --login' - and let us know:

- whether mkdir & sed are in your PATH
    which sed
    which mkdir

- whether configure is now successful or not.

Satish

From ctibirna at giref.ulaval.ca Tue Aug 12 11:08:13 2008
From: ctibirna at giref.ulaval.ca (Cristian Tibirna)
Date: Tue, 12 Aug 2008 12:08:13 -0400
Subject: negative indices
Message-ID: <200808121208.15753.ctibirna@giref.ulaval.ca>

Hello

We recently decided to use PETSc's negative-indices feature in a FEM
code that originated a long time ago, long before PETSc 2.0.24 (which,
it seems, introduced negative indices for matrices and vectors).

I attempted to write an algorithm that, at assembly, produces negative
indices for the matrix rows and columns and the vector entries imposed
by Dirichlet boundary conditions. All seemed straightforward up until
the final testing.

Our FEM solver is written "in terms of correction". Thus, we get the
solution as a correction vector, which should have null entries
corresponding to the original Dirichlet-imposed DOFs.

To my surprise, even if I initialize the correction vector by filling
it with zeros, I get the Dirichlet-imposed entries (and then a few
more) set to NaNs.

The bigger surprise is that, even if I manually set the
Dirichlet-imposed entries in the correction vector to zero _after_ the
solver finishes, I still have the "few more" entries set to NaN.

All the values in the correction vector that are not NaNs are the
exact solution of the system, as expected. Thus, I conclude that the
solver assembles the matrix and the residual vector correctly and
returns the right solution of the matrix system.

Any idea what I should look at for debugging this?

NOTE: I solve the system with PREONLY (LU preconditioner, of course).

Thanks in advance for any help.

--
Cristian Tibirna  (1-418-) 656-2131 / 4340
Laval University - Quebec, CAN ... http://www.giref.ulaval.ca/~ctibirna
Research professional at GIREF ... ctibirna at giref.ulaval.ca

From knepley at gmail.com Tue Aug 12 11:27:47 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 12 Aug 2008 11:27:47 -0500
Subject: negative indices

If you ignore entries for the rows and columns associated with BCs,
but do not eliminate them from the ordering, do you remember to put
something on the diagonal of the Jacobian?

   Matt

On Tue, Aug 12, 2008 at 11:08 AM, Cristian Tibirna wrote:
> We recently decided to use PETSc's negative-indices feature in a FEM
> code that originated a long time ago [...]

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
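One common way to realize Matt's suggestion (keep the BC rows in the
ordering, but give each a diagonal entry so the factored matrix stays
nonsingular) is PETSc's MatZeroRows(). A hedged sketch, assuming the
petsc-2.3.x-era MatZeroRows() signature (it has changed between
releases); nbc and bcrows are hypothetical names for the caller's list
of constrained global rows:

    #include "petscmat.h"
    #include "petscvec.h"

    /* Sketch: after assembly, make the Dirichlet rows trivially
       solvable so a direct factorization never sees an empty
       (singular) row. Assumes petsc-2.3.x signatures. */
    PetscErrorCode ApplyHomogeneousDirichlet(Mat A, Vec b, PetscInt nbc,
                                             const PetscInt bcrows[])
    {
      PetscInt    i;
      PetscScalar zero = 0.0;

      /* zero each constrained row and put 1.0 on its diagonal */
      MatZeroRows(A, nbc, bcrows, 1.0);
      /* in a correction formulation the imposed correction is zero,
         so zero the matching rhs entries as well */
      for (i = 0; i < nbc; i++)
        VecSetValues(b, 1, &bcrows[i], &zero, INSERT_VALUES);
      VecAssemblyBegin(b);
      VecAssemblyEnd(b);
      return 0;
    }

With this, each constrained equation reads 1 * x_i = 0, so both
iterative and direct solvers leave those entries exactly zero instead
of producing NaNs.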
From lua.byhh at gmail.com Tue Aug 12 12:19:39 2008
From: lua.byhh at gmail.com (berry)
Date: Wed, 13 Aug 2008 01:19:39 +0800
Subject: problem in compiling petsc for visual c++ 2005

Hi, Satish

Sorry for my vague reply.

I just used 'bash', not 'bash --login'. With 'bash' only, the
configure script cannot find the 'mkdir' and 'sed' commands, as you
kindly pointed out to me.

Then I tried the utility '/usr/bin/run.exe -ls' to open an xterm
window, and succeeded. In this window I can now find 'mkdir' and
'sed', plus Visual C++'s 'cl.exe' and Intel Fortran's 'ifort.exe'.

After that, I configured PETSc again with './config/configure.py' (I
cannot remember the other parameters for mpich, lapack, blas, win32fe
cl, win32fe ifort and hypre now, but I followed the exact guidance
from the official PETSc website).

However, after several minutes of configuration, the PETSc
configuration script still CRASHED, with no error prompt on screen. It
just told me to send the config.log file to the developers. Right now
I am not in my office; tomorrow I will send the config.log file to the
corresponding email address.

Hope this time you can understand what I say.

Thanks for your help!

On Tue, Aug 12, 2008 at 10:43 PM, Satish Balay wrote:
> What does this mean? Do you now have mkdir and sed in your path?
> [...]

--
Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China

From mossaiby at yahoo.com Wed Aug 13 01:44:50 2008
From: mossaiby at yahoo.com (Farshid Mossaiby)
Date: Tue, 12 Aug 2008 23:44:50 -0700 (PDT)
Subject: problem in compiling petsc for visual c++ 2005
Message-ID: <697577.98848.qm@web52203.mail.re2.yahoo.com>

Hi,

I have built PETSc a lot of times with VC++ 2008 Express edition and
Cygwin. I used to run C:\Cygwin\Cygwin.bat, and had no problem.
I just looked into the batch file, and it contains: ---------------------------- @echo off C: chdir C:\Cygwin\bin bash --login -i ---------------------------- I am not sure what '-i' in bash parameters mean, but it may help you. Anyway, I still recommend you running the batch file itself. Best regards, Farshid Mossaiby --- On Tue, 8/12/08, berry wrote: > From: berry > Subject: Re: problem in compiling petsc for visual c++ 2005 > To: petsc-users at mcs.anl.gov > Date: Tuesday, August 12, 2008, 9:49 PM > Hi, Satish > > Sorry for my vague reply. > > I just use 'bash' not 'bash --login'. With > 'bash' only, the configure script > can not find 'mkdir' and 'sed' command, as > you kindly points me out. > > Then I try this utilities: 'usr/bin/run.exe -ls' > to open a xterm window. I > Successed. In this window, I can find 'mkdir' and > 'sed' plus visual c++'s > 'cl.exe' and intel fortran 's > 'ifort.exe' now. > > After that, I do configure petsc again with > './config/configure.py '( I can > not remember other paramters for mpich , lapack, blas, > winfe cl , winfe > ifort and hypre now, but I follow the exact guidence from > petsc official > website). > > However, after several minutes's configuration, petsc > configuration script > still CRASHED with no error prompt on screen. It just tell > me to send the > config.log file to developers. Right now I am not in my > office, tomorrow I > will send the config.log file to corresponding email > address. > > Hope this time you can understand what I say. > > Thanks for you help! > > On Tue, Aug 12, 2008 at 10:43 PM, Satish Balay > wrote: > > > > > On Tue, 12 Aug 2008, berry wrote: > > > > > Thanks, I have already logged into the cygwin > from VC command prompt. > > > > What does this mean? do you now have mkdir and sed in > your path? > > > > If not - redo VC cmd prompt - and 'bash > --login' - and let us know if: > > > > - mkdir & sed are in your PATH > > which sed > > which mkdir > > > > - now configure is successful or not.. > > > > > > Satish > > > > > > > -- > Pang Shengyong > Solidification Simulation Lab, > State Key Lab of Mould & Die Technology, > Huazhong Univ. of Sci. & Tech. China From lua.byhh at gmail.com Wed Aug 13 02:09:20 2008 From: lua.byhh at gmail.com (berry) Date: Wed, 13 Aug 2008 15:09:20 +0800 Subject: problem in compiling petsc for visual c++ 2005 In-Reply-To: <697577.98848.qm@web52203.mail.re2.yahoo.com> References: <697577.98848.qm@web52203.mail.re2.yahoo.com> Message-ID: Hi, Farshid Thanks for you suggestions. Can you tell me the surfix of your compiled library is .a or .lib? On Wed, Aug 13, 2008 at 2:44 PM, Farshid Mossaiby wrote: > Hi, > > I have built Petsc a lot of times with VC++ 2008 Express edition and > Cygwin. I used to run C:\Cygwin\Cygwin.bat, and had no problem. > > I just looked into the batch file, and it contains: > > ---------------------------- > @echo off > > C: > chdir C:\Cygwin\bin > > bash --login -i > ---------------------------- > > I am not sure what '-i' in bash parameters mean, but it may help you. > Anyway, I still recommend you running the batch file itself. > > Best regards, > Farshid Mossaiby > > --- On Tue, 8/12/08, berry wrote: > > > From: berry > > Subject: Re: problem in compiling petsc for visual c++ 2005 > > To: petsc-users at mcs.anl.gov > > Date: Tuesday, August 12, 2008, 9:49 PM > > Hi, Satish > > > > Sorry for my vague reply. > > > > I just use 'bash' not 'bash --login'. 
With > > 'bash' only, the configure script > > can not find 'mkdir' and 'sed' command, as > > you kindly points me out. > > > > Then I try this utilities: 'usr/bin/run.exe -ls' > > to open a xterm window. I > > Successed. In this window, I can find 'mkdir' and > > 'sed' plus visual c++'s > > 'cl.exe' and intel fortran 's > > 'ifort.exe' now. > > > > After that, I do configure petsc again with > > './config/configure.py '( I can > > not remember other paramters for mpich , lapack, blas, > > winfe cl , winfe > > ifort and hypre now, but I follow the exact guidence from > > petsc official > > website). > > > > However, after several minutes's configuration, petsc > > configuration script > > still CRASHED with no error prompt on screen. It just tell > > me to send the > > config.log file to developers. Right now I am not in my > > office, tomorrow I > > will send the config.log file to corresponding email > > address. > > > > Hope this time you can understand what I say. > > > > Thanks for you help! > > > > On Tue, Aug 12, 2008 at 10:43 PM, Satish Balay > > wrote: > > > > > > > > On Tue, 12 Aug 2008, berry wrote: > > > > > > > Thanks, I have already logged into the cygwin > > from VC command prompt. > > > > > > What does this mean? do you now have mkdir and sed in > > your path? > > > > > > If not - redo VC cmd prompt - and 'bash > > --login' - and let us know if: > > > > > > - mkdir & sed are in your PATH > > > which sed > > > which mkdir > > > > > > - now configure is successful or not.. > > > > > > > > > Satish > > > > > > > > > > > > -- > > Pang Shengyong > > Solidification Simulation Lab, > > State Key Lab of Mould & Die Technology, > > Huazhong Univ. of Sci. & Tech. China > > > > > -- Pang Shengyong Solidification Simulation Lab, State Key Lab of Mould & Die Technology, Huazhong Univ. of Sci. & Tech. China -------------- next part -------------- An HTML attachment was scrubbed... URL: From mossaiby at yahoo.com Wed Aug 13 03:06:17 2008 From: mossaiby at yahoo.com (Farshid Mossaiby) Date: Wed, 13 Aug 2008 01:06:17 -0700 (PDT) Subject: problem in compiling petsc for visual c++ 2005 In-Reply-To: Message-ID: <674106.58951.qm@web52203.mail.re2.yahoo.com> Hi, They are .lib files. Best regards, Farshid Mossaiby --- On Wed, 8/13/08, berry wrote: > From: berry > Subject: Re: problem in compiling petsc for visual c++ 2005 > To: petsc-users at mcs.anl.gov > Date: Wednesday, August 13, 2008, 11:39 AM > Hi, Farshid > > Thanks for you suggestions. Can you tell me the surfix of > your compiled > library is .a or .lib? > > > On Wed, Aug 13, 2008 at 2:44 PM, Farshid Mossaiby > wrote: > > > Hi, > > > > I have built Petsc a lot of times with VC++ 2008 > Express edition and > > Cygwin. I used to run C:\Cygwin\Cygwin.bat, > and had no problem. > > > > I just looked into the batch file, and it contains: > > > > ---------------------------- > > @echo off > > > > C: > > chdir C:\Cygwin\bin > > > > bash --login -i > > ---------------------------- > > > > I am not sure what '-i' in bash parameters > mean, but it may help you. > > Anyway, I still recommend you running the batch file > itself. > > > > Best regards, > > Farshid Mossaiby > > > > --- On Tue, 8/12/08, berry > wrote: > > > > > From: berry > > > Subject: Re: problem in compiling petsc for > visual c++ 2005 > > > To: petsc-users at mcs.anl.gov > > > Date: Tuesday, August 12, 2008, 9:49 PM > > > Hi, Satish > > > > > > Sorry for my vague reply. > > > > > > I just use 'bash' not 'bash > --login'. 
With > > > 'bash' only, the configure script > > > can not find 'mkdir' and 'sed' > command, as > > > you kindly points me out. > > > > > > Then I try this utilities: 'usr/bin/run.exe > -ls' > > > to open a xterm window. I > > > Successed. In this window, I can find > 'mkdir' and > > > 'sed' plus visual c++'s > > > 'cl.exe' and intel fortran 's > > > 'ifort.exe' now. > > > > > > After that, I do configure petsc again with > > > './config/configure.py '( I can > > > not remember other paramters for mpich , lapack, > blas, > > > winfe cl , winfe > > > ifort and hypre now, but I follow the exact > guidence from > > > petsc official > > > website). > > > > > > However, after several minutes's > configuration, petsc > > > configuration script > > > still CRASHED with no error prompt on screen. It > just tell > > > me to send the > > > config.log file to developers. Right now I am not > in my > > > office, tomorrow I > > > will send the config.log file to corresponding > email > > > address. > > > > > > Hope this time you can understand what I say. > > > > > > Thanks for you help! > > > > > > On Tue, Aug 12, 2008 at 10:43 PM, Satish Balay > > > wrote: > > > > > > > > > > > On Tue, 12 Aug 2008, berry wrote: > > > > > > > > > Thanks, I have already logged into the > cygwin > > > from VC command prompt. > > > > > > > > What does this mean? do you now have mkdir > and sed in > > > your path? > > > > > > > > If not - redo VC cmd prompt - and 'bash > > > --login' - and let us know if: > > > > > > > > - mkdir & sed are in your PATH > > > > which sed > > > > which mkdir > > > > > > > > - now configure is successful or not.. > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > -- > > > Pang Shengyong > > > Solidification Simulation Lab, > > > State Key Lab of Mould & Die Technology, > > > Huazhong Univ. of Sci. & Tech. China > > > > > > > > > > > > > -- > Pang Shengyong > Solidification Simulation Lab, > State Key Lab of Mould & Die Technology, > Huazhong Univ. of Sci. & Tech. China From lua.byhh at gmail.com Wed Aug 13 03:20:58 2008 From: lua.byhh at gmail.com (berry) Date: Wed, 13 Aug 2008 16:20:58 +0800 Subject: problem in compiling petsc for visual c++ 2005 In-Reply-To: <674106.58951.qm@web52203.mail.re2.yahoo.com> References: <674106.58951.qm@web52203.mail.re2.yahoo.com> Message-ID: Hi, How can you change the Linker to VC 2008 's link.exe? I configured the script successfuly. But my linker option (PETSc generated) is /usr/bin/ar. Below is part of the scratch configure results. 
Compilers: C Compiler: /home/pangshengyong/soft/petsc-2.3.3-p13/bin/win32fe/win32 fe cl --nodetect -wd4996 -MT C++ Compiler: /home/pangshengyong/soft/petsc-2.3.3-p13/bin/win32fe/win32 fe cl --nodetect -GR -GX -EHsc -MT -O2 -Zm200 -TP Fortran Compiler: /home/pangshengyong/soft/petsc-2.3.3-p13/bin/win32fe/win32 fe ifort --nodetect -MT -O3 -QxW -fpp *Linkers: Static linker: /usr/bin/ar cr* PETSc: PETSC_ARCH: windows_VC PETSC_DIR: /home/pangshengyong/soft/petsc-2.3.3-p13 ** ** Now build and test the libraries with "make all test" ** Clanguage: C Scalar type:real PETSc shared libraries: disabled PETSc dynamic libraries: disabled BLAS/LAPACK: -L/home/pangshengyong/soft/petsc-2.3.3-p13/externalpackages/fblasla pack/windows_VC -L/home/pangshengyong/soft/petsc-2.3.3-p13/externalpackages/fbla slapack/windows_VC -lflapack -L/home/pangshengyong/soft/petsc-2.3.3-p13/external packages/fblaslapack/windows_VC -L/home/pangshengyong/soft/petsc-2.3.3-p13/exter nalpackages/fblaslapack/windows_VC -lfblas Regards, On Wed, Aug 13, 2008 at 4:06 PM, Farshid Mossaiby wrote: > Hi, > > They are .lib files. > > Best regards, > Farshid Mossaiby > > > --- On Wed, 8/13/08, berry wrote: > > > From: berry > > Subject: Re: problem in compiling petsc for visual c++ 2005 > > To: petsc-users at mcs.anl.gov > > Date: Wednesday, August 13, 2008, 11:39 AM > > Hi, Farshid > > > > Thanks for you suggestions. Can you tell me the surfix of > > your compiled > > library is .a or .lib? > > > > > > On Wed, Aug 13, 2008 at 2:44 PM, Farshid Mossaiby > > wrote: > > > > > Hi, > > > > > > I have built Petsc a lot of times with VC++ 2008 > > Express edition and > > > Cygwin. I used to run C:\Cygwin\Cygwin.bat, > > and had no problem. > > > > > > I just looked into the batch file, and it contains: > > > > > > ---------------------------- > > > @echo off > > > > > > C: > > > chdir C:\Cygwin\bin > > > > > > bash --login -i > > > ---------------------------- > > > > > > I am not sure what '-i' in bash parameters > > mean, but it may help you. > > > Anyway, I still recommend you running the batch file > > itself. > > > > > > Best regards, > > > Farshid Mossaiby > > > > > > --- On Tue, 8/12/08, berry > > wrote: > > > > > > > From: berry > > > > Subject: Re: problem in compiling petsc for > > visual c++ 2005 > > > > To: petsc-users at mcs.anl.gov > > > > Date: Tuesday, August 12, 2008, 9:49 PM > > > > Hi, Satish > > > > > > > > Sorry for my vague reply. > > > > > > > > I just use 'bash' not 'bash > > --login'. With > > > > 'bash' only, the configure script > > > > can not find 'mkdir' and 'sed' > > command, as > > > > you kindly points me out. > > > > > > > > Then I try this utilities: 'usr/bin/run.exe > > -ls' > > > > to open a xterm window. I > > > > Successed. In this window, I can find > > 'mkdir' and > > > > 'sed' plus visual c++'s > > > > 'cl.exe' and intel fortran 's > > > > 'ifort.exe' now. > > > > > > > > After that, I do configure petsc again with > > > > './config/configure.py '( I can > > > > not remember other paramters for mpich , lapack, > > blas, > > > > winfe cl , winfe > > > > ifort and hypre now, but I follow the exact > > guidence from > > > > petsc official > > > > website). > > > > > > > > However, after several minutes's > > configuration, petsc > > > > configuration script > > > > still CRASHED with no error prompt on screen. It > > just tell > > > > me to send the > > > > config.log file to developers. 
Right now I am not > > in my > > > > office, tomorrow I > > > > will send the config.log file to corresponding > > email > > > > address. > > > > > > > > Hope this time you can understand what I say. > > > > > > > > Thanks for you help! > > > > > > > > On Tue, Aug 12, 2008 at 10:43 PM, Satish Balay > > > > wrote: > > > > > > > > > > > > > > On Tue, 12 Aug 2008, berry wrote: > > > > > > > > > > > Thanks, I have already logged into the > > cygwin > > > > from VC command prompt. > > > > > > > > > > What does this mean? do you now have mkdir > > and sed in > > > > your path? > > > > > > > > > > If not - redo VC cmd prompt - and 'bash > > > > --login' - and let us know if: > > > > > > > > > > - mkdir & sed are in your PATH > > > > > which sed > > > > > which mkdir > > > > > > > > > > - now configure is successful or not.. > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > -- > > > > Pang Shengyong > > > > Solidification Simulation Lab, > > > > State Key Lab of Mould & Die Technology, > > > > Huazhong Univ. of Sci. & Tech. China > > > > > > > > > > > > > > > > > > > > > -- > > Pang Shengyong > > Solidification Simulation Lab, > > State Key Lab of Mould & Die Technology, > > Huazhong Univ. of Sci. & Tech. China > > > > > -- Pang Shengyong Solidification Simulation Lab, State Key Lab of Mould & Die Technology, Huazhong Univ. of Sci. & Tech. China -------------- next part -------------- An HTML attachment was scrubbed... URL: From mossaiby at yahoo.com Wed Aug 13 03:27:07 2008 From: mossaiby at yahoo.com (Farshid Mossaiby) Date: Wed, 13 Aug 2008 01:27:07 -0700 (PDT) Subject: problem in compiling petsc for visual c++ 2005 In-Reply-To: Message-ID: <149684.22074.qm@web52208.mail.re2.yahoo.com> Hi, I am not sure how to do this, but why you need it? The resulting .lib files can later be used in your own programs without any problems. Best regards, Farshid Mossaiby --- On Wed, 8/13/08, berry wrote: > From: berry > Subject: Re: problem in compiling petsc for visual c++ 2005 > To: petsc-users at mcs.anl.gov > Date: Wednesday, August 13, 2008, 12:50 PM > Hi, > > How can you change the Linker to VC 2008 's link.exe? > I configured the > script successfuly. But my linker option (PETSc generated) > is /usr/bin/ar. Below is part of the scratch configure > results. > > Compilers: > C Compiler: > /home/pangshengyong/soft/petsc-2.3.3-p13/bin/win32fe/win32 > fe cl --nodetect -wd4996 -MT > C++ Compiler: > /home/pangshengyong/soft/petsc-2.3.3-p13/bin/win32fe/win32 > fe cl --nodetect -GR -GX -EHsc -MT -O2 -Zm200 -TP > Fortran Compiler: > /home/pangshengyong/soft/petsc-2.3.3-p13/bin/win32fe/win32 > fe ifort --nodetect -MT -O3 -QxW -fpp > *Linkers: > Static linker: /usr/bin/ar cr* > PETSc: > PETSC_ARCH: windows_VC > PETSC_DIR: /home/pangshengyong/soft/petsc-2.3.3-p13 > ** > ** Now build and test the libraries with "make all > test" > ** > Clanguage: C > Scalar type:real > PETSc shared libraries: disabled > PETSc dynamic libraries: disabled > BLAS/LAPACK: > -L/home/pangshengyong/soft/petsc-2.3.3-p13/externalpackages/fblasla > pack/windows_VC > -L/home/pangshengyong/soft/petsc-2.3.3-p13/externalpackages/fbla > slapack/windows_VC -lflapack > -L/home/pangshengyong/soft/petsc-2.3.3-p13/external > packages/fblaslapack/windows_VC > -L/home/pangshengyong/soft/petsc-2.3.3-p13/exter > nalpackages/fblaslapack/windows_VC -lfblas > > > Regards, > > > On Wed, Aug 13, 2008 at 4:06 PM, Farshid Mossaiby > wrote: > > > Hi, > > > > They are .lib files. 
> > > > Best regards, > > Farshid Mossaiby > > > > > > --- On Wed, 8/13/08, berry > wrote: > > > > > From: berry > > > Subject: Re: problem in compiling petsc for > visual c++ 2005 > > > To: petsc-users at mcs.anl.gov > > > Date: Wednesday, August 13, 2008, 11:39 AM > > > Hi, Farshid > > > > > > Thanks for you suggestions. Can you tell me the > surfix of > > > your compiled > > > library is .a or .lib? > > > > > > > > > On Wed, Aug 13, 2008 at 2:44 PM, Farshid Mossaiby > > > wrote: > > > > > > > Hi, > > > > > > > > I have built Petsc a lot of times with VC++ > 2008 > > > Express edition and > > > > Cygwin. I used to run > C:\Cygwin\Cygwin.bat, > > > and had no problem. > > > > > > > > I just looked into the batch file, and it > contains: > > > > > > > > ---------------------------- > > > > @echo off > > > > > > > > C: > > > > chdir C:\Cygwin\bin > > > > > > > > bash --login -i > > > > ---------------------------- > > > > > > > > I am not sure what '-i' in bash > parameters > > > mean, but it may help you. > > > > Anyway, I still recommend you running the > batch file > > > itself. > > > > > > > > Best regards, > > > > Farshid Mossaiby > > > > > > > > --- On Tue, 8/12/08, berry > > > > wrote: > > > > > > > > > From: berry > > > > > Subject: Re: problem in compiling petsc > for > > > visual c++ 2005 > > > > > To: petsc-users at mcs.anl.gov > > > > > Date: Tuesday, August 12, 2008, 9:49 PM > > > > > Hi, Satish > > > > > > > > > > Sorry for my vague reply. > > > > > > > > > > I just use 'bash' not 'bash > > > --login'. With > > > > > 'bash' only, the configure > script > > > > > can not find 'mkdir' and > 'sed' > > > command, as > > > > > you kindly points me out. > > > > > > > > > > Then I try this utilities: > 'usr/bin/run.exe > > > -ls' > > > > > to open a xterm window. I > > > > > Successed. In this window, I can find > > > 'mkdir' and > > > > > 'sed' plus visual c++'s > > > > > 'cl.exe' and intel fortran > 's > > > > > 'ifort.exe' now. > > > > > > > > > > After that, I do configure petsc again > with > > > > > './config/configure.py '( I can > > > > > not remember other paramters for mpich > , lapack, > > > blas, > > > > > winfe cl , winfe > > > > > ifort and hypre now, but I follow the > exact > > > guidence from > > > > > petsc official > > > > > website). > > > > > > > > > > However, after several minutes's > > > configuration, petsc > > > > > configuration script > > > > > still CRASHED with no error prompt on > screen. It > > > just tell > > > > > me to send the > > > > > config.log file to developers. Right > now I am not > > > in my > > > > > office, tomorrow I > > > > > will send the config.log file to > corresponding > > > email > > > > > address. > > > > > > > > > > Hope this time you can understand what > I say. > > > > > > > > > > Thanks for you help! > > > > > > > > > > On Tue, Aug 12, 2008 at 10:43 PM, > Satish Balay > > > > > wrote: > > > > > > > > > > > > > > > > > On Tue, 12 Aug 2008, berry wrote: > > > > > > > > > > > > > Thanks, I have already > logged into the > > > cygwin > > > > > from VC command prompt. > > > > > > > > > > > > What does this mean? do you now > have mkdir > > > and sed in > > > > > your path? > > > > > > > > > > > > If not - redo VC cmd prompt - and > 'bash > > > > > --login' - and let us know if: > > > > > > > > > > > > - mkdir & sed are in your PATH > > > > > > which sed > > > > > > which mkdir > > > > > > > > > > > > - now configure is successful or > not.. 
> > > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > Pang Shengyong > > > > > Solidification Simulation Lab, > > > > > State Key Lab of Mould & Die > Technology, > > > > > Huazhong Univ. of Sci. & Tech. > China > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > Pang Shengyong > > > Solidification Simulation Lab, > > > State Key Lab of Mould & Die Technology, > > > Huazhong Univ. of Sci. & Tech. China > > > > > > > > > > > > > -- > Pang Shengyong > Solidification Simulation Lab, > State Key Lab of Mould & Die Technology, > Huazhong Univ. of Sci. & Tech. China From lua.byhh at gmail.com Wed Aug 13 03:44:56 2008 From: lua.byhh at gmail.com (berry) Date: Wed, 13 Aug 2008 16:44:56 +0800 Subject: problem in compiling petsc for visual c++ 2005 In-Reply-To: <149684.22074.qm@web52208.mail.re2.yahoo.com> References: <149684.22074.qm@web52208.mail.re2.yahoo.com> Message-ID: Hi, Thanks for you reply. Previously may be I did not set the cxx compiler , so 'make all test' produces '.a ' file. This time it produces correct '.lib' files after the correct setting. Best Regards, On Wed, Aug 13, 2008 at 4:27 PM, Farshid Mossaiby wrote: > Hi, > > I am not sure how to do this, but why you need it? The resulting .lib files > can later be used in your own programs without any problems. > > Best regards, > Farshid Mossaiby > > > --- On Wed, 8/13/08, berry wrote: > > > From: berry > > Subject: Re: problem in compiling petsc for visual c++ 2005 > > To: petsc-users at mcs.anl.gov > > Date: Wednesday, August 13, 2008, 12:50 PM > > Hi, > > > > How can you change the Linker to VC 2008 's link.exe? > > I configured the > > script successfuly. But my linker option (PETSc generated) > > is /usr/bin/ar. Below is part of the scratch configure > > results. > > > > Compilers: > > C Compiler: > > /home/pangshengyong/soft/petsc-2.3.3-p13/bin/win32fe/win32 > > fe cl --nodetect -wd4996 -MT > > C++ Compiler: > > /home/pangshengyong/soft/petsc-2.3.3-p13/bin/win32fe/win32 > > fe cl --nodetect -GR -GX -EHsc -MT -O2 -Zm200 -TP > > Fortran Compiler: > > /home/pangshengyong/soft/petsc-2.3.3-p13/bin/win32fe/win32 > > fe ifort --nodetect -MT -O3 -QxW -fpp > > *Linkers: > > Static linker: /usr/bin/ar cr* > > PETSc: > > PETSC_ARCH: windows_VC > > PETSC_DIR: /home/pangshengyong/soft/petsc-2.3.3-p13 > > ** > > ** Now build and test the libraries with "make all > > test" > > ** > > Clanguage: C > > Scalar type:real > > PETSc shared libraries: disabled > > PETSc dynamic libraries: disabled > > BLAS/LAPACK: > > -L/home/pangshengyong/soft/petsc-2.3.3-p13/externalpackages/fblasla > > pack/windows_VC > > -L/home/pangshengyong/soft/petsc-2.3.3-p13/externalpackages/fbla > > slapack/windows_VC -lflapack > > -L/home/pangshengyong/soft/petsc-2.3.3-p13/external > > packages/fblaslapack/windows_VC > > -L/home/pangshengyong/soft/petsc-2.3.3-p13/exter > > nalpackages/fblaslapack/windows_VC -lfblas > > > > > > Regards, > > > > > > On Wed, Aug 13, 2008 at 4:06 PM, Farshid Mossaiby > > wrote: > > > > > Hi, > > > > > > They are .lib files. > > > > > > Best regards, > > > Farshid Mossaiby > > > > > > > > > --- On Wed, 8/13/08, berry > > wrote: > > > > > > > From: berry > > > > Subject: Re: problem in compiling petsc for > > visual c++ 2005 > > > > To: petsc-users at mcs.anl.gov > > > > Date: Wednesday, August 13, 2008, 11:39 AM > > > > Hi, Farshid > > > > > > > > Thanks for you suggestions. Can you tell me the > > surfix of > > > > your compiled > > > > library is .a or .lib? 
> > > > > > > > > > > > On Wed, Aug 13, 2008 at 2:44 PM, Farshid Mossaiby > > > > wrote: > > > > > > > > > Hi, > > > > > > > > > > I have built Petsc a lot of times with VC++ > > 2008 > > > > Express edition and > > > > > Cygwin. I used to run > > C:\Cygwin\Cygwin.bat, > > > > and had no problem. > > > > > > > > > > I just looked into the batch file, and it > > contains: > > > > > > > > > > ---------------------------- > > > > > @echo off > > > > > > > > > > C: > > > > > chdir C:\Cygwin\bin > > > > > > > > > > bash --login -i > > > > > ---------------------------- > > > > > > > > > > I am not sure what '-i' in bash > > parameters > > > > mean, but it may help you. > > > > > Anyway, I still recommend you running the > > batch file > > > > itself. > > > > > > > > > > Best regards, > > > > > Farshid Mossaiby > > > > > > > > > > --- On Tue, 8/12/08, berry > > > > > > wrote: > > > > > > > > > > > From: berry > > > > > > Subject: Re: problem in compiling petsc > > for > > > > visual c++ 2005 > > > > > > To: petsc-users at mcs.anl.gov > > > > > > Date: Tuesday, August 12, 2008, 9:49 PM > > > > > > Hi, Satish > > > > > > > > > > > > Sorry for my vague reply. > > > > > > > > > > > > I just use 'bash' not 'bash > > > > --login'. With > > > > > > 'bash' only, the configure > > script > > > > > > can not find 'mkdir' and > > 'sed' > > > > command, as > > > > > > you kindly points me out. > > > > > > > > > > > > Then I try this utilities: > > 'usr/bin/run.exe > > > > -ls' > > > > > > to open a xterm window. I > > > > > > Successed. In this window, I can find > > > > 'mkdir' and > > > > > > 'sed' plus visual c++'s > > > > > > 'cl.exe' and intel fortran > > 's > > > > > > 'ifort.exe' now. > > > > > > > > > > > > After that, I do configure petsc again > > with > > > > > > './config/configure.py '( I can > > > > > > not remember other paramters for mpich > > , lapack, > > > > blas, > > > > > > winfe cl , winfe > > > > > > ifort and hypre now, but I follow the > > exact > > > > guidence from > > > > > > petsc official > > > > > > website). > > > > > > > > > > > > However, after several minutes's > > > > configuration, petsc > > > > > > configuration script > > > > > > still CRASHED with no error prompt on > > screen. It > > > > just tell > > > > > > me to send the > > > > > > config.log file to developers. Right > > now I am not > > > > in my > > > > > > office, tomorrow I > > > > > > will send the config.log file to > > corresponding > > > > email > > > > > > address. > > > > > > > > > > > > Hope this time you can understand what > > I say. > > > > > > > > > > > > Thanks for you help! > > > > > > > > > > > > On Tue, Aug 12, 2008 at 10:43 PM, > > Satish Balay > > > > > > wrote: > > > > > > > > > > > > > > > > > > > > On Tue, 12 Aug 2008, berry wrote: > > > > > > > > > > > > > > > Thanks, I have already > > logged into the > > > > cygwin > > > > > > from VC command prompt. > > > > > > > > > > > > > > What does this mean? do you now > > have mkdir > > > > and sed in > > > > > > your path? > > > > > > > > > > > > > > If not - redo VC cmd prompt - and > > 'bash > > > > > > --login' - and let us know if: > > > > > > > > > > > > > > - mkdir & sed are in your PATH > > > > > > > which sed > > > > > > > which mkdir > > > > > > > > > > > > > > - now configure is successful or > > not.. 
>>>>>>>
>>>>>>> Satish

>>>>>> --
>>>>>> Pang Shengyong
>>>>>> Solidification Simulation Lab,
>>>>>> State Key Lab of Mould & Die Technology,
>>>>>> Huazhong Univ. of Sci. & Tech. China

--
Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China

From Hung.V.Nguyen at usace.army.mil  Thu Aug 14 09:37:38 2008
From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS)
Date: Thu, 14 Aug 2008 09:37:38 -0500
Subject: Petsc questions
Message-ID:

Hello All,

I am new to PETSc. I have the following questions about PETSc:

1. Are there PETSc functions that help to estimate/calculate a matrix
condition number? If yes, can I get info on how to do it?

2. A question about renumbering nodes: our CFD code uses ParMetis to
compute the original partitioning of the mesh. The global nodes are
renumbered consecutively within each ParMetis partition into npetsc, which
is a mapping vector from the original global node numbering to the new
numbering; see the test code below. My question is whether a PETSc
function helps to renumber from the ParMetis partition to the PETSc
partition or not?

Thanks for your help.

Regards,

-Hung

-- code:

! Read the data.

      fname = 'petsc.dat'
      call parnam (fname)
      open (2, file = fname, status = 'old')

! No. global nodes, local nodes, owned nodes,
! compressed columns, PEs.

      read (2, '(5i10)') ng, nloc, nown, ncol, npes

      if (noproc .ne. npes) then
         if (myid .eq. 0) then
            print*, 'Number of PEs from the data file does not match',
     &              ' the number from the run command.'
         end if
         call PetscFinalize (ierr)
         stop
      end if

! Local node array containing global node numbers.

      allocate (nglobal(nloc))

      read (2, '(8i10)') nglobal

! Find petsc numbering scheme.

      allocate (nown_all(noproc))
      allocate (idisp(noproc))
      allocate (npetsc(nloc))
      allocate (nodes1(ng))
      allocate (nodes2(ng))

      call MPI_ALLGATHER (nown, 1, MPI_INTEGER, nown_all, 1,
     &     MPI_INTEGER, PETSC_COMM_WORLD, ierr)

      idisp(1) = 0
      do i = 2, noproc
         idisp(i) = idisp(i - 1) + nown_all(i - 1)
      end do

      call MPI_ALLGATHERV (nglobal, nown, MPI_INTEGER, nodes1,
     &     nown_all, idisp, MPI_INTEGER, PETSC_COMM_WORLD, ierr)

      do i = 1, ng
         ii = nodes1(i)
         nodes2(ii) = i
      end do

! Process the local nodes for their petsc numbers.

      do i = 1, nloc
         ii = nglobal(i)
         npetsc(i) = nodes2(ii)
      end do

From ctibirna at giref.ulaval.ca  Thu Aug 14 09:56:19 2008
From: ctibirna at giref.ulaval.ca (Cristian Tibirna)
Date: Thu, 14 Aug 2008 10:56:19 -0400
Subject: negative indices
In-Reply-To:
References: <200808121208.15753.ctibirna@giref.ulaval.ca>
Message-ID: <200808141056.21354.ctibirna@giref.ulaval.ca>

On Tuesday, 12 August 2008, Matthew Knepley wrote:
> If you ignore entries for the rows and columns associated with BCs, but
> do not eliminate them from the ordering, do you remember to put something
> on the diagonal of the Jacobian?

Thank you for the heads up.
Indeed, my initial intention was to eliminate the lines completely from
the matrix structure (i.e. I preallocate zero non-zero entries for these
lines), but after checking my code more attentively, I found that I was
failing to do it correctly because of a subtle error.

Unfortunately, even after fixing this error and thus eliminating the lines
completely, I can't manage to make this Dirichlet elimination work. This
time, the LU preconditioner complains that the matrix has an empty row
(which I thought should be permitted).

I examined the aij.c code a bit and could tell (to my humble
understanding) that using matrices with completely empty rows should work
for iterative solvers -- at least the MatMult method does seem to ignore
empty lines completely -- but I haven't done actual tests yet.

Shouldn't it also be possible to eliminate lines completely even when
using direct solving?

--
Cristian Tibirna  (1-418-) 656-2131 / 4340
Laval University - Quebec, CAN ... http://www.giref.ulaval.ca/~ctibirna
Research professional at GIREF ... ctibirna at giref.ulaval.ca

From knepley at gmail.com  Thu Aug 14 10:58:45 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 14 Aug 2008 10:58:45 -0500
Subject: negative indices
In-Reply-To: <200808141056.21354.ctibirna@giref.ulaval.ca>
References: <200808121208.15753.ctibirna@giref.ulaval.ca>
	<200808141056.21354.ctibirna@giref.ulaval.ca>
Message-ID:

On Thu, Aug 14, 2008 at 9:56 AM, Cristian Tibirna wrote:
> On Tuesday, 12 August 2008, Matthew Knepley wrote:
>> If you ignore entries for the rows and columns associated with BCs, but
>> do not eliminate them from the ordering, do you remember to put
>> something on the diagonal of the Jacobian?
>
> Thank you for the heads up. Indeed, my initial intention was to
> eliminate the lines completely from the matrix structure (i.e. I
> preallocate zero non-zero entries for these lines), but after checking
> my code more attentively, I found that I was failing to do it correctly
> because of a subtle error.
>
> Unfortunately, even after fixing this error and thus eliminating the
> lines completely, I can't manage to make this Dirichlet elimination
> work. This time, the LU preconditioner complains that the matrix has an
> empty row (which I thought should be permitted).
>
> I examined the aij.c code a bit and could tell (to my humble
> understanding) that using matrices with completely empty rows should
> work for iterative solvers -- at least the MatMult method does seem to
> ignore empty lines completely -- but I haven't done actual tests yet.
>
> Shouldn't it also be possible to eliminate lines completely even when
> using direct solving?

I am not sure I understand. You would like to have a row in the matrix
which has only zeros (items not filled in are implicitly zero)? This
would mean a singular matrix, and thus LU fails. When I do this, I
eliminate these rows from the matrix and rhs.

   Matt

> --
> Cristian Tibirna  (1-418-) 656-2131 / 4340
> Laval University - Quebec, CAN ... http://www.giref.ulaval.ca/~ctibirna
> Research professional at GIREF ... ctibirna at giref.ulaval.ca

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

From bsmith at mcs.anl.gov  Thu Aug 14 11:08:18 2008
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 14 Aug 2008 11:08:18 -0500
Subject: Petsc questions
In-Reply-To:
References:
Message-ID: <66D788D6-1839-40C4-A90C-4BD17F75B0C2@mcs.anl.gov>

   Check the AO functions. They renumber between two different orderings.

   But, if your code already puts everything in the new numbering, you can
just use PETSc with that new numbering and leave your ordering code as it
is.

   Barry

On Aug 14, 2008, at 9:37 AM, Nguyen, Hung V ERDC-ITL-MS wrote:

> Hello All,
>
> I am new to PETSc. I have the following questions about PETSc:
>
> 1. Are there PETSc functions that help to estimate/calculate a matrix
> condition number? If yes, can I get info on how to do it?
>
> 2. A question about renumbering nodes: our CFD code uses ParMetis to
> compute the original partitioning of the mesh. The global nodes are
> renumbered consecutively within each ParMetis partition into npetsc,
> which is a mapping vector from the original global node numbering to
> the new numbering; see the test code below. My question is whether a
> PETSc function helps to renumber from the ParMetis partition to the
> PETSc partition or not?
>
> Thanks for your help.
>
> Regards,
>
> -Hung
> -- code:
> [...]
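A minimal Fortran sketch of the AO approach Barry describes (illustrative
only, not from the original message; it reuses the array names from Hung's
code, assumes PetscInitialize has already been called together with the
usual PETSc Fortran include boilerplate, and napp/indices are hypothetical
work arrays):

! Build an AO from the application (original global) numbering to the
! PETSc numbering, instead of assembling npetsc by hand with
! MPI_ALLGATHERV.
      AO             ao
      PetscErrorCode ierr

! PETSc indices are 0-based, so shift the 1-based global numbers
      do i = 1, nown
         napp(i) = nglobal(i) - 1
      end do

! Each process contributes its owned nodes; PETSC_NULL_INTEGER for the
! PETSc side means the PETSc numbering is taken as 0,1,2,... in process
! order, matching PETSc's contiguous row partitioning
      call AOCreateBasic (PETSC_COMM_WORLD, nown, napp,
     &     PETSC_NULL_INTEGER, ao, ierr)

! Translate any list of application indices in place
      call AOApplicationToPetsc (ao, nloc, indices, ierr)

      call AODestroy (ao, ierr)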
From bsmith at mcs.anl.gov  Thu Aug 14 11:20:24 2008
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 14 Aug 2008 11:20:24 -0500
Subject: negative indices
In-Reply-To: <200808141056.21354.ctibirna@giref.ulaval.ca>
References: <200808121208.15753.ctibirna@giref.ulaval.ca>
	<200808141056.21354.ctibirna@giref.ulaval.ca>
Message-ID:

   The idea with using the negative indices in Vec/MatSetValues is that
those grid nodes (vertices) are NOT assigned a global number at all.

   For example, say you have an element whose vertices have the global
numbers 2, 5, 22, and say the first of these (the one labeled global
number 2) is a Dirichlet boundary vertex. Then what you do is RENUMBER the
vertices, removing the Dirichlet boundary vertices from the numbering. In
this case the vertices of this element would then be labeled -1, 4, 21
(the other vertices are decreased by 1, since a vertex with global number
2, which is less than 5 and 22, was removed). Since you do this before
creating the PETSc objects, the PETSc solvers (Vec, Mat, KSP, etc.) never
see (or even know about) these "extra" locations.

   Barry

   Note: the renumbering simply compresses out the numbers of the
Dirichlet boundary vertices.

On Aug 14, 2008, at 9:56 AM, Cristian Tibirna wrote:

> On Tuesday, 12 August 2008, Matthew Knepley wrote:
>> If you ignore entries for the rows and columns associated with BCs, but
>> do not eliminate them from the ordering, do you remember to put
>> something on the diagonal of the Jacobian?
>
> Thank you for the heads up. Indeed, my initial intention was to
> eliminate the lines completely from the matrix structure (i.e. I
> preallocate zero non-zero entries for these lines), but after checking
> my code more attentively, I found that I was failing to do it correctly
> because of a subtle error.
> [...]
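To make the negative-index convention concrete, a small sketch (not from
the thread; the element indices and the array names idx and ke are
invented for illustration). MatSetValues simply ignores any row or column
index that is negative:

      PetscInt       idx(3)
      PetscScalar    ke(3,3)
      PetscErrorCode ierr

! Vertices of the element after the renumbering described above; the
! Dirichlet vertex carries -1, so its row and column of the element
! stiffness matrix ke are silently dropped by MatSetValues
      idx(1) = -1
      idx(2) = 4
      idx(3) = 21

      call MatSetValues (A, 3, idx, 3, idx, ke, ADD_VALUES, ierr)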
From stephan.kramer at imperial.ac.uk  Thu Aug 14 12:17:54 2008
From: stephan.kramer at imperial.ac.uk (Stephan Kramer)
Date: Thu, 14 Aug 2008 18:17:54 +0100
Subject: running out mpi communicators with Prometheus
Message-ID: <48A468C2.8070201@imperial.ac.uk>

Hello

When running with Prometheus in parallel we get the following error after
about 1000 solves:

Fatal error in MPI_Comm_dup: Other MPI error, error stack:
MPI_Comm_dup(176)..: MPI_Comm_dup(comm=0x84000002, new_comm=0x36901e8c) failed
MPIR_Comm_copy(547): Too many communicators

So it seems like an MPI_Comm_dup is not matched by an MPI_Comm_free. Our
model does a number of solves per time step, and for each solve all PETSc
objects (ksp, mat, pc and vec) are created and destroyed again. It's only
when we switch to Prometheus, and only in parallel (no problems with
Prometheus in serial), that the error occurs. Does anyone have suggestions
on how to track down the problem, or has anyone seen a similar problem?
I'm fairly sure we're not missing any KSP/PC/Vec or MatDestroys.

Cheers
Stephan Kramer
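The MPICH failure Stephan reports is characteristic of communicators being
duplicated without being freed. A tiny self-contained sketch (not from the
thread) that reproduces the error pattern:

      program commleak
      implicit none
      include 'mpif.h'
      integer newcomm, ierr, i

      call MPI_Init (ierr)
      do i = 1, 100000
! each MPI_Comm_dup consumes a context id; without a matching
! call MPI_Comm_free (newcomm, ierr) the implementation aborts after
! a few thousand duplications with "Too many communicators",
! much like the trace above
         call MPI_Comm_dup (MPI_COMM_WORLD, newcomm, ierr)
      end do
      call MPI_Finalize (ierr)
      end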
From Hung.V.Nguyen at usace.army.mil  Thu Aug 14 13:22:52 2008
From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS)
Date: Thu, 14 Aug 2008 13:22:52 -0500
Subject: Petsc questions
In-Reply-To: <66D788D6-1839-40C4-A90C-4BD17F75B0C2@mcs.anl.gov>
References: <66D788D6-1839-40C4-A90C-4BD17F75B0C2@mcs.anl.gov>
Message-ID:

Barry,

Thanks for the info. I will use what we already have. How about estimating
the matrix condition number via PETSc?

-Hung

-----Original Message-----
From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov]
On Behalf Of Barry Smith
Sent: Thursday, August 14, 2008 11:08 AM
To: petsc-users at mcs.anl.gov
Subject: Re: Petsc questions

   Check the AO functions. They renumber between two different orderings.

   But, if your code already puts everything in the new numbering, you can
just use PETSc with that new numbering and leave your ordering code as it
is.

   Barry

On Aug 14, 2008, at 9:37 AM, Nguyen, Hung V ERDC-ITL-MS wrote:

> Hello All,
>
> I am new to PETSc. I have the following questions about PETSc:
> [...]

From knepley at gmail.com  Thu Aug 14 13:33:35 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 14 Aug 2008 13:33:35 -0500
Subject: Petsc questions
In-Reply-To:
References: <66D788D6-1839-40C4-A90C-4BD17F75B0C2@mcs.anl.gov>
Message-ID:

On Thu, Aug 14, 2008 at 1:22 PM, Nguyen, Hung V ERDC-ITL-MS wrote:
>
> Barry,
>
> Thanks for the info. I will use what we already have. How about
> estimating the matrix condition number via PETSc?

You can use -ksp_monitor_singular_value to print out the exterior
(extreme) singular values for the Hessenberg matrix generated by the
Krylov method. I don't think we have anything else. Condition number
estimation is a tough problem.

   Matt

> -Hung
>
> -----Original Message-----
> From: owner-petsc-users at mcs.anl.gov On Behalf Of Barry Smith
> Subject: Re: Petsc questions
>
> Check the AO functions. They renumber between two different orderings.
>
> But, if your code already puts everything in the new numbering, you can
> just use PETSc with that new numbering and leave your ordering code as
> it is.
>
> Barry
>
> On Aug 14, 2008, at 9:37 AM, Nguyen, Hung V ERDC-ITL-MS wrote:
>> Hello All,
>>
>> I am new to PETSc. I have the following questions about PETSc:
>>
>> 1. Are there PETSc functions that help to estimate/calculate a matrix
>> condition number? If yes, can I get info on how to do it?
>>
>> 2. A question about renumbering nodes: our CFD code uses ParMetis to
>> compute the original partitioning of the mesh.
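A sketch of getting the same estimates from code rather than the command
line (illustrative, not from the thread; ksp, b and x are assumed to
exist, the standard PETSc Fortran includes are assumed, and the exact
routine names should be checked against your PETSc version):

      PetscReal      emax, emin
      PetscErrorCode ierr

! Must be requested before KSPSolve so the Krylov method retains the
! information needed for the estimates
      call KSPSetComputeSingularValues (ksp, PETSC_TRUE, ierr)

      call KSPSolve (ksp, b, x, ierr)

! Largest and smallest singular values of the preconditioned operator;
! the ratio emax/emin is a (rough) condition number estimate
      call KSPComputeExtremeSingularValues (ksp, emax, emin, ierr)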
From lukasz at civil.gla.ac.uk  Thu Aug 14 17:09:32 2008
From: lukasz at civil.gla.ac.uk (Lukasz Kaczmarczyk)
Date: Thu, 14 Aug 2008 23:09:32 +0100
Subject: gmres - restart and Gauss-Seidel
Message-ID: <48A4AD1C.8080906@civil.gla.ac.uk>

Hello,
I have an implementation of geometric multi-grid for heterogeneous
quasi-brittle materials with hybrid-Trefftz finite elements (degrees of
freedom are on faces -> small number of neighbours). The multi-grid
algorithm needs smoothing; for that I use Gauss-Seidel. However, the SOR
implemented in PETSc is not parallel. That is why I implemented my own
parallel Gauss-Seidel with colouring of faces in order to reduce
communication. Everything seems to work perfectly, except that for GMRES
the algorithm diverges after a restart. With the SOR implemented in PETSc
the algorithm does not have this problem. Can anyone give me a tip on how
to cure my problem?

Regards,
Lukasz

From knepley at gmail.com  Thu Aug 14 19:21:25 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 14 Aug 2008 19:21:25 -0500
Subject: gmres - restart and Gauss-Seidel
In-Reply-To: <48A4AD1C.8080906@civil.gla.ac.uk>
References: <48A4AD1C.8080906@civil.gla.ac.uk>
Message-ID:

That is strange. Your parallel SOR should be a stronger PC than the
Block-Jacobi SOR that PETSc is using. However, it is hard to prove that
restarted GMRES will converge, and thus this behavior is not impossible.

However, if you are using MG and the smoother is doing its job, you should
not take more than a few iterates. You should never get to 30 outer GMRES
iterates.
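Custom smoothers like Lukasz's are usually hooked in as shell
preconditioners; a rough sketch follows (not from the thread; the callback
argument convention for PCShellSetApply has changed between PETSc
versions, so treat the signature and the ctx handle as assumptions and
check the documentation for your release):

      PC               shell
      PetscFortranAddr ctx
      PetscErrorCode   ierr
      external         MyGaussSeidelApply

      call PCSetType (shell, PCSHELL, ierr)
! MyGaussSeidelApply(ctx, x, y, ierr) is a hypothetical user routine
! that applies one coloured Gauss-Seidel sweep to x, returning y
      call PCShellSetApply (shell, MyGaussSeidelApply, ctx, ierr)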
Matt On Thu, Aug 14, 2008 at 5:09 PM, Lukasz Kaczmarczyk wrote: > Hello, > I have implementation of geometric multi-grid for heterogeneous > quasi-brittle materials for hybrid-trefftz finite elements (degrees of > freedom are on faces -> small number of neighbours). Multi-grid > algorithm need smoothing, for that I use Gauss-Seidel, however SOR > implemented in PETSc is not parallel. That is way, I implemented my own > parallel Gauss-Seidel with colouring of faces in order to reduce > communication. Everything seems to work prefect, except that that for > GMRES after restart algorithm is divergent. With SOR implemented in > PETSc algorithm does not have this problem. Can any give my tip how to > cure may problem. > > Regards, > Lukasz > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From bsmith at mcs.anl.gov Thu Aug 14 20:49:30 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 14 Aug 2008 20:49:30 -0500 Subject: Petsc questions In-Reply-To: References: <66D788D6-1839-40C4-A90C-4BD17F75B0C2@mcs.anl.gov> Message-ID: <9D5069E5-DB45-48E5-9485-28F71A47E0EE@mcs.anl.gov> On Aug 14, 2008, at 1:33 PM, Matthew Knepley wrote: > On Thu, Aug 14, 2008 at 1:22 PM, Nguyen, Hung V ERDC-ITL-MS > wrote: >> >> Barry, >> >> Thanks for the info. I will use what we already have. How's about >> estimation >> of matrix condition number via PETSC? > > You can use -ksp_monitor_singular_value to print out the exterior > singular values > for the Hessenberg matrix generated by the Krylov method. I don't > think we have > anything else. Condition number estimation is a tough problem. > See also KSPSetComputeSingularValues(), KSPSetComputeEigenvalues(), KSPComputeSingularValues(), KSPComputeEigenvalues() for use in your code. I've found if the operator is not terribly ill-conditioned these can work fairly well. If using GMRES for a non-symmetric matrix make sure you use a large enough restart to get accurate values. Barry > Matt > >> -Hung >> >> -----Original Message----- >> From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov >> ] On >> Behalf Of Barry Smith >> Sent: Thursday, August 14, 2008 11:08 AM >> To: petsc-users at mcs.anl.gov >> Subject: Re: Petsc questions >> >> >> Check AO functions. They renumber between two different orderings. >> >> But, if your code already puts everything in the new numbering, you >> can just >> use PETSc with that new numbering and leave your ordering code as >> it is. >> >> Barry >> >> On Aug 14, 2008, at 9:37 AM, Nguyen, Hung V ERDC-ITL-MS wrote: >> >>> >>> Hello All, >>> >>> I am new to PETSC. I have following questions about PETSC: >>> >>> 1. Do PETSC functions help to estimate/calculate a matrix condition >>> number? >>> If yes, can I get the info how to do it? >>> >>> 2. A question about renumbering nodes: our CFD code uses ParMetis to >>> compute the original partitioning of the mesh. The global nodes are >>> renumbered consecutively within each Parmetis partition as npetsc >>> which is a mapping vector from the original global node numbering to >>> the new numbering, see below as a test code. My question is whether >>> PETSC function helps to renumber from ParMetis partition to PETSC >>> partition or not? >>> >>> Thank for your help. >>> >>> Regards, >>> >>> -Hung >>> -- code: >>> ! Read the data. >>> >>> fname = 'petsc.dat' >>> call parnam (fname) >>> open (2, file = fname, status = 'old') >>> >>> ! No. 
global nodes, local nodes, owned nodes,
>>> ! compressed columns, PEs.
>>>
>>> read (2, '(5i10)') ng, nloc, nown, ncol, npes
>>>
>>> if (noproc .ne. npes) then
>>> if (myid .eq. 0) then
>>> print*, 'Number of PEs from the data file does not match',
>>> & ' the number from the run command.'
>>> end if
>>> call PetscFinalize (ierr)
>>> stop
>>> end if
>>>
>>> ! Local node array containing global node numbers.
>>>
>>> allocate (nglobal(nloc))
>>>
>>> read (2, '(8i10)') nglobal
>>>
>>> ! Find petsc numbering scheme.
>>>
>>> allocate (nown_all(noproc))
>>> allocate (idisp(noproc))
>>> allocate (npetsc(nloc))
>>> allocate (nodes1(ng))
>>> allocate (nodes2(ng))
>>>
>>> call MPI_ALLGATHER (nown, 1, MPI_INTEGER, nown_all, 1,
>>> & MPI_INTEGER, PETSC_COMM_WORLD, ierr)
>>>
>>> idisp(1) = 0
>>> do i = 2, noproc
>>> idisp(i) = idisp(i - 1) + nown_all(i - 1)
>>> end do
>>>
>>> call MPI_ALLGATHERV (nglobal, nown, MPI_INTEGER, nodes1,
>>> & nown_all, idisp, MPI_INTEGER, PETSC_COMM_WORLD, ierr)
>>>
>>> do i = 1, ng
>>> ii = nodes1(i)
>>> nodes2(ii) = i
>>> end do
>>>
>>> ! Process the local nodes for their petsc numbers.
>>>
>>> do i = 1, nloc
>>> ii = nglobal(i)
>>> npetsc(i) = nodes2(ii)
>>> end do
>>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener

From bsmith at mcs.anl.gov  Thu Aug 14 21:03:37 2008
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 14 Aug 2008 21:03:37 -0500
Subject: gmres - restart and Gauss-Seidel
In-Reply-To: <48A4AD1C.8080906@civil.gla.ac.uk>
References: <48A4AD1C.8080906@civil.gla.ac.uk>
Message-ID: <43186D3B-4335-49C9-A0B0-BB7719AA2015@mcs.anl.gov>

On Aug 14, 2008, at 5:09 PM, Lukasz Kaczmarczyk wrote:
> Hello,
> I have an implementation of geometric multi-grid for heterogeneous
> quasi-brittle materials with hybrid-Trefftz finite elements (degrees of
> freedom are on faces -> small number of neighbours). The multi-grid
> algorithm needs smoothing; for that I use Gauss-Seidel. However, the SOR
> implemented in PETSc is not parallel. That is why I implemented my own
> parallel Gauss-Seidel with colouring of faces in order to reduce
> communication. Everything seems to work perfectly, except that for GMRES
> the algorithm diverges after a restart.
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   What do you mean? It is converging fine until you hit the restart
iteration and then get a totally different residual norm? And then each
iteration gives worse residuals?

   Please run with -ksp_monitor_true_residual and send the output.
If the preconditioner is not actually a linear operator (i.e. it has a
bug in the smoother) then the residual norm computed in GMRES may be
wrong, and so GMRES may look like it is working but is actually chugging
on garbage. Also run with -ksp_type fgmres -ksp_monitor_true_residual and
send the output.

   Barry

> With the SOR implemented in PETSc the algorithm does not have this
> problem. Can anyone give me a tip on how to cure my problem?
> Regards,
> Lukasz

From lukasz at civil.gla.ac.uk  Fri Aug 15 02:36:47 2008
From: lukasz at civil.gla.ac.uk (Lukasz Kaczmarczyk)
Date: Fri, 15 Aug 2008 08:36:47 +0100
Subject: gmres - restart and Gauss-Seidel
In-Reply-To: <43186D3B-4335-49C9-A0B0-BB7719AA2015@mcs.anl.gov>
References: <48A4AD1C.8080906@civil.gla.ac.uk>
	<43186D3B-4335-49C9-A0B0-BB7719AA2015@mcs.anl.gov>
Message-ID: <48A5320F.7050002@civil.gla.ac.uk>

Barry Smith wrote:
>
> On Aug 14, 2008, at 5:09 PM, Lukasz Kaczmarczyk wrote:
>
>> Hello,
>> I have an implementation of geometric multi-grid for heterogeneous
>> quasi-brittle materials with hybrid-Trefftz finite elements (degrees
>> of freedom are on faces -> small number of neighbours). The multi-grid
>> algorithm needs smoothing; for that I use Gauss-Seidel. However, the
>> SOR implemented in PETSc is not parallel. That is why I implemented my
>> own parallel Gauss-Seidel with colouring of faces in order to reduce
>> communication. Everything seems to work perfectly, except that for
>> GMRES the algorithm diverges after a restart.
>
> What do you mean? It is converging fine until you hit the restart
> iteration and then get a totally different residual norm? And then each
> iteration gives worse residuals?

Thank you all for the responses. No, the algorithm is stable until the
next restart.

> Please run with -ksp_monitor_true_residual and send the output.
> If the preconditioner is not actually a linear operator (i.e. it has a
> bug in the smoother) then the residual norm computed in GMRES may be
> wrong, and so GMRES may look like it is working but is actually chugging
> on garbage. Also run with -ksp_type fgmres -ksp_monitor_true_residual
> and send the output.

I hope this helps; I am sending three outputs:

1) -ksp_type gmres  -ksp_gmres_restart 10
2) -ksp_type fgmres -ksp_gmres_restart 100 -ksp_monitor_true_residual
3) -ksp_type fgmres -ksp_gmres_restart 10  -ksp_monitor_true_residual

1) -ksp_type gmres -ksp_gmres_restart 10

  0 KSP Residual norm 2.604671539574e+02
  1 KSP Residual norm 8.673339524769e+00
  2 KSP Residual norm 1.854343060681e+00
  3 KSP Residual norm 4.635172307027e-01
  4 KSP Residual norm 1.824358407207e-01
  5 KSP Residual norm 9.823366032782e-02
  6 KSP Residual norm 5.859833143089e-02
  7 KSP Residual norm 2.929617664041e-02
  8 KSP Residual norm 1.184403532587e-02
  9 KSP Residual norm 3.942287795560e-03
 10 KSP Residual norm 4.611215310185e+00
 11 KSP Residual norm 3.557713305907e-01
 12 KSP Residual norm 1.911999331832e-01
 13 KSP Residual norm 1.048287519555e-01
 14 KSP Residual norm 5.745962963315e-02
 15 KSP Residual norm 5.238562834476e-02
 16 KSP Residual norm 5.046872351948e-02
 17 KSP Residual norm 5.041805668527e-02
 18 KSP Residual norm 4.952005417494e-02
 19 KSP Residual norm 4.950805199415e-02
 20 KSP Residual norm 8.581369434072e+00
 21 KSP Residual norm 8.168490133118e-01
 22 KSP Residual norm 1.416002910406e-01
 23 KSP Residual norm 6.679961898612e-02
 24 KSP Residual norm 5.031183883430e-02
 25 KSP Residual norm 4.670843908572e-02
 26 KSP Residual norm 4.662064456590e-02
 27 KSP Residual norm 4.652217965668e-02
 28 KSP Residual norm 4.611529866498e-02
 29 KSP Residual norm 4.610330586416e-02
 30 KSP Residual norm 9.139117000729e+00
 31 KSP Residual norm 1.170939505327e+00
 32 KSP Residual norm 1.783857007759e-01
 33 KSP Residual norm 1.413658128682e-01
 34 KSP Residual norm 1.411820201886e-01
 35 KSP Residual norm 1.372634877051e-01

2) -ksp_type fgmres -ksp_gmres_restart 100 -ksp_monitor_true_residual

  0 KSP Residual norm 8.770931682456e+05
  0 KSP preconditioned
resid norm 8.770931682456e+05 true resid norm 8.770931682456e+05 ||Ae||/||Ax|| 2.025559907164e+06 1 KSP Residual norm 1.466659768959e+00 1 KSP preconditioned resid norm 1.466659768959e+00 true resid norm 1.466659768959e+00 ||Ae||/||Ax|| 3.387105649673e+00 2 KSP Residual norm 2.135783619677e-01 2 KSP preconditioned resid norm 2.135783619677e-01 true resid norm 2.135783619678e-01 ||Ae||/||Ax|| 4.932380991006e-01 3 KSP Residual norm 1.342354228152e-01 3 KSP preconditioned resid norm 1.342354228152e-01 true resid norm 1.342354228151e-01 ||Ae||/||Ax|| 3.100034299884e-01 4 KSP Residual norm 7.872906297489e-02 4 KSP preconditioned resid norm 7.872906297489e-02 true resid norm 7.872906297491e-02 ||Ae||/||Ax|| 1.818169828064e-01 5 KSP Residual norm 3.414496482864e-02 5 KSP preconditioned resid norm 3.414496482864e-02 true resid norm 3.414496482865e-02 ||Ae||/||Ax|| 7.885441854116e-02 6 KSP Residual norm 2.025183908090e-02 6 KSP preconditioned resid norm 2.025183908090e-02 true resid norm 2.025183908092e-02 ||Ae||/||Ax|| 4.676961897981e-02 7 KSP Residual norm 1.023707179274e-02 7 KSP preconditioned resid norm 1.023707179274e-02 true resid norm 1.023707179279e-02 ||Ae||/||Ax|| 2.364150462111e-02 8 KSP Residual norm 4.938281368004e-03 8 KSP preconditioned resid norm 4.938281368004e-03 true resid norm 4.938281368031e-03 ||Ae||/||Ax|| 1.140447230867e-02 9 KSP Residual norm 2.373276511281e-03 9 KSP preconditioned resid norm 2.373276511281e-03 true resid norm 2.373276511245e-03 ||Ae||/||Ax|| 5.480847330515e-03 10 KSP Residual norm 1.180493643594e-03 10 KSP preconditioned resid norm 1.180493643594e-03 true resid norm 1.180493643635e-03 ||Ae||/||Ax|| 2.726233291716e-03 11 KSP Residual norm 7.142592937639e-04 11 KSP preconditioned resid norm 7.142592937639e-04 true resid norm 7.142592936809e-04 ||Ae||/||Ax|| 1.649511181911e-03 12 KSP Residual norm 4.226036746778e-04 12 KSP preconditioned resid norm 4.226036746778e-04 true resid norm 4.226036746206e-04 ||Ae||/||Ax|| 9.759613812110e-04 13 KSP Residual norm 2.572539106719e-04 13 KSP preconditioned resid norm 2.572539106719e-04 true resid norm 2.572539106261e-04 ||Ae||/||Ax|| 5.941024582002e-04 14 KSP Residual norm 1.444791718863e-04 14 KSP preconditioned resid norm 1.444791718863e-04 true resid norm 1.444791718510e-04 ||Ae||/||Ax|| 3.336603550419e-04 15 KSP Residual norm 8.492790274639e-05 15 KSP preconditioned resid norm 8.492790274639e-05 true resid norm 8.492790267409e-05 ||Ae||/||Ax|| 1.961325898824e-04 16 KSP Residual norm 4.707269600494e-05 16 KSP preconditioned resid norm 4.707269600494e-05 true resid norm 4.707269601177e-05 ||Ae||/||Ax|| 1.087097348555e-04 17 KSP Residual norm 2.692621130650e-05 17 KSP preconditioned resid norm 2.692621130650e-05 true resid norm 2.692621130585e-05 ||Ae||/||Ax|| 6.218342138276e-05 18 KSP Residual norm 1.339283607815e-05 18 KSP preconditioned resid norm 1.339283607815e-05 true resid norm 1.339283617815e-05 ||Ae||/||Ax|| 3.092943029068e-05 19 KSP Residual norm 8.006569523728e-06 19 KSP preconditioned resid norm 8.006569523728e-06 true resid norm 8.006569546889e-06 ||Ae||/||Ax|| 1.849038033273e-05 20 KSP Residual norm 5.048620120372e-06 20 KSP preconditioned resid norm 5.048620120372e-06 true resid norm 5.048620165524e-06 ||Ae||/||Ax|| 1.165928884641e-05 21 KSP Residual norm 3.079047055409e-06 21 KSP preconditioned resid norm 3.079047055409e-06 true resid norm 3.079047092852e-06 ||Ae||/||Ax|| 7.110754671622e-06 22 KSP Residual norm 1.837917124370e-06 22 KSP preconditioned resid norm 1.837917124370e-06 true resid norm 
1.837917088425e-06 ||Ae||/||Ax|| 4.244487703000e-06 23 KSP Residual norm 8.755715968227e-07 23 KSP preconditioned resid norm 8.755715968227e-07 true resid norm 8.755715746951e-07 ||Ae||/||Ax|| 2.022045937380e-06 24 KSP Residual norm 4.460215115186e-07 24 KSP preconditioned resid norm 4.460215115186e-07 true resid norm 4.460215473811e-07 ||Ae||/||Ax|| 1.030042641779e-06 25 KSP Residual norm 2.074601204717e-07 25 KSP preconditioned resid norm 2.074601204717e-07 true resid norm 2.074601610498e-07 ||Ae||/||Ax|| 4.791087193129e-07 26 KSP Residual norm 1.078594582430e-07 26 KSP preconditioned resid norm 1.078594582430e-07 true resid norm 1.078594313079e-07 ||Ae||/||Ax|| 2.490906868011e-07 27 KSP Residual norm 5.595789534852e-08 27 KSP preconditioned resid norm 5.595789534852e-08 true resid norm 5.595808793116e-08 ||Ae||/||Ax|| 1.292296685216e-07 28 KSP Residual norm 2.866350154035e-08 28 KSP preconditioned resid norm 2.866350154035e-08 true resid norm 2.866379598785e-08 ||Ae||/||Ax|| 6.619620131833e-08 29 KSP Residual norm 1.602353949308e-08 29 KSP preconditioned resid norm 1.602353949308e-08 true resid norm 1.602386060096e-08 ||Ae||/||Ax|| 3.700552092568e-08 30 KSP Residual norm 8.795011075741e-09 30 KSP preconditioned resid norm 8.795011075741e-09 true resid norm 8.795292666765e-09 ||Ae||/||Ax|| 2.031185835503e-08 31 KSP Residual norm 4.077129947799e-09 31 KSP preconditioned resid norm 4.077129947799e-09 true resid norm 4.079852425518e-09 ||Ae||/||Ax|| 9.422015584506e-09 32 KSP Residual norm 1.739123114987e-09 32 KSP preconditioned resid norm 1.739123114987e-09 true resid norm 1.743717145343e-09 ||Ae||/||Ax|| 4.026942253017e-09 33 KSP Residual norm 8.452345380894e-10 33 KSP preconditioned resid norm 8.452345380894e-10 true resid norm 8.816532993133e-10 ||Ae||/||Ax|| 2.036091078762e-09 34 KSP Residual norm 3.861058363559e-10 34 KSP preconditioned resid norm 3.861058363559e-10 true resid norm 3.853896597046e-10 ||Ae||/||Ax|| 8.900192950935e-10 35 KSP Residual norm 1.986142320978e-10 3) -ksp_type fgmres -ksp_gmres_restart 10 -ksp_monitor_true_residual 0 KSP Residual norm 8.770931682456e+05 0 KSP preconditioned resid norm 8.770931682456e+05 true resid norm 8.770931682456e+05 ||Ae||/||Ax|| 2.025559907164e+06 1 KSP Residual norm 1.466659768959e+00 1 KSP preconditioned resid norm 1.466659768959e+00 true resid norm 1.466659768959e+00 ||Ae||/||Ax|| 3.387105649673e+00 2 KSP Residual norm 2.135783619677e-01 2 KSP preconditioned resid norm 2.135783619677e-01 true resid norm 2.135783619678e-01 ||Ae||/||Ax|| 4.932380991006e-01 3 KSP Residual norm 1.342354228152e-01 3 KSP preconditioned resid norm 1.342354228152e-01 true resid norm 1.342354228151e-01 ||Ae||/||Ax|| 3.100034299884e-01 4 KSP Residual norm 7.872906297489e-02 4 KSP preconditioned resid norm 7.872906297489e-02 true resid norm 7.872906297491e-02 ||Ae||/||Ax|| 1.818169828064e-01 5 KSP Residual norm 3.414496482864e-02 5 KSP preconditioned resid norm 3.414496482864e-02 true resid norm 3.414496482865e-02 ||Ae||/||Ax|| 7.885441854116e-02 6 KSP Residual norm 2.025183908090e-02 6 KSP preconditioned resid norm 2.025183908090e-02 true resid norm 2.025183908092e-02 ||Ae||/||Ax|| 4.676961897981e-02 7 KSP Residual norm 1.023707179274e-02 7 KSP preconditioned resid norm 1.023707179274e-02 true resid norm 1.023707179279e-02 ||Ae||/||Ax|| 2.364150462111e-02 8 KSP Residual norm 4.938281368004e-03 8 KSP preconditioned resid norm 4.938281368004e-03 true resid norm 4.938281368031e-03 ||Ae||/||Ax|| 1.140447230867e-02 9 KSP Residual norm 2.373276511281e-03 9 KSP 
preconditioned resid norm 2.373276511281e-03 true resid norm 2.373276511245e-03 ||Ae||/||Ax|| 5.480847330515e-03 10 KSP Residual norm 1.180493643635e-03 10 KSP preconditioned resid norm 1.180493643635e-03 true resid norm 1.180493643635e-03 ||Ae||/||Ax|| 2.726233291716e-03 11 KSP Residual norm 8.973593571160e-04 11 KSP preconditioned resid norm 8.973593571160e-04 true resid norm 8.973593570913e-04 ||Ae||/||Ax|| 2.072362665506e-03 12 KSP Residual norm 8.954054180580e-04 12 KSP preconditioned resid norm 8.954054180580e-04 true resid norm 8.954054180762e-04 ||Ae||/||Ax|| 2.067850236641e-03 13 KSP Residual norm 8.509609619524e-04 13 KSP preconditioned resid norm 8.509609619524e-04 true resid norm 8.509609619593e-04 ||Ae||/||Ax|| 1.965210161828e-03 14 KSP Residual norm 8.458566359370e-04 14 KSP preconditioned resid norm 8.458566359370e-04 true resid norm 8.458566359291e-04 ||Ae||/||Ax|| 1.953422225798e-03 15 KSP Residual norm 8.372984660266e-04 15 KSP preconditioned resid norm 8.372984660266e-04 true resid norm 8.372984660471e-04 ||Ae||/||Ax|| 1.933657979058e-03 16 KSP Residual norm 7.279198160941e-04 16 KSP preconditioned resid norm 7.279198160941e-04 true resid norm 7.279198160961e-04 ||Ae||/||Ax|| 1.681058807086e-03 17 KSP Residual norm 7.254072398644e-04 17 KSP preconditioned resid norm 7.254072398644e-04 true resid norm 7.254072398528e-04 ||Ae||/||Ax|| 1.675256260804e-03 18 KSP Residual norm 6.813582761007e-04 18 KSP preconditioned resid norm 6.813582761007e-04 true resid norm 6.813582760932e-04 ||Ae||/||Ax|| 1.573529536468e-03 19 KSP Residual norm 5.556242867367e-04 19 KSP preconditioned resid norm 5.556242867367e-04 true resid norm 5.556242867546e-04 ||Ae||/||Ax|| 1.283159326104e-03 20 KSP Residual norm 4.379413135337e-04 20 KSP preconditioned resid norm 4.379413135337e-04 true resid norm 4.379413135337e-04 ||Ae||/||Ax|| 1.011382141032e-03 21 KSP Residual norm 4.159789873039e-04 21 KSP preconditioned resid norm 4.159789873039e-04 true resid norm 4.159789873292e-04 ||Ae||/||Ax|| 9.606623212470e-04 22 KSP Residual norm 3.988218089618e-04 22 KSP preconditioned resid norm 3.988218089618e-04 true resid norm 3.988218089773e-04 ||Ae||/||Ax|| 9.210395150869e-04 23 KSP Residual norm 3.981102611370e-04 23 KSP preconditioned resid norm 3.981102611370e-04 true resid norm 3.981102611544e-04 ||Ae||/||Ax|| 9.193962657785e-04 24 KSP Residual norm 3.619785012910e-04 24 KSP preconditioned resid norm 3.619785012910e-04 true resid norm 3.619785013142e-04 ||Ae||/||Ax|| 8.359535406984e-04 25 KSP Residual norm 3.351791646576e-04 25 KSP preconditioned resid norm 3.351791646576e-04 true resid norm 3.351791646837e-04 ||Ae||/||Ax|| 7.740631238275e-04 26 KSP Residual norm 3.051027323014e-04 26 KSP preconditioned resid norm 3.051027323014e-04 true resid norm 3.051027323283e-04 ||Ae||/||Ax|| 7.046045785610e-04 27 KSP Residual norm 2.977852161346e-04 27 KSP preconditioned resid norm 2.977852161346e-04 true resid norm 2.977852161482e-04 ||Ae||/||Ax|| 6.877054988155e-04 28 KSP Residual norm 2.507200839706e-04 28 KSP preconditioned resid norm 2.507200839706e-04 true resid norm 2.507200839354e-04 ||Ae||/||Ax|| 5.790132318055e-04 29 KSP Residual norm 1.792952379074e-04 29 KSP preconditioned resid norm 1.792952379074e-04 true resid norm 1.792952379357e-04 ||Ae||/||Ax|| 4.140646155464e-04 30 KSP Residual norm 1.539920471163e-04 30 KSP preconditioned resid norm 1.539920471163e-04 true resid norm 1.539920471163e-04 ||Ae||/||Ax|| 3.556293994227e-04 31 KSP Residual norm 1.538529289011e-04 31 KSP preconditioned resid norm 
1.538529289011e-04 true resid norm 1.538529288671e-04 ||Ae||/||Ax|| 3.553081195881e-04
 32 KSP Residual norm 1.504592268331e-04
 32 KSP preconditioned resid norm 1.504592268331e-04 true resid norm 1.504592268204e-04 ||Ae||/||Ax|| 3.474707004273e-04
 33 KSP Residual norm 1.504404986449e-04
 33 KSP preconditioned resid norm 1.504404986449e-04 true resid norm 1.504404986576e-04 ||Ae||/||Ax|| 3.474274495880e-04
 34 KSP Residual norm 1.411298358801e-04
 34 KSP preconditioned resid norm 1.411298358801e-04 true resid norm 1.411298358680e-04 ||Ae||/||Ax|| 3.259253949163e-04
 35 KSP Residual norm 1.128668990420e-04

From lukasz at civil.gla.ac.uk  Fri Aug 15 04:56:18 2008
From: lukasz at civil.gla.ac.uk (Lukasz Kaczmarczyk)
Date: Fri, 15 Aug 2008 10:56:18 +0100
Subject: gmres - restart and Gauss-Seidel
In-Reply-To:
References: <48A4AD1C.8080906@civil.gla.ac.uk>
Message-ID: <48A552C2.4040602@civil.gla.ac.uk>

Matthew Knepley wrote:
> That is strange. Your parallel SOR should be a stronger PC than the
> Block-Jacobi SOR that PETSc is using.

That was my hope, and it was the motivation. I would like to have an
algorithm which is efficient on a large number of processors.

> However, it is hard to prove that restarted GMRES will converge, and
> thus this behavior is not impossible.

It is strange because with Block-Jacobi SOR the convergence is ok. Maybe
I should add some trick, e.g. add a shift to the diagonal.

> However, if you are using MG and the smoother is doing its job, you
> should not take more than a few iterates. You should never get to
> 30 outer GMRES iterates.

That is true; in practical computations I never have more than 30
iterations. However, for tests where I set the tolerance to a small
number (1e-12), the number of iterations is more than 30.

Regards,
Lukasz

From lukasz at civil.gla.ac.uk  Fri Aug 15 06:45:21 2008
From: lukasz at civil.gla.ac.uk (Lukasz Kaczmarczyk)
Date: Fri, 15 Aug 2008 12:45:21 +0100
Subject: gmres - restart and Gauss-Seidel
In-Reply-To: <48A4AD1C.8080906@civil.gla.ac.uk>
References: <48A4AD1C.8080906@civil.gla.ac.uk>
Message-ID: <48A56C51.1030602@civil.gla.ac.uk>

Thanks for all the help; the problem is solved. I added PCJACOBI:

PCSetType(Composite_prec,PCCOMPOSITE);
PCCompositeAddPC(Composite_prec,PCJACOBI);
PCCompositeAddPC(Composite_prec,PCSHELL); // Gauss-Seidel COLOR Forward
PCCompositeAddPC(Composite_prec,PCSHELL); // MultiGrid
PCCompositeAddPC(Composite_prec,PCSHELL); // Gauss-Seidel COLOR Forward
PCCompositeAddPC(Composite_prec,PCJACOBI);

Now it works :)

-ksp_type fgmres -ksp_gmres_restart 10 -ksp_monitor_true_residual

  0 KSP Residual norm 8.770931682456e+05
  0 KSP preconditioned resid norm 8.770931682456e+05 true resid norm 8.770931682456e+05 ||Ae||/||Ax|| 2.025559907164e+06
  1 KSP Residual norm 2.470886778565e-01
  1 KSP preconditioned resid norm 2.470886778565e-01 true resid norm 2.470886778563e-01 ||Ae||/||Ax|| 5.706268586963e-01
  2 KSP Residual norm 6.271486855102e-02
  2 KSP preconditioned resid norm 6.271486855102e-02 true resid norm 6.271486855100e-02 ||Ae||/||Ax|| 1.448337849605e-01
  3 KSP Residual norm 1.782615793912e-02
  3 KSP preconditioned resid norm 1.782615793912e-02 true resid norm 1.782615793917e-02 ||Ae||/||Ax|| 4.116774833918e-02
  4 KSP Residual norm 6.410478294417e-03
  4 KSP preconditioned resid norm 6.410478294417e-03 true resid norm 6.410478294455e-03 ||Ae||/||Ax|| 1.480436547575e-02
  5 KSP Residual norm 1.314224590977e-03
  5 KSP preconditioned resid norm 1.314224590977e-03 true resid norm 1.314224590937e-03 ||Ae||/||Ax|| 3.035071685413e-03
  6 KSP Residual norm 2.683633682600e-04
  6 KSP preconditioned resid
norm 2.683633682600e-04 true resid norm 2.683633681768e-04 ||Ae||/||Ax|| 6.197586514302e-04 7 KSP Residual norm 8.799918944041e-05 7 KSP preconditioned resid norm 8.799918944041e-05 true resid norm 8.799918940855e-05 ||Ae||/||Ax|| 2.032254227740e-04 8 KSP Residual norm 2.508995399544e-05 8 KSP preconditioned resid norm 2.508995399544e-05 true resid norm 2.508995405419e-05 ||Ae||/||Ax|| 5.794276690857e-05 9 KSP Residual norm 1.026378586306e-05 9 KSP preconditioned resid norm 1.026378586306e-05 true resid norm 1.026378592108e-05 ||Ae||/||Ax|| 2.370319825777e-05 10 KSP Residual norm 3.126671988424e-06 10 KSP preconditioned resid norm 3.126671988424e-06 true resid norm 3.126671988424e-06 ||Ae||/||Ax|| 7.220739656737e-06 11 KSP Residual norm 1.229059204228e-06 11 KSP preconditioned resid norm 1.229059204228e-06 true resid norm 1.229059216105e-06 ||Ae||/||Ax|| 2.838390677073e-06 12 KSP Residual norm 5.416643445915e-07 12 KSP preconditioned resid norm 5.416643445915e-07 true resid norm 5.416643499083e-07 ||Ae||/||Ax|| 1.250920232920e-06 13 KSP Residual norm 2.633538639496e-07 13 KSP preconditioned resid norm 2.633538639496e-07 true resid norm 2.633538571527e-07 ||Ae||/||Ax|| 6.081896812770e-07 14 KSP Residual norm 1.027652175716e-07 14 KSP preconditioned resid norm 1.027652175716e-07 true resid norm 1.027652231589e-07 ||Ae||/||Ax|| 2.373261170164e-07 15 KSP Residual norm 3.189221996600e-08 15 KSP preconditioned resid norm 3.189221996600e-08 true resid norm 3.189223368074e-08 ||Ae||/||Ax|| 7.365195880253e-08 16 KSP Residual norm 1.202828530862e-08 16 KSP preconditioned resid norm 1.202828530862e-08 true resid norm 1.202828930905e-08 ||Ae||/||Ax|| 2.777814428189e-08 17 KSP Residual norm 2.701944514090e-09 17 KSP preconditioned resid norm 2.701944514090e-09 true resid norm 2.701946176761e-09 ||Ae||/||Ax|| 6.239877409956e-09 18 KSP Residual norm 8.920433449729e-10 18 KSP preconditioned resid norm 8.920433449729e-10 true resid norm 8.920273046556e-10 ||Ae||/||Ax|| 2.060048817870e-09 19 KSP Residual norm 2.868707402558e-10 19 KSP preconditioned resid norm 2.868707402558e-10 true resid norm 2.868717856448e-10 ||Ae||/||Ax|| 6.625020106597e-10 20 KSP Residual norm 5.959506466900e-11 20 KSP preconditioned resid norm 5.959506466900e-11 true resid norm 5.959506466900e-11 ||Ae||/||Ax|| 1.376289065161e-10 21 KSP Residual norm 2.347311204935e-11 21 KSP preconditioned resid norm 2.347311204935e-11 true resid norm 2.346228685060e-11 ||Ae||/||Ax|| 5.418383051598e-11 22 KSP Residual norm 9.323415814482e-12 22 KSP preconditioned resid norm 9.323415814482e-12 true resid norm 9.323298967857e-12 ||Ae||/||Ax|| 2.153123667531e-11 23 KSP Residual norm 2.925536240498e-12 23 KSP preconditioned resid norm 2.925536240498e-12 true resid norm 2.974960765051e-12 ||Ae||/||Ax|| 6.870377594124e-12 24 KSP Residual norm 1.025371996314e-12 24 KSP preconditioned resid norm 1.025371996314e-12 true resid norm 1.141211075669e-12 ||Ae||/||Ax|| 2.635514086960e-12 25 KSP Residual norm 3.207279162194e-13 25 KSP preconditioned resid norm 3.207279162194e-13 true resid norm 6.238776238382e-13 ||Ae||/||Ax|| 1.440783656257e-12 26 KSP Residual norm 1.290295601110e-13 26 KSP preconditioned resid norm 1.290295601110e-13 true resid norm 4.880917397572e-13 ||Ae||/||Ax|| 1.127199589352e-12 27 KSP Residual norm 3.648895071464e-14 27 KSP preconditioned resid norm 3.648895071464e-14 true resid norm 4.915218264645e-13 ||Ae||/||Ax|| 1.135121035287e-12 28 KSP Residual norm 9.038317929266e-15 28 KSP preconditioned resid norm 9.038317929266e-15 true resid norm 
4.525760014181e-13 ||Ae||/||Ax|| 1.045179504990e-12
 29 KSP Residual norm 3.168044355090e-15
 29 KSP preconditioned resid norm 3.168044355090e-15 true resid norm 4.800332481647e-13 ||Ae||/||Ax|| 1.108589300191e-12
 30 KSP Residual norm 4.793973611579e-13
 30 KSP preconditioned resid norm 4.793973611579e-13 true resid norm 4.793973611579e-13 ||Ae||/||Ax|| 1.107120782053e-12
 31 KSP Residual norm 1.622163699810e-13
 31 KSP preconditioned resid norm 1.622163699810e-13 true resid norm 5.177440837392e-13 ||Ae||/||Ax|| 1.195678744473e-12
 32 KSP Residual norm 3.278455971610e-14
 32 KSP preconditioned resid norm 3.278455971610e-14 true resid norm 5.095690679196e-13 ||Ae||/||Ax|| 1.176799354136e-12
 33 KSP Residual norm 9.985582557377e-15
 33 KSP preconditioned resid norm 9.985582557377e-15 true resid norm 5.216297041713e-13 ||Ae||/||Ax|| 1.204652200482e-12
 34 KSP Residual norm 4.197972975568e-15
 34 KSP preconditioned resid norm 4.197972975568e-15 true resid norm 5.581506198985e-13 ||Ae||/||Ax|| 1.288993642587e-12
 35 KSP Residual norm 1.034379105713e-15

From lua.byhh at gmail.com  Fri Aug 15 07:16:11 2008
From: lua.byhh at gmail.com (berry)
Date: Fri, 15 Aug 2008 20:16:11 +0800
Subject: Does petsc exploit the symmetric property of matrix?
Message-ID:

Hi,

I am totally a newbie to PETSc. Previously I used the LASPack library for
my linear solver. Now I have decided to switch to PETSc for our simulation
code.

I have a huge sparse matrix, 3 million x 3 million, with 21 million
nonzero elements. The matrix is symmetric, so the number of nonzero
elements in the upper triangular part is only 12 million. The matrix
format is the Yale sparse matrix format. However, I have not seen any
parameter indicating the matrix symmetry property in the
MatCreateSeqAIJWithArrays() function. The manual says:

PetscErrorCode PETSCMAT_DLLEXPORT MatCreateSeqAIJWithArrays(MPI_Comm comm,
   PetscInt m, PetscInt n, PetscInt *i, PetscInt *j, PetscScalar *a,
   Mat *mat)

Input Parameters:
   comm - must be an MPI communicator of size 1
   m    - number of rows
   n    - number of columns
   i    - row indices
   j    - column indices
   a    - matrix values

Can PETSc exploit any symmetric property of a given matrix? How could I
set it to be a symmetric one?

Best Regards,

--
Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China

From Hung.V.Nguyen at usace.army.mil  Fri Aug 15 08:04:23 2008
From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS)
Date: Fri, 15 Aug 2008 08:04:23 -0500
Subject: Petsc questions
In-Reply-To: <9D5069E5-DB45-48E5-9485-28F71A47E0EE@mcs.anl.gov>
References: <66D788D6-1839-40C4-A90C-4BD17F75B0C2@mcs.anl.gov>
	<9D5069E5-DB45-48E5-9485-28F71A47E0EE@mcs.anl.gov>
Message-ID:

Barry,

Thank you all,

-Hung

-----Original Message-----
From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov]
On Behalf Of Barry Smith
Sent: Thursday, August 14, 2008 8:50 PM
To: petsc-users at mcs.anl.gov
Subject: Re: Petsc questions

On Aug 14, 2008, at 1:33 PM, Matthew Knepley wrote:

> On Thu, Aug 14, 2008 at 1:22 PM, Nguyen, Hung V ERDC-ITL-MS wrote:
>>
>> Barry,
>>
>> Thanks for the info. I will use what we already have. How about
>> estimating the matrix condition number via PETSc?
>
> You can use -ksp_monitor_singular_value to print out the exterior
> singular values for the Hessenberg matrix generated by the Krylov
> method. I don't think we have anything else. Condition number
> estimation is a tough problem.
> See also KSPSetComputeSingularValues(), KSPSetComputeEigenvalues(), KSPComputeSingularValues(), KSPComputeEigenvalues() for use in your code. I've found if the operator is not terribly ill-conditioned these can work fairly well. If using GMRES for a non-symmetric matrix make sure you use a large enough restart to get accurate values. Barry > Matt > >> -Hung >> >> -----Original Message----- >> From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov >> ] On >> Behalf Of Barry Smith >> Sent: Thursday, August 14, 2008 11:08 AM >> To: petsc-users at mcs.anl.gov >> Subject: Re: Petsc questions >> >> >> Check AO functions. They renumber between two different orderings. >> >> But, if your code already puts everything in the new numbering, you >> can just >> use PETSc with that new numbering and leave your ordering code as >> it is. >> >> Barry >> >> On Aug 14, 2008, at 9:37 AM, Nguyen, Hung V ERDC-ITL-MS wrote: >> >>> >>> Hello All, >>> >>> I am new to PETSC. I have following questions about PETSC: >>> >>> 1. Do PETSC functions help to estimate/calculate a matrix condition >>> number? >>> If yes, can I get the info how to do it? >>> >>> 2. A question about renumbering nodes: our CFD code uses ParMetis to >>> compute the original partitioning of the mesh. The global nodes are >>> renumbered consecutively within each Parmetis partition as npetsc >>> which is a mapping vector from the original global node numbering to >>> the new numbering, see below as a test code. My question is whether >>> PETSC function helps to renumber from ParMetis partition to PETSC >>> partition or not? >>> >>> Thank for your help. >>> >>> Regards, >>> >>> -Hung >>> -- code: >>> ! Read the data. >>> >>> fname = 'petsc.dat' >>> call parnam (fname) >>> open (2, file = fname, status = 'old') >>> >>> ! No. global nodes, local nodes, owned nodes, >>> ! compressed columns, PEs. >>> >>> read (2, '(5i10)') ng, nloc, nown, ncol, npes >>> >>> if (noproc .ne. npes) then >>> if (myid .eq. 0) then >>> print*, 'Number of PEs from the data file does not match', >>> & ' the number from the run command.' >>> end if >>> call PetscFinalize (ierr) >>> stop >>> end if >>> >>> ! Local node array containing global node numbers. >>> >>> allocate (nglobal(nloc)) >>> >>> read (2, '(8i10)') nglobal >>> >>> ! Find petsc numbering scheme. >>> >>> allocate (nown_all(noproc)) >>> allocate (idisp(noproc)) >>> allocate (npetsc(nloc)) >>> allocate (nodes1(ng)) >>> allocate (nodes2(ng)) >>> >>> call MPI_ALLGATHER (nown, 1, MPI_INTEGER, nown_all, 1, >>> & MPI_INTEGER, PETSC_COMM_WORLD, ierr) >>> >>> idisp(1) = 0 >>> do i = 2, noproc >>> idisp(i) = idisp(i - 1) + nown_all(i - 1) >>> end do >>> >>> call MPI_ALLGATHERV (nglobal, nown, MPI_INTEGER, nodes1, >>> & nown_all, idisp, MPI_INTEGER, PETSC_COMM_WORLD, ierr) >>> >>> do i = 1, ng >>> ii = nodes1(i) >>> nodes2(ii) = i >>> end do >>> >>> ! Process the local nodes for their petsc numbers. >>> >>> do i = 1, nloc >>> ii = nglobal(i) >>> npetsc(i) = nodes2(ii) >>> end do >>> >>> >> >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > From hzhang at mcs.anl.gov Fri Aug 15 09:28:07 2008 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Fri, 15 Aug 2008 09:28:07 -0500 (CDT) Subject: Does petsc exploit the symmetric property of matrix? 
In-Reply-To:
References:
Message-ID:

Shengyong,

On Fri, 15 Aug 2008, berry wrote:

> Hi,
>
> I am totally a newbie to PETSc. Previously I used the LASPack library
> for my linear solver. Now I have decided to switch to PETSc for our
> simulation code.
>
> I have a huge sparse matrix, 3 million x 3 million, with 21 million
> nonzero elements. The matrix is symmetric, so the number of nonzero
> elements in the upper triangular part is only 12 million. The matrix
> format is the Yale sparse matrix format. However, I have not seen any
> parameter indicating the matrix symmetry property in the
> MatCreateSeqAIJWithArrays() function.
> [...]
>
> Can PETSc exploit any symmetric property of a given matrix? How could I
> set it to be a symmetric one?

You can use the MATSBAIJ matrix format (see
http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MATSBAIJ.html).

Using MATSBAIJ saves half of the storage space of the original sparse
matrix. However, it is not as efficient as MATAIJ for some matrix
operations, e.g., MatMult(), due to non-contiguous data access and
additional communication overhead.

Hong

> Best Regards,
>
> --
> Pang Shengyong
> Solidification Simulation Lab,
> State Key Lab of Mould & Die Technology,
> Huazhong Univ. of Sci. & Tech. China
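A sketch of the symmetric storage Hong suggests (illustrative only, not
from the thread; n and nz are assumed sizes, the block size is 1, and only
entries with column index >= row index would be inserted):

      Mat            S
      PetscErrorCode ierr

! Symmetric block AIJ with block size 1: only the upper triangular
! part (j >= i) is stored, roughly halving the matrix memory;
! nz is a per-row nonzero estimate used for preallocation
      call MatCreateSeqSBAIJ (PETSC_COMM_SELF, 1, n, n, nz,
     &     PETSC_NULL_INTEGER, S, ierr)

! ... insert upper-triangular entries with MatSetValues here ...

      call MatAssemblyBegin (S, MAT_FINAL_ASSEMBLY, ierr)
      call MatAssemblyEnd (S, MAT_FINAL_ASSEMBLY, ierr)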
call PetscViewerBinaryOpen(PETSC_COMM_WORLD,'matrix.mat', & & FILE_MODE_WRITE,view,ierr) call PetscViewerSetFormat(view,PETSC_VIEWER_BINARY_NATIVE,ierr) call MatView(A,view,ierr) call PetscViewerDestroy(view,ierr) ---- Test to load A matix: !!! read matrix call PetscViewerBinaryOpen(PETSC_COMM_WORLD,'matrix.mat', & & FILE_MODE_READ,view,ierr) call PetscViewerSetFormat(view,PETSC_VIEWER_BINARY_NATIVE,ierr) call MatLoad(view,MPIAIJ,A,ierr) CHKERRQ(ierr) call PetscViewerDestroy(view,ierr) From pbauman at ices.utexas.edu Fri Aug 15 13:39:52 2008 From: pbauman at ices.utexas.edu (Paul T. Bauman) Date: Fri, 15 Aug 2008 13:39:52 -0500 Subject: compiling PETSc with Intel MKL 10.0.1.14 In-Reply-To: <381F69EF-1C27-420C-8337-0EAEB70093E0@mcs.anl.gov> References: <485970E1.1090104@gmail.com> <381F69EF-1C27-420C-8337-0EAEB70093E0@mcs.anl.gov> Message-ID: <48A5CD78.9070201@ices.utexas.edu> Was there ever a fix/workaround introduced for this? I'm using 2.3.3p13 and I'm having trouble getting the config to recognize mkl 10.0.3.020. Thanks, Paul Barry Smith wrote: > > Could you email to petsc-maint at mcs.anl.gov ALL the messages as to > what goes wrong with > our current linking so we can fix it? > > Thanks > > Barry > > On Jun 18, 2008, at 3:32 PM, Randall Mackie wrote: > >> We've upgraded Intel MKL to version 10.0, but in this version, Intel has >> changed how libraries are suppose to be linked. For example, the >> libmkl_lapack.a >> is a dummy library, but that's what the PETSc configure script looks >> for. >> >> The documentation says, for example, to compile LAPACK in the static >> case, >> use libmkl_lapack.a libmkl_em64t.a >> >> and in the layered pure case to use >> libmkl_intel_lp64.a libmkl_intel_thread.a libmkl_core.a >> >> However, the PETSC configuration wants -lmkl_lapack -lmkl -lguide >> -lpthread >> >> Any suggestions are appreciated. >> >> Randy >> > From rlmackie862 at gmail.com Fri Aug 15 13:54:07 2008 From: rlmackie862 at gmail.com (Randall Mackie) Date: Fri, 15 Aug 2008 11:54:07 -0700 Subject: compiling PETSc with Intel MKL 10.0.1.14 In-Reply-To: <48A5CD78.9070201@ices.utexas.edu> References: <485970E1.1090104@gmail.com> <381F69EF-1C27-420C-8337-0EAEB70093E0@mcs.anl.gov> <48A5CD78.9070201@ices.utexas.edu> Message-ID: <48A5D0CF.8090605@gmail.com> Hi Paul, we were trying this under ROCKS 5.0, and I never could get my programs to work correctly, so we downgraded back to ROCKS 4.2, and everything is fine. We figured we would wait until ROCKS 5.1 and try again. Randy Paul T. Bauman wrote: > Was there ever a fix/workaround introduced for this? I'm using 2.3.3p13 > and I'm having trouble getting the config to recognize mkl 10.0.3.020. > > Thanks, > > Paul > > Barry Smith wrote: >> >> Could you email to petsc-maint at mcs.anl.gov ALL the messages as to >> what goes wrong with >> our current linking so we can fix it? >> >> Thanks >> >> Barry >> >> On Jun 18, 2008, at 3:32 PM, Randall Mackie wrote: >> >>> We've upgraded Intel MKL to version 10.0, but in this version, Intel has >>> changed how libraries are suppose to be linked. For example, the >>> libmkl_lapack.a >>> is a dummy library, but that's what the PETSc configure script looks >>> for. 
>>> >>> The documentation says, for example, to compile LAPACK in the static >>> case, >>> use libmkl_lapack.a libmkl_em64t.a >>> >>> and in the layered pure case to use >>> libmkl_intel_lp64.a libmkl_intel_thread.a libmkl_core.a >>> >>> However, the PETSC configuration wants -lmkl_lapack -lmkl -lguide >>> -lpthread >>> >>> Any suggestions are appreciated. >>> >>> Randy >>> >> > From bsmith at mcs.anl.gov Fri Aug 15 08:15:34 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 15 Aug 2008 08:15:34 -0500 Subject: gmres - restart and Gauss-Seidel In-Reply-To: <48A5320F.7050002@civil.gla.ac.uk> References: <48A4AD1C.8080906@civil.gla.ac.uk> <43186D3B-4335-49C9-A0B0-BB7719AA2015@mcs.anl.gov> <48A5320F.7050002@civil.gla.ac.uk> Message-ID: <6B0F4C2E-5DCC-4509-A0C1-E7F74BADB2E7@mcs.anl.gov> 0 KSP Residual norm 2.604671539574e+02 1 KSP Residual norm 8.673339524769e+00 2 KSP Residual norm 1.854343060681e+00 3 KSP Residual norm 4.635172307027e-01 4 KSP Residual norm 1.824358407207e-01 5 KSP Residual norm 9.823366032782e-02 6 KSP Residual norm 5.859833143089e-02 7 KSP Residual norm 2.929617664041e-02 8 KSP Residual norm 1.184403532587e-02 9 KSP Residual norm 3.942287795560e-03 10 KSP Residual norm 4.611215310185e+00 ^^^^^^^^^^^^^^^^^^^^^^ This should never happen. This means there is an error in your preconditioner, your smoother is not a linear operator on its input variables. With GMRES, at restart the residual norm should hardly change at all. The fact that it shoots up means that the norm indicated in iteration 9 is totally wrong. If you run the gmres version with -ksp_monitor_true_residual you will see that the true residual is not actually decreasing like you think it is. Barry On Aug 15, 2008, at 2:36 AM, Lukasz Kaczmarczyk wrote: > Barry Smith wrote: >> >> On Aug 14, 2008, at 5:09 PM, Lukasz Kaczmarczyk wrote: >> >>> Hello, >>> I have implementation of geometric multi-grid for heterogeneous >>> quasi-brittle materials for hybrid-trefftz finite elements >>> (degrees of >>> freedom are on faces -> small number of neighbours). Multi-grid >>> algorithm need smoothing, for that I use Gauss-Seidel, however SOR >>> implemented in PETSc is not parallel. That is way, I implemented >>> my own >>> parallel Gauss-Seidel with colouring of faces in order to reduce >>> communication. Everything seems to work prefect, except that that >>> for >>> GMRES after restart algorithm is divergent. >> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ >> >> What do you mean? It is converging fine until you hit the restart >> iteration and >> then get a totally different residual norm? And then each iteration >> gives >> worse residuals? > > Thanks You all for response. No, algorithm is stable to next restart > >> >> Please run with -ksp_monitor_true_residual and send the output. >> If the preconditioner is not actually a linear operator (i.e. it >> has a >> bug in >> smoother) then the residual norm computed in GMRES may be wrong >> and so GMRES may look like it is working but actually is basically >> chugging on garbage. Also run with -ksp_type fgmres >> -ksp_monitor_true_residual >> and send the output. 
>> > > I hope that this helps, I send three outputs: > 1) -ksp_type gmres -ksp_gmres_restart 10 > 2) -ksp_type fgmres -ksp_gmres_restart 100 -ksp_monitor_true_residual > 3) -ksp_type fgmres -ksp_gmres_restart 10 -ksp_monitor_true_residual > > 1) ksp_type gmres -ksp_gmres_restart 10 > > 0 KSP Residual norm 2.604671539574e+02 > 1 KSP Residual norm 8.673339524769e+00 > 2 KSP Residual norm 1.854343060681e+00 > 3 KSP Residual norm 4.635172307027e-01 > 4 KSP Residual norm 1.824358407207e-01 > 5 KSP Residual norm 9.823366032782e-02 > 6 KSP Residual norm 5.859833143089e-02 > 7 KSP Residual norm 2.929617664041e-02 > 8 KSP Residual norm 1.184403532587e-02 > 9 KSP Residual norm 3.942287795560e-03 > 10 KSP Residual norm 4.611215310185e+00 > 11 KSP Residual norm 3.557713305907e-01 > 12 KSP Residual norm 1.911999331832e-01 > 13 KSP Residual norm 1.048287519555e-01 > 14 KSP Residual norm 5.745962963315e-02 > 15 KSP Residual norm 5.238562834476e-02 > 16 KSP Residual norm 5.046872351948e-02 > 17 KSP Residual norm 5.041805668527e-02 > 18 KSP Residual norm 4.952005417494e-02 > 19 KSP Residual norm 4.950805199415e-02 > 20 KSP Residual norm 8.581369434072e+00 > 21 KSP Residual norm 8.168490133118e-01 > 22 KSP Residual norm 1.416002910406e-01 > 23 KSP Residual norm 6.679961898612e-02 > 24 KSP Residual norm 5.031183883430e-02 > 25 KSP Residual norm 4.670843908572e-02 > 26 KSP Residual norm 4.662064456590e-02 > 27 KSP Residual norm 4.652217965668e-02 > 28 KSP Residual norm 4.611529866498e-02 > 29 KSP Residual norm 4.610330586416e-02 > 30 KSP Residual norm 9.139117000729e+00 > 31 KSP Residual norm 1.170939505327e+00 > 32 KSP Residual norm 1.783857007759e-01 > 33 KSP Residual norm 1.413658128682e-01 > 34 KSP Residual norm 1.411820201886e-01 > 35 KSP Residual norm 1.372634877051e-01 > > > 2) -ksp_type fgmres -ksp_gmres_restart 100 -ksp_monitor_true_residual > 0 KSP Residual norm 8.770931682456e+05 > 0 KSP preconditioned resid norm 8.770931682456e+05 true resid norm > 8.770931682456e+05 ||Ae||/||Ax|| 2.025559907164e+06 > 1 KSP Residual norm 1.466659768959e+00 > 1 KSP preconditioned resid norm 1.466659768959e+00 true resid norm > 1.466659768959e+00 ||Ae||/||Ax|| 3.387105649673e+00 > 2 KSP Residual norm 2.135783619677e-01 > 2 KSP preconditioned resid norm 2.135783619677e-01 true resid norm > 2.135783619678e-01 ||Ae||/||Ax|| 4.932380991006e-01 > 3 KSP Residual norm 1.342354228152e-01 > 3 KSP preconditioned resid norm 1.342354228152e-01 true resid norm > 1.342354228151e-01 ||Ae||/||Ax|| 3.100034299884e-01 > 4 KSP Residual norm 7.872906297489e-02 > 4 KSP preconditioned resid norm 7.872906297489e-02 true resid norm > 7.872906297491e-02 ||Ae||/||Ax|| 1.818169828064e-01 > 5 KSP Residual norm 3.414496482864e-02 > 5 KSP preconditioned resid norm 3.414496482864e-02 true resid norm > 3.414496482865e-02 ||Ae||/||Ax|| 7.885441854116e-02 > 6 KSP Residual norm 2.025183908090e-02 > 6 KSP preconditioned resid norm 2.025183908090e-02 true resid norm > 2.025183908092e-02 ||Ae||/||Ax|| 4.676961897981e-02 > 7 KSP Residual norm 1.023707179274e-02 > 7 KSP preconditioned resid norm 1.023707179274e-02 true resid norm > 1.023707179279e-02 ||Ae||/||Ax|| 2.364150462111e-02 > 8 KSP Residual norm 4.938281368004e-03 > 8 KSP preconditioned resid norm 4.938281368004e-03 true resid norm > 4.938281368031e-03 ||Ae||/||Ax|| 1.140447230867e-02 > 9 KSP Residual norm 2.373276511281e-03 > 9 KSP preconditioned resid norm 2.373276511281e-03 true resid norm > 2.373276511245e-03 ||Ae||/||Ax|| 5.480847330515e-03 > 10 KSP Residual norm 1.180493643594e-03 > 10 KSP 
preconditioned resid norm 1.180493643594e-03 true resid norm > 1.180493643635e-03 ||Ae||/||Ax|| 2.726233291716e-03 > 11 KSP Residual norm 7.142592937639e-04 > 11 KSP preconditioned resid norm 7.142592937639e-04 true resid norm > 7.142592936809e-04 ||Ae||/||Ax|| 1.649511181911e-03 > 12 KSP Residual norm 4.226036746778e-04 > 12 KSP preconditioned resid norm 4.226036746778e-04 true resid norm > 4.226036746206e-04 ||Ae||/||Ax|| 9.759613812110e-04 > 13 KSP Residual norm 2.572539106719e-04 > 13 KSP preconditioned resid norm 2.572539106719e-04 true resid norm > 2.572539106261e-04 ||Ae||/||Ax|| 5.941024582002e-04 > 14 KSP Residual norm 1.444791718863e-04 > 14 KSP preconditioned resid norm 1.444791718863e-04 true resid norm > 1.444791718510e-04 ||Ae||/||Ax|| 3.336603550419e-04 > 15 KSP Residual norm 8.492790274639e-05 > 15 KSP preconditioned resid norm 8.492790274639e-05 true resid norm > 8.492790267409e-05 ||Ae||/||Ax|| 1.961325898824e-04 > 16 KSP Residual norm 4.707269600494e-05 > 16 KSP preconditioned resid norm 4.707269600494e-05 true resid norm > 4.707269601177e-05 ||Ae||/||Ax|| 1.087097348555e-04 > 17 KSP Residual norm 2.692621130650e-05 > 17 KSP preconditioned resid norm 2.692621130650e-05 true resid norm > 2.692621130585e-05 ||Ae||/||Ax|| 6.218342138276e-05 > 18 KSP Residual norm 1.339283607815e-05 > 18 KSP preconditioned resid norm 1.339283607815e-05 true resid norm > 1.339283617815e-05 ||Ae||/||Ax|| 3.092943029068e-05 > 19 KSP Residual norm 8.006569523728e-06 > 19 KSP preconditioned resid norm 8.006569523728e-06 true resid norm > 8.006569546889e-06 ||Ae||/||Ax|| 1.849038033273e-05 > 20 KSP Residual norm 5.048620120372e-06 > 20 KSP preconditioned resid norm 5.048620120372e-06 true resid norm > 5.048620165524e-06 ||Ae||/||Ax|| 1.165928884641e-05 > 21 KSP Residual norm 3.079047055409e-06 > 21 KSP preconditioned resid norm 3.079047055409e-06 true resid norm > 3.079047092852e-06 ||Ae||/||Ax|| 7.110754671622e-06 > 22 KSP Residual norm 1.837917124370e-06 > 22 KSP preconditioned resid norm 1.837917124370e-06 true resid norm > 1.837917088425e-06 ||Ae||/||Ax|| 4.244487703000e-06 > 23 KSP Residual norm 8.755715968227e-07 > 23 KSP preconditioned resid norm 8.755715968227e-07 true resid norm > 8.755715746951e-07 ||Ae||/||Ax|| 2.022045937380e-06 > 24 KSP Residual norm 4.460215115186e-07 > 24 KSP preconditioned resid norm 4.460215115186e-07 true resid norm > 4.460215473811e-07 ||Ae||/||Ax|| 1.030042641779e-06 > 25 KSP Residual norm 2.074601204717e-07 > 25 KSP preconditioned resid norm 2.074601204717e-07 true resid norm > 2.074601610498e-07 ||Ae||/||Ax|| 4.791087193129e-07 > 26 KSP Residual norm 1.078594582430e-07 > 26 KSP preconditioned resid norm 1.078594582430e-07 true resid norm > 1.078594313079e-07 ||Ae||/||Ax|| 2.490906868011e-07 > 27 KSP Residual norm 5.595789534852e-08 > 27 KSP preconditioned resid norm 5.595789534852e-08 true resid norm > 5.595808793116e-08 ||Ae||/||Ax|| 1.292296685216e-07 > 28 KSP Residual norm 2.866350154035e-08 > 28 KSP preconditioned resid norm 2.866350154035e-08 true resid norm > 2.866379598785e-08 ||Ae||/||Ax|| 6.619620131833e-08 > 29 KSP Residual norm 1.602353949308e-08 > 29 KSP preconditioned resid norm 1.602353949308e-08 true resid norm > 1.602386060096e-08 ||Ae||/||Ax|| 3.700552092568e-08 > 30 KSP Residual norm 8.795011075741e-09 > 30 KSP preconditioned resid norm 8.795011075741e-09 true resid norm > 8.795292666765e-09 ||Ae||/||Ax|| 2.031185835503e-08 > 31 KSP Residual norm 4.077129947799e-09 > 31 KSP preconditioned resid norm 4.077129947799e-09 true resid norm > 
4.079852425518e-09 ||Ae||/||Ax|| 9.422015584506e-09 > 32 KSP Residual norm 1.739123114987e-09 > 32 KSP preconditioned resid norm 1.739123114987e-09 true resid norm > 1.743717145343e-09 ||Ae||/||Ax|| 4.026942253017e-09 > 33 KSP Residual norm 8.452345380894e-10 > 33 KSP preconditioned resid norm 8.452345380894e-10 true resid norm > 8.816532993133e-10 ||Ae||/||Ax|| 2.036091078762e-09 > 34 KSP Residual norm 3.861058363559e-10 > 34 KSP preconditioned resid norm 3.861058363559e-10 true resid norm > 3.853896597046e-10 ||Ae||/||Ax|| 8.900192950935e-10 > 35 KSP Residual norm 1.986142320978e-10 > > 3) -ksp_type fgmres -ksp_gmres_restart 10 -ksp_monitor_true_residual > 0 KSP Residual norm 8.770931682456e+05 > 0 KSP preconditioned resid norm 8.770931682456e+05 true resid norm > 8.770931682456e+05 ||Ae||/||Ax|| 2.025559907164e+06 > 1 KSP Residual norm 1.466659768959e+00 > 1 KSP preconditioned resid norm 1.466659768959e+00 true resid norm > 1.466659768959e+00 ||Ae||/||Ax|| 3.387105649673e+00 > 2 KSP Residual norm 2.135783619677e-01 > 2 KSP preconditioned resid norm 2.135783619677e-01 true resid norm > 2.135783619678e-01 ||Ae||/||Ax|| 4.932380991006e-01 > 3 KSP Residual norm 1.342354228152e-01 > 3 KSP preconditioned resid norm 1.342354228152e-01 true resid norm > 1.342354228151e-01 ||Ae||/||Ax|| 3.100034299884e-01 > 4 KSP Residual norm 7.872906297489e-02 > 4 KSP preconditioned resid norm 7.872906297489e-02 true resid norm > 7.872906297491e-02 ||Ae||/||Ax|| 1.818169828064e-01 > 5 KSP Residual norm 3.414496482864e-02 > 5 KSP preconditioned resid norm 3.414496482864e-02 true resid norm > 3.414496482865e-02 ||Ae||/||Ax|| 7.885441854116e-02 > 6 KSP Residual norm 2.025183908090e-02 > 6 KSP preconditioned resid norm 2.025183908090e-02 true resid norm > 2.025183908092e-02 ||Ae||/||Ax|| 4.676961897981e-02 > 7 KSP Residual norm 1.023707179274e-02 > 7 KSP preconditioned resid norm 1.023707179274e-02 true resid norm > 1.023707179279e-02 ||Ae||/||Ax|| 2.364150462111e-02 > 8 KSP Residual norm 4.938281368004e-03 > 8 KSP preconditioned resid norm 4.938281368004e-03 true resid norm > 4.938281368031e-03 ||Ae||/||Ax|| 1.140447230867e-02 > 9 KSP Residual norm 2.373276511281e-03 > 9 KSP preconditioned resid norm 2.373276511281e-03 true resid norm > 2.373276511245e-03 ||Ae||/||Ax|| 5.480847330515e-03 > 10 KSP Residual norm 1.180493643635e-03 > 10 KSP preconditioned resid norm 1.180493643635e-03 true resid norm > 1.180493643635e-03 ||Ae||/||Ax|| 2.726233291716e-03 > 11 KSP Residual norm 8.973593571160e-04 > 11 KSP preconditioned resid norm 8.973593571160e-04 true resid norm > 8.973593570913e-04 ||Ae||/||Ax|| 2.072362665506e-03 > 12 KSP Residual norm 8.954054180580e-04 > 12 KSP preconditioned resid norm 8.954054180580e-04 true resid norm > 8.954054180762e-04 ||Ae||/||Ax|| 2.067850236641e-03 > 13 KSP Residual norm 8.509609619524e-04 > 13 KSP preconditioned resid norm 8.509609619524e-04 true resid norm > 8.509609619593e-04 ||Ae||/||Ax|| 1.965210161828e-03 > 14 KSP Residual norm 8.458566359370e-04 > 14 KSP preconditioned resid norm 8.458566359370e-04 true resid norm > 8.458566359291e-04 ||Ae||/||Ax|| 1.953422225798e-03 > 15 KSP Residual norm 8.372984660266e-04 > 15 KSP preconditioned resid norm 8.372984660266e-04 true resid norm > 8.372984660471e-04 ||Ae||/||Ax|| 1.933657979058e-03 > 16 KSP Residual norm 7.279198160941e-04 > 16 KSP preconditioned resid norm 7.279198160941e-04 true resid norm > 7.279198160961e-04 ||Ae||/||Ax|| 1.681058807086e-03 > 17 KSP Residual norm 7.254072398644e-04 > 17 KSP preconditioned resid norm 
7.254072398644e-04 true resid norm > 7.254072398528e-04 ||Ae||/||Ax|| 1.675256260804e-03 > 18 KSP Residual norm 6.813582761007e-04 > 18 KSP preconditioned resid norm 6.813582761007e-04 true resid norm > 6.813582760932e-04 ||Ae||/||Ax|| 1.573529536468e-03 > 19 KSP Residual norm 5.556242867367e-04 > 19 KSP preconditioned resid norm 5.556242867367e-04 true resid norm > 5.556242867546e-04 ||Ae||/||Ax|| 1.283159326104e-03 > 20 KSP Residual norm 4.379413135337e-04 > 20 KSP preconditioned resid norm 4.379413135337e-04 true resid norm > 4.379413135337e-04 ||Ae||/||Ax|| 1.011382141032e-03 > 21 KSP Residual norm 4.159789873039e-04 > 21 KSP preconditioned resid norm 4.159789873039e-04 true resid norm > 4.159789873292e-04 ||Ae||/||Ax|| 9.606623212470e-04 > 22 KSP Residual norm 3.988218089618e-04 > 22 KSP preconditioned resid norm 3.988218089618e-04 true resid norm > 3.988218089773e-04 ||Ae||/||Ax|| 9.210395150869e-04 > 23 KSP Residual norm 3.981102611370e-04 > 23 KSP preconditioned resid norm 3.981102611370e-04 true resid norm > 3.981102611544e-04 ||Ae||/||Ax|| 9.193962657785e-04 > 24 KSP Residual norm 3.619785012910e-04 > 24 KSP preconditioned resid norm 3.619785012910e-04 true resid norm > 3.619785013142e-04 ||Ae||/||Ax|| 8.359535406984e-04 > 25 KSP Residual norm 3.351791646576e-04 > 25 KSP preconditioned resid norm 3.351791646576e-04 true resid norm > 3.351791646837e-04 ||Ae||/||Ax|| 7.740631238275e-04 > 26 KSP Residual norm 3.051027323014e-04 > 26 KSP preconditioned resid norm 3.051027323014e-04 true resid norm > 3.051027323283e-04 ||Ae||/||Ax|| 7.046045785610e-04 > 27 KSP Residual norm 2.977852161346e-04 > 27 KSP preconditioned resid norm 2.977852161346e-04 true resid norm > 2.977852161482e-04 ||Ae||/||Ax|| 6.877054988155e-04 > 28 KSP Residual norm 2.507200839706e-04 > 28 KSP preconditioned resid norm 2.507200839706e-04 true resid norm > 2.507200839354e-04 ||Ae||/||Ax|| 5.790132318055e-04 > 29 KSP Residual norm 1.792952379074e-04 > 29 KSP preconditioned resid norm 1.792952379074e-04 true resid norm > 1.792952379357e-04 ||Ae||/||Ax|| 4.140646155464e-04 > 30 KSP Residual norm 1.539920471163e-04 > 30 KSP preconditioned resid norm 1.539920471163e-04 true resid norm > 1.539920471163e-04 ||Ae||/||Ax|| 3.556293994227e-04 > 31 KSP Residual norm 1.538529289011e-04 > 31 KSP preconditioned resid norm 1.538529289011e-04 true resid norm > 1.538529288671e-04 ||Ae||/||Ax|| 3.553081195881e-04 > 32 KSP Residual norm 1.504592268331e-04 > 32 KSP preconditioned resid norm 1.504592268331e-04 true resid norm > 1.504592268204e-04 ||Ae||/||Ax|| 3.474707004273e-04 > 33 KSP Residual norm 1.504404986449e-04 > 33 KSP preconditioned resid norm 1.504404986449e-04 true resid norm > 1.504404986576e-04 ||Ae||/||Ax|| 3.474274495880e-04 > 34 KSP Residual norm 1.411298358801e-04 > 34 KSP preconditioned resid norm 1.411298358801e-04 true resid norm > 1.411298358680e-04 ||Ae||/||Ax|| 3.259253949163e-04 > 35 KSP Residual norm 1.128668990420e-04 > > > From lukasz at civil.gla.ac.uk Fri Aug 15 17:40:12 2008 From: lukasz at civil.gla.ac.uk (Lukasz Kaczmarczyk) Date: Fri, 15 Aug 2008 23:40:12 +0100 Subject: gmres - restart and Gauss-Seidel In-Reply-To: <6B0F4C2E-5DCC-4509-A0C1-E7F74BADB2E7@mcs.anl.gov> References: <48A4AD1C.8080906@civil.gla.ac.uk> <43186D3B-4335-49C9-A0B0-BB7719AA2015@mcs.anl.gov> <48A5320F.7050002@civil.gla.ac.uk> <6B0F4C2E-5DCC-4509-A0C1-E7F74BADB2E7@mcs.anl.gov> Message-ID: <48A605CC.3090503@civil.gla.ac.uk> Barry Smith wrote: > 9 KSP Residual norm 3.942287795560e-03 > 10 KSP Residual norm 4.611215310185e+00 > 
^^^^^^^^^^^^^^^^^^^^^^ > > This should never happen. This means there is an error > in your preconditioner, your smoother is not a linear operator > on its input variables. With GMRES, at restart the residual > norm should hardly change at all. The fact that it shoots > up means that the norm indicated in iteration 9 is totally wrong. > > If you run the gmres version with -ksp_monitor_true_residual > you will see that the true residual is not actually decreasing > like you think it is. > Thank You for answer, result for which you asking are in my previous email - look at end of the previous email. If You look there at true residual, You will see that results are ok. Regards, Lukasz 2) -ksp_type fgmres -ksp_gmres_restart 100 -ksp_monitor_true_residual 0 KSP preconditioned resid norm 8.770931682456e+05 true resid norm 8.770931682456e+05 ||Ae||/||Ax|| 2.025559907164e+06 1 KSP preconditioned resid norm 1.466659768959e+00 true resid norm 1.466659768959e+00 ||Ae||/||Ax|| 3.387105649673e+00 2 KSP preconditioned resid norm 2.135783619677e-01 true resid norm 2.135783619678e-01 ||Ae||/||Ax|| 4.932380991006e-01 3 KSP preconditioned resid norm 1.342354228152e-01 true resid norm 1.342354228151e-01 ||Ae||/||Ax|| 3.100034299884e-01 4 KSP preconditioned resid norm 7.872906297489e-02 true resid norm 7.872906297491e-02 ||Ae||/||Ax|| 1.818169828064e-01 5 KSP preconditioned resid norm 3.414496482864e-02 true resid norm 3.414496482865e-02 ||Ae||/||Ax|| 7.885441854116e-02 6 KSP preconditioned resid norm 2.025183908090e-02 true resid norm 2.025183908092e-02 ||Ae||/||Ax|| 4.676961897981e-02 7 KSP preconditioned resid norm 1.023707179274e-02 true resid norm 1.023707179279e-02 ||Ae||/||Ax|| 2.364150462111e-02 8 KSP preconditioned resid norm 4.938281368004e-03 true resid norm 4.938281368031e-03 ||Ae||/||Ax|| 1.140447230867e-02 9 KSP preconditioned resid norm 2.373276511281e-03 true resid norm 2.373276511245e-03 ||Ae||/||Ax|| 5.480847330515e-03 10 KSP preconditioned resid norm 1.180493643594e-03 true resid norm 1.180493643635e-03 ||Ae||/||Ax|| 2.726233291716e-03 11 KSP preconditioned resid norm 7.142592937639e-04 true resid norm 7.142592936809e-04 ||Ae||/||Ax|| 1.649511181911e-03 12 KSP preconditioned resid norm 4.226036746778e-04 true resid norm 3) -ksp_type fgmres -ksp_gmres_restart 10 -ksp_monitor_true_residual 0 KSP preconditioned resid norm 8.770931682456e+05 true resid norm 8.770931682456e+05 ||Ae||/||Ax|| 2.025559907164e+06 1 KSP preconditioned resid norm 1.466659768959e+00 true resid norm 1.466659768959e+00 ||Ae||/||Ax|| 3.387105649673e+00 2 KSP preconditioned resid norm 2.135783619677e-01 true resid norm 2.135783619678e-01 ||Ae||/||Ax|| 4.932380991006e-01 3 KSP preconditioned resid norm 1.342354228152e-01 true resid norm 1.342354228151e-01 ||Ae||/||Ax|| 3.100034299884e-01 4 KSP preconditioned resid norm 7.872906297489e-02 true resid norm 7.872906297491e-02 ||Ae||/||Ax|| 1.818169828064e-01 5 KSP preconditioned resid norm 3.414496482864e-02 true resid norm 3.414496482865e-02 ||Ae||/||Ax|| 7.885441854116e-02 6 KSP preconditioned resid norm 2.025183908090e-02 true resid norm 2.025183908092e-02 ||Ae||/||Ax|| 4.676961897981e-02 7 KSP preconditioned resid norm 1.023707179274e-02 true resid norm 1.023707179279e-02 ||Ae||/||Ax|| 2.364150462111e-02 8 KSP preconditioned resid norm 4.938281368004e-03 true resid norm 4.938281368031e-03 ||Ae||/||Ax|| 1.140447230867e-02 9 KSP preconditioned resid norm 2.373276511281e-03 true resid norm 2.373276511245e-03 ||Ae||/||Ax|| 5.480847330515e-03 10 KSP preconditioned resid norm 
1.180493643635e-03 true resid norm 1.180493643635e-03 ||Ae||/||Ax|| 2.726233291716e-03 11 KSP preconditioned resid norm 8.973593571160e-04 true resid norm 8.973593570913e-04 ||Ae||/||Ax|| 2.072362665506e-03 12 KSP preconditioned resid norm 8.954054180580e-04 true resid norm From knepley at gmail.com Fri Aug 15 18:38:48 2008 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 15 Aug 2008 18:38:48 -0500 Subject: gmres - restart and Gauss-Seidel In-Reply-To: <48A605CC.3090503@civil.gla.ac.uk> References: <48A4AD1C.8080906@civil.gla.ac.uk> <43186D3B-4335-49C9-A0B0-BB7719AA2015@mcs.anl.gov> <48A5320F.7050002@civil.gla.ac.uk> <6B0F4C2E-5DCC-4509-A0C1-E7F74BADB2E7@mcs.anl.gov> <48A605CC.3090503@civil.gla.ac.uk> Message-ID: On Fri, Aug 15, 2008 at 5:40 PM, Lukasz Kaczmarczyk wrote: > Barry Smith wrote: >> 9 KSP Residual norm 3.942287795560e-03 >> 10 KSP Residual norm 4.611215310185e+00 >> ^^^^^^^^^^^^^^^^^^^^^^ >> >> This should never happen. This means there is an error >> in your preconditioner, your smoother is not a linear operator >> on its input variables. With GMRES, at restart the residual >> norm should hardly change at all. The fact that it shoots >> up means that the norm indicated in iteration 9 is totally wrong. >> >> If you run the gmres version with -ksp_monitor_true_residual >> you will see that the true residual is not actually decreasing >> like you think it is. >> > > Thank You for answer, result for which you asking are in my previous > email - look at end of the previous email. If You look there at true > residual, You will see that results are ok. That is not the point. The residual may be decreasing, but your PC is not a linear operator, and thus you can encounter unexpected results with GMRES. You can switch to FGMRES to handle this. 
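If you want to make the switch in code rather than on the command line, a
minimal sketch (untested; assuming ksp is the KSP object you already have):

  ierr = KSPSetType(ksp,KSPFGMRES);CHKERRQ(ierr);
  ierr = KSPGMRESSetRestart(ksp,30);CHKERRQ(ierr); /* plays the role of -ksp_gmres_restart */

FGMRES is right-preconditioned and tolerates a preconditioner that changes
from iteration to iteration; the cost is roughly twice the work-vector
storage per restart cycle.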
Matt > Regards, > Lukasz > > 2) -ksp_type fgmres -ksp_gmres_restart 100 -ksp_monitor_true_residual > 0 KSP preconditioned resid norm 8.770931682456e+05 true resid norm > 8.770931682456e+05 ||Ae||/||Ax|| 2.025559907164e+06 > 1 KSP preconditioned resid norm 1.466659768959e+00 true resid norm > 1.466659768959e+00 ||Ae||/||Ax|| 3.387105649673e+00 > 2 KSP preconditioned resid norm 2.135783619677e-01 true resid norm > 2.135783619678e-01 ||Ae||/||Ax|| 4.932380991006e-01 > 3 KSP preconditioned resid norm 1.342354228152e-01 true resid norm > 1.342354228151e-01 ||Ae||/||Ax|| 3.100034299884e-01 > 4 KSP preconditioned resid norm 7.872906297489e-02 true resid norm > 7.872906297491e-02 ||Ae||/||Ax|| 1.818169828064e-01 > 5 KSP preconditioned resid norm 3.414496482864e-02 true resid norm > 3.414496482865e-02 ||Ae||/||Ax|| 7.885441854116e-02 > 6 KSP preconditioned resid norm 2.025183908090e-02 true resid norm > 2.025183908092e-02 ||Ae||/||Ax|| 4.676961897981e-02 > 7 KSP preconditioned resid norm 1.023707179274e-02 true resid norm > 1.023707179279e-02 ||Ae||/||Ax|| 2.364150462111e-02 > 8 KSP preconditioned resid norm 4.938281368004e-03 true resid norm > 4.938281368031e-03 ||Ae||/||Ax|| 1.140447230867e-02 > 9 KSP preconditioned resid norm 2.373276511281e-03 true resid norm > 2.373276511245e-03 ||Ae||/||Ax|| 5.480847330515e-03 > 10 KSP preconditioned resid norm 1.180493643594e-03 true resid norm > 1.180493643635e-03 ||Ae||/||Ax|| 2.726233291716e-03 > 11 KSP preconditioned resid norm 7.142592937639e-04 true resid norm > 7.142592936809e-04 ||Ae||/||Ax|| 1.649511181911e-03 > 12 KSP preconditioned resid norm 4.226036746778e-04 true resid norm > > > 3) -ksp_type fgmres -ksp_gmres_restart 10 -ksp_monitor_true_residual > 0 KSP preconditioned resid norm 8.770931682456e+05 true resid norm > 8.770931682456e+05 ||Ae||/||Ax|| 2.025559907164e+06 > 1 KSP preconditioned resid norm 1.466659768959e+00 true resid norm > 1.466659768959e+00 ||Ae||/||Ax|| 3.387105649673e+00 > 2 KSP preconditioned resid norm 2.135783619677e-01 true resid norm > 2.135783619678e-01 ||Ae||/||Ax|| 4.932380991006e-01 > 3 KSP preconditioned resid norm 1.342354228152e-01 true resid norm > 1.342354228151e-01 ||Ae||/||Ax|| 3.100034299884e-01 > 4 KSP preconditioned resid norm 7.872906297489e-02 true resid norm > 7.872906297491e-02 ||Ae||/||Ax|| 1.818169828064e-01 > 5 KSP preconditioned resid norm 3.414496482864e-02 true resid norm > 3.414496482865e-02 ||Ae||/||Ax|| 7.885441854116e-02 > 6 KSP preconditioned resid norm 2.025183908090e-02 true resid norm > 2.025183908092e-02 ||Ae||/||Ax|| 4.676961897981e-02 > 7 KSP preconditioned resid norm 1.023707179274e-02 true resid norm > 1.023707179279e-02 ||Ae||/||Ax|| 2.364150462111e-02 > 8 KSP preconditioned resid norm 4.938281368004e-03 true resid norm > 4.938281368031e-03 ||Ae||/||Ax|| 1.140447230867e-02 > 9 KSP preconditioned resid norm 2.373276511281e-03 true resid norm > 2.373276511245e-03 ||Ae||/||Ax|| 5.480847330515e-03 > 10 KSP preconditioned resid norm 1.180493643635e-03 true resid norm > 1.180493643635e-03 ||Ae||/||Ax|| 2.726233291716e-03 > 11 KSP preconditioned resid norm 8.973593571160e-04 true resid norm > 8.973593570913e-04 ||Ae||/||Ax|| 2.072362665506e-03 > 12 KSP preconditioned resid norm 8.954054180580e-04 true resid norm > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From bsmith at mcs.anl.gov Fri Aug 15 21:28:15 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 15 Aug 2008 21:28:15 -0500 Subject: Petsc questions In-Reply-To: References: Message-ID: The matrix name is MATMPIAIJ, not MPIAIJ Barry We always recomment using implicit none with Fortran so it will flag all undefined variables and prevent problems like this. On Aug 15, 2008, at 11:00 AM, Nguyen, Hung V ERDC-ITL-MS wrote: > > Hello All, > > I have question about how to write a matrix to output file in binary > (or > ASCII) format. > In the test case below, the ai is original matrix and a is petsc > version. > > The test runs successfully to write a binary file named "matrix.mat". > However, when I rerun the test with reading matrix via "matload", > then I got > the error "Unknown Mat type given: !". Please let me know how to fix > it. > > Thank you in advance, > > Regards, > > -Hung > > -- error at running: > yod -np 16 ./fw -ksp_type cg -pc_type bjacobi -ksp_rtol 1.0e-15 - > ksp_max_it > 50000 > > hvnguyen:sapphire18% yod -np 16 ./fw -ksp_type cg -pc_type bjacobi - > ksp_rtol > 1.0e-15 -ksp_max_it 50000 > [10]PETSC ERROR: [9]PETSC ERROR: --------------------- Error Message > ------------------------------------ > --------------------- Error Message > ------------------------------------ > [10]PETSC ERROR: [9]PETSC ERROR: Unknown type. Check for miss- > spelling or > missing external package needed for type! > Unknown type. Check for miss-spelling or missing external package > needed for > type! > [10]PETSC ERROR: [9]PETSC ERROR: Unknown Mat type given: ! > Unknown Mat type given: ! > [10]PETSC ERROR: [9]PETSC ERROR: > > ---code: > ! Define the a matrix. > > ione = 1 > ii = ncol > jj = ncol / 2 > call MatCreateMPIAIJ (PETSC_COMM_WORLD, nown, nown, ng, ng, > & ii, PETSC_NULL_INTEGER, jj, PETSC_NULL_INTEGER, a, ierr) > call MatSetFromOptions (a, ierr) > > do i = 1, nown > > ii = npetsc(i) - 1 > > do j = 1, ncol > jloc = id(i, j) > if (jloc .ne. 0) then > jj = npetsc(jloc) - 1 > v = ai(i, j) > call MatSetValues (a, ione, ii, ione, jj, v, > & INSERT_VALUES, ierr) > end if > end do > > end do > > call MatAssemblyBegin (a, MAT_FINAL_ASSEMBLY, ierr) > call MatAssemblyEnd (a, MAT_FINAL_ASSEMBLY, ierr) > > ! Now Write this matrix to a binary file > ! > call PetscViewerBinaryOpen(PETSC_COMM_WORLD,'matrix.mat', > & > & FILE_MODE_WRITE,view,ierr) > call > PetscViewerSetFormat(view,PETSC_VIEWER_BINARY_NATIVE,ierr) > call MatView(A,view,ierr) > call PetscViewerDestroy(view,ierr) > > > ---- Test to load A matix: > !!! read matrix > > call > PetscViewerBinaryOpen(PETSC_COMM_WORLD,'matrix.mat', & > & FILE_MODE_READ,view,ierr) > call > PetscViewerSetFormat(view,PETSC_VIEWER_BINARY_NATIVE,ierr) > call MatLoad(view,MPIAIJ,A,ierr) > CHKERRQ(ierr) > call PetscViewerDestroy(view,ierr) > > From bsmith at mcs.anl.gov Fri Aug 15 21:29:47 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 15 Aug 2008 21:29:47 -0500 Subject: compiling PETSc with Intel MKL 10.0.1.14 In-Reply-To: <48A5CD78.9070201@ices.utexas.edu> References: <485970E1.1090104@gmail.com> <381F69EF-1C27-420C-8337-0EAEB70093E0@mcs.anl.gov> <48A5CD78.9070201@ices.utexas.edu> Message-ID: Please send the configure.log to petsc-maint at mcs.anl.gov Barry On Aug 15, 2008, at 1:39 PM, Paul T. Bauman wrote: > Was there ever a fix/workaround introduced for this? I'm using > 2.3.3p13 and I'm having trouble getting the config to recognize mkl > 10.0.3.020. 
> > Thanks, > > Paul > > Barry Smith wrote: >> >> Could you email to petsc-maint at mcs.anl.gov ALL the messages as to >> what goes wrong with >> our current linking so we can fix it? >> >> Thanks >> >> Barry >> >> On Jun 18, 2008, at 3:32 PM, Randall Mackie wrote: >> >>> We've upgraded Intel MKL to version 10.0, but in this version, >>> Intel has >>> changed how libraries are suppose to be linked. For example, the >>> libmkl_lapack.a >>> is a dummy library, but that's what the PETSc configure script >>> looks for. >>> >>> The documentation says, for example, to compile LAPACK in the >>> static case, >>> use libmkl_lapack.a libmkl_em64t.a >>> >>> and in the layered pure case to use >>> libmkl_intel_lp64.a libmkl_intel_thread.a libmkl_core.a >>> >>> However, the PETSC configuration wants -lmkl_lapack -lmkl -lguide - >>> lpthread >>> >>> Any suggestions are appreciated. >>> >>> Randy >>> >> > From lua.byhh at gmail.com Fri Aug 15 21:33:07 2008 From: lua.byhh at gmail.com (Shengyong) Date: Sat, 16 Aug 2008 10:33:07 +0800 Subject: Does petsc exploit the symmetric property of matrix? In-Reply-To: References: Message-ID: Hi, hong Thanks very much! Best Regards, On Fri, Aug 15, 2008 at 10:28 PM, Hong Zhang wrote: > > Shengyong, > > On Fri, 15 Aug 2008, berry wrote: > > Hi, >> >> I am totally a newbie to petsc. Previously I use laspack library for my >> linear solver. Now I decide to transfer to Petsc for our simulation code. >> >> I have a huge sparse matrix with 3million*3million with 21 million no zero >> elements. The matrix is symmetric, so the non zero element of upper >> triangular matrix >> is only 12 million. And matrix format is Yale Space Matrix format. >> However, >> I have not seen any indicative parmater for matrix symmetry property in >> MatCreateSeqAIJWithArrays() function. The manual says: >> >> >> PetscErrorCode PETSCMAT_DLLEXPORT MatCreateSeqAIJWithArrays(MPI_Comm >> comm,PetscInt m,PetscInt n,PetscInt* i,PetscInt*j,PetscScalar *a,Mat >> *mat) >> >> < >> http://www-unix.mcs.anl.gov/petsc/petsc-2/snapshots/petsc-current/docs/manualpages/Sys/MPI_Comm.html#MPI_Comm >> > >> >> Input Parameters >> *comm< >> http://www-unix.mcs.anl.gov/petsc/petsc-2/snapshots/petsc-current/docs/manualpages/Sys/comm.html#comm >> > >> *- must be an MPI communicator of >> size< >> http://www-unix.mcs.anl.gov/petsc/petsc-2/snapshots/petsc-current/docs/manualpages/Sys/size.html#size >> >1 >> *m *- number of rows >> *n *- number of columns >> *i *- row indices >> *j * >> >> *a * >> >> >> Can petsc exploit any symmetric proptery of a given matrix? How could I >> Set >> it to be symmetry one ? >> > > You can use MATSBAIJ matrix format > (see > http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MATSBAIJ.html > ) > > Using MATSBAIJ saves half of storage space for original sparse matrix. > However, > it is not as efficient as MATAIJ for some matrix operations, e.g., > MatMult(), due to non-contiguous data access and additional communication > overhead. > > Hong > > >> Best Regards, >> >> >> >> >> -- >> Pang Shengyong >> Solidification Simulation Lab, >> State Key Lab of Mould & Die Technology, >> Huazhong Univ. of Sci. & Tech. China >> >> > -- Pang Shengyong Solidification Simulation Lab, State Key Lab of Mould & Die Technology, Huazhong Univ. of Sci. & Tech. China -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Fri Aug 15 21:41:36 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 15 Aug 2008 21:41:36 -0500 Subject: gmres - restart and Gauss-Seidel In-Reply-To: <48A56C51.1030602@civil.gla.ac.uk> References: <48A4AD1C.8080906@civil.gla.ac.uk> <48A56C51.1030602@civil.gla.ac.uk> Message-ID: <1235BB0E-14DE-4D05-8248-6D9B13001F3D@mcs.anl.gov> I don't think this really solves the problem, I think it merely hides the problem. Barry On Aug 15, 2008, at 6:45 AM, Lukasz Kaczmarczyk wrote: > Thank for all, problem is solved, I add PCJACOBI, > PCSetType(Composite_prec,PCCOMPOSITE); > PCCompositeAddPC(Composite_prec,PCJACOBI); > PCCompositeAddPC(Composite_prec,PCSHELL); // Gauss-Seidel COLOR > Forward > PCCompositeAddPC(Composite_prec,PCSHELL); // MulltiGrid > PCCompositeAddPC(Composite_prec,PCSHELL); // Gauss-Seidel COLOR > Forward > PCCompositeAddPC(Composite_prec,PCJACOBI); > > Now it works :) > -ksp_type fgmres -ksp_gmres_restart 10 -ksp_monitor_true_residual > 0 KSP Residual norm 8.770931682456e+05 > 0 KSP preconditioned resid norm 8.770931682456e+05 true resid norm > 8.770931682456e+05 ||Ae||/||Ax|| 2.025559907164e+06 > 1 KSP Residual norm 2.470886778565e-01 > 1 KSP preconditioned resid norm 2.470886778565e-01 true resid norm > 2.470886778563e-01 ||Ae||/||Ax|| 5.706268586963e-01 > 2 KSP Residual norm 6.271486855102e-02 > 2 KSP preconditioned resid norm 6.271486855102e-02 true resid norm > 6.271486855100e-02 ||Ae||/||Ax|| 1.448337849605e-01 > 3 KSP Residual norm 1.782615793912e-02 > 3 KSP preconditioned resid norm 1.782615793912e-02 true resid norm > 1.782615793917e-02 ||Ae||/||Ax|| 4.116774833918e-02 > 4 KSP Residual norm 6.410478294417e-03 > 4 KSP preconditioned resid norm 6.410478294417e-03 true resid norm > 6.410478294455e-03 ||Ae||/||Ax|| 1.480436547575e-02 > 5 KSP Residual norm 1.314224590977e-03 > 5 KSP preconditioned resid norm 1.314224590977e-03 true resid norm > 1.314224590937e-03 ||Ae||/||Ax|| 3.035071685413e-03 > 6 KSP Residual norm 2.683633682600e-04 > 6 KSP preconditioned resid norm 2.683633682600e-04 true resid norm > 2.683633681768e-04 ||Ae||/||Ax|| 6.197586514302e-04 > 7 KSP Residual norm 8.799918944041e-05 > 7 KSP preconditioned resid norm 8.799918944041e-05 true resid norm > 8.799918940855e-05 ||Ae||/||Ax|| 2.032254227740e-04 > 8 KSP Residual norm 2.508995399544e-05 > 8 KSP preconditioned resid norm 2.508995399544e-05 true resid norm > 2.508995405419e-05 ||Ae||/||Ax|| 5.794276690857e-05 > 9 KSP Residual norm 1.026378586306e-05 > 9 KSP preconditioned resid norm 1.026378586306e-05 true resid norm > 1.026378592108e-05 ||Ae||/||Ax|| 2.370319825777e-05 > 10 KSP Residual norm 3.126671988424e-06 > 10 KSP preconditioned resid norm 3.126671988424e-06 true resid norm > 3.126671988424e-06 ||Ae||/||Ax|| 7.220739656737e-06 > 11 KSP Residual norm 1.229059204228e-06 > 11 KSP preconditioned resid norm 1.229059204228e-06 true resid norm > 1.229059216105e-06 ||Ae||/||Ax|| 2.838390677073e-06 > 12 KSP Residual norm 5.416643445915e-07 > 12 KSP preconditioned resid norm 5.416643445915e-07 true resid norm > 5.416643499083e-07 ||Ae||/||Ax|| 1.250920232920e-06 > 13 KSP Residual norm 2.633538639496e-07 > 13 KSP preconditioned resid norm 2.633538639496e-07 true resid norm > 2.633538571527e-07 ||Ae||/||Ax|| 6.081896812770e-07 > 14 KSP Residual norm 1.027652175716e-07 > 14 KSP preconditioned resid norm 1.027652175716e-07 true resid norm > 1.027652231589e-07 ||Ae||/||Ax|| 2.373261170164e-07 > 15 KSP Residual norm 3.189221996600e-08 > 15 KSP preconditioned resid norm 
3.189221996600e-08 true resid norm > 3.189223368074e-08 ||Ae||/||Ax|| 7.365195880253e-08 > 16 KSP Residual norm 1.202828530862e-08 > 16 KSP preconditioned resid norm 1.202828530862e-08 true resid norm > 1.202828930905e-08 ||Ae||/||Ax|| 2.777814428189e-08 > 17 KSP Residual norm 2.701944514090e-09 > 17 KSP preconditioned resid norm 2.701944514090e-09 true resid norm > 2.701946176761e-09 ||Ae||/||Ax|| 6.239877409956e-09 > 18 KSP Residual norm 8.920433449729e-10 > 18 KSP preconditioned resid norm 8.920433449729e-10 true resid norm > 8.920273046556e-10 ||Ae||/||Ax|| 2.060048817870e-09 > 19 KSP Residual norm 2.868707402558e-10 > 19 KSP preconditioned resid norm 2.868707402558e-10 true resid norm > 2.868717856448e-10 ||Ae||/||Ax|| 6.625020106597e-10 > 20 KSP Residual norm 5.959506466900e-11 > 20 KSP preconditioned resid norm 5.959506466900e-11 true resid norm > 5.959506466900e-11 ||Ae||/||Ax|| 1.376289065161e-10 > 21 KSP Residual norm 2.347311204935e-11 > 21 KSP preconditioned resid norm 2.347311204935e-11 true resid norm > 2.346228685060e-11 ||Ae||/||Ax|| 5.418383051598e-11 > 22 KSP Residual norm 9.323415814482e-12 > 22 KSP preconditioned resid norm 9.323415814482e-12 true resid norm > 9.323298967857e-12 ||Ae||/||Ax|| 2.153123667531e-11 > 23 KSP Residual norm 2.925536240498e-12 > 23 KSP preconditioned resid norm 2.925536240498e-12 true resid norm > 2.974960765051e-12 ||Ae||/||Ax|| 6.870377594124e-12 > 24 KSP Residual norm 1.025371996314e-12 > 24 KSP preconditioned resid norm 1.025371996314e-12 true resid norm > 1.141211075669e-12 ||Ae||/||Ax|| 2.635514086960e-12 > 25 KSP Residual norm 3.207279162194e-13 > 25 KSP preconditioned resid norm 3.207279162194e-13 true resid norm > 6.238776238382e-13 ||Ae||/||Ax|| 1.440783656257e-12 > 26 KSP Residual norm 1.290295601110e-13 > 26 KSP preconditioned resid norm 1.290295601110e-13 true resid norm > 4.880917397572e-13 ||Ae||/||Ax|| 1.127199589352e-12 > 27 KSP Residual norm 3.648895071464e-14 > 27 KSP preconditioned resid norm 3.648895071464e-14 true resid norm > 4.915218264645e-13 ||Ae||/||Ax|| 1.135121035287e-12 > 28 KSP Residual norm 9.038317929266e-15 > 28 KSP preconditioned resid norm 9.038317929266e-15 true resid norm > 4.525760014181e-13 ||Ae||/||Ax|| 1.045179504990e-12 > 29 KSP Residual norm 3.168044355090e-15 > 29 KSP preconditioned resid norm 3.168044355090e-15 true resid norm > 4.800332481647e-13 ||Ae||/||Ax|| 1.108589300191e-12 > 30 KSP Residual norm 4.793973611579e-13 > 30 KSP preconditioned resid norm 4.793973611579e-13 true resid norm > 4.793973611579e-13 ||Ae||/||Ax|| 1.107120782053e-12 > 31 KSP Residual norm 1.622163699810e-13 > 31 KSP preconditioned resid norm 1.622163699810e-13 true resid norm > 5.177440837392e-13 ||Ae||/||Ax|| 1.195678744473e-12 > 32 KSP Residual norm 3.278455971610e-14 > 32 KSP preconditioned resid norm 3.278455971610e-14 true resid norm > 5.095690679196e-13 ||Ae||/||Ax|| 1.176799354136e-12 > 33 KSP Residual norm 9.985582557377e-15 > 33 KSP preconditioned resid norm 9.985582557377e-15 true resid norm > 5.216297041713e-13 ||Ae||/||Ax|| 1.204652200482e-12 > 34 KSP Residual norm 4.197972975568e-15 > 34 KSP preconditioned resid norm 4.197972975568e-15 true resid norm > 5.581506198985e-13 ||Ae||/||Ax|| 1.288993642587e-12 > 35 KSP Residual norm 1.034379105713e-15 > From lukasz at civil.gla.ac.uk Sat Aug 16 01:32:32 2008 From: lukasz at civil.gla.ac.uk (Lukasz Kaczmarczyk) Date: Sat, 16 Aug 2008 07:32:32 +0100 Subject: gmres - restart and Gauss-Seidel In-Reply-To: <1235BB0E-14DE-4D05-8248-6D9B13001F3D@mcs.anl.gov> References: 
<48A4AD1C.8080906@civil.gla.ac.uk> <48A56C51.1030602@civil.gla.ac.uk>
	<1235BB0E-14DE-4D05-8248-6D9B13001F3D@mcs.anl.gov>
Message-ID: <48A67480.9070702@civil.gla.ac.uk>

Barry Smith wrote:
>
>     I don't think this really solves the problem, I think it merely hides
> the problem.
>

Yes, you are right; I am looking for a bug.

Regards,
Lukasz

From lua.byhh at gmail.com  Sat Aug 16 04:38:14 2008
From: lua.byhh at gmail.com (Shengyong)
Date: Sat, 16 Aug 2008 17:38:14 +0800
Subject: How to set '_pc_factor_shift_postive_define' not in runtime?
Message-ID: 

Hi,

I am going to solve a variable coefficient Poisson equation with the ICCG
method, i.e., the pc type is icc while the ksp type is pcg. I have found
functions in the petsc library for setting such options. But how do I set
this type of option on the PC object, for instance
'-pc_factor_shift_positive_definite', not at runtime? Which functions
should I call if I want to set this type of option ahead of time in an
executable program? It is not so convenient for me to use runtime options
on Windows.

Best Regards,

--
Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com  Sat Aug 16 06:47:49 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Sat, 16 Aug 2008 06:47:49 -0500
Subject: How to set '_pc_factor_shift_postive_define' not in runtime?
In-Reply-To: 
References: 
Message-ID: 

http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/PC/PCFactorSetShiftPd.html

   Matt

On Sat, Aug 16, 2008 at 4:38 AM, Shengyong wrote:
> Hi,
>
> I am going to solve a variable coefficient Poisson equation with the ICCG
> method, i.e., the pc type is icc while the ksp type is pcg. I have found
> functions in the petsc library for setting such options. But how do I set
> this type of option on the PC object, for instance
> '-pc_factor_shift_positive_definite', not at runtime? Which functions
> should I call if I want to set this type of option ahead of time in an
> executable program? It is not so convenient for me to use runtime options
> on Windows.

From lua.byhh at gmail.com  Sat Aug 16 07:20:06 2008
From: lua.byhh at gmail.com (Shengyong)
Date: Sat, 16 Aug 2008 20:20:06 +0800
Subject: How to set '_pc_factor_shift_postive_define' not in runtime?
In-Reply-To: 
References: 
Message-ID: 

Hi, Matt

Thank you very much.

On Sat, Aug 16, 2008 at 7:47 PM, Matthew Knepley wrote:
>
> http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/PC/PCFactorSetShiftPd.html
>
>    Matt
>
> On Sat, Aug 16, 2008 at 4:38 AM, Shengyong wrote:
> > Hi,
> >
> > I am going to solve a variable coefficient Poisson equation with the ICCG
> > method, i.e., the pc type is icc while the ksp type is pcg. I have found
> > functions in the petsc library for setting such options. But how do I set
> > this type of option on the PC object, for instance
> > '-pc_factor_shift_positive_definite', not at runtime? Which functions
> > should I call if I want to set this type of option ahead of time in an
> > executable program? It is not so convenient for me to use runtime options
> > on Windows.
> >
> > Best Regards,
> >
> > --
> > Pang Shengyong
> > Solidification Simulation Lab,
> > State Key Lab of Mould & Die Technology,
> > Huazhong Univ. of Sci. & Tech. China
> >
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
>

--
Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lua.byhh at gmail.com  Sun Aug 17 03:36:54 2008
From: lua.byhh at gmail.com (Shengyong)
Date: Sun, 17 Aug 2008 16:36:54 +0800
Subject: strange convergency behavior with petsc (intialy produces good result, suddenly diverges)
Message-ID: 

Hi,

I am still struggling to use petsc to solve variable coefficient Poisson
problems (which originate from a multi-phase (liquid-gas two-phase flow
with a sharp interface method, density ratio 1000, and with surface
tension) flow problem) successively.

Initially petsc produces good results with the icc pc_type, the cg
iterative method, and the -pc_factor_shift_positive_definite flag. I
follow petsc's null space method to handle the singularity of the
coefficient matrix. However, after several time steps of good simulation
results, the KSP Residual Norm suddenly jumps to a number greater than
1000. I guess that petsc diverges. I have also tried switching to other
methods, e.g. gmres; it behaves similarly to cg. However, when I use my
previous self-coded SOR iterative solver, it produces nice results. And I
have tested the same petsc solver class on a single phase driven cavity
flow problem; it also produces nice results.

It seems that I have missed some important setup procedure in my solver
class. Could anyone point me to the problem? I have attached a large part
of the code below:

//in Header
class CPETScPoissonSolver
{
    typedef struct {
        Vec x,b;          //x, b
        Mat A;            //A
        KSP ksp;          //Krylov subspace preconditioner
        PC  pc;
        MatNullSpace nullspace;
        PetscInt l, m, n; //L, M, N
    } UserContext;

public:
    CPETScPoissonSolver(int argc, char** argv);
    ~CPETScPoissonSolver(void);

    //........

private:
    //Yale Sparse Matrix format matrix
    PetscScalar* A;
    PetscInt *   I;
    PetscInt *   J;
    // Number of Nonzero Element
    PetscInt     nnz;
    //grid step
    PetscInt     L, M, N;
    UserContext  userctx;
private:
    bool FirstTime;
};

//in cpp
static char helpPetscPoisson[] = "PETSc class Solves a variable Poisson problem with Null Space Method.\n\n";

CPETScPoissonSolver::CPETScPoissonSolver(int argc, char** argv)
{
    PetscInitialize(&argc, &argv, (char*)0, helpPetscPoisson);
    FirstTime=true;
}
CPETScPoissonSolver::~CPETScPoissonSolver(void)
{
    PetscFinalize();
}
......
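// A caveat with the constructor/destructor above: PetscInitialize()/
// PetscFinalize() are tied to the lifetime of this object, so only one
// CPETScPoissonSolver can exist per run. One possible guard (a sketch,
// untested) is PETSc's global PetscInitializeCalled flag:
//
//     if (!PetscInitializeCalled)
//         PetscInitialize(&argc, &argv, (char*)0, helpPetscPoisson);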
void CPETScPoissonSolver::SetAIJ(PetscScalar *a, PetscInt *i, PetscInt *j, PetscInt Nnz) { A= a; I=i; J=j; nnz = Nnz; } PetscErrorCode CPETScPoissonSolver::UserInitializeLinearSolver() { PetscErrorCode ierr = 0; PetscInt Num = L*M*N; //Since we use successive solvers, so in the second time step we must deallocate the original matrix then setup a new one if(FirstTime==true) { ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, I, J, A, &userctx.A); CHKERRQ(ierr); } else { FirstTime = false; ierr = MatDestroy(userctx.A);CHKERRQ(ierr); ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, I, J, A, &userctx.A); CHKERRQ(ierr); } if(FirstTime==true) { ierr = VecCreateSeqWithArray(PETSC_COMM_SELF,Num,PETSC_NULL,&userctx.b);CHKERRQ(ierr); ierr = VecDuplicate(userctx.b,&userctx.x);CHKERRQ(ierr); ierr = MatNullSpaceCreate(PETSC_COMM_SELF, PETSC_TRUE, 0, PETSC_NULL, &userctx.nullspace); CHKERRQ(ierr); ierr = KSPCreate(PETSC_COMM_SELF,&userctx.ksp);CHKERRQ(ierr); /*Set Null Space for KSP*/ ierr = KSPSetNullSpace(userctx.ksp, userctx.nullspace);CHKERRQ(ierr); } return 0; } PetscErrorCode CPETScPoissonSolver::UserSetBX(PetscScalar *x, PetscScalar *b) { PetscErrorCode ierr ; //below code we must set it every time step ierr = VecPlaceArray(userctx.x,x);CHKERRQ(ierr); ierr = VecPlaceArray(userctx.b,b);CHKERRQ(ierr); ierr = MatNullSpaceRemove(userctx.nullspace,userctx.b, PETSC_NULL);CHKERRQ(ierr); return 0; } PetscInt CPETScPoissonSolver::UserSolve() { PetscErrorCode ierr; ierr = KSPSetOperators(userctx.ksp,userctx.A,userctx.A,SAME_NONZERO_PATTERN);CHKERRQ(ierr); ierr = KSPSetType(userctx.ksp, KSPCG); ierr = KSPSetInitialGuessNonzero(userctx.ksp, PETSC_TRUE); ierr = KSPGetPC(userctx.ksp,&userctx.pc);CHKERRQ(ierr); ierr = PCSetType(userctx.pc,PCICC);CHKERRQ(ierr); ierr = PCFactorSetShiftPd(userctx.pc, PETSC_TRUE); ierr = KSPSetTolerances(userctx.ksp,1.e-4,PETSC_DEFAULT,PETSC_DEFAULT,2000); ierr = KSPSetFromOptions(userctx.ksp);CHKERRQ(ierr); /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Solve the linear system - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */ ierr = KSPSolve(userctx.ksp,userctx.b,userctx.x);CHKERRQ(ierr); ierr = VecResetArray(userctx.x);CHKERRQ(ierr); ierr = VecResetArray(userctx.b);CHKERRQ(ierr); return 0; } PetscErrorCode CPETScPoissonSolver::ReleaseMem() { PetscErrorCode ierr; ierr = KSPDestroy(userctx.ksp);CHKERRQ(ierr); ierr = VecDestroy(userctx.x); CHKERRQ(ierr); ierr = VecDestroy(userctx.b); CHKERRQ(ierr); ierr = MatDestroy(userctx.A); CHKERRQ(ierr); ierr = MatNullSpaceDestroy(userctx.nullspace); CHKERRQ(ierr); return 0; } Thanks very much! -- Pang Shengyong Solidification Simulation Lab, State Key Lab of Mould & Die Technology, Huazhong Univ. of Sci. & Tech. China -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Aug 17 09:32:40 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 17 Aug 2008 09:32:40 -0500 Subject: strange convergency behavior with petsc (intialy produces good result, suddenly diverges) In-Reply-To: References: Message-ID: Try using -pc_type sor -pc_sor_local_symmetric -ksp_type cg Also try running the original icc one with the additional option - ksp_monitor_true_residual, see if funky stuff starts to happen. You could also try adding -pc_factor_levels 1 to try ICC(1) instead of ICC(0). 
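   Since you mentioned runtime options are inconvenient on Windows, the same
experiments can be set up in code; a rough sketch (untested, reusing the
userctx handles from your posted class):

   ierr = KSPSetType(userctx.ksp,KSPCG);CHKERRQ(ierr);
   ierr = PCSetType(userctx.pc,PCSOR);CHKERRQ(ierr);
   ierr = PCSORSetSymmetric(userctx.pc,SOR_LOCAL_SYMMETRIC_SWEEP);CHKERRQ(ierr);

and for the ICC(1) experiment keep PCICC and add

   ierr = PCFactorSetLevels(userctx.pc,1);CHKERRQ(ierr);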
Your code looks fine, I don't see a problem there, Barry Here is my guess, as the simulation proceeds the variable coefficient problem changes enough so that the ICC produces a badly scaled preconditioner that messes up the iterative method. I see this on occasion and don't have a good fix, the shift positive definite helps sometimes but not always. On Aug 17, 2008, at 3:36 AM, Shengyong wrote: > Hi, > > I am still struggling to use petsc to solve variable coefficient > poisson problems (which orinates from a multi-phase(liquid-gas two > phase flow with sharp interface method, the density ratio is 1000, > and with surface tension) flow problem) successively. > > Initially petsc produces good results with icc pc_type , cg > iterative method and with _pc_factor_shift_positive_define flag. I > follow petsc's null space method to constrain the singular property > of coefficient matrix. However, after several time steps good > results of simulation, the KSP Residual Norm suddenly reaches to a > number greater than 1000. I guess that petsc diverges. I have also > tried to swith to other types of methods, e.g. to gmeres, it just > behaves similar to cg method. However, when I use my previous self- > coded SOR iterative solver, it produces nice results. And I have > tested the same petsc solver class for a single phase driven cavity > flow problem, it also produces nice results. > > It seems that I have missed something important setup procedure in > my solver class. could anyone to point me the problem ? I have > attached large part of the code below: > > //in Header > class CPETScPoissonSolver > { > typedef struct { > Vec x,b; //x, b > Mat A; //A > KSP ksp; //Krylov subspace preconditioner > PC pc; > MatNullSpace nullspace; > PetscInt l, m, n;//L, M, N > } UserContext; > > public: > CPETScPoissonSolver(int argc, char** argv); > ~CPETScPoissonSolver(void); > > //........ > > private: > > //Yale Sparse Matrix format matrix > PetscScalar* A; > PetscInt * I; > PetscInt * J; > > // Number of Nonzero Element > PetscInt nnz; > //grid step > PetscInt L, M, N; > UserContext userctx; > private: > bool FirstTime; > }; > > //in cpp > static char helpPetscPoisson[] = "PETSc class Solves a variable > Poisson problem with Null Space Method.\n\n"; > > CPETScPoissonSolver::CPETScPoissonSolver(int argc, char** argv) > { > PetscInitialize(&argc, &argv, (char*)0, helpPetscPoisson); > FirstTime=true; > } > CPETScPoissonSolver::~CPETScPoissonSolver(void) > { > PetscFinalize(); > } > ...... 
> void CPETScPoissonSolver::SetAIJ(PetscScalar *a, PetscInt *i,
> PetscInt *j, PetscInt Nnz)
> {
>     A= a; I=i; J=j; nnz = Nnz;
> }
>
> PetscErrorCode CPETScPoissonSolver::UserInitializeLinearSolver()
> {
>     PetscErrorCode ierr = 0;
>     PetscInt Num = L*M*N;
>
>     //Since we use successive solvers, so in the second time step we
> must deallocate the original matrix then setup a new one
>     if(FirstTime==true)
>     {
>         ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num,
> Num, I, J, A, &userctx.A); CHKERRQ(ierr);
>     }
>     else
>     {
>         FirstTime = false;
>         ierr = MatDestroy(userctx.A);CHKERRQ(ierr);
>         ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num,
> Num, I, J, A, &userctx.A); CHKERRQ(ierr);
>     }
>
>     if(FirstTime==true)
>     {
>
>         ierr =
> VecCreateSeqWithArray(PETSC_COMM_SELF,Num,PETSC_NULL,&userctx.b);CHKERRQ(ierr);
>         ierr = VecDuplicate(userctx.b,&userctx.x);CHKERRQ(ierr);
>         ierr = MatNullSpaceCreate(PETSC_COMM_SELF, PETSC_TRUE, 0,
> PETSC_NULL, &userctx.nullspace); CHKERRQ(ierr);
>         ierr = KSPCreate(PETSC_COMM_SELF,&userctx.ksp);CHKERRQ(ierr);
>         /*Set Null Space for KSP*/
>         ierr = KSPSetNullSpace(userctx.ksp,
> userctx.nullspace);CHKERRQ(ierr);
>     }
>     return 0;
> }
>
>
> PetscErrorCode CPETScPoissonSolver::UserSetBX(PetscScalar *x, PetscScalar
> *b)
> {
>     PetscErrorCode ierr ;
>     //below code we must set it every time step
>     ierr = VecPlaceArray(userctx.x,x);CHKERRQ(ierr);
>     ierr = VecPlaceArray(userctx.b,b);CHKERRQ(ierr);
>     ierr = MatNullSpaceRemove(userctx.nullspace,userctx.b,
> PETSC_NULL);CHKERRQ(ierr);
>     return 0;
> }
>
> PetscInt CPETScPoissonSolver::UserSolve()
> {
>     PetscErrorCode ierr;
>     ierr =
> KSPSetOperators(userctx.ksp,userctx.A,userctx.A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
>     ierr = KSPSetType(userctx.ksp, KSPCG);
>     ierr = KSPSetInitialGuessNonzero(userctx.ksp, PETSC_TRUE);
>     ierr = KSPGetPC(userctx.ksp,&userctx.pc);CHKERRQ(ierr);
>     ierr = PCSetType(userctx.pc,PCICC);CHKERRQ(ierr);
>     ierr = PCFactorSetShiftPd(userctx.pc, PETSC_TRUE);
>     ierr =
> KSPSetTolerances(userctx.ksp,1.e-4,PETSC_DEFAULT,PETSC_DEFAULT,2000);
>     ierr = KSPSetFromOptions(userctx.ksp);CHKERRQ(ierr);
>     /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
>         Solve the linear system
>        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */
>     ierr = KSPSolve(userctx.ksp,userctx.b,userctx.x);CHKERRQ(ierr);
>
>     ierr = VecResetArray(userctx.x);CHKERRQ(ierr);
>     ierr = VecResetArray(userctx.b);CHKERRQ(ierr);
>
>     return 0;
> }
>
> PetscErrorCode CPETScPoissonSolver::ReleaseMem()
> {
>     PetscErrorCode ierr;
>     ierr = KSPDestroy(userctx.ksp);CHKERRQ(ierr);
>     ierr = VecDestroy(userctx.x); CHKERRQ(ierr);
>     ierr = VecDestroy(userctx.b); CHKERRQ(ierr);
>     ierr = MatDestroy(userctx.A); CHKERRQ(ierr);
>     ierr = MatNullSpaceDestroy(userctx.nullspace); CHKERRQ(ierr);
>     return 0;
> }
>
> Thanks very much!
>
>
> --
> Pang Shengyong
> Solidification Simulation Lab,
> State Key Lab of Mould & Die Technology,
> Huazhong Univ. of Sci. & Tech. China

From lua.byhh at gmail.com  Mon Aug 18 00:35:22 2008
From: lua.byhh at gmail.com (Shengyong)
Date: Mon, 18 Aug 2008 13:35:22 +0800
Subject: strange convergency behavior with petsc (intialy produces good result, suddenly diverges)
In-Reply-To: 
References: 
Message-ID: 

Hi, Barry

Thanks for your kind reply! Here I report some experiments according to
your hints. -pc_type sor -pc_sor_local_symmetric -ksp_type cg failed.
icc(1) also failed.
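One more thing I noticed while rereading the code I posted: FirstTime is
only set to false inside the else branch of UserInitializeLinearSolver(),
so it stays true forever and the else branch is never reached; the matrix,
vectors and KSP are recreated on every time step without the old ones
being destroyed. I think the intended logic is roughly (a sketch):

    if (FirstTime)
    {
        FirstTime = false;
        ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, I, J, A, &userctx.A); CHKERRQ(ierr);
        // ... create b, x, nullspace and ksp once, as in the posted code ...
    }
    else
    {
        ierr = MatDestroy(userctx.A);CHKERRQ(ierr);
        ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, I, J, A, &userctx.A); CHKERRQ(ierr);
    }

I will check whether fixing this changes the behavior I report below.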
When using -ksp_monitor_true_residual with icc(0), I have found that the
preconditioned residual oscillates around a value of about 1.06e-004, while
the true residual norm stays at almost 7.128e-004. The ||Ae||/||Ax|| is also
stagnant at 6.3e-005.

After the maximum number of iterations (=2000) is reached, the iteration
fails. Even when I set the iteration number to 10000, it seems that petsc
also fails.

Below is some residual information near iteration number 2000, run with
-ksp_monitor_true_residual:

1990 KSP preconditioned resid norm 1.064720311837e-004 true resid norm 7.128472118425e-004 ||Ae||/||Ax|| 6.302380221818e-005
1991 KSP preconditioned resid norm 1.062055494352e-004 true resid norm 7.128120217324e-004 ||Ae||/||Ax|| 6.302069101215e-005
1992 KSP preconditioned resid norm 1.061228895583e-004 true resid norm 7.127774015661e-004 ||Ae||/||Ax|| 6.301763019565e-005
1993 KSP preconditioned resid norm 1.062165148129e-004 true resid norm 7.127433510277e-004 ||Ae||/||Ax|| 6.301461974073e-005
1994 KSP preconditioned resid norm 1.064780917764e-004 true resid norm 7.127098690168e-004 ||Ae||/||Ax|| 6.301165955010e-005
1995 KSP preconditioned resid norm 1.068986431546e-004 true resid norm 7.126769537853e-004 ||Ae||/||Ax|| 6.300874946922e-005
1996 KSP preconditioned resid norm 1.074687043135e-004 true resid norm 7.126446030395e-004 ||Ae||/||Ax|| 6.300588929530e-005
1997 KSP preconditioned resid norm 1.081784773720e-004 true resid norm 7.126128140328e-004 ||Ae||/||Ax|| 6.300307878551e-005
1998 KSP preconditioned resid norm 1.090179775109e-004 true resid norm 7.125815836472e-004 ||Ae||/||Ax|| 6.300031766417e-005
1999 KSP preconditioned resid norm 1.099771672684e-004 true resid norm 7.125509084603e-004 ||Ae||/||Ax|| 6.299760562872e-005
2000 KSP preconditioned resid norm 1.110460758301e-004 true resid norm 7.125207848108e-004 ||Ae||/||Ax|| 6.299494235544e-005

On Sun, Aug 17, 2008 at 10:32 PM, Barry Smith wrote:

>
>    Try using -pc_type sor -pc_sor_local_symmetric -ksp_type cg
>
>    Also try running the original icc one with the additional option
> -ksp_monitor_true_residual, see if funky stuff starts to happen.
>
>    You could also try adding -pc_factor_levels 1 to try ICC(1) instead of
> ICC(0).
>
>    Your code looks fine, I don't see a problem there,
>
>    Barry
>
>
> Here is my guess, as the simulation proceeds the variable coefficient
> problem changes enough so that the ICC produces
> a badly scaled preconditioner that messes up the iterative method. I see
> this on occasion and don't have a good fix, the shift
> positive definite helps sometimes but not always.
>
>
>
>
> On Aug 17, 2008, at 3:36 AM, Shengyong wrote:
>
> Hi,
>>
>> I am still struggling to use petsc to solve variable coefficient poisson
>> problems (which orinates from a multi-phase(liquid-gas two phase flow with
>> sharp interface method, the density ratio is 1000, and with surface tension)
>> flow problem) successively.
>>
>> Initially petsc produces good results with icc pc_type , cg iterative
>> method and with _pc_factor_shift_positive_define flag. I follow petsc's
>> null space method to constrain the singular property of coefficient matrix.
>> However, after several time steps good results of simulation, the KSP
>> Residual Norm suddenly reaches to a number greater than 1000. I guess that
>> petsc diverges. I have also tried to swith to other types of methods, e.g.
>> to gmeres, it just behaves similar to cg method. However, when I use my
>> previous self-coded SOR iterative solver, it produces nice results.
And I >> have tested the same petsc solver class for a single phase driven cavity >> flow problem, it also produces nice results. >> >> It seems that I have missed something important setup procedure in my >> solver class. could anyone to point me the problem ? I have attached large >> part of the code below: >> >> //in Header >> class CPETScPoissonSolver >> { >> typedef struct { >> Vec x,b; //x, b >> Mat A; //A >> KSP ksp; //Krylov subspace preconditioner >> PC pc; >> MatNullSpace nullspace; >> PetscInt l, m, n;//L, M, N >> } UserContext; >> >> public: >> CPETScPoissonSolver(int argc, char** argv); >> ~CPETScPoissonSolver(void); >> >> //........ >> >> private: >> >> //Yale Sparse Matrix format matrix >> PetscScalar* A; >> PetscInt * I; >> PetscInt * J; >> >> // Number of Nonzero Element >> PetscInt nnz; >> //grid step >> PetscInt L, M, N; >> UserContext userctx; >> private: >> bool FirstTime; >> }; >> >> //in cpp >> static char helpPetscPoisson[] = "PETSc class Solves a variable Poisson >> problem with Null Space Method.\n\n"; >> >> CPETScPoissonSolver::CPETScPoissonSolver(int argc, char** argv) >> { >> PetscInitialize(&argc, &argv, (char*)0, helpPetscPoisson); >> FirstTime=true; >> } >> CPETScPoissonSolver::~CPETScPoissonSolver(void) >> { >> PetscFinalize(); >> } >> ...... >> void CPETScPoissonSolver::SetAIJ(PetscScalar *a, PetscInt *i, PetscInt *j, >> PetscInt Nnz) >> { >> A= a; I=i; J=j; nnz = Nnz; >> } >> >> PetscErrorCode CPETScPoissonSolver::UserInitializeLinearSolver() >> { >> PetscErrorCode ierr = 0; >> PetscInt Num = L*M*N; >> >> //Since we use successive solvers, so in the second time step we must >> deallocate the original matrix then setup a new one >> if(FirstTime==true) >> { >> ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, I, >> J, A, &userctx.A); CHKERRQ(ierr); >> } >> else >> { >> FirstTime = false; >> ierr = MatDestroy(userctx.A);CHKERRQ(ierr); >> ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, I, >> J, A, &userctx.A); CHKERRQ(ierr); >> } >> >> if(FirstTime==true) >> { >> >> ierr = >> VecCreateSeqWithArray(PETSC_COMM_SELF,Num,PETSC_NULL,&userctx.b);CHKERRQ(ierr); >> ierr = VecDuplicate(userctx.b,&userctx.x);CHKERRQ(ierr); >> ierr = MatNullSpaceCreate(PETSC_COMM_SELF, PETSC_TRUE, 0, >> PETSC_NULL, &userctx.nullspace); CHKERRQ(ierr); >> ierr = KSPCreate(PETSC_COMM_SELF,&userctx.ksp);CHKERRQ(ierr); >> /*Set Null Space for KSP*/ >> ierr = KSPSetNullSpace(userctx.ksp, >> userctx.nullspace);CHKERRQ(ierr); >> } >> return 0; >> } >> >> >> PetscErrorCode CPETScPoissonSolver::UserSetBX(PetscScalar *x, PetscScalar >> *b) >> { >> PetscErrorCode ierr ; >> //below code we must set it every time step >> ierr = VecPlaceArray(userctx.x,x);CHKERRQ(ierr); >> ierr = VecPlaceArray(userctx.b,b);CHKERRQ(ierr); >> ierr = MatNullSpaceRemove(userctx.nullspace,userctx.b, >> PETSC_NULL);CHKERRQ(ierr); >> return 0; >> } >> >> PetscInt CPETScPoissonSolver::UserSolve() >> { >> PetscErrorCode ierr; >> ierr = >> KSPSetOperators(userctx.ksp,userctx.A,userctx.A,SAME_NONZERO_PATTERN);CHKERRQ(ierr); >> ierr = KSPSetType(userctx.ksp, KSPCG); >> ierr = KSPSetInitialGuessNonzero(userctx.ksp, PETSC_TRUE); >> ierr = KSPGetPC(userctx.ksp,&userctx.pc);CHKERRQ(ierr); >> ierr = PCSetType(userctx.pc,PCICC);CHKERRQ(ierr); >> ierr = PCFactorSetShiftPd(userctx.pc, PETSC_TRUE); >> ierr = >> KSPSetTolerances(userctx.ksp,1.e-4,PETSC_DEFAULT,PETSC_DEFAULT,2000); >> ierr = KSPSetFromOptions(userctx.ksp);CHKERRQ(ierr); >> /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - >> - >> 
Solve the linear system >> - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - >> - */ >> ierr = KSPSolve(userctx.ksp,userctx.b,userctx.x);CHKERRQ(ierr); >> >> ierr = VecResetArray(userctx.x);CHKERRQ(ierr); >> ierr = VecResetArray(userctx.b);CHKERRQ(ierr); >> >> return 0; >> } >> >> PetscErrorCode CPETScPoissonSolver::ReleaseMem() >> { >> PetscErrorCode ierr; >> ierr = KSPDestroy(userctx.ksp);CHKERRQ(ierr); >> ierr = VecDestroy(userctx.x); CHKERRQ(ierr); >> ierr = VecDestroy(userctx.b); CHKERRQ(ierr); >> ierr = MatDestroy(userctx.A); CHKERRQ(ierr); >> ierr = MatNullSpaceDestroy(userctx.nullspace); CHKERRQ(ierr); >> return 0; >> } >> >> Thanks very much! >> >> >> -- >> Pang Shengyong >> Solidification Simulation Lab, >> State Key Lab of Mould & Die Technology, >> Huazhong Univ. of Sci. & Tech. China >> > > -- Pang Shengyong Solidification Simulation Lab, State Key Lab of Mould & Die Technology, Huazhong Univ. of Sci. & Tech. China -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Aug 18 10:24:04 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 18 Aug 2008 10:24:04 -0500 Subject: strange convergency behavior with petsc (intialy produces good result, suddenly diverges) In-Reply-To: References: Message-ID: <8B1B832E-7A10-4446-8664-F9753ADE82E3@mcs.anl.gov> On Aug 18, 2008, at 12:35 AM, Shengyong wrote: > hi, Barry > > Thanks for your kindly reply! > > Here I report some experiments accoording to your hints. > > -pc_type sor -pc_sor_local_symmetric _ksp_type cg failed. What does failed mean? Converged for a while then stagnated? Diverged (the residual norm got larger and larger)? Never converged at all? > > icc(1) also failed. What does failed mean? I suggest running with -pc_type lu (keep the -ksp_type gmres) and see what happens? Does you simulation run fine for an unlimited number of timesteps? If not, how does it fail? Barry > > > When using -ksp_monitor_true _residual with icc(0), I have found the > precond residual is osillation around some value about 1.06e-004, > the true residual norm almost staying at 7.128 e-004. The ||Ae||/|| > Ax|| also stagnent at 6.3e-005. > > After the maximum number of iterations (=2000) reached, the > iteration fails. Even when I set iteration number to 10000, it seems > that petsc also fails. > > Below is some resiual information near iteration number 2000 when > with ksp_monitor_true_residual. 
> > > 1990 KSP preconditioned resid norm 1.064720311837e-004 true resid > norm 7.1284721 > 18425e-004 ||Ae||/||Ax|| 6.302380221818e-005 > > 1991 KSP preconditioned resid norm 1.062055494352e-004 true resid > norm 7.1281202 > 17324e-004 ||Ae||/||Ax|| 6.302069101215e-005 > > 1992 KSP preconditioned resid norm 1.061228895583e-004 true resid > norm 7.1277740 > 15661e-004 ||Ae||/||Ax|| 6.301763019565e-005 > > 1993 KSP preconditioned resid norm 1.062165148129e-004 true resid > norm 7.1274335 > 10277e-004 ||Ae||/||Ax|| 6.301461974073e-005 > > 1994 KSP preconditioned resid norm 1.064780917764e-004 true resid > norm 7.1270986 > 90168e-004 ||Ae||/||Ax|| 6.301165955010e-005 > > 1995 KSP preconditioned resid norm 1.068986431546e-004 true resid > norm 7.1267695 > 37853e-004 ||Ae||/||Ax|| 6.300874946922e-005 > > 1996 KSP preconditioned resid norm 1.074687043135e-004 true resid > norm 7.1264460 > 30395e-004 ||Ae||/||Ax|| 6.300588929530e-005 > > 1997 KSP preconditioned resid norm 1.081784773720e-004 true resid > norm 7.1261281 > 40328e-004 ||Ae||/||Ax|| 6.300307878551e-005 > > 1998 KSP preconditioned resid norm 1.090179775109e-004 true resid > norm 7.1258158 > 36472e-004 ||Ae||/||Ax|| 6.300031766417e-005 > > 1999 KSP preconditioned resid norm 1.099771672684e-004 true resid > norm 7.1255090 > 84603e-004 ||Ae||/||Ax|| 6.299760562872e-005 > > 2000 KSP preconditioned resid norm 1.110460758301e-004 true resid > norm 7.1252078 > 48108e-004 ||Ae||/||Ax|| 6.299494235544e-005 > > On Sun, Aug 17, 2008 at 10:32 PM, Barry Smith > wrote: > > Try using -pc_type sor -pc_sor_local_symmetric -ksp_type cg > > Also try running the original icc one with the additional option - > ksp_monitor_true_residual, see if funky stuff starts to happen. > > You could also try adding -pc_factor_levels 1 to try ICC(1) > instead of ICC(0). > > Your code looks fine, I don't see a problem there, > > Barry > > > Here is my guess, as the simulation proceeds the variable > coefficient problem changes enough so that the ICC produces > a badly scaled preconditioner that messes up the iterative method. I > see this on occasion and don't have a good fix, the shift > positive definite helps sometimes but not always. > > > > > On Aug 17, 2008, at 3:36 AM, Shengyong wrote: > > Hi, > > I am still struggling to use petsc to solve variable coefficient > poisson problems (which orinates from a multi-phase(liquid-gas two > phase flow with sharp interface method, the density ratio is 1000, > and with surface tension) flow problem) successively. > > Initially petsc produces good results with icc pc_type , cg > iterative method and with _pc_factor_shift_positive_define flag. I > follow petsc's null space method to constrain the singular property > of coefficient matrix. However, after several time steps good > results of simulation, the KSP Residual Norm suddenly reaches to a > number greater than 1000. I guess that petsc diverges. I have also > tried to swith to other types of methods, e.g. to gmeres, it just > behaves similar to cg method. However, when I use my previous self- > coded SOR iterative solver, it produces nice results. And I have > tested the same petsc solver class for a single phase driven cavity > flow problem, it also produces nice results. > > It seems that I have missed something important setup procedure in > my solver class. could anyone to point me the problem ? 
I have > attached large part of the code below: > > //in Header > class CPETScPoissonSolver > { > typedef struct { > Vec x,b; //x, b > Mat A; //A > KSP ksp; //Krylov subspace preconditioner > PC pc; > MatNullSpace nullspace; > PetscInt l, m, n;//L, M, N > } UserContext; > > public: > CPETScPoissonSolver(int argc, char** argv); > ~CPETScPoissonSolver(void); > > //........ > > private: > > //Yale Sparse Matrix format matrix > PetscScalar* A; > PetscInt * I; > PetscInt * J; > > // Number of Nonzero Element > PetscInt nnz; > //grid step > PetscInt L, M, N; > UserContext userctx; > private: > bool FirstTime; > }; > > //in cpp > static char helpPetscPoisson[] = "PETSc class Solves a variable > Poisson problem with Null Space Method.\n\n"; > > CPETScPoissonSolver::CPETScPoissonSolver(int argc, char** argv) > { > PetscInitialize(&argc, &argv, (char*)0, helpPetscPoisson); > FirstTime=true; > } > CPETScPoissonSolver::~CPETScPoissonSolver(void) > { > PetscFinalize(); > } > ...... > void CPETScPoissonSolver::SetAIJ(PetscScalar *a, PetscInt *i, > PetscInt *j, PetscInt Nnz) > { > A= a; I=i; J=j; nnz = Nnz; > } > > PetscErrorCode CPETScPoissonSolver::UserInitializeLinearSolver() > { > PetscErrorCode ierr = 0; > PetscInt Num = L*M*N; > > //Since we use successive solvers, so in the second time step we > must deallocate the original matrix then setup a new one > if(FirstTime==true) > { > ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, > Num, I, J, A, &userctx.A); CHKERRQ(ierr); > } > else > { > FirstTime = false; > ierr = MatDestroy(userctx.A);CHKERRQ(ierr); > ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, > Num, I, J, A, &userctx.A); CHKERRQ(ierr); > } > > if(FirstTime==true) > { > > ierr = > VecCreateSeqWithArray > (PETSC_COMM_SELF,Num,PETSC_NULL,&userctx.b);CHKERRQ(ierr); > ierr = VecDuplicate(userctx.b,&userctx.x);CHKERRQ(ierr); > ierr = MatNullSpaceCreate(PETSC_COMM_SELF, PETSC_TRUE, 0, > PETSC_NULL, &userctx.nullspace); CHKERRQ(ierr); > ierr = KSPCreate(PETSC_COMM_SELF,&userctx.ksp);CHKERRQ(ierr); > /*Set Null Space for KSP*/ > ierr = KSPSetNullSpace(userctx.ksp, > userctx.nullspace);CHKERRQ(ierr); > } > return 0; > } > > > PetscErrorCode CPETScPoissonSolver::UserSetBX(PetscScalar *x, > PetscScalar *b) > { > PetscErrorCode ierr ; > //below code we must set it every time step > ierr = VecPlaceArray(userctx.x,x);CHKERRQ(ierr); > ierr = VecPlaceArray(userctx.b,b);CHKERRQ(ierr); > ierr = MatNullSpaceRemove(userctx.nullspace,userctx.b, > PETSC_NULL);CHKERRQ(ierr); > return 0; > } > > PetscInt CPETScPoissonSolver::UserSolve() > { > PetscErrorCode ierr; > ierr = > KSPSetOperators > (userctx.ksp,userctx.A,userctx.A,SAME_NONZERO_PATTERN);CHKERRQ(ierr); > ierr = KSPSetType(userctx.ksp, KSPCG); > ierr = KSPSetInitialGuessNonzero(userctx.ksp, PETSC_TRUE); > ierr = KSPGetPC(userctx.ksp,&userctx.pc);CHKERRQ(ierr); > ierr = PCSetType(userctx.pc,PCICC);CHKERRQ(ierr); > ierr = PCFactorSetShiftPd(userctx.pc, PETSC_TRUE); > ierr = KSPSetTolerances(userctx.ksp, > 1.e-4,PETSC_DEFAULT,PETSC_DEFAULT,2000); > ierr = KSPSetFromOptions(userctx.ksp);CHKERRQ(ierr); > /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > - - - - > Solve the linear system > - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > - - - - */ > ierr = KSPSolve(userctx.ksp,userctx.b,userctx.x);CHKERRQ(ierr); > > ierr = VecResetArray(userctx.x);CHKERRQ(ierr); > ierr = VecResetArray(userctx.b);CHKERRQ(ierr); > > return 0; > } > > PetscErrorCode CPETScPoissonSolver::ReleaseMem() > { > PetscErrorCode ierr; > ierr = 
KSPDestroy(userctx.ksp);CHKERRQ(ierr); > ierr = VecDestroy(userctx.x); CHKERRQ(ierr); > ierr = VecDestroy(userctx.b); CHKERRQ(ierr); > ierr = MatDestroy(userctx.A); CHKERRQ(ierr); > ierr = MatNullSpaceDestroy(userctx.nullspace); CHKERRQ(ierr); > return 0; > } > > Thanks very much! > > > -- > Pang Shengyong > Solidification Simulation Lab, > State Key Lab of Mould & Die Technology, > Huazhong Univ. of Sci. & Tech. China > > > > > -- > Pang Shengyong > Solidification Simulation Lab, > State Key Lab of Mould & Die Technology, > Huazhong Univ. of Sci. & Tech. China From lua.byhh at gmail.com Mon Aug 18 12:01:27 2008 From: lua.byhh at gmail.com (Shengyong) Date: Tue, 19 Aug 2008 01:01:27 +0800 Subject: strange convergency behavior with petsc (intialy produces good result, suddenly diverges) In-Reply-To: <8B1B832E-7A10-4446-8664-F9753ADE82E3@mcs.anl.gov> References: <8B1B832E-7A10-4446-8664-F9753ADE82E3@mcs.anl.gov> Message-ID: On Mon, Aug 18, 2008 at 11:24 PM, Barry Smith wrote: > > On Aug 18, 2008, at 12:35 AM, Shengyong wrote: > > hi, Barry >> >> Thanks for your kindly reply! >> >> Here I report some experiments accoording to your hints. >> >> -pc_type sor -pc_sor_local_symmetric _ksp_type cg failed. >> > > What does failed mean? Converged for a while then stagnated? Diverged > (the residual norm got larger and larger)? Never converged at all? > >> >> icc(1) also failed. >> > > What does failed mean? > > I suggest running with -pc_type lu (keep the -ksp_type gmres) and see > what happens? Does you simulation run fine for an unlimited number > of timesteps? If not, how does it fail? > > Barry > > > >> >> When using -ksp_monitor_true _residual with icc(0), I have found the >> precond residual is osillation around some value about 1.06e-004, the true >> residual norm almost staying at 7.128 e-004. The ||Ae||/||Ax|| also >> stagnent at 6.3e-005. >> >> After the maximum number of iterations (=2000) reached, the iteration >> fails. Even when I set iteration number to 10000, it seems that petsc also >> fails. >> >> Below is some resiual information near iteration number 2000 when with >> ksp_monitor_true_residual. 
>> >> >> 1990 KSP preconditioned resid norm 1.064720311837e-004 true resid norm >> 7.1284721 >> 18425e-004 ||Ae||/||Ax|| 6.302380221818e-005 >> >> 1991 KSP preconditioned resid norm 1.062055494352e-004 true resid norm >> 7.1281202 >> 17324e-004 ||Ae||/||Ax|| 6.302069101215e-005 >> >> 1992 KSP preconditioned resid norm 1.061228895583e-004 true resid norm >> 7.1277740 >> 15661e-004 ||Ae||/||Ax|| 6.301763019565e-005 >> >> 1993 KSP preconditioned resid norm 1.062165148129e-004 true resid norm >> 7.1274335 >> 10277e-004 ||Ae||/||Ax|| 6.301461974073e-005 >> >> 1994 KSP preconditioned resid norm 1.064780917764e-004 true resid norm >> 7.1270986 >> 90168e-004 ||Ae||/||Ax|| 6.301165955010e-005 >> >> 1995 KSP preconditioned resid norm 1.068986431546e-004 true resid norm >> 7.1267695 >> 37853e-004 ||Ae||/||Ax|| 6.300874946922e-005 >> >> 1996 KSP preconditioned resid norm 1.074687043135e-004 true resid norm >> 7.1264460 >> 30395e-004 ||Ae||/||Ax|| 6.300588929530e-005 >> >> 1997 KSP preconditioned resid norm 1.081784773720e-004 true resid norm >> 7.1261281 >> 40328e-004 ||Ae||/||Ax|| 6.300307878551e-005 >> >> 1998 KSP preconditioned resid norm 1.090179775109e-004 true resid norm >> 7.1258158 >> 36472e-004 ||Ae||/||Ax|| 6.300031766417e-005 >> >> 1999 KSP preconditioned resid norm 1.099771672684e-004 true resid norm >> 7.1255090 >> 84603e-004 ||Ae||/||Ax|| 6.299760562872e-005 >> >> 2000 KSP preconditioned resid norm 1.110460758301e-004 true resid norm >> 7.1252078 >> 48108e-004 ||Ae||/||Ax|| 6.299494235544e-005 >> >> On Sun, Aug 17, 2008 at 10:32 PM, Barry Smith wrote: >> >> Try using -pc_type sor -pc_sor_local_symmetric -ksp_type cg >> >> Also try running the original icc one with the additional option >> -ksp_monitor_true_residual, see if funky stuff starts to happen. >> >> You could also try adding -pc_factor_levels 1 to try ICC(1) instead of >> ICC(0). >> >> Your code looks fine, I don't see a problem there, >> >> Barry >> >> >> Here is my guess, as the simulation proceeds the variable coefficient >> problem changes enough so that the ICC produces >> a badly scaled preconditioner that messes up the iterative method. I see >> this on occasion and don't have a good fix, the shift >> positive definite helps sometimes but not always. >> >> >> >> >> On Aug 17, 2008, at 3:36 AM, Shengyong wrote: >> >> Hi, >> >> I am still struggling to use petsc to solve variable coefficient poisson >> problems (which orinates from a multi-phase(liquid-gas two phase flow with >> sharp interface method, the density ratio is 1000, and with surface tension) >> flow problem) successively. >> >> Initially petsc produces good results with icc pc_type , cg iterative >> method and with _pc_factor_shift_positive_define flag. I follow petsc's >> null space method to constrain the singular property of coefficient matrix. >> However, after several time steps good results of simulation, the KSP >> Residual Norm suddenly reaches to a number greater than 1000. I guess that >> petsc diverges. I have also tried to swith to other types of methods, e.g. >> to gmeres, it just behaves similar to cg method. However, when I use my >> previous self-coded SOR iterative solver, it produces nice results. And I >> have tested the same petsc solver class for a single phase driven cavity >> flow problem, it also produces nice results. >> >> It seems that I have missed something important setup procedure in my >> solver class. could anyone to point me the problem ? 
I have attached large >> part of the code below: >> >> //in Header >> class CPETScPoissonSolver >> { >> typedef struct { >> Vec x,b; //x, b >> Mat A; //A >> KSP ksp; //Krylov subspace preconditioner >> PC pc; >> MatNullSpace nullspace; >> PetscInt l, m, n;//L, M, N >> } UserContext; >> >> public: >> CPETScPoissonSolver(int argc, char** argv); >> ~CPETScPoissonSolver(void); >> >> //........ >> >> private: >> >> //Yale Sparse Matrix format matrix >> PetscScalar* A; >> PetscInt * I; >> PetscInt * J; >> >> // Number of Nonzero Element >> PetscInt nnz; >> //grid step >> PetscInt L, M, N; >> UserContext userctx; >> private: >> bool FirstTime; >> }; >> >> //in cpp >> static char helpPetscPoisson[] = "PETSc class Solves a variable Poisson >> problem with Null Space Method.\n\n"; >> >> CPETScPoissonSolver::CPETScPoissonSolver(int argc, char** argv) >> { >> PetscInitialize(&argc, &argv, (char*)0, helpPetscPoisson); >> FirstTime=true; >> } >> CPETScPoissonSolver::~CPETScPoissonSolver(void) >> { >> PetscFinalize(); >> } >> ...... >> void CPETScPoissonSolver::SetAIJ(PetscScalar *a, PetscInt *i, PetscInt *j, >> PetscInt Nnz) >> { >> A= a; I=i; J=j; nnz = Nnz; >> } >> >> PetscErrorCode CPETScPoissonSolver::UserInitializeLinearSolver() >> { >> PetscErrorCode ierr = 0; >> PetscInt Num = L*M*N; >> >> //Since we use successive solvers, so in the second time step we must >> deallocate the original matrix then setup a new one >> if(FirstTime==true) >> { >> ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, I, >> J, A, &userctx.A); CHKERRQ(ierr); >> } >> else >> { >> FirstTime = false; >> ierr = MatDestroy(userctx.A);CHKERRQ(ierr); >> ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, I, >> J, A, &userctx.A); CHKERRQ(ierr); >> } >> >> if(FirstTime==true) >> { >> >> ierr = >> VecCreateSeqWithArray(PETSC_COMM_SELF,Num,PETSC_NULL,&userctx.b);CHKERRQ(ierr); >> ierr = VecDuplicate(userctx.b,&userctx.x);CHKERRQ(ierr); >> ierr = MatNullSpaceCreate(PETSC_COMM_SELF, PETSC_TRUE, 0, >> PETSC_NULL, &userctx.nullspace); CHKERRQ(ierr); >> ierr = KSPCreate(PETSC_COMM_SELF,&userctx.ksp);CHKERRQ(ierr); >> /*Set Null Space for KSP*/ >> ierr = KSPSetNullSpace(userctx.ksp, >> userctx.nullspace);CHKERRQ(ierr); >> } >> return 0; >> } >> >> >> PetscErrorCode CPETScPoissonSolver::UserSetBX(PetscScalar *x, PetscScalar >> *b) >> { >> PetscErrorCode ierr ; >> //below code we must set it every time step >> ierr = VecPlaceArray(userctx.x,x);CHKERRQ(ierr); >> ierr = VecPlaceArray(userctx.b,b);CHKERRQ(ierr); >> ierr = MatNullSpaceRemove(userctx.nullspace,userctx.b, >> PETSC_NULL);CHKERRQ(ierr); >> return 0; >> } >> >> PetscInt CPETScPoissonSolver::UserSolve() >> { >> PetscErrorCode ierr; >> ierr = >> KSPSetOperators(userctx.ksp,userctx.A,userctx.A,SAME_NONZERO_PATTERN);CHKERRQ(ierr); >> ierr = KSPSetType(userctx.ksp, KSPCG); >> ierr = KSPSetInitialGuessNonzero(userctx.ksp, PETSC_TRUE); >> ierr = KSPGetPC(userctx.ksp,&userctx.pc);CHKERRQ(ierr); >> ierr = PCSetType(userctx.pc,PCICC);CHKERRQ(ierr); >> ierr = PCFactorSetShiftPd(userctx.pc, PETSC_TRUE); >> ierr = >> KSPSetTolerances(userctx.ksp,1.e-4,PETSC_DEFAULT,PETSC_DEFAULT,2000); >> ierr = KSPSetFromOptions(userctx.ksp);CHKERRQ(ierr); >> /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - >> Solve the linear system >> - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - >> */ >> ierr = KSPSolve(userctx.ksp,userctx.b,userctx.x);CHKERRQ(ierr); >> >> ierr = VecResetArray(userctx.x);CHKERRQ(ierr); >> ierr = 
VecResetArray(userctx.b);CHKERRQ(ierr);
>>
>>     return 0;
>> }
>>
>> PetscErrorCode CPETScPoissonSolver::ReleaseMem()
>> {
>>     PetscErrorCode ierr;
>>     ierr = KSPDestroy(userctx.ksp);CHKERRQ(ierr);
>>     ierr = VecDestroy(userctx.x); CHKERRQ(ierr);
>>     ierr = VecDestroy(userctx.b); CHKERRQ(ierr);
>>     ierr = MatDestroy(userctx.A); CHKERRQ(ierr);
>>     ierr = MatNullSpaceDestroy(userctx.nullspace); CHKERRQ(ierr);
>>     return 0;
>> }
>>
>> Thanks very much!
>>
>>
>> --
>> Pang Shengyong
>> Solidification Simulation Lab,
>> State Key Lab of Mould & Die Technology,
>> Huazhong Univ. of Sci. & Tech. China
>>

--
Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China

From lua.byhh at gmail.com Mon Aug 18 12:02:18 2008
From: lua.byhh at gmail.com (Shengyong)
Date: Tue, 19 Aug 2008 01:02:18 +0800
Subject: strange convergency behavior with petsc (intialy produces good result, suddenly diverges)
In-Reply-To: <8B1B832E-7A10-4446-8664-F9753ADE82E3@mcs.anl.gov>
References: <8B1B832E-7A10-4446-8664-F9753ADE82E3@mcs.anl.gov>
Message-ID:

Hi, Barry

Thanks for your help!

What I mean is that both of your suggested methods converge for a while and
then stagnate. Once stagnation happens, after the maximum iteration number
is reached PETSc fills the solution with exactly zero, so an error occurs in
the next time step. I am a newbie and cannot explain why this happens, since
I have set the initial guess to be nonzero in my code, e.g. I have called
KSPSetInitialGuessNonzero(userctx.ksp, PETSC_TRUE);

For the stagnation, it seems that rounding error dominates the iteration in
some special situations, since the coefficient matrix A is very
ill-conditioned (the condition number is over 10^6 in some cases). One would
expect the null space method, which projects the vector b onto R(A), to
remedy the stagnation, but actually it does not. I have read a paper in J.
Comp. Appl. Math. from 1988, 'Preconditioned conjugate gradient for solving
singular systems'. The author suggests projecting not only b onto R(A), but
also x(i+1) (the solution vector during the iteration) and r(i+1) (the
residual vector during the iteration) onto R(A*) (A* refers to the
preconditioned A). I am not clear about how the icc and cg methods are
implemented in PETSc; the above is just my not so mature guess.

My simulation code can run an unlimited number of time steps with the
self-coded SOR method. That SOR method is implemented matrix-free, and the
Neumann BC is not discretized directly into the equations; instead, assuming
P[0] is the BC cell, P[0] does not participate in the iteration, but after
each iteration step I set P[0] = P[1], where P[1] is an inner fluid cell.
With the PETSc method I discretize the Neumann BC directly into the boundary
cells' equations. I think this difference does not matter for convergence at
all, since for the first several time steps the PETSc solver gives nice
results.

Best Regards,


On Mon, Aug 18, 2008 at 11:24 PM, Barry Smith wrote:

>
> On Aug 18, 2008, at 12:35 AM, Shengyong wrote:
>
> hi, Barry
>>
>> Thanks for your kindly reply!
>>
>> Here I report some experiments accoording to your hints.
>>
>> -pc_type sor -pc_sor_local_symmetric _ksp_type cg failed.
>>
>
>    What does failed mean? Converged for a while then stagnated?
Diverged > (the residual norm got larger and larger)? Never converged at all? > >> >> icc(1) also failed. >> > > What does failed mean? > > I suggest running with -pc_type lu (keep the -ksp_type gmres) and see > what happens? Does you simulation run fine for an unlimited number > of timesteps? If not, how does it fail? > > Barry > > > >> >> When using -ksp_monitor_true _residual with icc(0), I have found the >> precond residual is osillation around some value about 1.06e-004, the true >> residual norm almost staying at 7.128 e-004. The ||Ae||/||Ax|| also >> stagnent at 6.3e-005. >> >> After the maximum number of iterations (=2000) reached, the iteration >> fails. Even when I set iteration number to 10000, it seems that petsc also >> fails. >> >> Below is some resiual information near iteration number 2000 when with >> ksp_monitor_true_residual. >> >> >> 1990 KSP preconditioned resid norm 1.064720311837e-004 true resid norm >> 7.1284721 >> 18425e-004 ||Ae||/||Ax|| 6.302380221818e-005 >> >> 1991 KSP preconditioned resid norm 1.062055494352e-004 true resid norm >> 7.1281202 >> 17324e-004 ||Ae||/||Ax|| 6.302069101215e-005 >> >> 1992 KSP preconditioned resid norm 1.061228895583e-004 true resid norm >> 7.1277740 >> 15661e-004 ||Ae||/||Ax|| 6.301763019565e-005 >> >> 1993 KSP preconditioned resid norm 1.062165148129e-004 true resid norm >> 7.1274335 >> 10277e-004 ||Ae||/||Ax|| 6.301461974073e-005 >> >> 1994 KSP preconditioned resid norm 1.064780917764e-004 true resid norm >> 7.1270986 >> 90168e-004 ||Ae||/||Ax|| 6.301165955010e-005 >> >> 1995 KSP preconditioned resid norm 1.068986431546e-004 true resid norm >> 7.1267695 >> 37853e-004 ||Ae||/||Ax|| 6.300874946922e-005 >> >> 1996 KSP preconditioned resid norm 1.074687043135e-004 true resid norm >> 7.1264460 >> 30395e-004 ||Ae||/||Ax|| 6.300588929530e-005 >> >> 1997 KSP preconditioned resid norm 1.081784773720e-004 true resid norm >> 7.1261281 >> 40328e-004 ||Ae||/||Ax|| 6.300307878551e-005 >> >> 1998 KSP preconditioned resid norm 1.090179775109e-004 true resid norm >> 7.1258158 >> 36472e-004 ||Ae||/||Ax|| 6.300031766417e-005 >> >> 1999 KSP preconditioned resid norm 1.099771672684e-004 true resid norm >> 7.1255090 >> 84603e-004 ||Ae||/||Ax|| 6.299760562872e-005 >> >> 2000 KSP preconditioned resid norm 1.110460758301e-004 true resid norm >> 7.1252078 >> 48108e-004 ||Ae||/||Ax|| 6.299494235544e-005 >> >> On Sun, Aug 17, 2008 at 10:32 PM, Barry Smith wrote: >> >> Try using -pc_type sor -pc_sor_local_symmetric -ksp_type cg >> >> Also try running the original icc one with the additional option >> -ksp_monitor_true_residual, see if funky stuff starts to happen. >> >> You could also try adding -pc_factor_levels 1 to try ICC(1) instead of >> ICC(0). >> >> Your code looks fine, I don't see a problem there, >> >> Barry >> >> >> Here is my guess, as the simulation proceeds the variable coefficient >> problem changes enough so that the ICC produces >> a badly scaled preconditioner that messes up the iterative method. I see >> this on occasion and don't have a good fix, the shift >> positive definite helps sometimes but not always. >> >> >> >> >> On Aug 17, 2008, at 3:36 AM, Shengyong wrote: >> >> Hi, >> >> I am still struggling to use petsc to solve variable coefficient poisson >> problems (which orinates from a multi-phase(liquid-gas two phase flow with >> sharp interface method, the density ratio is 1000, and with surface tension) >> flow problem) successively. 
>> >> Initially petsc produces good results with icc pc_type , cg iterative >> method and with _pc_factor_shift_positive_define flag. I follow petsc's >> null space method to constrain the singular property of coefficient matrix. >> However, after several time steps good results of simulation, the KSP >> Residual Norm suddenly reaches to a number greater than 1000. I guess that >> petsc diverges. I have also tried to swith to other types of methods, e.g. >> to gmeres, it just behaves similar to cg method. However, when I use my >> previous self-coded SOR iterative solver, it produces nice results. And I >> have tested the same petsc solver class for a single phase driven cavity >> flow problem, it also produces nice results. >> >> It seems that I have missed something important setup procedure in my >> solver class. could anyone to point me the problem ? I have attached large >> part of the code below: >> >> //in Header >> class CPETScPoissonSolver >> { >> typedef struct { >> Vec x,b; //x, b >> Mat A; //A >> KSP ksp; //Krylov subspace preconditioner >> PC pc; >> MatNullSpace nullspace; >> PetscInt l, m, n;//L, M, N >> } UserContext; >> >> public: >> CPETScPoissonSolver(int argc, char** argv); >> ~CPETScPoissonSolver(void); >> >> //........ >> >> private: >> >> //Yale Sparse Matrix format matrix >> PetscScalar* A; >> PetscInt * I; >> PetscInt * J; >> >> // Number of Nonzero Element >> PetscInt nnz; >> //grid step >> PetscInt L, M, N; >> UserContext userctx; >> private: >> bool FirstTime; >> }; >> >> //in cpp >> static char helpPetscPoisson[] = "PETSc class Solves a variable Poisson >> problem with Null Space Method.\n\n"; >> >> CPETScPoissonSolver::CPETScPoissonSolver(int argc, char** argv) >> { >> PetscInitialize(&argc, &argv, (char*)0, helpPetscPoisson); >> FirstTime=true; >> } >> CPETScPoissonSolver::~CPETScPoissonSolver(void) >> { >> PetscFinalize(); >> } >> ...... 
>> void CPETScPoissonSolver::SetAIJ(PetscScalar *a, PetscInt *i, PetscInt *j, >> PetscInt Nnz) >> { >> A= a; I=i; J=j; nnz = Nnz; >> } >> >> PetscErrorCode CPETScPoissonSolver::UserInitializeLinearSolver() >> { >> PetscErrorCode ierr = 0; >> PetscInt Num = L*M*N; >> >> //Since we use successive solvers, so in the second time step we must >> deallocate the original matrix then setup a new one >> if(FirstTime==true) >> { >> ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, I, >> J, A, &userctx.A); CHKERRQ(ierr); >> } >> else >> { >> FirstTime = false; >> ierr = MatDestroy(userctx.A);CHKERRQ(ierr); >> ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, I, >> J, A, &userctx.A); CHKERRQ(ierr); >> } >> >> if(FirstTime==true) >> { >> >> ierr = >> VecCreateSeqWithArray(PETSC_COMM_SELF,Num,PETSC_NULL,&userctx.b);CHKERRQ(ierr); >> ierr = VecDuplicate(userctx.b,&userctx.x);CHKERRQ(ierr); >> ierr = MatNullSpaceCreate(PETSC_COMM_SELF, PETSC_TRUE, 0, >> PETSC_NULL, &userctx.nullspace); CHKERRQ(ierr); >> ierr = KSPCreate(PETSC_COMM_SELF,&userctx.ksp);CHKERRQ(ierr); >> /*Set Null Space for KSP*/ >> ierr = KSPSetNullSpace(userctx.ksp, >> userctx.nullspace);CHKERRQ(ierr); >> } >> return 0; >> } >> >> >> PetscErrorCode CPETScPoissonSolver::UserSetBX(PetscScalar *x, PetscScalar >> *b) >> { >> PetscErrorCode ierr ; >> //below code we must set it every time step >> ierr = VecPlaceArray(userctx.x,x);CHKERRQ(ierr); >> ierr = VecPlaceArray(userctx.b,b);CHKERRQ(ierr); >> ierr = MatNullSpaceRemove(userctx.nullspace,userctx.b, >> PETSC_NULL);CHKERRQ(ierr); >> return 0; >> } >> >> PetscInt CPETScPoissonSolver::UserSolve() >> { >> PetscErrorCode ierr; >> ierr = >> KSPSetOperators(userctx.ksp,userctx.A,userctx.A,SAME_NONZERO_PATTERN);CHKERRQ(ierr); >> ierr = KSPSetType(userctx.ksp, KSPCG); >> ierr = KSPSetInitialGuessNonzero(userctx.ksp, PETSC_TRUE); >> ierr = KSPGetPC(userctx.ksp,&userctx.pc);CHKERRQ(ierr); >> ierr = PCSetType(userctx.pc,PCICC);CHKERRQ(ierr); >> ierr = PCFactorSetShiftPd(userctx.pc, PETSC_TRUE); >> ierr = >> KSPSetTolerances(userctx.ksp,1.e-4,PETSC_DEFAULT,PETSC_DEFAULT,2000); >> ierr = KSPSetFromOptions(userctx.ksp);CHKERRQ(ierr); >> /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - >> Solve the linear system >> - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - >> */ >> ierr = KSPSolve(userctx.ksp,userctx.b,userctx.x);CHKERRQ(ierr); >> >> ierr = VecResetArray(userctx.x);CHKERRQ(ierr); >> ierr = VecResetArray(userctx.b);CHKERRQ(ierr); >> >> return 0; >> } >> >> PetscErrorCode CPETScPoissonSolver::ReleaseMem() >> { >> PetscErrorCode ierr; >> ierr = KSPDestroy(userctx.ksp);CHKERRQ(ierr); >> ierr = VecDestroy(userctx.x); CHKERRQ(ierr); >> ierr = VecDestroy(userctx.b); CHKERRQ(ierr); >> ierr = MatDestroy(userctx.A); CHKERRQ(ierr); >> ierr = MatNullSpaceDestroy(userctx.nullspace); CHKERRQ(ierr); >> return 0; >> } >> >> Thanks very much! >> >> >> -- >> Pang Shengyong >> Solidification Simulation Lab, >> State Key Lab of Mould & Die Technology, >> Huazhong Univ. of Sci. & Tech. China >> >> >> >> >> -- >> Pang Shengyong >> Solidification Simulation Lab, >> State Key Lab of Mould & Die Technology, >> Huazhong Univ. of Sci. & Tech. China >> > > -- Pang Shengyong Solidification Simulation Lab, State Key Lab of Mould & Die Technology, Huazhong Univ. of Sci. & Tech. China -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Mon Aug 18 14:23:38 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 18 Aug 2008 14:23:38 -0500 Subject: strange convergency behavior with petsc (intialy produces good result, suddenly diverges) In-Reply-To: References: <8B1B832E-7A10-4446-8664-F9753ADE82E3@mcs.anl.gov> Message-ID: <4A1CBC94-C9D6-4CB9-8E89-1A528BAB9F01@mcs.anl.gov> Again, what happens with -pc_type lu? Does the simulation run for any number of time steps with correct answers? Barry On Aug 18, 2008, at 12:02 PM, Shengyong wrote: > Hi, Barry > > Thanks for your help! > > What I mean is that both of your suggested method converge for a > while then stagnant. Once stagnation happens, then after maximum > iteration number reaches, PETSc fill the solution with exactly > zero. So error occurs in the next time step. I am a newbie and can > not explain why this happens since I have set the initial guess is > not zero in my code. E.g, I have called this function > KSPSetInitialGuessNonzero(userctx.ksp, PETSC_TRUE); > > For the stagnation, it seems that the rounding error dominates > during iteration in some special situations since the coefficient > Matrix A is very ill-conditioned(the Cond number over 10^6 in some > cases). Although with null space method by project B vector to R(A) > will remede the stagnant phenomena. Actually it does not. I have > read a paper in J. Comp. Appl. Math. in year of 1988 : > 'preconditioned conjugate gradient for solving singular systems'. > The author suggests not only project B to R(A) space, but also X(i > +1) (solution vector during iteration) and r(i+1) ( residual vector > during iteration) to R(A*) (A* refers to preconditioned A). I am not > clear about how icc and cg method are implemented in PETSc. Above is > just my not so mature guess. > > My simulation code could run unlimited time steps with self-coded > SOR method. The SOR method is implemented as matrix free method. In > this method, the Neumann BC is not directly discretized in > equations, but this way is adopted: assuming P[0] is BC, then P[0] > cell does not particpate in any iteration, but rather after one step > iteration ends, I set P[0] = P[1] while P[1] cell is inner fluid > cell. While with PETSc method, I directly discrete the Neumann BC > into boundary cells' equation. I think this difference does not > matter the converge stuff at all since the initially severay time > steps PETSc solver gives nice results. > > Best Regards, > > > > On Mon, Aug 18, 2008 at 11:24 PM, Barry Smith > wrote: > > On Aug 18, 2008, at 12:35 AM, Shengyong wrote: > > hi, Barry > > Thanks for your kindly reply! > > Here I report some experiments accoording to your hints. > > -pc_type sor -pc_sor_local_symmetric _ksp_type cg failed. > > What does failed mean? Converged for a while then stagnated? > Diverged (the residual norm got larger and larger)? Never converged > at all? > > icc(1) also failed. > > What does failed mean? > > I suggest running with -pc_type lu (keep the -ksp_type gmres) and > see what happens? Does you simulation run fine for an unlimited number > of timesteps? If not, how does it fail? > > Barry > > > > > When using -ksp_monitor_true _residual with icc(0), I have found the > precond residual is osillation around some value about 1.06e-004, > the true residual norm almost staying at 7.128 e-004. The ||Ae||/|| > Ax|| also stagnent at 6.3e-005. > > After the maximum number of iterations (=2000) reached, the > iteration fails. 
Even when I set iteration number to 10000, it seems > that petsc also fails. > > Below is some resiual information near iteration number 2000 when > with ksp_monitor_true_residual. > > > 1990 KSP preconditioned resid norm 1.064720311837e-004 true resid > norm 7.1284721 > 18425e-004 ||Ae||/||Ax|| 6.302380221818e-005 > > 1991 KSP preconditioned resid norm 1.062055494352e-004 true resid > norm 7.1281202 > 17324e-004 ||Ae||/||Ax|| 6.302069101215e-005 > > 1992 KSP preconditioned resid norm 1.061228895583e-004 true resid > norm 7.1277740 > 15661e-004 ||Ae||/||Ax|| 6.301763019565e-005 > > 1993 KSP preconditioned resid norm 1.062165148129e-004 true resid > norm 7.1274335 > 10277e-004 ||Ae||/||Ax|| 6.301461974073e-005 > > 1994 KSP preconditioned resid norm 1.064780917764e-004 true resid > norm 7.1270986 > 90168e-004 ||Ae||/||Ax|| 6.301165955010e-005 > > 1995 KSP preconditioned resid norm 1.068986431546e-004 true resid > norm 7.1267695 > 37853e-004 ||Ae||/||Ax|| 6.300874946922e-005 > > 1996 KSP preconditioned resid norm 1.074687043135e-004 true resid > norm 7.1264460 > 30395e-004 ||Ae||/||Ax|| 6.300588929530e-005 > > 1997 KSP preconditioned resid norm 1.081784773720e-004 true resid > norm 7.1261281 > 40328e-004 ||Ae||/||Ax|| 6.300307878551e-005 > > 1998 KSP preconditioned resid norm 1.090179775109e-004 true resid > norm 7.1258158 > 36472e-004 ||Ae||/||Ax|| 6.300031766417e-005 > > 1999 KSP preconditioned resid norm 1.099771672684e-004 true resid > norm 7.1255090 > 84603e-004 ||Ae||/||Ax|| 6.299760562872e-005 > > 2000 KSP preconditioned resid norm 1.110460758301e-004 true resid > norm 7.1252078 > 48108e-004 ||Ae||/||Ax|| 6.299494235544e-005 > > On Sun, Aug 17, 2008 at 10:32 PM, Barry Smith > wrote: > > Try using -pc_type sor -pc_sor_local_symmetric -ksp_type cg > > Also try running the original icc one with the additional option - > ksp_monitor_true_residual, see if funky stuff starts to happen. > > You could also try adding -pc_factor_levels 1 to try ICC(1) instead > of ICC(0). > > Your code looks fine, I don't see a problem there, > > Barry > > > Here is my guess, as the simulation proceeds the variable > coefficient problem changes enough so that the ICC produces > a badly scaled preconditioner that messes up the iterative method. I > see this on occasion and don't have a good fix, the shift > positive definite helps sometimes but not always. > > > > > On Aug 17, 2008, at 3:36 AM, Shengyong wrote: > > Hi, > > I am still struggling to use petsc to solve variable coefficient > poisson problems (which orinates from a multi-phase(liquid-gas two > phase flow with sharp interface method, the density ratio is 1000, > and with surface tension) flow problem) successively. > > Initially petsc produces good results with icc pc_type , cg > iterative method and with _pc_factor_shift_positive_define flag. I > follow petsc's null space method to constrain the singular property > of coefficient matrix. However, after several time steps good > results of simulation, the KSP Residual Norm suddenly reaches to a > number greater than 1000. I guess that petsc diverges. I have also > tried to swith to other types of methods, e.g. to gmeres, it just > behaves similar to cg method. However, when I use my previous self- > coded SOR iterative solver, it produces nice results. And I have > tested the same petsc solver class for a single phase driven cavity > flow problem, it also produces nice results. > > It seems that I have missed something important setup procedure in > my solver class. 
could anyone to point me the problem ? I have > attached large part of the code below: > > //in Header > class CPETScPoissonSolver > { > typedef struct { > Vec x,b; //x, b > Mat A; //A > KSP ksp; //Krylov subspace preconditioner > PC pc; > MatNullSpace nullspace; > PetscInt l, m, n;//L, M, N > } UserContext; > > public: > CPETScPoissonSolver(int argc, char** argv); > ~CPETScPoissonSolver(void); > > //........ > > private: > > //Yale Sparse Matrix format matrix > PetscScalar* A; > PetscInt * I; > PetscInt * J; > > // Number of Nonzero Element > PetscInt nnz; > //grid step > PetscInt L, M, N; > UserContext userctx; > private: > bool FirstTime; > }; > > //in cpp > static char helpPetscPoisson[] = "PETSc class Solves a variable > Poisson problem with Null Space Method.\n\n"; > > CPETScPoissonSolver::CPETScPoissonSolver(int argc, char** argv) > { > PetscInitialize(&argc, &argv, (char*)0, helpPetscPoisson); > FirstTime=true; > } > CPETScPoissonSolver::~CPETScPoissonSolver(void) > { > PetscFinalize(); > } > ...... > void CPETScPoissonSolver::SetAIJ(PetscScalar *a, PetscInt *i, > PetscInt *j, PetscInt Nnz) > { > A= a; I=i; J=j; nnz = Nnz; > } > > PetscErrorCode CPETScPoissonSolver::UserInitializeLinearSolver() > { > PetscErrorCode ierr = 0; > PetscInt Num = L*M*N; > > //Since we use successive solvers, so in the second time step we > must deallocate the original matrix then setup a new one > if(FirstTime==true) > { > ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, > I, J, A, &userctx.A); CHKERRQ(ierr); > } > else > { > FirstTime = false; > ierr = MatDestroy(userctx.A);CHKERRQ(ierr); > ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, > Num, I, J, A, &userctx.A); CHKERRQ(ierr); > } > > if(FirstTime==true) > { > > ierr = > VecCreateSeqWithArray > (PETSC_COMM_SELF,Num,PETSC_NULL,&userctx.b);CHKERRQ(ierr); > ierr = VecDuplicate(userctx.b,&userctx.x);CHKERRQ(ierr); > ierr = MatNullSpaceCreate(PETSC_COMM_SELF, PETSC_TRUE, 0, > PETSC_NULL, &userctx.nullspace); CHKERRQ(ierr); > ierr = KSPCreate(PETSC_COMM_SELF,&userctx.ksp);CHKERRQ(ierr); > /*Set Null Space for KSP*/ > ierr = KSPSetNullSpace(userctx.ksp, > userctx.nullspace);CHKERRQ(ierr); > } > return 0; > } > > > PetscErrorCode CPETScPoissonSolver::UserSetBX(PetscScalar *x, > PetscScalar *b) > { > PetscErrorCode ierr ; > //below code we must set it every time step > ierr = VecPlaceArray(userctx.x,x);CHKERRQ(ierr); > ierr = VecPlaceArray(userctx.b,b);CHKERRQ(ierr); > ierr = MatNullSpaceRemove(userctx.nullspace,userctx.b, > PETSC_NULL);CHKERRQ(ierr); > return 0; > } > > PetscInt CPETScPoissonSolver::UserSolve() > { > PetscErrorCode ierr; > ierr = > KSPSetOperators > (userctx.ksp,userctx.A,userctx.A,SAME_NONZERO_PATTERN);CHKERRQ(ierr); > ierr = KSPSetType(userctx.ksp, KSPCG); > ierr = KSPSetInitialGuessNonzero(userctx.ksp, PETSC_TRUE); > ierr = KSPGetPC(userctx.ksp,&userctx.pc);CHKERRQ(ierr); > ierr = PCSetType(userctx.pc,PCICC);CHKERRQ(ierr); > ierr = PCFactorSetShiftPd(userctx.pc, PETSC_TRUE); > ierr = KSPSetTolerances(userctx.ksp, > 1.e-4,PETSC_DEFAULT,PETSC_DEFAULT,2000); > ierr = KSPSetFromOptions(userctx.ksp);CHKERRQ(ierr); > /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > - - - > Solve the linear system > - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > - - - */ > ierr = KSPSolve(userctx.ksp,userctx.b,userctx.x);CHKERRQ(ierr); > > ierr = VecResetArray(userctx.x);CHKERRQ(ierr); > ierr = VecResetArray(userctx.b);CHKERRQ(ierr); > > return 0; > } > > PetscErrorCode CPETScPoissonSolver::ReleaseMem() > { 
>     PetscErrorCode ierr;
>     ierr = KSPDestroy(userctx.ksp);CHKERRQ(ierr);
>     ierr = VecDestroy(userctx.x); CHKERRQ(ierr);
>     ierr = VecDestroy(userctx.b); CHKERRQ(ierr);
>     ierr = MatDestroy(userctx.A); CHKERRQ(ierr);
>     ierr = MatNullSpaceDestroy(userctx.nullspace); CHKERRQ(ierr);
>     return 0;
> }
>
> Thanks very much!
>
>
> --
> Pang Shengyong
> Solidification Simulation Lab,
> State Key Lab of Mould & Die Technology,
> Huazhong Univ. of Sci. & Tech. China
>
>
>
> --
> Pang Shengyong
> Solidification Simulation Lab,
> State Key Lab of Mould & Die Technology,
> Huazhong Univ. of Sci. & Tech. China

From lua.byhh at gmail.com Tue Aug 19 01:07:42 2008
From: lua.byhh at gmail.com (Shengyong)
Date: Tue, 19 Aug 2008 14:07:42 +0800
Subject: strange convergency behavior with petsc (intialy produces good result, suddenly diverges)
In-Reply-To: <4A1CBC94-C9D6-4CB9-8E89-1A528BAB9F01@mcs.anl.gov>
References: <8B1B832E-7A10-4446-8664-F9753ADE82E3@mcs.anl.gov>
	<4A1CBC94-C9D6-4CB9-8E89-1A528BAB9F01@mcs.anl.gov>
Message-ID:

If I do not use PETSc or LASPack, the result is fine; I have compared with
some literature results. Maybe there are some bugs in my coefficient matrix
generation code. I will try to fix it. Thanks.

On Tue, Aug 19, 2008 at 3:23 AM, Barry Smith wrote:

>
>    Again, what happens with -pc_type lu? Does the simulation run for any
> number of time steps with correct answers?
>
>    Barry
>
>
> On Aug 18, 2008, at 12:02 PM, Shengyong wrote:
>
> Hi, Barry
>>
>> Thanks for your help!
>>
>> What I mean is that both of your suggested method converge for a while
>> then stagnant. Once stagnation happens, then after maximum iteration number
>> reaches, PETSc fill the solution with exactly zero. So error occurs in the
>> next time step. I am a newbie and can not explain why this happens since I
>> have set the initial guess is not zero in my code. E.g, I have called this
>> function KSPSetInitialGuessNonzero(userctx.ksp, PETSC_TRUE);
>>
>> For the stagnation, it seems that the rounding error dominates during
>> iteration in some special situations since the coefficient Matrix A is very
>> ill-conditioned(the Cond number over 10^6 in some cases). Although with null
>> space method by project B vector to R(A) will remede the stagnant phenomena.
>> Actually it does not. I have read a paper in J. Comp. Appl. Math. in year of
>> 1988 : 'preconditioned conjugate gradient for solving singular systems'. The
>> author suggests not only project B to R(A) space, but also X(i+1) (solution
>> vector during iteration) and r(i+1) ( residual vector during iteration) to
>> R(A*) (A* refers to preconditioned A). I am not clear about how icc and cg
>> method are implemented in PETSc. Above is just my not so mature guess.
>>
>> My simulation code could run unlimited time steps with self-coded SOR
>> method. The SOR method is implemented as matrix free method. In this method,
>> the Neumann BC is not directly discretized in equations, but this way is
>> adopted: assuming P[0] is BC, then P[0] cell does not particpate in any
>> iteration, but rather after one step iteration ends, I set P[0] = P[1] while
>> P[1] cell is inner fluid cell. While with PETSc method, I directly discrete
>> the Neumann BC into boundary cells' equation. I think this difference does
>> not matter the converge stuff at all since the initially severay time steps
>> PETSc solver gives nice results.
>> >> Best Regards, >> >> >> >> On Mon, Aug 18, 2008 at 11:24 PM, Barry Smith wrote: >> >> On Aug 18, 2008, at 12:35 AM, Shengyong wrote: >> >> hi, Barry >> >> Thanks for your kindly reply! >> >> Here I report some experiments accoording to your hints. >> >> -pc_type sor -pc_sor_local_symmetric _ksp_type cg failed. >> >> What does failed mean? Converged for a while then stagnated? Diverged >> (the residual norm got larger and larger)? Never converged at all? >> >> icc(1) also failed. >> >> What does failed mean? >> >> I suggest running with -pc_type lu (keep the -ksp_type gmres) and see >> what happens? Does you simulation run fine for an unlimited number >> of timesteps? If not, how does it fail? >> >> Barry >> >> >> >> >> When using -ksp_monitor_true _residual with icc(0), I have found the >> precond residual is osillation around some value about 1.06e-004, the true >> residual norm almost staying at 7.128 e-004. The ||Ae||/||Ax|| also >> stagnent at 6.3e-005. >> >> After the maximum number of iterations (=2000) reached, the iteration >> fails. Even when I set iteration number to 10000, it seems that petsc also >> fails. >> >> Below is some resiual information near iteration number 2000 when with >> ksp_monitor_true_residual. >> >> >> 1990 KSP preconditioned resid norm 1.064720311837e-004 true resid norm >> 7.1284721 >> 18425e-004 ||Ae||/||Ax|| 6.302380221818e-005 >> >> 1991 KSP preconditioned resid norm 1.062055494352e-004 true resid norm >> 7.1281202 >> 17324e-004 ||Ae||/||Ax|| 6.302069101215e-005 >> >> 1992 KSP preconditioned resid norm 1.061228895583e-004 true resid norm >> 7.1277740 >> 15661e-004 ||Ae||/||Ax|| 6.301763019565e-005 >> >> 1993 KSP preconditioned resid norm 1.062165148129e-004 true resid norm >> 7.1274335 >> 10277e-004 ||Ae||/||Ax|| 6.301461974073e-005 >> >> 1994 KSP preconditioned resid norm 1.064780917764e-004 true resid norm >> 7.1270986 >> 90168e-004 ||Ae||/||Ax|| 6.301165955010e-005 >> >> 1995 KSP preconditioned resid norm 1.068986431546e-004 true resid norm >> 7.1267695 >> 37853e-004 ||Ae||/||Ax|| 6.300874946922e-005 >> >> 1996 KSP preconditioned resid norm 1.074687043135e-004 true resid norm >> 7.1264460 >> 30395e-004 ||Ae||/||Ax|| 6.300588929530e-005 >> >> 1997 KSP preconditioned resid norm 1.081784773720e-004 true resid norm >> 7.1261281 >> 40328e-004 ||Ae||/||Ax|| 6.300307878551e-005 >> >> 1998 KSP preconditioned resid norm 1.090179775109e-004 true resid norm >> 7.1258158 >> 36472e-004 ||Ae||/||Ax|| 6.300031766417e-005 >> >> 1999 KSP preconditioned resid norm 1.099771672684e-004 true resid norm >> 7.1255090 >> 84603e-004 ||Ae||/||Ax|| 6.299760562872e-005 >> >> 2000 KSP preconditioned resid norm 1.110460758301e-004 true resid norm >> 7.1252078 >> 48108e-004 ||Ae||/||Ax|| 6.299494235544e-005 >> >> On Sun, Aug 17, 2008 at 10:32 PM, Barry Smith wrote: >> >> Try using -pc_type sor -pc_sor_local_symmetric -ksp_type cg >> >> Also try running the original icc one with the additional option >> -ksp_monitor_true_residual, see if funky stuff starts to happen. >> >> You could also try adding -pc_factor_levels 1 to try ICC(1) instead of >> ICC(0). >> >> Your code looks fine, I don't see a problem there, >> >> Barry >> >> >> Here is my guess, as the simulation proceeds the variable coefficient >> problem changes enough so that the ICC produces >> a badly scaled preconditioner that messes up the iterative method. I see >> this on occasion and don't have a good fix, the shift >> positive definite helps sometimes but not always. 
>> >> >> >> >> On Aug 17, 2008, at 3:36 AM, Shengyong wrote: >> >> Hi, >> >> I am still struggling to use petsc to solve variable coefficient poisson >> problems (which orinates from a multi-phase(liquid-gas two phase flow with >> sharp interface method, the density ratio is 1000, and with surface tension) >> flow problem) successively. >> >> Initially petsc produces good results with icc pc_type , cg iterative >> method and with _pc_factor_shift_positive_define flag. I follow petsc's >> null space method to constrain the singular property of coefficient matrix. >> However, after several time steps good results of simulation, the KSP >> Residual Norm suddenly reaches to a number greater than 1000. I guess that >> petsc diverges. I have also tried to swith to other types of methods, e.g. >> to gmeres, it just behaves similar to cg method. However, when I use my >> previous self-coded SOR iterative solver, it produces nice results. And I >> have tested the same petsc solver class for a single phase driven cavity >> flow problem, it also produces nice results. >> >> It seems that I have missed something important setup procedure in my >> solver class. could anyone to point me the problem ? I have attached large >> part of the code below: >> >> //in Header >> class CPETScPoissonSolver >> { >> typedef struct { >> Vec x,b; //x, b >> Mat A; //A >> KSP ksp; //Krylov subspace preconditioner >> PC pc; >> MatNullSpace nullspace; >> PetscInt l, m, n;//L, M, N >> } UserContext; >> >> public: >> CPETScPoissonSolver(int argc, char** argv); >> ~CPETScPoissonSolver(void); >> >> //........ >> >> private: >> >> //Yale Sparse Matrix format matrix >> PetscScalar* A; >> PetscInt * I; >> PetscInt * J; >> >> // Number of Nonzero Element >> PetscInt nnz; >> //grid step >> PetscInt L, M, N; >> UserContext userctx; >> private: >> bool FirstTime; >> }; >> >> //in cpp >> static char helpPetscPoisson[] = "PETSc class Solves a variable Poisson >> problem with Null Space Method.\n\n"; >> >> CPETScPoissonSolver::CPETScPoissonSolver(int argc, char** argv) >> { >> PetscInitialize(&argc, &argv, (char*)0, helpPetscPoisson); >> FirstTime=true; >> } >> CPETScPoissonSolver::~CPETScPoissonSolver(void) >> { >> PetscFinalize(); >> } >> ...... 
>> void CPETScPoissonSolver::SetAIJ(PetscScalar *a, PetscInt *i, PetscInt *j, PetscInt Nnz)
>> {
>> A = a; I = i; J = j; nnz = Nnz;
>> }
>>
>> PetscErrorCode CPETScPoissonSolver::UserInitializeLinearSolver()
>> {
>> PetscErrorCode ierr = 0;
>> PetscInt Num = L*M*N;
>>
>> // Since we solve successively, from the second time step on we must
>> // deallocate the original matrix and then set up a new one
>> if(FirstTime==true)
>> {
>> ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, I, J, A, &userctx.A); CHKERRQ(ierr);
>> }
>> else
>> {
>> ierr = MatDestroy(userctx.A);CHKERRQ(ierr);
>> ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, Num, Num, I, J, A, &userctx.A); CHKERRQ(ierr);
>> }
>>
>> if(FirstTime==true)
>> {
>> ierr = VecCreateSeqWithArray(PETSC_COMM_SELF,Num,PETSC_NULL,&userctx.b);CHKERRQ(ierr);
>> ierr = VecDuplicate(userctx.b,&userctx.x);CHKERRQ(ierr);
>> ierr = MatNullSpaceCreate(PETSC_COMM_SELF, PETSC_TRUE, 0, PETSC_NULL, &userctx.nullspace); CHKERRQ(ierr);
>> ierr = KSPCreate(PETSC_COMM_SELF,&userctx.ksp);CHKERRQ(ierr);
>> /*Set Null Space for KSP*/
>> ierr = KSPSetNullSpace(userctx.ksp, userctx.nullspace);CHKERRQ(ierr);
>> // clear the flag only now, so that later calls take the destroy/re-create path above
>> FirstTime = false;
>> }
>> return 0;
>> }
>>
>> PetscErrorCode CPETScPoissonSolver::UserSetBX(PetscScalar *x, PetscScalar *b)
>> {
>> PetscErrorCode ierr;
>> // the code below must be run every time step
>> ierr = VecPlaceArray(userctx.x,x);CHKERRQ(ierr);
>> ierr = VecPlaceArray(userctx.b,b);CHKERRQ(ierr);
>> ierr = MatNullSpaceRemove(userctx.nullspace,userctx.b, PETSC_NULL);CHKERRQ(ierr);
>> return 0;
>> }
>>
>> PetscInt CPETScPoissonSolver::UserSolve()
>> {
>> PetscErrorCode ierr;
>> ierr = KSPSetOperators(userctx.ksp,userctx.A,userctx.A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
>> ierr = KSPSetType(userctx.ksp, KSPCG);CHKERRQ(ierr);
>> ierr = KSPSetInitialGuessNonzero(userctx.ksp, PETSC_TRUE);CHKERRQ(ierr);
>> ierr = KSPGetPC(userctx.ksp,&userctx.pc);CHKERRQ(ierr);
>> ierr = PCSetType(userctx.pc,PCICC);CHKERRQ(ierr);
>> ierr = PCFactorSetShiftPd(userctx.pc, PETSC_TRUE);CHKERRQ(ierr);
>> ierr = KSPSetTolerances(userctx.ksp,1.e-4,PETSC_DEFAULT,PETSC_DEFAULT,2000);CHKERRQ(ierr);
>> ierr = KSPSetFromOptions(userctx.ksp);CHKERRQ(ierr);
>> /* - - - - - Solve the linear system - - - - - */
>> ierr = KSPSolve(userctx.ksp,userctx.b,userctx.x);CHKERRQ(ierr);
>>
>> ierr = VecResetArray(userctx.x);CHKERRQ(ierr);
>> ierr = VecResetArray(userctx.b);CHKERRQ(ierr);
>>
>> return 0;
>> }
>>
>> PetscErrorCode CPETScPoissonSolver::ReleaseMem()
>> {
>> PetscErrorCode ierr;
>> ierr = KSPDestroy(userctx.ksp);CHKERRQ(ierr);
>> ierr = VecDestroy(userctx.x); CHKERRQ(ierr);
>> ierr = VecDestroy(userctx.b); CHKERRQ(ierr);
>> ierr = MatDestroy(userctx.A); CHKERRQ(ierr);
>> ierr = MatNullSpaceDestroy(userctx.nullspace); CHKERRQ(ierr);
>> return 0;
>> }
>>
>> Thanks very much!
>>
>> --
>> Pang Shengyong
>> Solidification Simulation Lab,
>> State Key Lab of Mould & Die Technology,
>> Huazhong Univ. of Sci. & Tech. China

--
Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China
From z.sheng at ewi.tudelft.nl Wed Aug 20 07:19:36 2008
From: z.sheng at ewi.tudelft.nl (Zhifeng Sheng)
Date: Wed, 20 Aug 2008 14:19:36 +0200
Subject: How use scalar type and complex type at the same time?
Message-ID: <48AC0BD8.4060307@ewi.tudelft.nl>

Dear all

I am working on a FEM program, which can be used to solve time domain and frequency domain EM problems. I finished the time domain solver part and now I need to implement the frequency domain solver.... So I built Petsc with --with-scalar-type=complex, and then my time domain solver cannot be compiled; it says "cannot convert double to PetscScalar".

Is there any way that I can handle both real matrix and complex matrix at the same time? (without changing my old code too much?)

Thanks
Best regards
Zhifeng Sheng

From knepley at gmail.com Wed Aug 20 07:42:58 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 20 Aug 2008 07:42:58 -0500
Subject: How use scalar type and complex type at the same time?
In-Reply-To: <48AC0BD8.4060307@ewi.tudelft.nl>
References: <48AC0BD8.4060307@ewi.tudelft.nl>
Message-ID:

On Wed, Aug 20, 2008 at 7:19 AM, Zhifeng Sheng wrote:
> So I built Petsc with --with-scalar-type=complex, and then my time domain
> solver cannot be compiled; it says "cannot convert double to PetscScalar".
>
> Is there any way that I can handle both real matrix and complex matrix at
> the same time? (without changing my old code too much?)

Right now, there is no way to do this, and there seems to be little possibility in C. Templating can handle this smoothly, but we would have to convert to C++. We are weighing the options.

Matt

> Thanks
> Best regards
> Zhifeng Sheng

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

From bsmith at mcs.anl.gov Wed Aug 20 08:44:43 2008
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 20 Aug 2008 08:44:43 -0500
Subject: How use scalar type and complex type at the same time?
In-Reply-To: References: <48AC0BD8.4060307@ewi.tudelft.nl>
Message-ID:

On Aug 20, 2008, at 7:42 AM, Matthew Knepley wrote:
> Right now, there is no way to do this, and there seems to be little
> possibility in C.
> Templating can handle this smoothly, but we would have to convert to
> C++. We
  ^^^^^^^^^^^^^^^^^^^^^^^^
Templating can handle this; but it definitely cannot handle it SMOOTHLY! If it could handle it smoothly we would have switched a long time ago!
The problem with C++ templating is that it is compile time and source code based; that is, the template information must be directly wired into the source code, so one must indicate directly in the code the type it is templated over. This then eliminates the awesome power of the data encapsulation. Yes, since C++ is a Turing-complete language one can get around this, but the extra complexity of working around the basic design of templates in C++ makes the "getting around" worrisomely painful, thus we have not pursued that direction. Convince me otherwise!

Barry

>
> are weighing the options.
>
> Matt
>
>> Thanks
>> Best regards
>> Zhifeng Sheng
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
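A minimal sketch makes the compile-time wiring concrete (the LinearSolver class below is a hypothetical illustration, not PETSc code):

#include <complex>
#include <vector>

// Hypothetical templated solver: the scalar type is a template parameter,
// so it must be spelled out in every source file that uses the solver.
template <typename Scalar>
class LinearSolver {
public:
    void solve(const std::vector<Scalar>& b, std::vector<Scalar>& x); // body omitted; never called here
};

int main() {
    // The real-vs-complex choice is wired into the source right here; it
    // cannot be hidden behind a runtime option or an opaque C handle.
    LinearSolver<double> timeDomain;                      // real scalar type
    LinearSolver<std::complex<double> > frequencyDomain;  // complex scalar type
    (void) timeDomain; (void) frequencyDomain;
    return 0;
}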
From dalcinl at gmail.com Wed Aug 20 10:16:28 2008
From: dalcinl at gmail.com (Lisandro Dalcin)
Date: Wed, 20 Aug 2008 12:16:28 -0300
Subject: How use scalar type and complex type at the same time?
In-Reply-To: <48AC0BD8.4060307@ewi.tudelft.nl>
References: <48AC0BD8.4060307@ewi.tudelft.nl>
Message-ID:

I believe the only way you can handle this is by using separate programs linked with different PETSc configurations for the time domain solver and the frequency domain solver. Of course, this requires that both problems are somewhat "uncoupled" and that you use some MPI-2 features (dynamic process management) to make your two programs "chat" with each other.

If you have some knowledge of Python, I can offer you a rather simple solution based on mpi4py (MPI for Python). You write a "master" Python script communicating with each of your programs and managing the interchange of data using MPI calls. With a bit of luck, you will not need to add too much to your current code. Of course, you can skip the Python way and code all the "master" part in C/C++/Fortran. Just let me know if you are interested in this approach.

On Wed, Aug 20, 2008 at 9:19 AM, Zhifeng Sheng wrote:
> Is there any way that I can handle both real matrix and complex matrix at
> the same time? (without changing my old code too much?)

--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594

From chetan at ices.utexas.edu Wed Aug 20 12:57:22 2008
From: chetan at ices.utexas.edu (Chetan Jhurani)
Date: Wed, 20 Aug 2008 12:57:22 -0500
Subject: How use scalar type and complex type at the same time?
In-Reply-To: <48AC0BD8.4060307@ewi.tudelft.nl>
References: <48AC0BD8.4060307@ewi.tudelft.nl>
Message-ID: <95CA5AD1ED2243EA8E4608968D7BB12A@spiff>

> On Wed, Aug 20, 2008 at 7:19 AM, Zhifeng Sheng wrote:
>
> Is there any way that I can handle both real matrix and complex matrix at
> the same time? (without changing my old code too much?)

Not that I've tried this workaround in the context of PETSc, but one could create two shared objects/DLLs, one compiled for complex and the other one for double, and use them together in a single executable using the standard dynamic linking functions like dlopen/LoadLibrary.

As to how it will be affected by MPI, I am not so sure.

Chetan

From dalcinl at gmail.com Wed Aug 20 14:14:43 2008
From: dalcinl at gmail.com (Lisandro Dalcin)
Date: Wed, 20 Aug 2008 16:14:43 -0300
Subject: How use scalar type and complex type at the same time?
In-Reply-To: <95CA5AD1ED2243EA8E4608968D7BB12A@spiff>
References: <48AC0BD8.4060307@ewi.tudelft.nl> <95CA5AD1ED2243EA8E4608968D7BB12A@spiff>
Message-ID:

On Wed, Aug 20, 2008 at 2:57 PM, Chetan Jhurani wrote:
>
> Not that I've tried this workaround in the context of PETSc, but
> one could create two shared objects/DLLs,
>
> As to how it will be affected by MPI, I am not so sure.

As long as each PETSc configuration uses the same MPI, and the MPI libs are shared (.so, .dll, or equivalent), this should also work. But this approach still requires the codes to be "uncoupled". And then, IMHO, I still believe that using a "chat" protocol based on MPI is more powerful. For example, you can easily assign a different number of processors to each problem. Additionally, as long as your MPI has decent support for dynamic process management (MPICH2 and OpenMPI do), the "MPI chatting" approach is simpler, more portable, and you do not even need to have an MPI with shared libs.

--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594
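A sketch of the two-library workaround, assuming each solver is wrapped behind a plain C entry point (the library and function names below are hypothetical):

#include <dlfcn.h>
#include <stdio.h>

typedef int (*solve_fn)(int argc, char **argv);

int main(int argc, char **argv)
{
    /* RTLD_LOCAL keeps the two sets of PETSc symbols from clashing */
    void *real_lib    = dlopen("./libsolver_real.so",    RTLD_NOW | RTLD_LOCAL);
    void *complex_lib = dlopen("./libsolver_complex.so", RTLD_NOW | RTLD_LOCAL);
    if (!real_lib || !complex_lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    solve_fn solve_td = (solve_fn) dlsym(real_lib,    "solve_time_domain");
    solve_fn solve_fd = (solve_fn) dlsym(complex_lib, "solve_freq_domain");
    if (!solve_td || !solve_fd) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    solve_td(argc, argv);   /* runs against the real-arithmetic PETSc build    */
    solve_fd(argc, argv);   /* runs against the complex-arithmetic PETSc build */

    dlclose(complex_lib);
    dlclose(real_lib);
    return 0;
}

As noted above, both libraries would still have to be built against the same shared MPI, and each wrapper would need to manage its own PetscInitialize()/PetscFinalize().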
From rlmackie862 at gmail.com Wed Aug 20 14:44:44 2008
From: rlmackie862 at gmail.com (Randall Mackie)
Date: Wed, 20 Aug 2008 12:44:44 -0700
Subject: How use scalar type and complex type at the same time?
In-Reply-To: <48AC0BD8.4060307@ewi.tudelft.nl>
References: <48AC0BD8.4060307@ewi.tudelft.nl>
Message-ID: <48AC742C.9040207@gmail.com>

I have a code where I'm solving Maxwell's equations in 3D, so this is complex, but then later I'm also solving a system of equations on the model space, and this is real. I simply use complex for both - it's just easier to use complex for solving the real systems and not worry about the additional overhead, at least to me.

Randy

Zhifeng Sheng wrote:
> Is there any way that I can handle both real matrix and complex matrix
> at the same time? (without changing my old code too much?)
>
> Thanks
> Best regards
> Zhifeng Sheng

From griffith at courant.nyu.edu Thu Aug 21 07:01:16 2008
From: griffith at courant.nyu.edu (Boyce Griffith)
Date: Thu, 21 Aug 2008 08:01:16 -0400 (EDT)
Subject: MatGetVecs and MATMFFD
Message-ID:

Hi, Folks --

I am using a home-grown "multi-Vec" (i.e., vector of vectors) implementation with a PETSc nonlinear solver. I'd like to be able to use PETSc's matrix-free Jacobian with this solver along with a PCCOMPOSITE preconditioner; however, it seems like I need to override the default implementation of MatGetVecs in order to get this to work.

Is there a kosher way to do this?

Thanks,

-- Boyce

From dalcinl at gmail.com Thu Aug 21 11:00:51 2008
From: dalcinl at gmail.com (Lisandro Dalcin)
Date: Thu, 21 Aug 2008 13:00:51 -0300
Subject: MatGetVecs and MATMFFD
In-Reply-To: References: Message-ID:

On Thu, Aug 21, 2008 at 9:01 AM, Boyce Griffith wrote:
> Hi, Folks --
>
> I am using a home-grown "multi-Vec" (i.e., vector of vectors) implementation
> with a PETSc nonlinear solver. I'd like to be able to use PETSc's
> matrix-free Jacobian with this solver along with a PCCOMPOSITE
> preconditioner; however, it seems like I need to override the default
> implementation of MatGetVecs in order to get this to work.
>
> Is there a kosher way to do this?

You can use MatShellSetOperation() and set the operation MATOP_GET_VECS. Despite the name, that function can let you set/change the implementation of Mat operations for any matrix type. Try to make the call to MatShellSetOperation() in the Jacobian routine and let me know if you have trouble.

> Thanks,
>
> -- Boyce

--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594
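A sketch of that call, assuming a user routine UserCreateMultiVec() (hypothetical) that builds a Vec with the home-grown multi-Vec layout:

/* Hypothetical override for the default MatGetVecs() implementation */
PetscErrorCode MatGetVecs_MultiVec(Mat J, Vec *right, Vec *left)
{
  PetscErrorCode ierr;
  if (right) { ierr = UserCreateMultiVec(J, right); CHKERRQ(ierr); }
  if (left)  { ierr = UserCreateMultiVec(J, left);  CHKERRQ(ierr); }
  return 0;
}

/* ... inside the Jacobian routine, once the Mat J exists ... */
ierr = MatShellSetOperation(J, MATOP_GET_VECS, (void (*)(void)) MatGetVecs_MultiVec); CHKERRQ(ierr);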
From griffith at courant.nyu.edu Thu Aug 21 11:23:49 2008
From: griffith at courant.nyu.edu (Boyce Griffith)
Date: Thu, 21 Aug 2008 12:23:49 -0400 (EDT)
Subject: MatGetVecs and MATMFFD
In-Reply-To: References: Message-ID:

Hi, Lisandro --

Is there any way to associate a context with an MFFD matrix? Otherwise, it seems like the implementation of MatGetVecs would need to use global variables in order to create the appropriate vectors.

Or is it possible to get access to the solution and right-hand-side vectors used by an associated SNES and use VecDuplicate on them?

Thanks,

-- Boyce

On Thu, 21 Aug 2008, Lisandro Dalcin wrote:
> You can use MatShellSetOperation() and set the operation
> MATOP_GET_VECS. Despite the name, that function can let you set/change
> the implementation of Mat operations for any matrix type. Try to make
> the call to MatShellSetOperation() in the Jacobian routine and let me
> know if you have trouble.
From dalcinl at gmail.com Thu Aug 21 11:44:51 2008
From: dalcinl at gmail.com (Lisandro Dalcin)
Date: Thu, 21 Aug 2008 13:44:51 -0300
Subject: MatGetVecs and MATMFFD
In-Reply-To: References: Message-ID:

On Thu, Aug 21, 2008 at 1:23 PM, Boyce Griffith wrote:
> Hi, Lisandro --

Hi

> Is there any way to associate a context with an MFFD matrix? Otherwise,
> it seems like the implementation of MatGetVecs would need to use global
> variables in order to create the appropriate vectors.

You can use a PetscContainer (see PetscContainerCreate() and friends) to save your user data; you can then put that container in any PETSc object with PetscObjectCompose(), next retrieve the container with PetscObjectQuery(), and finally recover your user data with PetscContainerGetPointer().

> Or is it possible to get access to the solution and right-hand-side vectors
> used by an associated SNES and use VecDuplicate on them?

You can use SNESGetRhs() and SNESGetSolution(). At least in petsc-dev (not so sure in the last public release, I do not remember), you will get back a reference to the 'b' and 'x' Vec's you passed to 'SNESSolve(snes, b, x)'. You can even use SNESGetSolutionUpdate() for getting the Vec where the solution update for the Newton step is formed.

--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594
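A sketch of the container round trip, assuming a user struct MyCtx attached to the MFFD Mat J under the (hypothetical) name "MyCtx":

typedef struct { PetscInt nvecs; /* ... whatever the user needs ... */ } MyCtx;

PetscContainer container;
MyCtx          *ctx;
PetscErrorCode ierr;

/* attach the user data to the Mat */
ierr = PetscMalloc(sizeof(MyCtx), &ctx); CHKERRQ(ierr);
ctx->nvecs = 3;
ierr = PetscContainerCreate(PETSC_COMM_SELF, &container); CHKERRQ(ierr);
ierr = PetscContainerSetPointer(container, ctx); CHKERRQ(ierr);
ierr = PetscObjectCompose((PetscObject) J, "MyCtx", (PetscObject) container); CHKERRQ(ierr);

/* ... later, e.g. inside the MatGetVecs override, recover it from the Mat ... */
ierr = PetscObjectQuery((PetscObject) J, "MyCtx", (PetscObject *) &container); CHKERRQ(ierr);
ierr = PetscContainerGetPointer(container, (void **) &ctx); CHKERRQ(ierr);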
From griffith at courant.nyu.edu Thu Aug 21 11:58:58 2008
From: griffith at courant.nyu.edu (Boyce Griffith)
Date: Thu, 21 Aug 2008 12:58:58 -0400 (EDT)
Subject: MatGetVecs and MATMFFD
In-Reply-To: References: Message-ID:

On Thu, 21 Aug 2008, Lisandro Dalcin wrote:
> You can use a PetscContainer (see PetscContainerCreate() and friends)
> to save your user data; you can then put that container in any
> PETSc object with PetscObjectCompose(), next retrieve the
> container with PetscObjectQuery(), and finally recover your user data
> with PetscContainerGetPointer().

Sounds like that should do the trick.

> You can use SNESGetRhs() and SNESGetSolution(). At least in petsc-dev
> (not so sure in the last public release, I do not remember), you will get
> back a reference to the 'b' and 'x' Vec's you passed to
> 'SNESSolve(snes, b, x)'. You can even use SNESGetSolutionUpdate() for
> getting the Vec where the solution update for the Newton step is formed.

Right, but if all I have is the Mat, is there a userland function which will return the corresponding SNES?

Thanks,

-- Boyce

From knepley at gmail.com Thu Aug 21 12:09:16 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 21 Aug 2008 12:09:16 -0500
Subject: MatGetVecs and MATMFFD
In-Reply-To: References: Message-ID:

On Thu, Aug 21, 2008 at 11:58 AM, Boyce Griffith wrote:
> Right, but if all I have is the Mat, is there a userland function which will
> return the corresponding SNES?

There is no SNES that corresponds to a Mat; rather the SNES holds a Mat, which is oblivious. The MatGetVecs() routine is there to provide vectors which have a layout that matches the Mat. This is done by using the same PetscMap as the matrix rows.

Matt

> Thanks,
>
> -- Boyce

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

From griffith at courant.nyu.edu Thu Aug 21 12:29:46 2008
From: griffith at courant.nyu.edu (Boyce Griffith)
Date: Thu, 21 Aug 2008 13:29:46 -0400 (EDT)
Subject: MatGetVecs and MATMFFD
In-Reply-To: References: Message-ID:

On Thu, 21 Aug 2008, Matthew Knepley wrote:
> There is no SNES that corresponds to a Mat; rather the SNES holds a Mat,
> which is oblivious. The MatGetVecs() routine is there to provide vectors
> which have a layout that matches the Mat. This is done by using the same
> PetscMap as the matrix rows.

Presumably it isn't *totally* oblivious, otherwise MatCreateMFFD wouldn't take a SNES as an argument?

-- Boyce
From knepley at gmail.com Thu Aug 21 13:51:13 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 21 Aug 2008 13:51:13 -0500
Subject: MatGetVecs and MATMFFD
In-Reply-To: References: Message-ID:

On Thu, Aug 21, 2008 at 12:29 PM, Boyce Griffith wrote:
> Presumably it isn't *totally* oblivious, otherwise MatCreateMFFD wouldn't
> take a SNES as an argument?

I don't think it takes it just to create vectors. The MatMFFD is created by the SNES itself and is guaranteed to be associated with that SNES. However, above you state that you create your matrix in isolation. Thus, I would suggest creating vectors using the PetscMap information: VecCreate(), VecSetSizes(map.n, map.N), VecSetFromOptions(). If it is always associated with a SNES, I would set the SNES as a context like we do in MatMFFD.

Matt

> -- Boyce

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
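A sketch of that recipe, assuming J is the Mat whose row layout the new vector should match:

Vec            v;
PetscInt       m, n, M, N;
PetscErrorCode ierr;

ierr = MatGetLocalSize(J, &m, &n); CHKERRQ(ierr);   /* local row/column sizes  */
ierr = MatGetSize(J, &M, &N); CHKERRQ(ierr);        /* global row/column sizes */

ierr = VecCreate(PETSC_COMM_WORLD, &v); CHKERRQ(ierr);
ierr = VecSetSizes(v, m, M); CHKERRQ(ierr);         /* match the matrix rows   */
ierr = VecSetFromOptions(v); CHKERRQ(ierr);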
From lua.byhh at gmail.com Sun Aug 24 07:18:48 2008
From: lua.byhh at gmail.com (Shengyong)
Date: Sun, 24 Aug 2008 20:18:48 +0800
Subject: small space step lead to floating point exception
Message-ID:

Hi,

Does anyone happen to have met such a convergence problem? I have implemented a petsc solver to simulate the driven cavity flow problem. I use petsc to solve the poisson equation. When I set the space step dx = 1 or 0.01, it works fine. However, when I tried to simulate a meso scale problem, I set dx = 0.0001, dt = 0.00001. I got an error: floating point exception.

Can anyone point out how to solve this problem?

Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China

From bsmith at mcs.anl.gov Sun Aug 24 12:19:58 2008
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Sun, 24 Aug 2008 12:19:58 -0500
Subject: small space step lead to floating point exception
In-Reply-To: References: Message-ID: <81EADC6A-A868-4C80-9DCA-E05244D1C28A@mcs.anl.gov>

You need to determine where the floating point exception occurs. On some computers this is easy: just run the code in the debugger and use the debugger option to trap floating point exceptions. Unfortunately this may not always be possible; in that case I would start by running with -info -snes_monitor -ksp_monitor to see what is happening before the floating point exception that is causing the difficulty.

Barry

On Aug 24, 2008, at 7:18 AM, Shengyong wrote:
> Hi,
>
> Does anyone happen to have met such a convergence problem? I have
> implemented a petsc solver to simulate the driven cavity flow problem.
> I use petsc to solve the poisson equation. When I set the space step
> dx = 1 or 0.01, it works fine. However, when I tried to simulate a meso
> scale problem, I set dx = 0.0001, dt = 0.00001.
> I got an error: floating point exception.
>
> Can anyone point out how to solve this problem?
>
> Pang Shengyong
> Solidification Simulation Lab,
> State Key Lab of Mould & Die Technology,
> Huazhong Univ. of Sci. & Tech. China

From lua.byhh at gmail.com Mon Aug 25 07:11:37 2008
From: lua.byhh at gmail.com (Shengyong)
Date: Mon, 25 Aug 2008 20:11:37 +0800
Subject: small space step lead to floating point exception
In-Reply-To: <81EADC6A-A868-4C80-9DCA-E05244D1C28A@mcs.anl.gov>
References: <81EADC6A-A868-4C80-9DCA-E05244D1C28A@mcs.anl.gov>
Message-ID:

Barry,

Thank you very much!

On Mon, Aug 25, 2008 at 1:19 AM, Barry Smith wrote:
> You need to determine where the floating point exception occurs. On some
> computers this is easy: just run the code in the debugger and use the
> debugger option to trap floating point exceptions. Unfortunately this may
> not always be possible; in that case I would start by running with
> -info -snes_monitor -ksp_monitor to see what is happening before the
> floating point exception that is causing the difficulty.
>
> Barry

--
Pang Shengyong
Solidification Simulation Lab,
State Key Lab of Mould & Die Technology,
Huazhong Univ. of Sci. & Tech. China
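PETSc can help with both steps directly from the command line: -start_in_debugger launches each process under a debugger, and -fp_trap asks PETSc to trap floating point exceptions where the operating system supports it. Assuming the executable is ./cavity (a hypothetical name):

./cavity -start_in_debugger -fp_trap
./cavity -info -snes_monitor -ksp_monitor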
From gaatenek at irisa.fr Tue Aug 26 03:50:47 2008
From: gaatenek at irisa.fr (gaatenek at irisa.fr)
Date: Tue, 26 Aug 2008 10:51:47 +0200 (CEST)
Subject: profiling
Message-ID: <61629.41.202.212.111.1219740647.squirrel@mail.irisa.fr>

Hi,
I am trying to profile my petsc code. In particular, I am trying to obtain a dependence graph of the processors.
Thank you for helping me.
Guy

From knepley at gmail.com Tue Aug 26 06:44:55 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 26 Aug 2008 06:44:55 -0500
Subject: profiling
In-Reply-To: <61629.41.202.212.111.1219740647.squirrel@mail.irisa.fr>
References: <61629.41.202.212.111.1219740647.squirrel@mail.irisa.fr>
Message-ID:

On Tue, Aug 26, 2008 at 3:50 AM, wrote:
> Hi,
> I am trying to profile my petsc code. In particular, I am trying to obtain
> a dependence graph of the processors.

Not sure exactly what you want to plot. However, all the profiling information can be printed at the end of the run using -log_summary.

Thanks,

Matt

> Thank you for helping me.
> Guy

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

From Hung.V.Nguyen at usace.army.mil Wed Aug 27 13:23:52 2008
From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS)
Date: Wed, 27 Aug 2008 13:23:52 -0500
Subject: PETSc convergence criteria
Message-ID:

All,

We run a test case with -ksp_rtol 1.0e-15 and -ksp_max_it 50000. Then we compute the infinity and 2-norms of |A*x - b|, and the results are below. Why are these norms in the range 1e-8 to 1e-10 while -ksp_rtol is set to 1.0e-15? What is the PETSc convergence criterion for the matrix solver?

Thank you in advance,

-Hung

> hvnguyen:jade20% aprun -n 16 ./fw -ksp_type cg -pc_type bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000
> Time in PETSc solver = 0.7661848068237305 sec
> Computed solution - 2 norm of the residual error = 1.0662412498046400E-008
> Computed solution - maximum residual error (infinity norm) = 5.2750692702829838E-010
> Number of Krylov iterations = 257
> Application 281373 resources: utime 0, stime 0

From knepley at gmail.com Wed Aug 27 13:39:16 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 27 Aug 2008 13:39:16 -0500
Subject: PETSc convergence criteria
In-Reply-To: References: Message-ID:

You are using the preconditioned norm for the convergence test. You can use

KSPSetNormType(ksp, KSP_NORM_UNPRECONDITIONED)

However, this will not work for all KSP types.

Matt

On Wed, Aug 27, 2008 at 1:23 PM, Nguyen, Hung V ERDC-ITL-MS wrote:
> We run a test case with -ksp_rtol 1.0e-15 and -ksp_max_it 50000. Then we
> compute the infinity and 2-norms of |A*x - b|, and the results are below.
> Why are these norms in the range 1e-8 to 1e-10 while -ksp_rtol is set to
> 1.0e-15? What is the PETSc convergence criterion for the matrix solver?

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
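For reference, the default KSP convergence test compares the (by default preconditioned) residual norm against the tolerances roughly as follows; this is a sketch of the logic, not the actual PETSc source:

/* rnorm is the current residual norm, rnorm_0 the norm at iteration 0 */
if      (rnorm <= PetscMax(rtol * rnorm_0, abstol)) reason = KSP_CONVERGED_RTOL;
else if (rnorm >= dtol * rnorm_0)                   reason = KSP_DIVERGED_DTOL;
else                                                reason = KSP_CONVERGED_ITERATING;

With a preconditioner like bjacobi, an rtol of 1.0e-15 is therefore applied to the preconditioned residual, which need not match the norms of |A*x - b| computed afterwards.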
From bsmith at mcs.anl.gov Wed Aug 27 13:51:49 2008
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 27 Aug 2008 13:51:49 -0500
Subject: PETSc convergence criteria
In-Reply-To: References: Message-ID:

Also run with -ksp_converged_reason to see why the KSP solver is stopping. You can also run with -ksp_monitor_true_residual to see what is happening to the preconditioned residual norm and the nonpreconditioned residual norm during the computation.

Barry

On Aug 27, 2008, at 1:39 PM, Matthew Knepley wrote:
> You are using the preconditioned norm for the convergence test. You can use
>
> KSPSetNormType(ksp, KSP_NORM_UNPRECONDITIONED)
>
> However, this will not work for all KSP types.
>
> Matt
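Combining these suggestions with the original run, the diagnostic command would look like:

aprun -n 16 ./fw -ksp_type cg -pc_type bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000 -ksp_converged_reason -ksp_monitor_true_residual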
From Hung.V.Nguyen at usace.army.mil Wed Aug 27 14:23:34 2008
From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS)
Date: Wed, 27 Aug 2008 14:23:34 -0500
Subject: PETSc convergence criteria
In-Reply-To: References: Message-ID:

Barry and Matt,

Thank you for the info. I reran and got the information I need.

-Hung

-----Original Message-----
From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Barry Smith
Sent: Wednesday, August 27, 2008 1:52 PM
To: petsc-users at mcs.anl.gov
Subject: Re: PETSc convergence criteria

Also run with -ksp_converged_reason to see why the KSP solver is stopping. You can also run with -ksp_monitor_true_residual to see what is happening to the preconditioned residual norm and the nonpreconditioned residual norm during the computation.

Barry

From z.sheng at ewi.tudelft.nl Thu Aug 28 11:07:37 2008
From: z.sheng at ewi.tudelft.nl (zhifeng sheng)
Date: Thu, 28 Aug 2008 18:07:37 +0200
Subject: const char* matrix ordering
In-Reply-To: References: Message-ID: <48B6CD49.2040102@ewi.tudelft.nl>

Dear Petsc-developer

I found MatOrderingType is defined as

#define MatOrderingType char*

therefore, g++ complains with a warning when I do something like

MatOrderingType x = MATORDERING_ND

So, I don't know whether it should be defined as

#define MatOrderingType const char*

Best regards
Zhifeng

From balay at mcs.anl.gov Thu Aug 28 11:09:27 2008
From: balay at mcs.anl.gov (Satish Balay)
Date: Thu, 28 Aug 2008 11:09:27 -0500 (CDT)
Subject: const char* matrix ordering
In-Reply-To: <48B6CD49.2040102@ewi.tudelft.nl>
References: <48B6CD49.2040102@ewi.tudelft.nl>
Message-ID:

On Thu, 28 Aug 2008, zhifeng sheng wrote:
> I found MatOrderingType is defined as
>
> #define MatOrderingType char*
>
> therefore, g++ complains with a warning when I do something like
>
> MatOrderingType x = MATORDERING_ND
>
> So, I don't know whether it should be defined as
>
> #define MatOrderingType const char*

Due to other issues with this definition, we have to keep the above definition for MatOrderingType. So the user code should be:

const MatOrderingType x = MATORDERING_ND

Satish

From pbauman at ices.utexas.edu Sat Aug 30 09:21:10 2008
From: pbauman at ices.utexas.edu (Paul T. Bauman)
Date: Sat, 30 Aug 2008 09:21:10 -0500
Subject: compiling PETSc with Intel MKL 10.0.1.14
In-Reply-To: References: <485970E1.1090104@gmail.com> <381F69EF-1C27-420C-8337-0EAEB70093E0@mcs.anl.gov> <48A5CD78.9070201@ices.utexas.edu>
Message-ID: <48B95756.7020702@ices.utexas.edu>

Sorry this took so long to get around to doing. So it turns out that there's a newer version of 2.3.3-p13 posted at the PETSc ftp server. This worked flawlessly with the new MKL. I guess it got fixed and checked in, but not under a new patch?

Thanks,

Paul

Barry Smith wrote:
>
> Please send the configure.log to petsc-maint at mcs.anl.gov
>
> Barry
>
> On Aug 15, 2008, at 1:39 PM, Paul T. Bauman wrote:
>
>> Was there ever a fix/workaround introduced for this? I'm using
>> 2.3.3-p13 and I'm having trouble getting the config to recognize MKL
>> 10.0.3.020.
>>
>> Thanks,
>>
>> Paul
>>
>> Barry Smith wrote:
>>>
>>> Could you email to petsc-maint at mcs.anl.gov ALL the messages as to
>>> what goes wrong with our current linking so we can fix it?
>>>
>>> Thanks
>>>
>>> Barry
>>>
>>> On Jun 18, 2008, at 3:32 PM, Randall Mackie wrote:
>>>
>>>> We've upgraded Intel MKL to version 10.0, but in this version, Intel has
>>>> changed how libraries are supposed to be linked. For example, libmkl_lapack.a
>>>> is a dummy library, but that's what the PETSc configure script looks for.
>>>> The documentation says, for example, to compile LAPACK in the static case, use
>>>> libmkl_lapack.a libmkl_em64t.a
>>>>
>>>> and in the layered pure case to use
>>>> libmkl_intel_lp64.a libmkl_intel_thread.a libmkl_core.a
>>>>
>>>> However, the PETSc configuration wants -lmkl_lapack -lmkl -lguide -lpthread
>>>>
>>>> Any suggestions are appreciated.
>>>>
>>>> Randy

From balay at mcs.anl.gov Sat Aug 30 09:39:33 2008
From: balay at mcs.anl.gov (Satish Balay)
Date: Sat, 30 Aug 2008 09:39:33 -0500 (CDT)
Subject: compiling PETSc with Intel MKL 10.0.1.14
In-Reply-To: <48B95756.7020702@ices.utexas.edu>
References: <485970E1.1090104@gmail.com> <381F69EF-1C27-420C-8337-0EAEB70093E0@mcs.anl.gov> <48A5CD78.9070201@ices.utexas.edu> <48B95756.7020702@ices.utexas.edu>
Message-ID:

On Sat, 30 Aug 2008, Paul T. Bauman wrote:

> Sorry this took so long to get around to doing. So it turns out that there's a
> newer version of 2.3.3-p13 posted at the PETSc ftp server. This worked
> flawlessly with the new MKL. I guess it got fixed and checked in, but not
> under a new patch?

I don't remember generating 2 tarballs with the same patchlevel. If it happened, then the first tarball must not have existed for more than an hour on the ftp site.

Satish
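For anyone hitting the same MKL 10 issue with an older tarball, one workaround is to hand configure the layered link line instead of letting configure probe for libmkl_lapack.a; a hypothetical invocation (the install path and exact library set will vary by MKL version and threading choice) might look like:

./config/configure.py --with-blas-lapack-lib="-L/opt/intel/mkl/10.0/lib/em64t -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lguide -lpthread"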