From danyang.su at gmail.com Mon Dec 1 01:46:22 2014 From: danyang.su at gmail.com (Danyang Su) Date: Sun, 30 Nov 2014 23:46:22 -0800 Subject: [petsc-users] Question on the compiler flags in Makefile Message-ID: <547C1CCE.4060606@gmail.com> Hi All, I have a PETSc application that needs additional compiler flags to build a hybrid MPI-OpenMP parallel application on the WestGrid supercomputer (Canada) system. The code and makefile work fine on my local machine for both Windows and Linux, but when the OpenMP and hybrid versions are compiled on the WestGrid Orcinus system, the OpenMP parallelism does not take effect; only the MPI parallelism does. There is no error while compiling the code. I am not sure whether something is wrong with the makefile or with some other setting. The compiler flags needed to build the hybrid MPI-OpenMP version on WestGrid Orcinus are "-shared-intel -openmp -O2 -xSSSE3 -axSSE4.2,SSE4.1 -ip". For the sequential version or MPI parallel version, these flags are not needed. Would anybody help check whether the compiler flags (the DFCFLAG line in the makefile below) are correct? Thanks, Danyang The makefile is shown below:

include ${PETSC_DIR}/conf/variables
include ${PETSC_DIR}/conf/rules

#FC = ifort
SRC =./../

# Additional flags that may be required by the compiler ...
# This is required for the OpenMP parallel version and Hybrid MPI-OpenMP parallel version
# not necessary for the sequential version and MPI version
DFCFLAG = -shared-intel -openmp -O2 -xSSSE3 -axSSE4.2,SSE4.1 -ip

#Flag for WestGrid Orcinus System
#Load PETSc module before make
#module load /global/system/Modules/modulefiles/intel-2011/petsc

FPPFLAGS = -DLINUX -DRELEASE -DPETSC -DMPI -DOPENMP

SOURCES = $(SRC)gas_advection/relpfsat_g.o\
          $(SRC)int_h_ovendry.o\
          $(SRC)dhconst.o\
          ...

min3p: $(SOURCES) chkopts
        -${FLINKER} $(FPPFLAGS) $(DFCFLAG) -o ex $(SOURCES) ${PETSC_LIB} ${DLIB}

-------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Mon Dec 1 10:21:34 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 11:21:34 -0500 Subject: [petsc-users] valgrind Message-ID: Hi All, How to enable the valgrind flag? I installed that by myself locally. It appears you do not have valgrind installed on your system. We HIGHLY recommend you install it from www.valgrind.org Or install valgrind-devel or equivalent using your package manager. Then rerun ./configure Thanks, Paul Huaibao (Paul) Zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 1 10:28:36 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 1 Dec 2014 10:28:36 -0600 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: On Mon, Dec 1, 2014 at 10:21 AM, paul zhang wrote: > Hi All, > > How to enable the valgrind flag? I installed that by myself locally. > > It appears you do not have valgrind installed on your system. > > > We HIGHLY recommend you install it from www.valgrind.org > > > Or install valgrind-devel or equivalent using > your package manager. > > Then rerun ./configure > > > > We could not find the valgrind header (valgrind.h). You can use --with-valgrind-dir= so that it can find the path/include/valgrind/valgrind.h Thanks, Matt > Thanks, > Paul > > > Huaibao (Paul) Zhang > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Mon Dec 1 10:43:35 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 11:43:35 -0500 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: Matt, Thanks for your reply. I am able to compile PETSc. And I went through the default tests. Now when I go to my code, I got problems. [hzh225 at dlxlogin2-1 petsc-3.5]$ make all [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): catastrophic error: cannot open source file "valgrind/valgrind.h" # include ^ compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc (code 4) make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 make[1]: *** [CMakeFiles/kats.dir/all] Error 2 make: *** [all] Error 2 Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley wrote: > On Mon, Dec 1, 2014 at 10:21 AM, paul zhang > wrote: > >> Hi All, >> >> How to enable the valgrind flag? I installed that by myself locally. >> >> It appears you do not have valgrind installed on your system. >> >> >> We HIGHLY recommend you install it from www.valgrind.org >> >> >> Or install valgrind-devel or equivalent using >> your package manager. >> >> Then rerun ./configure >> >> >> >> > > We could not find the valgrind header (valgrind.h). You can use > > --with-valgrind-dir= > > so that it can find the path/include/valgrind/valgrind.h > > Thanks, > > Matt > > >> Thanks, >> Paul >> >> >> Huaibao (Paul) Zhang >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 1 10:55:29 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 1 Dec 2014 10:55:29 -0600 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: On Mon, Dec 1, 2014 at 10:43 AM, paul zhang wrote: > Matt, > > Thanks for your reply. I am able to compile PETSc. And I went through the > default tests. Now when I go to my code, I got problems. > I am assuming that you put flags in your makefiles rather than using the PETSc makefiles. You need all the includes you get from make getincludedirs Matt > [hzh225 at dlxlogin2-1 petsc-3.5]$ make all > [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o > /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): catastrophic > error: cannot open source file "valgrind/valgrind.h" > # include > ^ > > compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc (code 4) > make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 > make[1]: *** [CMakeFiles/kats.dir/all] Error 2 > make: *** [all] Error 2 > > > Huaibao (Paul) Zhang > *Gas Surface Interactions Lab* > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > *Office*: 216 Ralph G. Anderson Building > *Web*:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley > wrote: > >> On Mon, Dec 1, 2014 at 10:21 AM, paul zhang >> wrote: >> >>> Hi All, >>> >>> How to enable the valgrind flag? I installed that by myself locally. 
>>> >>> It appears you do not have valgrind installed on your system. >>> >>> >>> We HIGHLY recommend you install it from >>> www.valgrind.org >>> >>> Or install valgrind-devel >>> or equivalent using your package manager. >>> >>> Then rerun >>> ./configure >>> >>> >>> >> >> We could not find the valgrind header (valgrind.h). You can use >> >> --with-valgrind-dir= >> >> so that it can find the path/include/valgrind/valgrind.h >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> Paul >>> >>> >>> Huaibao (Paul) Zhang >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Mon Dec 1 12:28:12 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 13:28:12 -0500 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: I did use the PETSc makefiles. Should I include the valgrind path in my own make file again? [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd /home/hzh225/LIB_CFD/nP/petsc-3.5.2 [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley wrote: > On Mon, Dec 1, 2014 at 10:43 AM, paul zhang > wrote: > >> Matt, >> >> Thanks for your reply. I am able to compile PETSc. And I went through the >> default tests. Now when I go to my code, I got problems. >> > > I am assuming that you put flags in your makefiles rather than using the > PETSc makefiles. You need all the includes you get from > > make getincludedirs > > Matt > > >> [hzh225 at dlxlogin2-1 petsc-3.5]$ make all >> [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o >> /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): >> catastrophic error: cannot open source file "valgrind/valgrind.h" >> # include >> ^ >> >> compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc (code >> 4) >> make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 >> make[1]: *** [CMakeFiles/kats.dir/all] Error 2 >> make: *** [all] Error 2 >> >> >> Huaibao (Paul) Zhang >> *Gas Surface Interactions Lab* >> Department of Mechanical Engineering >> University of Kentucky, >> Lexington, >> KY, 40506-0503 >> *Office*: 216 Ralph G. Anderson Building >> *Web*:gsil.engineering.uky.edu >> >> On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley >> wrote: >> >>> On Mon, Dec 1, 2014 at 10:21 AM, paul zhang >>> wrote: >>> >>>> Hi All, >>>> >>>> How to enable the valgrind flag? I installed that by myself locally. >>>> >>>> It appears you do not have valgrind installed on your system. >>>> >>>> >>>> We HIGHLY recommend you install it from >>>> www.valgrind.org >>>> >>>> Or install valgrind-devel >>>> or equivalent using your package manager. >>>> >>>> Then rerun >>>> ./configure >>>> >>>> >>>> >>> >>> We could not find the valgrind header (valgrind.h). 
You can use >>> >>> --with-valgrind-dir= >>> >>> so that it can find the path/include/valgrind/valgrind.h >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thanks, >>>> Paul >>>> >>>> >>>> Huaibao (Paul) Zhang >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Mon Dec 1 12:33:02 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 13:33:02 -0500 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: That is my new configuration. Is that OK? export PETSC_DIR=`pwd` export PETSC_ARCH=linux-gnu-intel ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 1:28 PM, paul zhang wrote: > I did use the PETSc makefiles. Should I include the valgrind path in my > own make file again? > > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd > /home/hzh225/LIB_CFD/nP/petsc-3.5.2 > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs > -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include > -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include > -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include > > Huaibao (Paul) Zhang > *Gas Surface Interactions Lab* > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > *Office*: 216 Ralph G. Anderson Building > *Web*:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley > wrote: > >> On Mon, Dec 1, 2014 at 10:43 AM, paul zhang >> wrote: >> >>> Matt, >>> >>> Thanks for your reply. I am able to compile PETSc. And I went through >>> the default tests. Now when I go to my code, I got problems. >>> >> >> I am assuming that you put flags in your makefiles rather than using the >> PETSc makefiles. You need all the includes you get from >> >> make getincludedirs >> >> Matt >> >> >>> [hzh225 at dlxlogin2-1 petsc-3.5]$ make all >>> [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o >>> /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): >>> catastrophic error: cannot open source file "valgrind/valgrind.h" >>> # include >>> ^ >>> >>> compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc (code >>> 4) >>> make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 >>> make[1]: *** [CMakeFiles/kats.dir/all] Error 2 >>> make: *** [all] Error 2 >>> >>> >>> Huaibao (Paul) Zhang >>> *Gas Surface Interactions Lab* >>> Department of Mechanical Engineering >>> University of Kentucky, >>> Lexington, >>> KY, 40506-0503 >>> *Office*: 216 Ralph G. Anderson Building >>> *Web*:gsil.engineering.uky.edu >>> >>> On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley >>> wrote: >>> >>>> On Mon, Dec 1, 2014 at 10:21 AM, paul zhang >>>> wrote: >>>> >>>>> Hi All, >>>>> >>>>> How to enable the valgrind flag? 
I installed that by myself locally. >>>>> >>>>> It appears you do not have valgrind installed on your system. >>>>> >>>>> >>>>> We HIGHLY recommend you install it from >>>>> www.valgrind.org >>>>> >>>>> Or install >>>>> valgrind-devel or equivalent using your package manager. >>>>> >>>>> >>>>> Then rerun ./configure >>>>> >>>>> >>>>> >>>>> >>>> >>>> We could not find the valgrind header (valgrind.h). You can use >>>> >>>> --with-valgrind-dir= >>>> >>>> so that it can find the path/include/valgrind/valgrind.h >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Thanks, >>>>> Paul >>>>> >>>>> >>>>> Huaibao (Paul) Zhang >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Mon Dec 1 12:34:00 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 13:34:00 -0500 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: Sorry. A typo is found export PETSC_DIR=`pwd` export PETSC_ARCH=linux-gnu-intel ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 1:33 PM, paul zhang wrote: > That is my new configuration. Is that OK? > > export PETSC_DIR=`pwd` > export PETSC_ARCH=linux-gnu-intel > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran > --download-fblaslapack --download-mpich > --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 > --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ > > > Huaibao (Paul) Zhang > *Gas Surface Interactions Lab* > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > *Office*: 216 Ralph G. Anderson Building > *Web*:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 1:28 PM, paul zhang > wrote: > >> I did use the PETSc makefiles. Should I include the valgrind path in my >> own make file again? >> >> [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd >> /home/hzh225/LIB_CFD/nP/petsc-3.5.2 >> [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs >> -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include >> -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include >> -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include >> >> Huaibao (Paul) Zhang >> *Gas Surface Interactions Lab* >> Department of Mechanical Engineering >> University of Kentucky, >> Lexington, >> KY, 40506-0503 >> *Office*: 216 Ralph G. Anderson Building >> *Web*:gsil.engineering.uky.edu >> >> On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley >> wrote: >> >>> On Mon, Dec 1, 2014 at 10:43 AM, paul zhang >>> wrote: >>> >>>> Matt, >>>> >>>> Thanks for your reply. I am able to compile PETSc. And I went through >>>> the default tests. Now when I go to my code, I got problems. 
>>>> >>> >>> I am assuming that you put flags in your makefiles rather than using the >>> PETSc makefiles. You need all the includes you get from >>> >>> make getincludedirs >>> >>> Matt >>> >>> >>>> [hzh225 at dlxlogin2-1 petsc-3.5]$ make all >>>> [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o >>>> /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): >>>> catastrophic error: cannot open source file "valgrind/valgrind.h" >>>> # include >>>> ^ >>>> >>>> compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc >>>> (code 4) >>>> make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 >>>> make[1]: *** [CMakeFiles/kats.dir/all] Error 2 >>>> make: *** [all] Error 2 >>>> >>>> >>>> Huaibao (Paul) Zhang >>>> *Gas Surface Interactions Lab* >>>> Department of Mechanical Engineering >>>> University of Kentucky, >>>> Lexington, >>>> KY, 40506-0503 >>>> *Office*: 216 Ralph G. Anderson Building >>>> *Web*:gsil.engineering.uky.edu >>>> >>>> On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley >>>> wrote: >>>> >>>>> On Mon, Dec 1, 2014 at 10:21 AM, paul zhang >>>>> wrote: >>>>> >>>>>> Hi All, >>>>>> >>>>>> How to enable the valgrind flag? I installed that by myself locally. >>>>>> >>>>>> It appears you do not have valgrind installed on your system. >>>>>> >>>>>> >>>>>> We HIGHLY recommend you install it from >>>>>> www.valgrind.org >>>>>> >>>>>> Or install >>>>>> valgrind-devel or equivalent using your package manager. >>>>>> >>>>>> >>>>>> Then rerun ./configure >>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>>> We could not find the valgrind header (valgrind.h). You can use >>>>> >>>>> --with-valgrind-dir= >>>>> >>>>> so that it can find the path/include/valgrind/valgrind.h >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thanks, >>>>>> Paul >>>>>> >>>>>> >>>>>> Huaibao (Paul) Zhang >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 1 12:34:13 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 1 Dec 2014 12:34:13 -0600 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: On Mon, Dec 1, 2014 at 12:33 PM, paul zhang wrote: > That is my new configuration. Is that OK? > > export PETSC_DIR=`pwd` > export PETSC_ARCH=linux-gnu-intel > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran > --download-fblaslapack --download-mpich > --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 > --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ > That looks correct. When I say "using PETSc makefiles", I mean for your own project. You appear to be using CMake. Matt > > Huaibao (Paul) Zhang > *Gas Surface Interactions Lab* > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > *Office*: 216 Ralph G. Anderson Building > *Web*:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 1:28 PM, paul zhang > wrote: > >> I did use the PETSc makefiles. Should I include the valgrind path in my >> own make file again? 
>> >> [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd >> /home/hzh225/LIB_CFD/nP/petsc-3.5.2 >> [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs >> -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include >> -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include >> -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include >> >> Huaibao (Paul) Zhang >> *Gas Surface Interactions Lab* >> Department of Mechanical Engineering >> University of Kentucky, >> Lexington, >> KY, 40506-0503 >> *Office*: 216 Ralph G. Anderson Building >> *Web*:gsil.engineering.uky.edu >> >> On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley >> wrote: >> >>> On Mon, Dec 1, 2014 at 10:43 AM, paul zhang >>> wrote: >>> >>>> Matt, >>>> >>>> Thanks for your reply. I am able to compile PETSc. And I went through >>>> the default tests. Now when I go to my code, I got problems. >>>> >>> >>> I am assuming that you put flags in your makefiles rather than using the >>> PETSc makefiles. You need all the includes you get from >>> >>> make getincludedirs >>> >>> Matt >>> >>> >>>> [hzh225 at dlxlogin2-1 petsc-3.5]$ make all >>>> [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o >>>> /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): >>>> catastrophic error: cannot open source file "valgrind/valgrind.h" >>>> # include >>>> ^ >>>> >>>> compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc >>>> (code 4) >>>> make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 >>>> make[1]: *** [CMakeFiles/kats.dir/all] Error 2 >>>> make: *** [all] Error 2 >>>> >>>> >>>> Huaibao (Paul) Zhang >>>> *Gas Surface Interactions Lab* >>>> Department of Mechanical Engineering >>>> University of Kentucky, >>>> Lexington, >>>> KY, 40506-0503 >>>> *Office*: 216 Ralph G. Anderson Building >>>> *Web*:gsil.engineering.uky.edu >>>> >>>> On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley >>>> wrote: >>>> >>>>> On Mon, Dec 1, 2014 at 10:21 AM, paul zhang >>>>> wrote: >>>>> >>>>>> Hi All, >>>>>> >>>>>> How to enable the valgrind flag? I installed that by myself locally. >>>>>> >>>>>> It appears you do not have valgrind installed on your system. >>>>>> >>>>>> >>>>>> We HIGHLY recommend you install it from >>>>>> www.valgrind.org >>>>>> >>>>>> Or install >>>>>> valgrind-devel or equivalent using your package manager. >>>>>> >>>>>> >>>>>> Then rerun ./configure >>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>>> We could not find the valgrind header (valgrind.h). You can use >>>>> >>>>> --with-valgrind-dir= >>>>> >>>>> so that it can find the path/include/valgrind/valgrind.h >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thanks, >>>>>> Paul >>>>>> >>>>>> >>>>>> Huaibao (Paul) Zhang >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From paulhuaizhang at gmail.com Mon Dec 1 12:55:44 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 13:55:44 -0500 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: Matt, Sorry to poke you again. I am in a dilemma. If I use ./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpif77 --download-fblaslapack --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ Then I am told to TESTING: checkMPICompilerOverride from config.setCompilers(config/BuildSystem/config/setCompilers.py:1501) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- --with-cc=mpicc is specified with --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3. However /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc exists and should be the prefered compiler! Suggest not specifying --with-cc option so that configure can use /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc instead. ******************************************************************************* However if I skip those compilers, ./configure --download-fblaslapack --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 My problem now is =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: checkFortranCompiler from config.setCompilers(config/BuildSystem/config/setCompilers.py:910) ******************************************************************************* UNABLE to EXECUTE BINARIES for ./configure ------------------------------------------------------------------------------- Cannot run executables created with FC. If this machine uses a batch system to submit jobs you will need to configure using ./configure with the additional option --with-batch. Otherwise there is problem with the compilers. Can you compile and run code with your C/C++ (and maybe Fortran) compilers? Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 1:34 PM, Matthew Knepley wrote: > On Mon, Dec 1, 2014 at 12:33 PM, paul zhang > wrote: > >> That is my new configuration. Is that OK? >> >> export PETSC_DIR=`pwd` >> export PETSC_ARCH=linux-gnu-intel >> ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran >> --download-fblaslapack --download-mpich >> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 >> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ >> > > That looks correct. > > When I say "using PETSc makefiles", I mean for your own project. You > appear to be using CMake. > > Matt > > >> >> Huaibao (Paul) Zhang >> *Gas Surface Interactions Lab* >> Department of Mechanical Engineering >> University of Kentucky, >> Lexington, >> KY, 40506-0503 >> *Office*: 216 Ralph G. Anderson Building >> *Web*:gsil.engineering.uky.edu >> >> On Mon, Dec 1, 2014 at 1:28 PM, paul zhang >> wrote: >> >>> I did use the PETSc makefiles. Should I include the valgrind path in my >>> own make file again? 
>>> >>> [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd >>> /home/hzh225/LIB_CFD/nP/petsc-3.5.2 >>> [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs >>> -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include >>> -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include >>> -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include >>> >>> Huaibao (Paul) Zhang >>> *Gas Surface Interactions Lab* >>> Department of Mechanical Engineering >>> University of Kentucky, >>> Lexington, >>> KY, 40506-0503 >>> *Office*: 216 Ralph G. Anderson Building >>> *Web*:gsil.engineering.uky.edu >>> >>> On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley >>> wrote: >>> >>>> On Mon, Dec 1, 2014 at 10:43 AM, paul zhang >>>> wrote: >>>> >>>>> Matt, >>>>> >>>>> Thanks for your reply. I am able to compile PETSc. And I went through >>>>> the default tests. Now when I go to my code, I got problems. >>>>> >>>> >>>> I am assuming that you put flags in your makefiles rather than using >>>> the PETSc makefiles. You need all the includes you get from >>>> >>>> make getincludedirs >>>> >>>> Matt >>>> >>>> >>>>> [hzh225 at dlxlogin2-1 petsc-3.5]$ make all >>>>> [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o >>>>> /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): >>>>> catastrophic error: cannot open source file "valgrind/valgrind.h" >>>>> # include >>>>> ^ >>>>> >>>>> compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc >>>>> (code 4) >>>>> make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 >>>>> make[1]: *** [CMakeFiles/kats.dir/all] Error 2 >>>>> make: *** [all] Error 2 >>>>> >>>>> >>>>> Huaibao (Paul) Zhang >>>>> *Gas Surface Interactions Lab* >>>>> Department of Mechanical Engineering >>>>> University of Kentucky, >>>>> Lexington, >>>>> KY, 40506-0503 >>>>> *Office*: 216 Ralph G. Anderson Building >>>>> *Web*:gsil.engineering.uky.edu >>>>> >>>>> On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley >>>>> wrote: >>>>> >>>>>> On Mon, Dec 1, 2014 at 10:21 AM, paul zhang >>>>>> wrote: >>>>>> >>>>>>> Hi All, >>>>>>> >>>>>>> How to enable the valgrind flag? I installed that by myself locally. >>>>>>> >>>>>>> It appears you do not have valgrind installed on your system. >>>>>>> >>>>>>> >>>>>>> We HIGHLY recommend you install it from >>>>>>> www.valgrind.org >>>>>>> >>>>>>> Or install >>>>>>> valgrind-devel or equivalent using your package manager. >>>>>>> >>>>>>> >>>>>>> Then rerun ./configure >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> We could not find the valgrind header (valgrind.h). You can use >>>>>> >>>>>> --with-valgrind-dir= >>>>>> >>>>>> so that it can find the path/include/valgrind/valgrind.h >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Thanks, >>>>>>> Paul >>>>>>> >>>>>>> >>>>>>> Huaibao (Paul) Zhang >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Mon Dec 1 12:57:40 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 1 Dec 2014 12:57:40 -0600 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: On Mon, Dec 1, 2014 at 12:55 PM, paul zhang wrote: > Matt, > > Sorry to poke you again. I am in a dilemma. > > If I use > > ./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpif77 > --download-fblaslapack > --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 > --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ > > > Then I am told to > > TESTING: checkMPICompilerOverride from > config.setCompilers(config/BuildSystem/config/setCompilers.py:1501) > > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > > ------------------------------------------------------------------------------- > --with-cc=mpicc is specified with > --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3. However > /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc exists and should be the > prefered compiler! Suggest not specifying --with-cc option so that > configure can use /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc instead. > > ******************************************************************************* > > > However if I skip those compilers, > > ./configure --download-fblaslapack > --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 > --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 > > > My problem now is > > > =============================================================================== > Configuring PETSc to compile on your system > > > =============================================================================== > TESTING: checkFortranCompiler from > config.setCompilers(config/BuildSystem/config/setCompilers.py:910) > > > ******************************************************************************* > UNABLE to EXECUTE BINARIES for ./configure > > ------------------------------------------------------------------------------- > Cannot run executables created with FC. If this machine uses a batch > system > to submit jobs you will need to configure using ./configure with the > additional option --with-batch. > Otherwise there is problem with the compilers. Can you compile and run > code with your C/C++ (and maybe Fortran) compilers? > It looks like your Fortran is broken here. Send configure.log so we can see what the problem is. If you do not need Fortran, use --with-fc=0 in the configuration. Thanks, matt > > Huaibao (Paul) Zhang > *Gas Surface Interactions Lab* > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > *Office*: 216 Ralph G. Anderson Building > *Web*:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 1:34 PM, Matthew Knepley wrote: > >> On Mon, Dec 1, 2014 at 12:33 PM, paul zhang >> wrote: >> >>> That is my new configuration. Is that OK? >>> >>> export PETSC_DIR=`pwd` >>> export PETSC_ARCH=linux-gnu-intel >>> ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran >>> --download-fblaslapack --download-mpich >>> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 >>> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ >>> >> >> That looks correct. >> >> When I say "using PETSc makefiles", I mean for your own project. You >> appear to be using CMake. 
>> >> Matt >> >> >>> >>> Huaibao (Paul) Zhang >>> *Gas Surface Interactions Lab* >>> Department of Mechanical Engineering >>> University of Kentucky, >>> Lexington, >>> KY, 40506-0503 >>> *Office*: 216 Ralph G. Anderson Building >>> *Web*:gsil.engineering.uky.edu >>> >>> On Mon, Dec 1, 2014 at 1:28 PM, paul zhang >>> wrote: >>> >>>> I did use the PETSc makefiles. Should I include the valgrind path in my >>>> own make file again? >>>> >>>> [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd >>>> /home/hzh225/LIB_CFD/nP/petsc-3.5.2 >>>> [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs >>>> -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include >>>> -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include >>>> -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include >>>> >>>> Huaibao (Paul) Zhang >>>> *Gas Surface Interactions Lab* >>>> Department of Mechanical Engineering >>>> University of Kentucky, >>>> Lexington, >>>> KY, 40506-0503 >>>> *Office*: 216 Ralph G. Anderson Building >>>> *Web*:gsil.engineering.uky.edu >>>> >>>> On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley >>>> wrote: >>>> >>>>> On Mon, Dec 1, 2014 at 10:43 AM, paul zhang >>>>> wrote: >>>>> >>>>>> Matt, >>>>>> >>>>>> Thanks for your reply. I am able to compile PETSc. And I went through >>>>>> the default tests. Now when I go to my code, I got problems. >>>>>> >>>>> >>>>> I am assuming that you put flags in your makefiles rather than using >>>>> the PETSc makefiles. You need all the includes you get from >>>>> >>>>> make getincludedirs >>>>> >>>>> Matt >>>>> >>>>> >>>>>> [hzh225 at dlxlogin2-1 petsc-3.5]$ make all >>>>>> [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o >>>>>> /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): >>>>>> catastrophic error: cannot open source file "valgrind/valgrind.h" >>>>>> # include >>>>>> ^ >>>>>> >>>>>> compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc >>>>>> (code 4) >>>>>> make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 >>>>>> make[1]: *** [CMakeFiles/kats.dir/all] Error 2 >>>>>> make: *** [all] Error 2 >>>>>> >>>>>> >>>>>> Huaibao (Paul) Zhang >>>>>> *Gas Surface Interactions Lab* >>>>>> Department of Mechanical Engineering >>>>>> University of Kentucky, >>>>>> Lexington, >>>>>> KY, 40506-0503 >>>>>> *Office*: 216 Ralph G. Anderson Building >>>>>> *Web*:gsil.engineering.uky.edu >>>>>> >>>>>> On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley >>>>>> wrote: >>>>>> >>>>>>> On Mon, Dec 1, 2014 at 10:21 AM, paul zhang >>>>>> > wrote: >>>>>>> >>>>>>>> Hi All, >>>>>>>> >>>>>>>> How to enable the valgrind flag? I installed that by myself >>>>>>>> locally. >>>>>>>> >>>>>>>> It appears you do not have valgrind installed on your system. >>>>>>>> >>>>>>>> >>>>>>>> We HIGHLY recommend you install it from >>>>>>>> www.valgrind.org >>>>>>>> >>>>>>>> Or install >>>>>>>> valgrind-devel or equivalent using your package manager. >>>>>>>> >>>>>>>> >>>>>>>> Then rerun ./configure >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> We could not find the valgrind header (valgrind.h). You can use >>>>>>> >>>>>>> --with-valgrind-dir= >>>>>>> >>>>>>> so that it can find the path/include/valgrind/valgrind.h >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> >>>>>>>> Thanks, >>>>>>>> Paul >>>>>>>> >>>>>>>> >>>>>>>> Huaibao (Paul) Zhang >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. 
>>>>>>> -- Norbert Wiener >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Dec 1 12:59:32 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 1 Dec 2014 12:59:32 -0600 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: Send configure.log for the ./configure with ./configure --download-fblaslapack --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 Barry > On Dec 1, 2014, at 12:55 PM, paul zhang wrote: > > Matt, > > Sorry to poke you again. I am in a dilemma. > > If I use > > ./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpif77 --download-fblaslapack --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ > > > Then I am told to > > TESTING: checkMPICompilerOverride from config.setCompilers(config/BuildSystem/config/setCompilers.py:1501) ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > ------------------------------------------------------------------------------- > --with-cc=mpicc is specified with --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3. However /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc exists and should be the prefered compiler! Suggest not specifying --with-cc option so that configure can use /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc instead. > ******************************************************************************* > > > However if I skip those compilers, > > ./configure --download-fblaslapack --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 > > > My problem now is > > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > TESTING: checkFortranCompiler from config.setCompilers(config/BuildSystem/config/setCompilers.py:910) ******************************************************************************* > UNABLE to EXECUTE BINARIES for ./configure > ------------------------------------------------------------------------------- > Cannot run executables created with FC. If this machine uses a batch system > to submit jobs you will need to configure using ./configure with the additional option --with-batch. > Otherwise there is problem with the compilers. Can you compile and run code with your C/C++ (and maybe Fortran) compilers? > > > > > > > > > > > > > > Huaibao (Paul) Zhang > Gas Surface Interactions Lab > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > Office: 216 Ralph G. 
Anderson Building > Web:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 1:34 PM, Matthew Knepley wrote: > On Mon, Dec 1, 2014 at 12:33 PM, paul zhang wrote: > That is my new configuration. Is that OK? > > export PETSC_DIR=`pwd` > export PETSC_ARCH=linux-gnu-intel > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ > > That looks correct. > > When I say "using PETSc makefiles", I mean for your own project. You appear to be using CMake. > > Matt > > > Huaibao (Paul) Zhang > Gas Surface Interactions Lab > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > Office: 216 Ralph G. Anderson Building > Web:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 1:28 PM, paul zhang wrote: > I did use the PETSc makefiles. Should I include the valgrind path in my own make file again? > > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd > /home/hzh225/LIB_CFD/nP/petsc-3.5.2 > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs > -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include > > Huaibao (Paul) Zhang > Gas Surface Interactions Lab > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > Office: 216 Ralph G. Anderson Building > Web:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley wrote: > On Mon, Dec 1, 2014 at 10:43 AM, paul zhang wrote: > Matt, > > Thanks for your reply. I am able to compile PETSc. And I went through the default tests. Now when I go to my code, I got problems. > > I am assuming that you put flags in your makefiles rather than using the PETSc makefiles. You need all the includes you get from > > make getincludedirs > > Matt > > [hzh225 at dlxlogin2-1 petsc-3.5]$ make all > [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o > /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): catastrophic error: cannot open source file "valgrind/valgrind.h" > # include > ^ > > compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc (code 4) > make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 > make[1]: *** [CMakeFiles/kats.dir/all] Error 2 > make: *** [all] Error 2 > > > Huaibao (Paul) Zhang > Gas Surface Interactions Lab > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > Office: 216 Ralph G. Anderson Building > Web:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley wrote: > On Mon, Dec 1, 2014 at 10:21 AM, paul zhang wrote: > Hi All, > > How to enable the valgrind flag? I installed that by myself locally. > > It appears you do not have valgrind installed on your system. We HIGHLY recommend you install it from www.valgrind.org Or install valgrind-devel or equivalent using your package manager. Then rerun ./configure > > We could not find the valgrind header (valgrind.h). You can use > > --with-valgrind-dir= > > so that it can find the path/include/valgrind/valgrind.h > > Thanks, > > Matt > > Thanks, > Paul > > > Huaibao (Paul) Zhang > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > From paulhuaizhang at gmail.com Mon Dec 1 13:02:32 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 14:02:32 -0500 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: I should have installed openmpi successfully... Attached. Thanks, Paul Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 1:59 PM, Barry Smith wrote: > > Send configure.log for the ./configure with > > ./configure --download-fblaslapack > --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 > --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 > > > Barry > > > > On Dec 1, 2014, at 12:55 PM, paul zhang wrote: > > > > Matt, > > > > Sorry to poke you again. I am in a dilemma. > > > > If I use > > > > ./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpif77 > --download-fblaslapack > --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 > --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ > > > > > > Then I am told to > > > > TESTING: checkMPICompilerOverride from > config.setCompilers(config/BuildSystem/config/setCompilers.py:1501) > > > ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > for details): > > > ------------------------------------------------------------------------------- > > --with-cc=mpicc is specified with > --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3. However > /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc exists and should be the > prefered compiler! Suggest not specifying --with-cc option so that > configure can use /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc instead. > > > ******************************************************************************* > > > > > > However if I skip those compilers, > > > > ./configure --download-fblaslapack > --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 > --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 > > > > > > My problem now is > > > > > =============================================================================== > > Configuring PETSc to compile on your system > > > =============================================================================== > > TESTING: checkFortranCompiler from > config.setCompilers(config/BuildSystem/config/setCompilers.py:910) > > > ******************************************************************************* > > UNABLE to EXECUTE BINARIES for ./configure > > > ------------------------------------------------------------------------------- > > Cannot run executables created with FC. If this machine uses a batch > system > > to submit jobs you will need to configure using ./configure with the > additional option --with-batch. > > Otherwise there is problem with the compilers. Can you compile and run > code with your C/C++ (and maybe Fortran) compilers? 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > Huaibao (Paul) Zhang > > Gas Surface Interactions Lab > > Department of Mechanical Engineering > > University of Kentucky, > > Lexington, > > KY, 40506-0503 > > Office: 216 Ralph G. Anderson Building > > Web:gsil.engineering.uky.edu > > > > On Mon, Dec 1, 2014 at 1:34 PM, Matthew Knepley > wrote: > > On Mon, Dec 1, 2014 at 12:33 PM, paul zhang > wrote: > > That is my new configuration. Is that OK? > > > > export PETSC_DIR=`pwd` > > export PETSC_ARCH=linux-gnu-intel > > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran > --download-fblaslapack --download-mpich > --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 > --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ > > > > That looks correct. > > > > When I say "using PETSc makefiles", I mean for your own project. You > appear to be using CMake. > > > > Matt > > > > > > Huaibao (Paul) Zhang > > Gas Surface Interactions Lab > > Department of Mechanical Engineering > > University of Kentucky, > > Lexington, > > KY, 40506-0503 > > Office: 216 Ralph G. Anderson Building > > Web:gsil.engineering.uky.edu > > > > On Mon, Dec 1, 2014 at 1:28 PM, paul zhang > wrote: > > I did use the PETSc makefiles. Should I include the valgrind path in my > own make file again? > > > > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd > > /home/hzh225/LIB_CFD/nP/petsc-3.5.2 > > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs > > -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include > -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include > -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include > > > > Huaibao (Paul) Zhang > > Gas Surface Interactions Lab > > Department of Mechanical Engineering > > University of Kentucky, > > Lexington, > > KY, 40506-0503 > > Office: 216 Ralph G. Anderson Building > > Web:gsil.engineering.uky.edu > > > > On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley > wrote: > > On Mon, Dec 1, 2014 at 10:43 AM, paul zhang > wrote: > > Matt, > > > > Thanks for your reply. I am able to compile PETSc. And I went through > the default tests. Now when I go to my code, I got problems. > > > > I am assuming that you put flags in your makefiles rather than using the > PETSc makefiles. You need all the includes you get from > > > > make getincludedirs > > > > Matt > > > > [hzh225 at dlxlogin2-1 petsc-3.5]$ make all > > [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o > > /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): > catastrophic error: cannot open source file "valgrind/valgrind.h" > > # include > > ^ > > > > compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc (code > 4) > > make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 > > make[1]: *** [CMakeFiles/kats.dir/all] Error 2 > > make: *** [all] Error 2 > > > > > > Huaibao (Paul) Zhang > > Gas Surface Interactions Lab > > Department of Mechanical Engineering > > University of Kentucky, > > Lexington, > > KY, 40506-0503 > > Office: 216 Ralph G. Anderson Building > > Web:gsil.engineering.uky.edu > > > > On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley > wrote: > > On Mon, Dec 1, 2014 at 10:21 AM, paul zhang > wrote: > > Hi All, > > > > How to enable the valgrind flag? I installed that by myself locally. > > > > It appears you do not have valgrind installed on your system. > > > We HIGHLY recommend you install it from www.valgrind.org > > > Or install valgrind-devel or equivalent using > your package manager. 
> > Then rerun ./configure > > > > We could not find the valgrind header (valgrind.h). You can use > > > > --with-valgrind-dir= > > > > so that it can find the path/include/valgrind/valgrind.h > > > > Thanks, > > > > Matt > > > > Thanks, > > Paul > > > > > > Huaibao (Paul) Zhang > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 102703 bytes Desc: not available URL: From knepley at gmail.com Mon Dec 1 13:06:25 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 1 Dec 2014 13:06:25 -0600 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: On Mon, Dec 1, 2014 at 1:02 PM, paul zhang wrote: > I should have installed openmpi successfully... > The Fortran wrapper does not seem to correctly link the libraries: ERROR while running executable: Could not execute "/tmp/petsc-rVaKfJ/config.setCompilers/conftest": /tmp/petsc-rVaKfJ/config.setCompilers/conftest: symbol lookup error: /home/hzh225/LIB_CFD/openmpi-1.8.3/lib/libmpi_mpifh.so.2: undefined symbol: mpi_fortran_weights_empty Or else you need something in your LD_LIBRARY_PATH. Either way, so you need Fortran? If so, use --download-mpich, otherwise use --with-fc=0. Thanks, Matt > Attached. > > Thanks, > Paul > > Huaibao (Paul) Zhang > *Gas Surface Interactions Lab* > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > *Office*: 216 Ralph G. Anderson Building > *Web*:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 1:59 PM, Barry Smith wrote: > >> >> Send configure.log for the ./configure with >> >> ./configure --download-fblaslapack >> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 >> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 >> >> >> Barry >> >> >> > On Dec 1, 2014, at 12:55 PM, paul zhang >> wrote: >> > >> > Matt, >> > >> > Sorry to poke you again. I am in a dilemma. >> > >> > If I use >> > >> > ./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpif77 >> --download-fblaslapack >> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 >> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ >> > >> > >> > Then I am told to >> > >> > TESTING: checkMPICompilerOverride from >> config.setCompilers(config/BuildSystem/config/setCompilers.py:1501) >> >> >> ******************************************************************************* >> > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log >> for details): >> > >> ------------------------------------------------------------------------------- >> > --with-cc=mpicc is specified with >> --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3. However >> /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc exists and should be the >> prefered compiler! 
Suggest not specifying --with-cc option so that >> configure can use /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc instead. >> > >> ******************************************************************************* >> > >> > >> > However if I skip those compilers, >> > >> > ./configure --download-fblaslapack >> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 >> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 >> > >> > >> > My problem now is >> > >> > >> =============================================================================== >> > Configuring PETSc to compile on your system >> > >> =============================================================================== >> > TESTING: checkFortranCompiler from >> config.setCompilers(config/BuildSystem/config/setCompilers.py:910) >> >> >> ******************************************************************************* >> > UNABLE to EXECUTE BINARIES for ./configure >> > >> ------------------------------------------------------------------------------- >> > Cannot run executables created with FC. If this machine uses a batch >> system >> > to submit jobs you will need to configure using ./configure with the >> additional option --with-batch. >> > Otherwise there is problem with the compilers. Can you compile and run >> code with your C/C++ (and maybe Fortran) compilers? >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > Huaibao (Paul) Zhang >> > Gas Surface Interactions Lab >> > Department of Mechanical Engineering >> > University of Kentucky, >> > Lexington, >> > KY, 40506-0503 >> > Office: 216 Ralph G. Anderson Building >> > Web:gsil.engineering.uky.edu >> > >> > On Mon, Dec 1, 2014 at 1:34 PM, Matthew Knepley >> wrote: >> > On Mon, Dec 1, 2014 at 12:33 PM, paul zhang >> wrote: >> > That is my new configuration. Is that OK? >> > >> > export PETSC_DIR=`pwd` >> > export PETSC_ARCH=linux-gnu-intel >> > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran >> --download-fblaslapack --download-mpich >> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 >> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ >> > >> > That looks correct. >> > >> > When I say "using PETSc makefiles", I mean for your own project. You >> appear to be using CMake. >> > >> > Matt >> > >> > >> > Huaibao (Paul) Zhang >> > Gas Surface Interactions Lab >> > Department of Mechanical Engineering >> > University of Kentucky, >> > Lexington, >> > KY, 40506-0503 >> > Office: 216 Ralph G. Anderson Building >> > Web:gsil.engineering.uky.edu >> > >> > On Mon, Dec 1, 2014 at 1:28 PM, paul zhang >> wrote: >> > I did use the PETSc makefiles. Should I include the valgrind path in my >> own make file again? >> > >> > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd >> > /home/hzh225/LIB_CFD/nP/petsc-3.5.2 >> > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs >> > -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include >> -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include >> -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include >> > >> > Huaibao (Paul) Zhang >> > Gas Surface Interactions Lab >> > Department of Mechanical Engineering >> > University of Kentucky, >> > Lexington, >> > KY, 40506-0503 >> > Office: 216 Ralph G. Anderson Building >> > Web:gsil.engineering.uky.edu >> > >> > On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley >> wrote: >> > On Mon, Dec 1, 2014 at 10:43 AM, paul zhang >> wrote: >> > Matt, >> > >> > Thanks for your reply. I am able to compile PETSc. 
And I went through >> the default tests. Now when I go to my code, I got problems. >> > >> > I am assuming that you put flags in your makefiles rather than using >> the PETSc makefiles. You need all the includes you get from >> > >> > make getincludedirs >> > >> > Matt >> > >> > [hzh225 at dlxlogin2-1 petsc-3.5]$ make all >> > [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o >> > /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): >> catastrophic error: cannot open source file "valgrind/valgrind.h" >> > # include >> > ^ >> > >> > compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc >> (code 4) >> > make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 >> > make[1]: *** [CMakeFiles/kats.dir/all] Error 2 >> > make: *** [all] Error 2 >> > >> > >> > Huaibao (Paul) Zhang >> > Gas Surface Interactions Lab >> > Department of Mechanical Engineering >> > University of Kentucky, >> > Lexington, >> > KY, 40506-0503 >> > Office: 216 Ralph G. Anderson Building >> > Web:gsil.engineering.uky.edu >> > >> > On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley >> wrote: >> > On Mon, Dec 1, 2014 at 10:21 AM, paul zhang >> wrote: >> > Hi All, >> > >> > How to enable the valgrind flag? I installed that by myself locally. >> > >> > It appears you do not have valgrind installed on your system. >> >> >> We HIGHLY recommend you install it from >> www.valgrind.org >> >> Or install valgrind-devel >> or equivalent using your package manager. >> >> Then rerun >> ./configure >> > >> > We could not find the valgrind header (valgrind.h). You can use >> > >> > --with-valgrind-dir= >> > >> > so that it can find the path/include/valgrind/valgrind.h >> > >> > Thanks, >> > >> > Matt >> > >> > Thanks, >> > Paul >> > >> > >> > Huaibao (Paul) Zhang >> > >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> > >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> > >> > >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> > >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Dec 1 13:13:34 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 1 Dec 2014 13:13:34 -0600 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: Paul, ./configure is trying to compile a trivial Fortran program with /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpif90 and failing. See full output from below. You can try the same thing from the command line and see that there is something wrong with the MPI install. 
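   For example, a quick check along these lines (a sketch; it reuses the same two-line program and the exact compiler path from your configure.log) should reproduce the failure outside of PETSc: put just

      program main
      end

   in a file conftest.F and run

      /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpif90 conftest.F -o conftest
      ./conftest

   If ./conftest dies with the same "undefined symbol: mpi_fortran_weights_empty" error, then the OpenMPI install (or the LD_LIBRARY_PATH it needs at run time) is broken independent of PETSc.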
Barry TEST checkFortranCompiler from config.setCompilers(/home/hzh225/LIB_CFD/nP/petsc-3.5.2/config/BuildSystem/config/setCompilers.py:910) TESTING: checkFortranCompiler from config.setCompilers(config/BuildSystem/config/setCompilers.py:910) Locate a functional Fortran compiler Checking for program /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpif90...found Defined make macro "FC" to "/home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpif90" Executing: /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpif90 -c -o /tmp/petsc-rVaKfJ/config.setCompilers/conftest.o -I/tmp/petsc-rVaKfJ/config.setCompilers /tmp/petsc-rVaKfJ/config.setCompilers/conftest.F Successful compile: Source: program main end Executing: /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpif90 -o /tmp/petsc-rVaKfJ/config.setCompilers/conftest /tmp/petsc-rVaKfJ/config.setCompilers/conftest.o Executing: /tmp/petsc-rVaKfJ/config.setCompilers/conftest Executing: /tmp/petsc-rVaKfJ/config.setCompilers/conftest ERROR while running executable: Could not execute "/tmp/petsc-rVaKfJ/config.setCompilers/conftest": /tmp/petsc-rVaKfJ/config.setCompilers/conftest: symbol lookup error: /home/hzh225/LIB_CFD/openmpi-1.8.3/lib/libmpi_mpifh.so.2: undefined symbol: mpi_fortran_weights_empty > On Dec 1, 2014, at 1:02 PM, paul zhang wrote: > > I should have installed openmpi successfully... > > > Attached. > > Thanks, > Paul > > Huaibao (Paul) Zhang > Gas Surface Interactions Lab > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > Office: 216 Ralph G. Anderson Building > Web:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 1:59 PM, Barry Smith wrote: > > Send configure.log for the ./configure with > > ./configure --download-fblaslapack --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 > > > Barry > > > > On Dec 1, 2014, at 12:55 PM, paul zhang wrote: > > > > Matt, > > > > Sorry to poke you again. I am in a dilemma. > > > > If I use > > > > ./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpif77 --download-fblaslapack --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ > > > > > > Then I am told to > > > > TESTING: checkMPICompilerOverride from config.setCompilers(config/BuildSystem/config/setCompilers.py:1501) ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > > ------------------------------------------------------------------------------- > > --with-cc=mpicc is specified with --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3. However /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc exists and should be the prefered compiler! Suggest not specifying --with-cc option so that configure can use /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc instead. 
> > ******************************************************************************* > > > > > > However if I skip those compilers, > > > > ./configure --download-fblaslapack --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 > > > > > > My problem now is > > > > =============================================================================== > > Configuring PETSc to compile on your system > > =============================================================================== > > TESTING: checkFortranCompiler from config.setCompilers(config/BuildSystem/config/setCompilers.py:910) ******************************************************************************* > > UNABLE to EXECUTE BINARIES for ./configure > > ------------------------------------------------------------------------------- > > Cannot run executables created with FC. If this machine uses a batch system > > to submit jobs you will need to configure using ./configure with the additional option --with-batch. > > Otherwise there is problem with the compilers. Can you compile and run code with your C/C++ (and maybe Fortran) compilers? > > > > > > > > > > > > > > > > > > > > > > > > > > > > Huaibao (Paul) Zhang > > Gas Surface Interactions Lab > > Department of Mechanical Engineering > > University of Kentucky, > > Lexington, > > KY, 40506-0503 > > Office: 216 Ralph G. Anderson Building > > Web:gsil.engineering.uky.edu > > > > On Mon, Dec 1, 2014 at 1:34 PM, Matthew Knepley wrote: > > On Mon, Dec 1, 2014 at 12:33 PM, paul zhang wrote: > > That is my new configuration. Is that OK? > > > > export PETSC_DIR=`pwd` > > export PETSC_ARCH=linux-gnu-intel > > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ > > > > That looks correct. > > > > When I say "using PETSc makefiles", I mean for your own project. You appear to be using CMake. > > > > Matt > > > > > > Huaibao (Paul) Zhang > > Gas Surface Interactions Lab > > Department of Mechanical Engineering > > University of Kentucky, > > Lexington, > > KY, 40506-0503 > > Office: 216 Ralph G. Anderson Building > > Web:gsil.engineering.uky.edu > > > > On Mon, Dec 1, 2014 at 1:28 PM, paul zhang wrote: > > I did use the PETSc makefiles. Should I include the valgrind path in my own make file again? > > > > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd > > /home/hzh225/LIB_CFD/nP/petsc-3.5.2 > > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs > > -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include > > > > Huaibao (Paul) Zhang > > Gas Surface Interactions Lab > > Department of Mechanical Engineering > > University of Kentucky, > > Lexington, > > KY, 40506-0503 > > Office: 216 Ralph G. Anderson Building > > Web:gsil.engineering.uky.edu > > > > On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley wrote: > > On Mon, Dec 1, 2014 at 10:43 AM, paul zhang wrote: > > Matt, > > > > Thanks for your reply. I am able to compile PETSc. And I went through the default tests. Now when I go to my code, I got problems. > > > > I am assuming that you put flags in your makefiles rather than using the PETSc makefiles. 
You need all the includes you get from > > > > make getincludedirs > > > > Matt > > > > [hzh225 at dlxlogin2-1 petsc-3.5]$ make all > > [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o > > /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): catastrophic error: cannot open source file "valgrind/valgrind.h" > > # include > > ^ > > > > compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc (code 4) > > make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 > > make[1]: *** [CMakeFiles/kats.dir/all] Error 2 > > make: *** [all] Error 2 > > > > > > Huaibao (Paul) Zhang > > Gas Surface Interactions Lab > > Department of Mechanical Engineering > > University of Kentucky, > > Lexington, > > KY, 40506-0503 > > Office: 216 Ralph G. Anderson Building > > Web:gsil.engineering.uky.edu > > > > On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley wrote: > > On Mon, Dec 1, 2014 at 10:21 AM, paul zhang wrote: > > Hi All, > > > > How to enable the valgrind flag? I installed that by myself locally. > > > > It appears you do not have valgrind installed on your system. We HIGHLY recommend you install it from www.valgrind.org Or install valgrind-devel or equivalent using your package manager. Then rerun ./configure > > > > We could not find the valgrind header (valgrind.h). You can use > > > > --with-valgrind-dir= > > > > so that it can find the path/include/valgrind/valgrind.h > > > > Thanks, > > > > Matt > > > > Thanks, > > Paul > > > > > > Huaibao (Paul) Zhang > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > > From paulhuaizhang at gmail.com Mon Dec 1 13:22:33 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 14:22:33 -0500 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: Thanks for checking. My program is coded with C++ actually, so fortran may not be necessary. I compiled PETSc on our university cluster, where a MPI package has already been installed as universal module. It seems not compatible with PETSc. So I am trying to install my own version of MPI and link it to PETSc. As compile PETSc using its default configuration, it automatically downloaded the mpi . ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich I was wondering if it does the same thing as I installed MPI by my own. Thanks again. Paul Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 2:06 PM, Matthew Knepley wrote: > On Mon, Dec 1, 2014 at 1:02 PM, paul zhang > wrote: > >> I should have installed openmpi successfully... 
>> > > The Fortran wrapper does not seem to correctly link the libraries: > > ERROR while running executable: Could not execute > "/tmp/petsc-rVaKfJ/config.setCompilers/conftest": > /tmp/petsc-rVaKfJ/config.setCompilers/conftest: symbol lookup error: > /home/hzh225/LIB_CFD/openmpi-1.8.3/lib/libmpi_mpifh.so.2: undefined symbol: > mpi_fortran_weights_empty > > Or else you need something in your LD_LIBRARY_PATH. Either way, so you > need Fortran? If so, > use --download-mpich, otherwise use --with-fc=0. > > Thanks, > > Matt > > >> Attached. >> >> Thanks, >> Paul >> >> Huaibao (Paul) Zhang >> *Gas Surface Interactions Lab* >> Department of Mechanical Engineering >> University of Kentucky, >> Lexington, >> KY, 40506-0503 >> *Office*: 216 Ralph G. Anderson Building >> *Web*:gsil.engineering.uky.edu >> >> On Mon, Dec 1, 2014 at 1:59 PM, Barry Smith wrote: >> >>> >>> Send configure.log for the ./configure with >>> >>> ./configure --download-fblaslapack >>> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 >>> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 >>> >>> >>> Barry >>> >>> >>> > On Dec 1, 2014, at 12:55 PM, paul zhang >>> wrote: >>> > >>> > Matt, >>> > >>> > Sorry to poke you again. I am in a dilemma. >>> > >>> > If I use >>> > >>> > ./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpif77 >>> --download-fblaslapack >>> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 >>> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ >>> > >>> > >>> > Then I am told to >>> > >>> > TESTING: checkMPICompilerOverride from >>> config.setCompilers(config/BuildSystem/config/setCompilers.py:1501) >>> >>> >>> ******************************************************************************* >>> > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log >>> for details): >>> > >>> ------------------------------------------------------------------------------- >>> > --with-cc=mpicc is specified with >>> --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3. However >>> /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc exists and should be the >>> prefered compiler! Suggest not specifying --with-cc option so that >>> configure can use /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc instead. >>> > >>> ******************************************************************************* >>> > >>> > >>> > However if I skip those compilers, >>> > >>> > ./configure --download-fblaslapack >>> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 >>> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 >>> > >>> > >>> > My problem now is >>> > >>> > >>> =============================================================================== >>> > Configuring PETSc to compile on your system >>> > >>> =============================================================================== >>> > TESTING: checkFortranCompiler from >>> config.setCompilers(config/BuildSystem/config/setCompilers.py:910) >>> >>> >>> ******************************************************************************* >>> > UNABLE to EXECUTE BINARIES for ./configure >>> > >>> ------------------------------------------------------------------------------- >>> > Cannot run executables created with FC. If this machine uses a batch >>> system >>> > to submit jobs you will need to configure using ./configure with the >>> additional option --with-batch. >>> > Otherwise there is problem with the compilers. Can you compile and >>> run code with your C/C++ (and maybe Fortran) compilers? 
>>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > Huaibao (Paul) Zhang >>> > Gas Surface Interactions Lab >>> > Department of Mechanical Engineering >>> > University of Kentucky, >>> > Lexington, >>> > KY, 40506-0503 >>> > Office: 216 Ralph G. Anderson Building >>> > Web:gsil.engineering.uky.edu >>> > >>> > On Mon, Dec 1, 2014 at 1:34 PM, Matthew Knepley >>> wrote: >>> > On Mon, Dec 1, 2014 at 12:33 PM, paul zhang >>> wrote: >>> > That is my new configuration. Is that OK? >>> > >>> > export PETSC_DIR=`pwd` >>> > export PETSC_ARCH=linux-gnu-intel >>> > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran >>> --download-fblaslapack --download-mpich >>> --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 >>> --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ >>> > >>> > That looks correct. >>> > >>> > When I say "using PETSc makefiles", I mean for your own project. You >>> appear to be using CMake. >>> > >>> > Matt >>> > >>> > >>> > Huaibao (Paul) Zhang >>> > Gas Surface Interactions Lab >>> > Department of Mechanical Engineering >>> > University of Kentucky, >>> > Lexington, >>> > KY, 40506-0503 >>> > Office: 216 Ralph G. Anderson Building >>> > Web:gsil.engineering.uky.edu >>> > >>> > On Mon, Dec 1, 2014 at 1:28 PM, paul zhang >>> wrote: >>> > I did use the PETSc makefiles. Should I include the valgrind path in >>> my own make file again? >>> > >>> > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd >>> > /home/hzh225/LIB_CFD/nP/petsc-3.5.2 >>> > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs >>> > -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include >>> -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include >>> -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include >>> > >>> > Huaibao (Paul) Zhang >>> > Gas Surface Interactions Lab >>> > Department of Mechanical Engineering >>> > University of Kentucky, >>> > Lexington, >>> > KY, 40506-0503 >>> > Office: 216 Ralph G. Anderson Building >>> > Web:gsil.engineering.uky.edu >>> > >>> > On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley >>> wrote: >>> > On Mon, Dec 1, 2014 at 10:43 AM, paul zhang >>> wrote: >>> > Matt, >>> > >>> > Thanks for your reply. I am able to compile PETSc. And I went through >>> the default tests. Now when I go to my code, I got problems. >>> > >>> > I am assuming that you put flags in your makefiles rather than using >>> the PETSc makefiles. You need all the includes you get from >>> > >>> > make getincludedirs >>> > >>> > Matt >>> > >>> > [hzh225 at dlxlogin2-1 petsc-3.5]$ make all >>> > [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o >>> > /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): >>> catastrophic error: cannot open source file "valgrind/valgrind.h" >>> > # include >>> > ^ >>> > >>> > compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc >>> (code 4) >>> > make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 >>> > make[1]: *** [CMakeFiles/kats.dir/all] Error 2 >>> > make: *** [all] Error 2 >>> > >>> > >>> > Huaibao (Paul) Zhang >>> > Gas Surface Interactions Lab >>> > Department of Mechanical Engineering >>> > University of Kentucky, >>> > Lexington, >>> > KY, 40506-0503 >>> > Office: 216 Ralph G. Anderson Building >>> > Web:gsil.engineering.uky.edu >>> > >>> > On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley >>> wrote: >>> > On Mon, Dec 1, 2014 at 10:21 AM, paul zhang >>> wrote: >>> > Hi All, >>> > >>> > How to enable the valgrind flag? I installed that by myself locally. 
>>> > >>> > It appears you do not have valgrind installed on your system. >>> >>> >>> We HIGHLY recommend you install it from >>> www.valgrind.org >>> >>> Or install valgrind-devel >>> or equivalent using your package manager. >>> >>> Then rerun >>> ./configure >>> > >>> > We could not find the valgrind header (valgrind.h). You can use >>> > >>> > --with-valgrind-dir= >>> > >>> > so that it can find the path/include/valgrind/valgrind.h >>> > >>> > Thanks, >>> > >>> > Matt >>> > >>> > Thanks, >>> > Paul >>> > >>> > >>> > Huaibao (Paul) Zhang >>> > >>> > >>> > >>> > >>> > -- >>> > What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> > -- Norbert Wiener >>> > >>> > >>> > >>> > >>> > -- >>> > What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> > -- Norbert Wiener >>> > >>> > >>> > >>> > >>> > >>> > -- >>> > What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> > -- Norbert Wiener >>> > >>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Dec 1 13:28:07 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 1 Dec 2014 13:28:07 -0600 Subject: [petsc-users] valgrind In-Reply-To: References: Message-ID: <16E11431-0E2B-4463-BC3F-85A57580CB51@mcs.anl.gov> If you don't need fortran than just use your (already compiled) OpenMPI but pass to PETSc's ./configure --with-fc=0 and PETSc should configure. Barry > On Dec 1, 2014, at 1:22 PM, paul zhang wrote: > > Thanks for checking. > > My program is coded with C++ actually, so fortran may not be necessary. I compiled PETSc on our university cluster, where a MPI package has already been installed as universal module. It seems not compatible with PETSc. So I am trying to install my own version of MPI and link it to PETSc. > > As compile PETSc using its default configuration, it automatically downloaded the mpi . > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich > > I was wondering if it does the same thing as I installed MPI by my own. > > Thanks again. > Paul > > > > > > > > > > > > > > > > > > > > > > > Huaibao (Paul) Zhang > Gas Surface Interactions Lab > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > Office: 216 Ralph G. Anderson Building > Web:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 2:06 PM, Matthew Knepley wrote: > On Mon, Dec 1, 2014 at 1:02 PM, paul zhang wrote: > I should have installed openmpi successfully... > > The Fortran wrapper does not seem to correctly link the libraries: > > ERROR while running executable: Could not execute "/tmp/petsc-rVaKfJ/config.setCompilers/conftest": > /tmp/petsc-rVaKfJ/config.setCompilers/conftest: symbol lookup error: /home/hzh225/LIB_CFD/openmpi-1.8.3/lib/libmpi_mpifh.so.2: undefined symbol: mpi_fortran_weights_empty > > Or else you need something in your LD_LIBRARY_PATH. Either way, so you need Fortran? If so, > use --download-mpich, otherwise use --with-fc=0. > > Thanks, > > Matt > > Attached. 
> > Thanks, > Paul > > Huaibao (Paul) Zhang > Gas Surface Interactions Lab > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > Office: 216 Ralph G. Anderson Building > Web:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 1:59 PM, Barry Smith wrote: > > Send configure.log for the ./configure with > > ./configure --download-fblaslapack --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 > > > Barry > > > > On Dec 1, 2014, at 12:55 PM, paul zhang wrote: > > > > Matt, > > > > Sorry to poke you again. I am in a dilemma. > > > > If I use > > > > ./configure --with-cc=mpicc --with-cxx=mpiCC --with-fc=mpif77 --download-fblaslapack --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ > > > > > > Then I am told to > > > > TESTING: checkMPICompilerOverride from config.setCompilers(config/BuildSystem/config/setCompilers.py:1501) ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > > ------------------------------------------------------------------------------- > > --with-cc=mpicc is specified with --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3. However /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc exists and should be the prefered compiler! Suggest not specifying --with-cc option so that configure can use /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpicc instead. > > ******************************************************************************* > > > > > > However if I skip those compilers, > > > > ./configure --download-fblaslapack --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3 > > > > > > My problem now is > > > > =============================================================================== > > Configuring PETSc to compile on your system > > =============================================================================== > > TESTING: checkFortranCompiler from config.setCompilers(config/BuildSystem/config/setCompilers.py:910) ******************************************************************************* > > UNABLE to EXECUTE BINARIES for ./configure > > ------------------------------------------------------------------------------- > > Cannot run executables created with FC. If this machine uses a batch system > > to submit jobs you will need to configure using ./configure with the additional option --with-batch. > > Otherwise there is problem with the compilers. Can you compile and run code with your C/C++ (and maybe Fortran) compilers? > > > > > > > > > > > > > > > > > > > > > > > > > > > > Huaibao (Paul) Zhang > > Gas Surface Interactions Lab > > Department of Mechanical Engineering > > University of Kentucky, > > Lexington, > > KY, 40506-0503 > > Office: 216 Ralph G. Anderson Building > > Web:gsil.engineering.uky.edu > > > > On Mon, Dec 1, 2014 at 1:34 PM, Matthew Knepley wrote: > > On Mon, Dec 1, 2014 at 12:33 PM, paul zhang wrote: > > That is my new configuration. Is that OK? 
> > > > export PETSC_DIR=`pwd` > > export PETSC_ARCH=linux-gnu-intel > > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack --download-mpich --with-valgrind-dir=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 --with-mpi=1 --with-mpi-dir=/home/hzh225/LIB_CFD/openmpi-1.8.3/ > > > > That looks correct. > > > > When I say "using PETSc makefiles", I mean for your own project. You appear to be using CMake. > > > > Matt > > > > > > Huaibao (Paul) Zhang > > Gas Surface Interactions Lab > > Department of Mechanical Engineering > > University of Kentucky, > > Lexington, > > KY, 40506-0503 > > Office: 216 Ralph G. Anderson Building > > Web:gsil.engineering.uky.edu > > > > On Mon, Dec 1, 2014 at 1:28 PM, paul zhang wrote: > > I did use the PETSc makefiles. Should I include the valgrind path in my own make file again? > > > > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ pwd > > /home/hzh225/LIB_CFD/nP/petsc-3.5.2 > > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make getincludedirs > > -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include > > > > Huaibao (Paul) Zhang > > Gas Surface Interactions Lab > > Department of Mechanical Engineering > > University of Kentucky, > > Lexington, > > KY, 40506-0503 > > Office: 216 Ralph G. Anderson Building > > Web:gsil.engineering.uky.edu > > > > On Mon, Dec 1, 2014 at 11:55 AM, Matthew Knepley wrote: > > On Mon, Dec 1, 2014 at 10:43 AM, paul zhang wrote: > > Matt, > > > > Thanks for your reply. I am able to compile PETSc. And I went through the default tests. Now when I go to my code, I got problems. > > > > I am assuming that you put flags in your makefiles rather than using the PETSc makefiles. You need all the includes you get from > > > > make getincludedirs > > > > Matt > > > > [hzh225 at dlxlogin2-1 petsc-3.5]$ make all > > [100%] Building CXX object CMakeFiles/kats.dir/main.cc.o > > /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include/petscsys.h(1760): catastrophic error: cannot open source file "valgrind/valgrind.h" > > # include > > ^ > > > > compilation aborted for /home/hzh225/CMake/petsc/petsc-3.5/main.cc (code 4) > > make[2]: *** [CMakeFiles/kats.dir/main.cc.o] Error 4 > > make[1]: *** [CMakeFiles/kats.dir/all] Error 2 > > make: *** [all] Error 2 > > > > > > Huaibao (Paul) Zhang > > Gas Surface Interactions Lab > > Department of Mechanical Engineering > > University of Kentucky, > > Lexington, > > KY, 40506-0503 > > Office: 216 Ralph G. Anderson Building > > Web:gsil.engineering.uky.edu > > > > On Mon, Dec 1, 2014 at 11:28 AM, Matthew Knepley wrote: > > On Mon, Dec 1, 2014 at 10:21 AM, paul zhang wrote: > > Hi All, > > > > How to enable the valgrind flag? I installed that by myself locally. > > > > It appears you do not have valgrind installed on your system. We HIGHLY recommend you install it from www.valgrind.org Or install valgrind-devel or equivalent using your package manager. Then rerun ./configure > > > > We could not find the valgrind header (valgrind.h). You can use > > > > --with-valgrind-dir= > > > > so that it can find the path/include/valgrind/valgrind.h > > > > Thanks, > > > > Matt > > > > Thanks, > > Paul > > > > > > Huaibao (Paul) Zhang > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
> > -- Norbert Wiener > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > From bisheshkh at gmail.com Mon Dec 1 14:42:18 2014 From: bisheshkh at gmail.com (Bishesh Khanal) Date: Mon, 1 Dec 2014 21:42:18 +0100 Subject: [petsc-users] interpolate staggered grid values in parallel In-Reply-To: References: Message-ID: On Wed, Nov 26, 2014 at 10:13 PM, Bishesh Khanal wrote: > > > On Tue, Nov 25, 2014 at 6:40 PM, Matthew Knepley > wrote: > >> On Tue, Nov 25, 2014 at 11:36 AM, Bishesh Khanal >> wrote: >> >>> Dear all, >>> I'm solving a system using petsc and I get a global Vec, say X . It uses >>> DMDA with 4 dofs (3 velocity + 1 pressure). X contains velocity at the cell >>> faces since I solve using staggered grid. >>> Now I'd like to create one array for velocity with values at cell >>> centers and another array for pressure (not the Vec so that I can send the >>> pointer to the array to other part of the code that does not use Petsc). >>> >>> Currently what I do is : >>> ------ Vec X, X_local; >>> ------ PetscScalar *X_array; >>> >>> // Scatter X to X_local and then use: >>> >>> ------- VecGetArray(X_local, &X_array) >>> >>> And then have a function, say >>> getVelocityAt(x, y, z, component) { >>> // interpolate velocity at (x,y,z) cell center using X_array >>> } >>> >>> The function getVelocityAt() gets called from outside petsc in a loop >>> over all (x,y,z) positions. >>> This is not done in parallel. >>> >>> Now, how do I use Petsc to instead interpolate the cell center >>> velocities in parallel and store it >>> in an array say >>> PetscScalar *X_array_cellCenter; >>> ? >>> This would need to have size one less along each axis compared to the >>> orginal DMDA size. >>> This way I intend to return X_array_cellCenter to the code outside Petsc. >>> >> >> SNES ex30 is an example of a staggered grid code using DMDA. It does this >> kind of interpolation, >> and puts the result in a Vec. >> > > After looking at the example, I tried the following but I have a problem: > > The idea I take from the e.g is to use DMDALocalInfo *info to get local > grid coordinates and use for loop to iterate through local grid values > that are > available in the Field **x array to do the interpolation. > > Here is what I tried in my case where I get the solution from a ksp > attached to a dmda: > > // Get the solution to Vec x from ksp. > // ksp is attached to DMDA da which has 4 dofs: vx,vy,vz,p; > KSPGetSoution(ksp, &x); > //x has velocity solution in faces of the cell. > > //Get the array from x: > Field ***xArray; //Field is a struct with 4 members: vx,vy,vz and p. > DMDAGetVectorArray(da,xVec,&xArray); > > //Now I want to have a new array to store the cell center velocity values > //by interpolating from xArray. > > DMCreateGlobalVector(daV, &xC); //daV has same size as da but with dof=3 > FieldVelocity ***xCArray; //FieldVelocity is a struct with 3 members: > vx,vy and vz. 
> DMDAVecGetArray(daV,xC,&xCArray);
>
> //Do the interpolation
> DMDAGetLocalInfo(da,&info);
> for (PetscInt k=info.zs; k<info.zs+info.zm; ++k) {
>   for (PetscInt j=info.ys; j<info.ys+info.ym; ++j) {
>     for (PetscInt i=info.xs; i<info.xs+info.xm; ++i) {
>       //do interpolation which requires access such as:
>       if(i==0 || j==0 || k==0) {
>         xCArray[k][j][i].vx = 0; //boundary condition;
>         xCArray[k][j][i].vy = 0;
>         xCArray[k][j][i].vz = 0;
>       } else {
>         xCArray[k][j][i].vx = 0.5*(xArray[k][j][i].vx + xArray[k][j][i-1].vx);
>         xCArray[k][j][i].vy = 0.5*(xArray[k][j][i].vy + xArray[k][j][i-1].vy);
>         xCArray[k][j][i].vz = 0.5*(xArray[k][j][i].vz + xArray[k][j][i-1].vz);
>       }
>     }
>   }
> }
>
> I get runtime error within this loop.
> What am I doing wrong here ?

Ok, I now see that KSPGetSolution(ksp, &x) gives a GLOBAL vector in x and
not a local vector; hence the runtime memory error above since I can't
access ghost values in indices such as [i-1] with the global vec.
Was there any reason not to make this information explicit in the docs to
prevent the confusion I had ? Or should it have been obvious to me and that
I'm missing something here ?
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPGetSolution.html
Thanks

>>> Thanks,
>>>
>>> Matt
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dave.mayhem23 at gmail.com  Mon Dec  1 14:56:55 2014
From: dave.mayhem23 at gmail.com (Dave May)
Date: Mon, 1 Dec 2014 21:56:55 +0100
Subject: [petsc-users] interpolate staggered grid values in parallel
In-Reply-To:
References:
Message-ID:

It's clear as it is. The function name indicates it returns a vector which
is the solution associated with the Krylov method.
The local part of the vector will not be a solution.
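To get at ghost values you need to go through the DM, not the KSP. A rough,
untested sketch (it reuses the ksp, da and Field struct from your code; all
names are assumed from your snippet) would be something like:

  Vec xGlobal, xLocal;
  Field ***xArray;
  DMDALocalInfo info;

  KSPGetSolution(ksp, &xGlobal);                 /* global vector: owned entries only, no ghosts */
  DMGetLocalVector(da, &xLocal);                 /* work vector that has room for ghost points */
  DMGlobalToLocalBegin(da, xGlobal, INSERT_VALUES, xLocal);
  DMGlobalToLocalEnd(da, xGlobal, INSERT_VALUES, xLocal);

  DMDAVecGetArray(da, xLocal, &xArray);          /* xArray[k][j][i-1] is now valid in the ghosted range */
  DMDAGetLocalInfo(da, &info);
  /* ... your interpolation loop over info.xs .. info.xs+info.xm-1 etc. goes here ... */
  DMDAVecRestoreArray(da, xLocal, &xArray);
  DMRestoreLocalVector(da, &xLocal);

You still need your i==0 || j==0 || k==0 branch for the physical boundary;
the local vector only fills in the ghost values coming from neighbouring
processes.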
On 1 December 2014 at 21:42, Bishesh Khanal wrote:
>
> On Wed, Nov 26, 2014 at 10:13 PM, Bishesh Khanal wrote:
>>
>> On Tue, Nov 25, 2014 at 6:40 PM, Matthew Knepley wrote:
>>>
>>> On Tue, Nov 25, 2014 at 11:36 AM, Bishesh Khanal wrote:
>>>
>>>> Dear all,
>>>> I'm solving a system using petsc and I get a global Vec, say X . It
>>>> uses DMDA with 4 dofs (3 velocity + 1 pressure). X contains velocity at the
>>>> cell faces since I solve using staggered grid.
>>>> Now I'd like to create one array for velocity with values at cell
>>>> centers and another array for pressure (not the Vec so that I can send the
>>>> pointer to the array to other part of the code that does not use Petsc).
>>>>
>>>> Currently what I do is :
>>>> ------ Vec X, X_local;
>>>> ------ PetscScalar *X_array;
>>>>
>>>> // Scatter X to X_local and then use:
>>>>
>>>> ------- VecGetArray(X_local, &X_array)
>>>>
>>>> And then have a function, say
>>>> getVelocityAt(x, y, z, component) {
>>>> // interpolate velocity at (x,y,z) cell center using X_array
>>>> }
>>>>
>>>> The function getVelocityAt() gets called from outside petsc in a loop
>>>> over all (x,y,z) positions.
>>>> This is not done in parallel.
>>>>
>>>> Now, how do I use Petsc to instead interpolate the cell center
>>>> velocities in parallel and store it
>>>> in an array say
>>>> PetscScalar *X_array_cellCenter;
>>>> ?
>>>> This would need to have size one less along each axis compared to the
>>>> orginal DMDA size.
>>>> This way I intend to return X_array_cellCenter to the code outside
>>>> Petsc.
>>>
>>> SNES ex30 is an example of a staggered grid code using DMDA. It does
>>> this kind of interpolation, and puts the result in a Vec.
>>
>> After looking at the example, I tried the following but I have a problem:
>>
>> The idea I take from the e.g is to use DMDALocalInfo *info to get local
>> grid coordinates and use for loop to iterate through local grid values
>> that are available in the Field **x array to do the interpolation.
>>
>> Here is what I tried in my case where I get the solution from a ksp
>> attached to a dmda:
>>
>> // Get the solution to Vec x from ksp.
>> // ksp is attached to DMDA da which has 4 dofs: vx,vy,vz,p;
>> KSPGetSoution(ksp, &x);
>> //x has velocity solution in faces of the cell.
>>
>> //Get the array from x:
>> Field ***xArray; //Field is a struct with 4 members: vx,vy,vz and p.
>> DMDAGetVectorArray(da,xVec,&xArray);
>>
>> //Now I want to have a new array to store the cell center velocity values
>> //by interpolating from xArray.
>>
>> DMCreateGlobalVector(daV, &xC); //daV has same size as da but with dof=3
>> FieldVelocity ***xCArray; //FieldVelocity is a struct with 3 members: vx,vy and vz.
>> DMDAVecGetArray(daV,xC,&xCArray);
>>
>> //Do the interpolation
>> DMDAGetLocalInfo(da,&info);
>> for (PetscInt k=info.zs; k<info.zs+info.zm; ++k) {
>>   for (PetscInt j=info.ys; j<info.ys+info.ym; ++j) {
>>     for (PetscInt i=info.xs; i<info.xs+info.xm; ++i) {
>>       //do interpolation which requires access such as:
>>       if(i==0 || j==0 || k==0) {
>>         xCArray[k][j][i].vx = 0; //boundary condition;
>>         xCArray[k][j][i].vy = 0;
>>         xCArray[k][j][i].vz = 0;
>>       } else {
>>         xCArray[k][j][i].vx = 0.5*(xArray[k][j][i].vx + xArray[k][j][i-1].vx);
>>         xCArray[k][j][i].vy = 0.5*(xArray[k][j][i].vy + xArray[k][j][i-1].vy);
>>         xCArray[k][j][i].vz = 0.5*(xArray[k][j][i].vz + xArray[k][j][i-1].vz);
>>       }
>>     }
>>   }
>> }
>>
>> I get runtime error within this loop.
>> What am I doing wrong here ?
>
> Ok, I now see that KSPGetSolution(ksp, &x) gives a GLOBAL vector in x and
> not a local vector; hence the runtime memory error above since I can't
> access ghost values in indices such as [i-1] with the global vec.
> Was there any reason not to make this information explicit in the docs to
> prevent the confusion I had ? Or should it have been obvious to me and that
> I'm missing something here ?
>
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPGetSolution.html
> Thanks
>
>>> Thanks,
>>>
>>> Matt
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jed at jedbrown.org  Mon Dec  1 15:00:28 2014
From: jed at jedbrown.org (Jed Brown)
Date: Mon, 01 Dec 2014 14:00:28 -0700
Subject: [petsc-users] interpolate staggered grid values in parallel
In-Reply-To:
References:
Message-ID: <87a937dr0z.fsf@jedbrown.org>

Bishesh Khanal writes:
> Ok, I now see that KSPGetSolution(ksp, &x) gives a GLOBAL vector in x and
> not a local vector; hence the runtime memory error above since I can't
> access ghost values in indices such as [i-1] with the global vec.
> Was there any reason not to make this information explicit in the docs to
> prevent the confusion I had ? Or should it have been obvious to me and that
> I'm missing something here ?

Adding to Dave's comment, KSP has no concept of a "local vector".  That
is a DM concept.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From paulhuaizhang at gmail.com Mon Dec 1 15:08:19 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 16:08:19 -0500 Subject: [petsc-users] running error In-Reply-To: <87iohwgia5.fsf@jedbrown.org> References: <87lhmsgjjg.fsf@jedbrown.org> <87iohwgia5.fsf@jedbrown.org> Message-ID: Hi Jed, Does this mean I've passed the default test? Is the "open matplotlib " an issue? Thanks, Paul [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 PETSC_ARCH=linux-gnu-intel streams NPMAX=2 cd src/benchmarks/streams; /usr/bin/gmake --no-print-directory streams /home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/bin/mpicc -o MPIVersion.o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/include -I/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include -I/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include `pwd`/MPIVersion.c /home/hzh225/LIB_CFD/nP/petsc-3.5.2/src/benchmarks/streams/MPIVersion.c: In function ?main?: /home/hzh225/LIB_CFD/nP/petsc-3.5.2/src/benchmarks/streams/MPIVersion.c:99: warning: value computed is not used /home/hzh225/LIB_CFD/nP/petsc-3.5.2/src/benchmarks/streams/MPIVersion.c:103: warning: value computed is not used Number of MPI processes 1 Process 0 dlxlogin2-2.local Function Rate (MB/s) Copy: 10939.5817 Scale: 10774.4825 Add: 12114.9712 Triad: 11215.3413 Number of MPI processes 2 Process 0 dlxlogin2-2.local Process 1 dlxlogin2-2.local Function Rate (MB/s) Copy: 20189.9660 Scale: 19714.0058 Add: 22403.2262 Triad: 21046.0602 ------------------------------------------------ np speedup 1 1.0 2 1.88 Estimation of possible speedup of MPI programs based on Streams benchmark. It appears you have 1 node(s) Unable to open matplotlib to plot speedup Unable to open matplotlib to plot speedup Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Sun, Nov 30, 2014 at 10:28 PM, Jed Brown wrote: > Please keep the discussion on-list. > > paul zhang writes: > > > I set some breakpoints, which shows the code breaks down at the > > PetscInitialize(&argc,&argv,(char *)0,help); > > Run in a debugger and send a stack trace. This is most likely due to a > misconfigured environment/MPI. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Dec 1 15:18:19 2014 From: jed at jedbrown.org (Jed Brown) Date: Mon, 01 Dec 2014 14:18:19 -0700 Subject: [petsc-users] running error In-Reply-To: References: <87lhmsgjjg.fsf@jedbrown.org> <87iohwgia5.fsf@jedbrown.org> Message-ID: <874mtfdq78.fsf@jedbrown.org> paul zhang writes: > Hi Jed, > Does this mean I've passed the default test? It's an MPI test. Run this to see if PETSc solvers are running correctly: make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 PETSC_ARCH=linux-gnu-intel test > Is the "open matplotlib " an issue? No, it's just a Python library that would be used to create a nice figure if you had it installed. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From paulhuaizhang at gmail.com Mon Dec 1 15:20:47 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 16:20:47 -0500 Subject: [petsc-users] running error In-Reply-To: <874mtfdq78.fsf@jedbrown.org> References: <87lhmsgjjg.fsf@jedbrown.org> <87iohwgia5.fsf@jedbrown.org> <874mtfdq78.fsf@jedbrown.org> Message-ID: Sorry. I should reply it to the lists. [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 PETSC_ARCH=linux-gnu-intel test Running test examples to verify correct installation Using PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 and PETSC_ARCH=linux-gnu-intel C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI processes Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI process Completed test examples ========================================= Now to evaluate the computer systems you plan use - do: make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 PETSC_ARCH=linux-gnu-intel streams NPMAX= Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 4:18 PM, Jed Brown wrote: > paul zhang writes: > > > Hi Jed, > > Does this mean I've passed the default test? > > It's an MPI test. Run this to see if PETSc solvers are running correctly: > > make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 > PETSC_ARCH=linux-gnu-intel test > > > Is the "open matplotlib " an issue? > > No, it's just a Python library that would be used to create a nice > figure if you had it installed. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Mon Dec 1 15:33:23 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 16:33:23 -0500 Subject: [petsc-users] running error In-Reply-To: References: <87lhmsgjjg.fsf@jedbrown.org> <87iohwgia5.fsf@jedbrown.org> <874mtfdq78.fsf@jedbrown.org> Message-ID: Hi Jed, Now I see PETSc is compiled correctly. However, when I attempted to call "petscksp.h" in my own program (quite simple one), it failed for some reason. Attached you can see two cases. The first is just the test of MPI, which is fine. The second is one added PETSc, which has segment fault as it went to MPI_Comm_rank (MPI_COMM_WORLD, &rank); /* get current process id */ Can you shed some light? The MPI version is 1.8.3. Thanks, Paul Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 4:20 PM, paul zhang wrote: > > Sorry. I should reply it to the lists. 
> > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make > PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 PETSC_ARCH=linux-gnu-intel > test > > Running test examples to verify correct installation > Using PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 and > PETSC_ARCH=linux-gnu-intel > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI > process > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI > processes > Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 > MPI process > Completed test examples > ========================================= > Now to evaluate the computer systems you plan use - do: > make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 > PETSC_ARCH=linux-gnu-intel streams NPMAX= intend to use> > > > Huaibao (Paul) Zhang > *Gas Surface Interactions Lab* > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > *Office*: 216 Ralph G. Anderson Building > *Web*:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 4:18 PM, Jed Brown wrote: > >> paul zhang writes: >> >> > Hi Jed, >> > Does this mean I've passed the default test? >> >> It's an MPI test. Run this to see if PETSc solvers are running correctly: >> >> make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 >> PETSC_ARCH=linux-gnu-intel test >> >> > Is the "open matplotlib " an issue? >> >> No, it's just a Python library that would be used to create a nice >> figure if you had it installed. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OnlyMPI.tar Type: application/x-tar Size: 430080 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Petsc-with-MPI.tar Type: application/x-tar Size: 10240 bytes Desc: not available URL: From scanmail at anl.gov Mon Dec 1 15:33:47 2014 From: scanmail at anl.gov (Administrator) Date: Mon, 1 Dec 2014 15:33:47 -0600 Subject: [petsc-users] [MailServer Notification]Argonne Antivirus Quarantine Notification - DO NOT REPLY Message-ID: Do not reply to this message. The reply address is not monitored. The message below has been quarantined by the Argonne National Laboratory Antivirus filtering system. The message was filtered for having been detected of having malicious content or an attachment that matches the laboratory?s filtering criteria. From: paulhuaizhang at gmail.com; To: jed at jedbrown.org;petsc-users at mcs.anl.gov; Subject: Re: [petsc-users] running error Attachment: OnlyMPI.tar Date: 12/1/2014 3:33:33 PM If you have any questions regarding the Argonne's antivirus filtering product, or feel that the attachment was incorrectly identified, please contact the CIS Service Desk at help at anl.gov or x-9999 option 2. From scanmail at anl.gov Mon Dec 1 15:33:48 2014 From: scanmail at anl.gov (Administrator) Date: Mon, 1 Dec 2014 15:33:48 -0600 Subject: [petsc-users] [MailServer Notification]Argonne Antivirus Quarantine Notification - DO NOT REPLY Message-ID: <39F16135BDC047C4A60BB4A354E5C1EC@anl.gov> Do not reply to this message. The reply address is not monitored. The message below has been quarantined by the Argonne National Laboratory Antivirus filtering system. The message was filtered for having been detected of having malicious content or an attachment that matches the laboratory?s filtering criteria. 
From: paulhuaizhang at gmail.com; To: jed at jedbrown.org;petsc-users at mcs.anl.gov; Subject: Re: [petsc-users] running error Attachment: OnlyMPI.tar Date: 12/1/2014 3:33:33 PM If you have any questions regarding the Argonne's antivirus filtering product, or feel that the attachment was incorrectly identified, please contact the CIS Service Desk at help at anl.gov or x-9999 option 2. From scanmail at anl.gov Mon Dec 1 15:34:24 2014 From: scanmail at anl.gov (Administrator) Date: Mon, 1 Dec 2014 15:34:24 -0600 Subject: [petsc-users] [MailServer Notification]Argonne Antivirus Quarantine Notification - DO NOT REPLY Message-ID: Do not reply to this message. The reply address is not monitored. The message below has been quarantined by the Argonne National Laboratory Antivirus filtering system. The message was filtered for having been detected of having malicious content or an attachment that matches the laboratory?s filtering criteria. From: paulhuaizhang at gmail.com; To: jed at jedbrown.org;petsc-users at mcs.anl.gov; Subject: Re: [petsc-users] running error Attachment: OnlyMPI.tar Date: 12/1/2014 3:33:33 PM If you have any questions regarding the Argonne's antivirus filtering product, or feel that the attachment was incorrectly identified, please contact the CIS Service Desk at help at anl.gov or x-9999 option 2. From paulhuaizhang at gmail.com Mon Dec 1 15:39:05 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 16:39:05 -0500 Subject: [petsc-users] running error In-Reply-To: References: <87lhmsgjjg.fsf@jedbrown.org> <87iohwgia5.fsf@jedbrown.org> <874mtfdq78.fsf@jedbrown.org> Message-ID: I better send you original files. The compressed files triggered some warnings I guess. Attached is the MPI test been verified. Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 4:33 PM, paul zhang wrote: > Hi Jed, > > Now I see PETSc is compiled correctly. However, when I attempted to call > "petscksp.h" in my own program (quite simple one), it failed for some > reason. Attached you can see two cases. The first is just the test of MPI, > which is fine. The second is one added PETSc, which has segment fault as it > went to > > MPI_Comm_rank (MPI_COMM_WORLD, &rank); /* get current > process id */ > > Can you shed some light? The MPI version is 1.8.3. > > Thanks, > Paul > > > > > > > > > > > > > > > > > > > > > > > > > > Huaibao (Paul) Zhang > *Gas Surface Interactions Lab* > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > *Office*: 216 Ralph G. Anderson Building > *Web*:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 4:20 PM, paul zhang > wrote: > >> >> Sorry. I should reply it to the lists. 
>> >> [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make >> PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 PETSC_ARCH=linux-gnu-intel >> test >> >> Running test examples to verify correct installation >> Using PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 and >> PETSC_ARCH=linux-gnu-intel >> C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 >> MPI process >> C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 >> MPI processes >> Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 >> MPI process >> Completed test examples >> ========================================= >> Now to evaluate the computer systems you plan use - do: >> make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 >> PETSC_ARCH=linux-gnu-intel streams NPMAX=> intend to use> >> >> >> Huaibao (Paul) Zhang >> *Gas Surface Interactions Lab* >> Department of Mechanical Engineering >> University of Kentucky, >> Lexington, >> KY, 40506-0503 >> *Office*: 216 Ralph G. Anderson Building >> *Web*:gsil.engineering.uky.edu >> >> On Mon, Dec 1, 2014 at 4:18 PM, Jed Brown wrote: >> >>> paul zhang writes: >>> >>> > Hi Jed, >>> > Does this mean I've passed the default test? >>> >>> It's an MPI test. Run this to see if PETSc solvers are running >>> correctly: >>> >>> make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 >>> PETSC_ARCH=linux-gnu-intel test >>> >>> > Is the "open matplotlib " an issue? >>> >>> No, it's just a Python library that would be used to create a nice >>> figure if you had it installed. >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- set (CMAKE_CXX_COMPILER /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpiCC) set (CMAKE_CXX_FLAGS "-O3") cmake_minimum_required(VERSION 2.6) project(kats) set (kats_VERSION_MAJOR 2) set (kats_VERSION_MINOR 0) list (APPEND CMAKE_MODULE_PATH "${kats_SOURCE_DIR}/CMake") # Pass some CMake settings to source code through a header file configure_file ( "${PROJECT_SOURCE_DIR}/cmake_vars.h.in" "${PROJECT_BINARY_DIR}/cmake_vars.h" ) set (CMAKE_INSTALL_PREFIX ${PROJECT_SOURCE_DIR}/../) # add to the include search path include_directories("${PROJECT_SOURCE_DIR}") #set (EXTRA_LIBS parmetis metis cgns petsc)# imf m) #add the executable set (SOURCES main.cc cmake_vars.h ) add_executable(kats ${SOURCES}) #target_link_libraries (kats ${FCFD_LIBS} }) install (TARGETS kats RUNTIME DESTINATION bin) -------------- next part -------------- A non-text attachment was scrubbed... Name: main.cc Type: text/x-c++src Size: 656 bytes Desc: not available URL: From paulhuaizhang at gmail.com Mon Dec 1 15:40:03 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 16:40:03 -0500 Subject: [petsc-users] running error In-Reply-To: References: <87lhmsgjjg.fsf@jedbrown.org> <87iohwgia5.fsf@jedbrown.org> <874mtfdq78.fsf@jedbrown.org> Message-ID: And the MPI and PETSc test with segment fault. This is the final goal. Many thanks to you Jed. Paul Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 4:39 PM, paul zhang wrote: > I better send you original files. The compressed files triggered some > warnings I guess. > Attached is the MPI test been verified. 
> > Huaibao (Paul) Zhang > *Gas Surface Interactions Lab* > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > *Office*: 216 Ralph G. Anderson Building > *Web*:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 4:33 PM, paul zhang > wrote: > >> Hi Jed, >> >> Now I see PETSc is compiled correctly. However, when I attempted to call >> "petscksp.h" in my own program (quite simple one), it failed for some >> reason. Attached you can see two cases. The first is just the test of MPI, >> which is fine. The second is one added PETSc, which has segment fault as it >> went to >> >> MPI_Comm_rank (MPI_COMM_WORLD, &rank); /* get current >> process id */ >> >> Can you shed some light? The MPI version is 1.8.3. >> >> Thanks, >> Paul >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> Huaibao (Paul) Zhang >> *Gas Surface Interactions Lab* >> Department of Mechanical Engineering >> University of Kentucky, >> Lexington, >> KY, 40506-0503 >> *Office*: 216 Ralph G. Anderson Building >> *Web*:gsil.engineering.uky.edu >> >> On Mon, Dec 1, 2014 at 4:20 PM, paul zhang >> wrote: >> >>> >>> Sorry. I should reply it to the lists. >>> >>> [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make >>> PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 PETSC_ARCH=linux-gnu-intel >>> test >>> >>> Running test examples to verify correct installation >>> Using PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 and >>> PETSC_ARCH=linux-gnu-intel >>> C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 >>> MPI process >>> C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 >>> MPI processes >>> Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 >>> MPI process >>> Completed test examples >>> ========================================= >>> Now to evaluate the computer systems you plan use - do: >>> make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 >>> PETSC_ARCH=linux-gnu-intel streams NPMAX=>> intend to use> >>> >>> >>> Huaibao (Paul) Zhang >>> *Gas Surface Interactions Lab* >>> Department of Mechanical Engineering >>> University of Kentucky, >>> Lexington, >>> KY, 40506-0503 >>> *Office*: 216 Ralph G. Anderson Building >>> *Web*:gsil.engineering.uky.edu >>> >>> On Mon, Dec 1, 2014 at 4:18 PM, Jed Brown wrote: >>> >>>> paul zhang writes: >>>> >>>> > Hi Jed, >>>> > Does this mean I've passed the default test? >>>> >>>> It's an MPI test. Run this to see if PETSc solvers are running >>>> correctly: >>>> >>>> make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 >>>> PETSC_ARCH=linux-gnu-intel test >>>> >>>> > Is the "open matplotlib " an issue? >>>> >>>> No, it's just a Python library that would be used to create a nice >>>> figure if you had it installed. >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- set (CMAKE_CXX_COMPILER /home/hzh225/LIB_CFD/openmpi-1.8.3/bin/mpiCC) set (CMAKE_CXX_FLAGS "-O3") set (PETSC_INCLUDE_DIRS1 /home/hzh225/LIB_CFD/nP/petsc-3.5.2/include) set (PETSC_INCLUDE_DIRS2 /home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/include) set (PETSC_LIBRARY_DIRS /home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel/lib) set (VALGRIND_INCLUDE_DIR /share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/include) set (VALGRIND_LIBRARY_DIR /share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0/lib) cmake_minimum_required(VERSION 2.6) project(kats) set (kats_VERSION_MAJOR 2) set (kats_VERSION_MINOR 0) list (APPEND CMAKE_MODULE_PATH "${kats_SOURCE_DIR}/CMake") # Pass some CMake settings to source code through a header file configure_file ( "${PROJECT_SOURCE_DIR}/cmake_vars.h.in" "${PROJECT_BINARY_DIR}/cmake_vars.h" ) set (CMAKE_INSTALL_PREFIX ${PROJECT_SOURCE_DIR}/../) # add to the include search path include_directories("${PROJECT_SOURCE_DIR}") include_directories(${PETSC_INCLUDE_DIRS1}) include_directories(${PETSC_INCLUDE_DIRS2}) include_directories(${VALGRIND_INCLUDE_DIR}) link_directories(${PETSC_LIBRARY_DIRS}) link_directories(${VALGRIND_LIBRARY_DIR}) set (EXTRA_LIBS petsc) #add the executable set (SOURCES main.cc cmake_vars.h ) add_executable(kats ${SOURCES}) target_link_libraries (kats ${EXTRA_LIBS}) install (TARGETS kats RUNTIME DESTINATION bin) -------------- next part -------------- A non-text attachment was scrubbed... Name: main.cc Type: text/x-c++src Size: 1030 bytes Desc: not available URL: From bsmith at mcs.anl.gov Mon Dec 1 15:40:40 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 1 Dec 2014 15:40:40 -0600 Subject: [petsc-users] interpolate staggered grid values in parallel In-Reply-To: <87a937dr0z.fsf@jedbrown.org> References: <87a937dr0z.fsf@jedbrown.org> Message-ID: <74586C2A-F739-4F97-9E9B-D1C5FC637160@mcs.anl.gov> DMDAVecGetArray() works with either a DMDA local or global vector (of course giving ghost point access only with a local vector). To prevent this issue we could have DMDAVecLocalGetArray() and DMDAVecGlobalGetArray() and have each generate useful error messages if the appropriate vector is not passed in. Of course even with this extra level of handholding it won't stop someone from accessing v[j][i] etc with out of bounds array indices. Barry > On Dec 1, 2014, at 3:00 PM, Jed Brown wrote: > > Bishesh Khanal writes: >> Ok, I now see that KSPGetSolution(ksp, &x) gives a GLOBAL vector in x and >> not a local vector; hence the runtime memory error above since I can't >> access ghost values in indices such as [i-1] wth the global vec. >> Was there any reason not to make this information explicit in the docs to >> prevent the confusion I had ? Or should it have been obvious to me and that >> I'm missing something here ? > > Adding to Dave's comment, KSP has no concept of a "local vector". That > is a DM concept. From bsmith at mcs.anl.gov Mon Dec 1 15:47:27 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 1 Dec 2014 15:47:27 -0600 Subject: [petsc-users] running error In-Reply-To: References: <87lhmsgjjg.fsf@jedbrown.org> <87iohwgia5.fsf@jedbrown.org> <874mtfdq78.fsf@jedbrown.org> Message-ID: > On Dec 1, 2014, at 3:40 PM, paul zhang wrote: > > And the MPI and PETSc test with segment fault. What do you mean by this? 
Previously you sent Using PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 and PETSC_ARCH=linux-gnu-intel C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI processes indicating the PETSc test ran ok in parallel. Barry > > This is the final goal. Many thanks to you Jed. > Paul > > Huaibao (Paul) Zhang > Gas Surface Interactions Lab > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > Office: 216 Ralph G. Anderson Building > Web:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 4:39 PM, paul zhang wrote: > I better send you original files. The compressed files triggered some warnings I guess. > Attached is the MPI test been verified. > > Huaibao (Paul) Zhang > Gas Surface Interactions Lab > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > Office: 216 Ralph G. Anderson Building > Web:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 4:33 PM, paul zhang wrote: > Hi Jed, > > Now I see PETSc is compiled correctly. However, when I attempted to call "petscksp.h" in my own program (quite simple one), it failed for some reason. Attached you can see two cases. The first is just the test of MPI, which is fine. The second is one added PETSc, which has segment fault as it went to > > MPI_Comm_rank (MPI_COMM_WORLD, &rank); /* get current process id */ > > Can you shed some light? The MPI version is 1.8.3. > > Thanks, > Paul > > > > > > > > > > > > > > > > > > > > > > > > > > Huaibao (Paul) Zhang > Gas Surface Interactions Lab > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > Office: 216 Ralph G. Anderson Building > Web:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 4:20 PM, paul zhang wrote: > > Sorry. I should reply it to the lists. > > [hzh225 at dlxlogin2-2 petsc-3.5.2]$ make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 PETSC_ARCH=linux-gnu-intel test > > Running test examples to verify correct installation > Using PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 and PETSC_ARCH=linux-gnu-intel > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI processes > Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI process > Completed test examples > ========================================= > Now to evaluate the computer systems you plan use - do: > make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 PETSC_ARCH=linux-gnu-intel streams NPMAX= > > > Huaibao (Paul) Zhang > Gas Surface Interactions Lab > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > Office: 216 Ralph G. Anderson Building > Web:gsil.engineering.uky.edu > > On Mon, Dec 1, 2014 at 4:18 PM, Jed Brown wrote: > paul zhang writes: > > > Hi Jed, > > Does this mean I've passed the default test? > > It's an MPI test. Run this to see if PETSc solvers are running correctly: > > make PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 PETSC_ARCH=linux-gnu-intel test > > > Is the "open matplotlib " an issue? > > No, it's just a Python library that would be used to create a nice > figure if you had it installed. 
> > > > > From balay at mcs.anl.gov Mon Dec 1 15:48:51 2014 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 1 Dec 2014 15:48:51 -0600 Subject: [petsc-users] Question on the compiler flags in Makefile In-Reply-To: <547C1CCE.4060606@gmail.com> References: <547C1CCE.4060606@gmail.com> Message-ID: On Mon, 1 Dec 2014, Danyang Su wrote: > Hi All, > > I have a PETSc application that need additional compiling flags to build > Hybrid MPI-OpenMP parallel application on WestGrid Supercomputer (Canada) > system. > > The code and makefile work fine on my local machine for both Windows and > Linux, but when compiled on WestGrid Orcinus System for the OpenMP version and > Hybrid version, the OpenMP parallel part does not take effect while only MPI > parallel part takes effect. There is no error while compiling the code. I am > not sure if there is something wrong with the makefile or something other > setting. > > The compiler flags need to build Hybrid MPI-OpenMP version on WestGrid Orcinus > is "-shared-intel -openmp -O2 -xSSSE3 -axSSE4.2,SSE4.1 -ip" . For the > sequential version or MPI parallel version, these flags are not needed. > > Would anybody help to check if the compiler flag (red) in the makefile is > correct? > > Thanks, > > Danyang > > The makefile is shown below > > include ${PETSC_DIR}/conf/variables > include ${PETSC_DIR}/conf/rules > > #FC = ifort > SRC =./../ > > # Additional flags that may be required by the compiler ... > # This is required for the OpenMP parallel version and Hybrid MPI-OpenMP > parallel version > # not necessary for the sequential version and MPI version > DFCFLAG = -shared-intel -openmp -O2 -xSSSE3 -axSSE4.2,SSE4.1 -ip > > #Flag for WestGrid Orcinus System > #Load PETSc module before make > #module load /global/system/Modules/modulefiles/intel-2011/petsc > FPPFLAGS = -DLINUX -DRELEASE -DPETSC -DMPI -DOPENMP > > SOURCES = $(SRC)gas_advection/relpfsat_g.o\ > $(SRC)int_h_ovendry.o\ > $(SRC)dhconst.o\ > ... > > min3p: $(SOURCES) chkopts > -${FLINKER} $(FPPFLAGS) $(DFCFLAG) -o ex $(SOURCES) ${PETSC_LIB} ${DLIB} You are using DFCFLAG only during link time - and not at compile time. If you need to specify flags for both compile time and linktime - use FFLAGS i.e: FFLAGS = -shared-intel -openmp -O2 -xSSSE3 -axSSE4.2,SSE4.1 -ip [and remove DFCFLAG] Also you should specify FFLAGS,FPPFLAGS before the 'incude' directives in the makefile. Satish From jed at jedbrown.org Mon Dec 1 15:49:00 2014 From: jed at jedbrown.org (Jed Brown) Date: Mon, 01 Dec 2014 14:49:00 -0700 Subject: [petsc-users] running error In-Reply-To: References: <87lhmsgjjg.fsf@jedbrown.org> <87iohwgia5.fsf@jedbrown.org> <874mtfdq78.fsf@jedbrown.org> Message-ID: <87tx1fca7n.fsf@jedbrown.org> paul zhang writes: > Hi Jed, > > Now I see PETSc is compiled correctly. However, when I attempted to call > "petscksp.h" in my own program (quite simple one), it failed for some > reason. Attached you can see two cases. The first is just the test of MPI, > which is fine. The second is one added PETSc, which has segment fault as it > went to > > MPI_Comm_rank (MPI_COMM_WORLD, &rank); /* get current > process id */ I don't see anything obviously wrong, though cout is probably buffered so execution might have gotten further. As I said earlier, you should run in a debugger. I'm sorry, but we don't have time to debug your configuration over email---it takes away from time we would otherwise spend improving the library. 
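As a quick sanity check before debugging the full application, a minimal self-contained PETSc program along the following lines (a sketch only, not the poster's main.cc; the file name test.cc is made up) can be built with the same compiler and link line. PetscInitialize() calls MPI_Init() internally, so MPI calls such as MPI_Comm_rank() belong after it:

  /* test.cc -- hypothetical minimal check that the PETSc headers,
     libraries, and MPI are wired together correctly */
  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    PetscErrorCode ierr;
    PetscMPIInt    rank;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);
    ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_SELF, "Hello from rank %d\n", rank);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return 0;
  }

If a program like this runs under mpirun but the real application still crashes at MPI_Comm_rank, the problem is in the application's own build setup rather than in the PETSc installation.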
I recommend starting with a working example from the PETSc tree and incrementally change it to do what you want. Make sure your hand-rolled CMake is linking exactly the same as PETSc. Or use FindPETSc.cmake. > Can you shed some light? The MPI version is 1.8.3. "MPI" is a standard, "Open MPI" is the implementation you are using. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From paulhuaizhang at gmail.com Mon Dec 1 15:50:54 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 16:50:54 -0500 Subject: [petsc-users] running error In-Reply-To: <87tx1fca7n.fsf@jedbrown.org> References: <87lhmsgjjg.fsf@jedbrown.org> <87iohwgia5.fsf@jedbrown.org> <874mtfdq78.fsf@jedbrown.org> <87tx1fca7n.fsf@jedbrown.org> Message-ID: Thanks for explanation. Let me see if I can find some samples. Best, Paul Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 4:49 PM, Jed Brown wrote: > paul zhang writes: > > > Hi Jed, > > > > Now I see PETSc is compiled correctly. However, when I attempted to call > > "petscksp.h" in my own program (quite simple one), it failed for some > > reason. Attached you can see two cases. The first is just the test of > MPI, > > which is fine. The second is one added PETSc, which has segment fault as > it > > went to > > > > MPI_Comm_rank (MPI_COMM_WORLD, &rank); /* get current > > process id */ > > I don't see anything obviously wrong, though cout is probably buffered > so execution might have gotten further. As I said earlier, you should > run in a debugger. I'm sorry, but we don't have time to debug your > configuration over email---it takes away from time we would otherwise > spend improving the library. > > I recommend starting with a working example from the PETSc tree and > incrementally change it to do what you want. Make sure your hand-rolled > CMake is linking exactly the same as PETSc. Or use FindPETSc.cmake. > > > Can you shed some light? The MPI version is 1.8.3. > > "MPI" is a standard, "Open MPI" is the implementation you are using. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alpkalpalp at gmail.com Mon Dec 1 15:52:35 2014 From: alpkalpalp at gmail.com (Alp Kalpalp) Date: Mon, 1 Dec 2014 23:52:35 +0200 Subject: [petsc-users] PCBDDC right hand side vector Message-ID: Hi, I am trying to use PCBDDC for my substructured problem. I have each subdomains stiffneesses and load vectors and also mapping of numbering to global dofs. I copied ex59 and replaced proper functions for my problem. Everything seems working but, it does not produce correct results! I checked kspconvergedreaseon and it is 3 and iteration count is 0 !. I started debugging and know I am suspicious about the RHS vector. // assemble global matrix ierr = ComputeMatrix(dd,&K);CHKERRQ(ierr); // assemble BDDC rhs ierr = MatGetVecs(K,&F,NULL);CHKERRQ(ierr); ierr = VecZeroEntries(F);CHKERRQ(ierr); ierr = VecSetValues(F,Mapping.rows(),Mapping.data(),b.data(),ADD_VALUES);CHKERRQ(ierr); ierr = VecAssemblyBegin(F);CHKERRQ(ierr); ierr = VecAssemblyEnd(F);CHKERRQ(ierr); when I use matview and vecview, I see that K is matis and proc0 has 18x18 part of K, proc 1 has 12 part of K. 6 dofs are overlapping (dirichlet boundaries) so K is 24x24. 
However F seems that it has 12x1 at each processor. I guess sizes of K and F at each proc should be same and MatGetVecs should ensure same mapping for both K and F but it seems it does not ! Same problem is solved in other ksp solvers, I wonder how to prepare RHS for PCBDDC. thank you PS: I am at the latest master + stefano_zampini/pcbddc-primalfixes -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Mon Dec 1 20:53:49 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 21:53:49 -0500 Subject: [petsc-users] can't find PETSc package Message-ID: Hi All, I am attempting to configure my CMakeLists.txt file to compile a simple program using PETSc. I installed PETSc on a cluster, and cmake cannot find it. Can you shed some light on how I can do? CMAKE_MINIMUM_REQUIRED(VERSION 2.8) PROJECT(helloworld) SET(CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake-modules") FIND_PACKAGE(PETSc REQUIRED) INCLUDE_DIRECTORIES(${PETSC_INCLUDES}) ADD_DEFINITIONS(${PETSC_DEFINITIONS}) Thanks, Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Mon Dec 1 20:55:40 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 21:55:40 -0500 Subject: [petsc-users] can't find PETSc package In-Reply-To: References: Message-ID: Some error messages attached. -- Detecting CXX compiler ABI info - done CMake Error at /share/cluster/RHEL5.4/x86_64/apps/cmake/2.8.4/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:91 (MESSAGE): PETSc could not be found. Be sure to set PETSC_DIR and PETSC_ARCH. (missing: PETSC_INCLUDES PETSC_LIBRARIES PETSC_EXECUTABLE_RUNS) Call Stack (most recent call first): /share/cluster/RHEL5.4/x86_64/apps/cmake/2.8.4/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:252 (_FPHSA_FAILURE_MESSAGE) cmake-modules-master/FindPETSc.cmake:324 (find_package_handle_standard_args) CMakeLists.txt:7 (FIND_PACKAGE) -- Configuring incomplete, errors occurred! CMake Error at /share/cluster/RHEL5.4/x86_64/apps/cmake/2.8.4/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:91 (MESSAGE): PETSc could not be found. Be sure to set PETSC_DIR and PETSC_ARCH. (missing: PETSC_INCLUDES PETSC_LIBRARIES PETSC_EXECUTABLE_RUNS) Call Stack (most recent call first): /share/cluster/RHEL5.4/x86_64/apps/cmake/2.8.4/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:252 (_FPHSA_FAILURE_MESSAGE) cmake-modules-master/FindPETSc.cmake:324 (find_package_handle_standard_args) CMakeLists.txt:7 (FIND_PACKAGE) -- Configuring incomplete, errors occurred! Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 9:53 PM, paul zhang wrote: > Hi All, > > I am attempting to configure my CMakeLists.txt file to compile a simple > program using PETSc. I installed PETSc on a cluster, and cmake cannot find > it. Can you shed some light on how I can do? > > CMAKE_MINIMUM_REQUIRED(VERSION 2.8) > > PROJECT(helloworld) > > SET(CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake-modules") > > FIND_PACKAGE(PETSc REQUIRED) > > INCLUDE_DIRECTORIES(${PETSC_INCLUDES}) > ADD_DEFINITIONS(${PETSC_DEFINITIONS}) > > > Thanks, > Paul > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jed at jedbrown.org Mon Dec 1 21:01:36 2014 From: jed at jedbrown.org (Jed Brown) Date: Mon, 01 Dec 2014 20:01:36 -0700 Subject: [petsc-users] can't find PETSc package In-Reply-To: References: Message-ID: <87fvcydab3.fsf@jedbrown.org> paul zhang writes: > Some error messages attached. > > -- Detecting CXX compiler ABI info - done > CMake Error at > /share/cluster/RHEL5.4/x86_64/apps/cmake/2.8.4/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:91 > (MESSAGE): > PETSc could not be found. Be sure to set PETSC_DIR and PETSC_ARCH. Please heed the line above. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From paulhuaizhang at gmail.com Mon Dec 1 21:17:50 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 22:17:50 -0500 Subject: [petsc-users] can't find PETSc package In-Reply-To: <87fvcydab3.fsf@jedbrown.org> References: <87fvcydab3.fsf@jedbrown.org> Message-ID: Like this? still in trouble though... CMAKE_MINIMUM_REQUIRED(VERSION 2.8) set(PETSC_DIR /home/hzh225/LIB_CFD/nP/petsc-3.5.2) set(PETSC_ARCH /home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel) SET(CMAKE_MODULE_PATH "./cmake-modules-master") FIND_PACKAGE(PETSc REQUIRED) CMake Error at cmake-modules-master/FindPETSc.cmake:122 (message): The pair PETSC_DIR=/home/hzh225/LIB_CFD/nP/petsc-3.5.2 PETSC_ARCH=/home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel do not specify a valid PETSc installation Call Stack (most recent call first): CMakeLists.txt:15 (FIND_PACKAGE) Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 10:01 PM, Jed Brown wrote: > paul zhang writes: > > > Some error messages attached. > > > > -- Detecting CXX compiler ABI info - done > > CMake Error at > > > /share/cluster/RHEL5.4/x86_64/apps/cmake/2.8.4/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:91 > > (MESSAGE): > > PETSc could not be found. Be sure to set PETSC_DIR and PETSC_ARCH. > > Please heed the line above. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Dec 1 21:28:35 2014 From: jed at jedbrown.org (Jed Brown) Date: Mon, 01 Dec 2014 20:28:35 -0700 Subject: [petsc-users] can't find PETSc package In-Reply-To: References: <87fvcydab3.fsf@jedbrown.org> Message-ID: <87a936d924.fsf@jedbrown.org> paul zhang writes: > Like this? still in trouble though... > > CMAKE_MINIMUM_REQUIRED(VERSION 2.8) > > set(PETSC_DIR /home/hzh225/LIB_CFD/nP/petsc-3.5.2) If you're going to manually set local paths in your build files, then CMake is definitely a waste of time. These are normally environment variables. > set(PETSC_ARCH /home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel) This is not how PETSC_ARCH works. It should probably be "linux-gnu-intel". Please read what PETSc's build prints or read the documentation. http://www.mcs.anl.gov/petsc/documentation/installation.html -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From paulhuaizhang at gmail.com Mon Dec 1 21:41:16 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 22:41:16 -0500 Subject: [petsc-users] can't find PETSc package In-Reply-To: <87a936d924.fsf@jedbrown.org> References: <87fvcydab3.fsf@jedbrown.org> <87a936d924.fsf@jedbrown.org> Message-ID: Problem solved! This is a baby step for me. I actually have a code project which works with petsc-3.1-p8, and configured with Cmake. As I attempted to update PETSc, my old configuration flag never works. I have to start it over. Thanks a lot. ?Paul? On Mon, Dec 1, 2014 at 10:28 PM, Jed Brown wrote: > paul zhang writes: > > > Like this? still in trouble though... > > > > CMAKE_MINIMUM_REQUIRED(VERSION 2.8) > > > > set(PETSC_DIR /home/hzh225/LIB_CFD/nP/petsc-3.5.2) > > If you're going to manually set local paths in your build files, then > CMake is definitely a waste of time. These are normally environment > variables. > > > set(PETSC_ARCH /home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel) > > This is not how PETSC_ARCH works. It should probably be > "linux-gnu-intel". Please read what PETSc's build prints or read the > documentation. > > http://www.mcs.anl.gov/petsc/documentation/installation.html > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Mon Dec 1 21:44:28 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 1 Dec 2014 22:44:28 -0500 Subject: [petsc-users] can't find PETSc package In-Reply-To: References: <87fvcydab3.fsf@jedbrown.org> <87a936d924.fsf@jedbrown.org> Message-ID: Sounds like a warning, is it serious? -- Performing Test MULTIPASS_TEST_2_petsc_works_allincludes -- Performing Test MULTIPASS_TEST_2_petsc_works_allincludes - Success -- PETSc requires extra include paths, but links correctly with only interface libraries. This is an unexpected configuration (but it seems to work fine). Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Mon, Dec 1, 2014 at 10:41 PM, paul zhang wrote: > Problem solved! > This is a baby step for me. I actually have a code project which works > with petsc-3.1-p8, and configured with Cmake. As I attempted to update > PETSc, my old configuration flag never works. I have to start it over. > > Thanks a lot. > ?Paul? > > > > On Mon, Dec 1, 2014 at 10:28 PM, Jed Brown wrote: > >> paul zhang writes: >> >> > Like this? still in trouble though... >> > >> > CMAKE_MINIMUM_REQUIRED(VERSION 2.8) >> > >> > set(PETSC_DIR /home/hzh225/LIB_CFD/nP/petsc-3.5.2) >> >> If you're going to manually set local paths in your build files, then >> CMake is definitely a waste of time. These are normally environment >> variables. >> >> > set(PETSC_ARCH /home/hzh225/LIB_CFD/nP/petsc-3.5.2/linux-gnu-intel) >> >> This is not how PETSC_ARCH works. It should probably be >> "linux-gnu-intel". Please read what PETSc's build prints or read the >> documentation. >> >> http://www.mcs.anl.gov/petsc/documentation/installation.html >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jed at jedbrown.org Mon Dec 1 21:53:56 2014 From: jed at jedbrown.org (Jed Brown) Date: Mon, 01 Dec 2014 20:53:56 -0700 Subject: [petsc-users] can't find PETSc package In-Reply-To: References: <87fvcydab3.fsf@jedbrown.org> <87a936d924.fsf@jedbrown.org> Message-ID: <877fyad7vv.fsf@jedbrown.org> paul zhang writes: > Sounds like a warning, is it serious? No, as explained in the message. > -- Performing Test MULTIPASS_TEST_2_petsc_works_allincludes > -- Performing Test MULTIPASS_TEST_2_petsc_works_allincludes - Success > -- PETSc requires extra include paths, but links correctly with only > interface libraries. This is an unexpected configuration (but it seems to > work fine). -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From danyang.su at gmail.com Tue Dec 2 01:47:37 2014 From: danyang.su at gmail.com (Danyang Su) Date: Mon, 01 Dec 2014 23:47:37 -0800 Subject: [petsc-users] Question on the compiler flags in Makefile In-Reply-To: References: <547C1CCE.4060606@gmail.com> Message-ID: <547D6E99.3070301@gmail.com> On 14-12-01 01:48 PM, Satish Balay wrote: > On Mon, 1 Dec 2014, Danyang Su wrote: > >> Hi All, >> >> I have a PETSc application that need additional compiling flags to build >> Hybrid MPI-OpenMP parallel application on WestGrid Supercomputer (Canada) >> system. >> >> The code and makefile work fine on my local machine for both Windows and >> Linux, but when compiled on WestGrid Orcinus System for the OpenMP version and >> Hybrid version, the OpenMP parallel part does not take effect while only MPI >> parallel part takes effect. There is no error while compiling the code. I am >> not sure if there is something wrong with the makefile or something other >> setting. >> >> The compiler flags need to build Hybrid MPI-OpenMP version on WestGrid Orcinus >> is "-shared-intel -openmp -O2 -xSSSE3 -axSSE4.2,SSE4.1 -ip" . For the >> sequential version or MPI parallel version, these flags are not needed. >> >> Would anybody help to check if the compiler flag (red) in the makefile is >> correct? >> >> Thanks, >> >> Danyang >> >> The makefile is shown below >> >> include ${PETSC_DIR}/conf/variables >> include ${PETSC_DIR}/conf/rules >> >> #FC = ifort >> SRC =./../ >> >> # Additional flags that may be required by the compiler ... >> # This is required for the OpenMP parallel version and Hybrid MPI-OpenMP >> parallel version >> # not necessary for the sequential version and MPI version >> DFCFLAG = -shared-intel -openmp -O2 -xSSSE3 -axSSE4.2,SSE4.1 -ip >> >> #Flag for WestGrid Orcinus System >> #Load PETSc module before make >> #module load /global/system/Modules/modulefiles/intel-2011/petsc >> FPPFLAGS = -DLINUX -DRELEASE -DPETSC -DMPI -DOPENMP >> >> SOURCES = $(SRC)gas_advection/relpfsat_g.o\ >> $(SRC)int_h_ovendry.o\ >> $(SRC)dhconst.o\ >> ... >> >> min3p: $(SOURCES) chkopts >> -${FLINKER} $(FPPFLAGS) $(DFCFLAG) -o ex $(SOURCES) ${PETSC_LIB} ${DLIB} > You are using DFCFLAG only during link time - and not at compile time. > > If you need to specify flags for both compile time and linktime - use FFLAGS > > i.e: > > FFLAGS = -shared-intel -openmp -O2 -xSSSE3 -axSSE4.2,SSE4.1 -ip > [and remove DFCFLAG] > > Also you should specify FFLAGS,FPPFLAGS before the 'incude' directives in the makefile. > > Satish Hi Satish, Thanks very much. It now works. 
Danyang > > From siddhesh4godbole at gmail.com Tue Dec 2 02:20:26 2014 From: siddhesh4godbole at gmail.com (siddhesh godbole) Date: Tue, 2 Dec 2014 13:50:26 +0530 Subject: [petsc-users] all eigen values Message-ID: Hello , I am a novice in using PETSC. Right now I am solving a cantilever beam vibration problem with PETSc in which i am using SLEPc to compute the eigen values. But in order to diagonalize the K and M matrices i need a full eigen matrix and all eigen values. What i see from all the tutorial help related to SLEPc is that it computes only certain number of eigenvalues which fall in some range or order. Please clarify me if i am ignorant about full usage of SLEPc and kindly suggest some way out. sincerely *Siddhesh M Godbole* IIT Madras -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Tue Dec 2 02:37:02 2014 From: jroman at dsic.upv.es (Jose E. Roman) Date: Tue, 2 Dec 2014 09:37:02 +0100 Subject: [petsc-users] all eigen values In-Reply-To: References: Message-ID: <0B24A58C-F1C3-4538-BB78-7CE07DC73496@dsic.upv.es> El 02/12/2014, a las 09:20, siddhesh godbole escribi?: > Hello , > > I am a novice in using PETSC. Right now I am solving a cantilever beam vibration problem with PETSc in which i am using SLEPc to compute the eigen values. But in order to diagonalize the K and M matrices i need a full eigen matrix and all eigen values. > What i see from all the tutorial help related to SLEPc is that it computes only certain number of eigenvalues which fall in some range or order. > > Please clarify me if i am ignorant about full usage of SLEPc and kindly suggest some way out. > > sincerely > > Siddhesh M Godbole > IIT Madras SLEPc is intended for large-scale eigenproblems, where you compute only part of the eigenvalue/eigenvector pairs because computing/storing all of them would be unaffordable. For small-medium problems use LAPACK or ScaLAPACK. Jose From jed at jedbrown.org Tue Dec 2 11:52:16 2014 From: jed at jedbrown.org (Jed Brown) Date: Tue, 02 Dec 2014 10:52:16 -0700 Subject: [petsc-users] all eigen values In-Reply-To: <0B24A58C-F1C3-4538-BB78-7CE07DC73496@dsic.upv.es> References: <0B24A58C-F1C3-4538-BB78-7CE07DC73496@dsic.upv.es> Message-ID: <87vbluaqi7.fsf@jedbrown.org> "Jose E. Roman" writes: > SLEPc is intended for large-scale eigenproblems, where you compute > only part of the eigenvalue/eigenvector pairs because > computing/storing all of them would be unaffordable. For small-medium > problems use LAPACK or ScaLAPACK. Or Elemental http://libelemental.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From lb2653 at columbia.edu Wed Dec 3 10:20:46 2014 From: lb2653 at columbia.edu (Luc Berger-Vergiat) Date: Wed, 03 Dec 2014 11:20:46 -0500 Subject: [petsc-users] computation of Sp for fieldsplit schur preconditioner Message-ID: <547F385E.2040502@columbi.edu> Hi all, I would like to know if there would be an easy way of computing the Sp preconditioner for a fieldsplit schur complement using the following formula: Sp=A11-A10*diag(inv(A00))*A01 instead of Sp=A11-A10*inv(diag(A00))*A01 I think that it would be really beneficial in my case since the eigenvalues of both operators are very different for my problem (see ev_S_diaginv for the eigenvalues of the modified Sp and ev_S for the eigenvalues of the current Sp). 
I do understand that this requires to compute a more complex inverse while forming Sp, but I compute this inverse using a block jacobi lu due to the special properties of my matrix (see jac_nonlin_nested for the sparsity pattern of my matrix). So the change would actually be quite minimal no? I am also actually debating whether I should compute the exact S? -- Best, Luc -------------- next part -------------- A non-text attachment was scrubbed... Name: ev_S.pdf Type: application/pdf Size: 81811 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ev_Sp_diaginv.pdf Type: application/pdf Size: 81819 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: jac_nonlin_nested.pdf Type: application/pdf Size: 51970 bytes Desc: not available URL: From jed at jedbrown.org Wed Dec 3 16:40:06 2014 From: jed at jedbrown.org (Jed Brown) Date: Wed, 03 Dec 2014 15:40:06 -0700 Subject: [petsc-users] computation of Sp for fieldsplit schur preconditioner In-Reply-To: <547F385E.2040502@columbi.edu> References: <547F385E.2040502@columbi.edu> Message-ID: <87zjb473y1.fsf@jedbrown.org> Luc Berger-Vergiat writes: > Hi all, > I would like to know if there would be an easy way of computing the Sp > preconditioner for a fieldsplit schur complement using the following > formula: > Sp=A11-A10*diag(inv(A00))*A01 > instead of > Sp=A11-A10*inv(diag(A00))*A01 Not in general because inv(A00) is dense, thus not practically computable. You can use PCFieldSplitSetSchurPre to provide your own Sp. > I think that it would be really beneficial in my case since the > eigenvalues of both operators are very different for my problem (see > ev_S_diaginv for the eigenvalues of the modified Sp and ev_S for the > eigenvalues of the current Sp). > > I do understand that this requires to compute a more complex inverse > while forming Sp, but I compute this inverse using a block jacobi lu due > to the special properties of my matrix (see jac_nonlin_nested for the > sparsity pattern of my matrix). So the change would actually be quite > minimal no? I am also actually debating whether I should compute the > exact S? > > -- > Best, > Luc -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From alpkalpalp at gmail.com Wed Dec 3 17:07:12 2014 From: alpkalpalp at gmail.com (Alp Kalpalp) Date: Thu, 4 Dec 2014 01:07:12 +0200 Subject: [petsc-users] Unknown Mat type given: schurcomplement for master commit gf685883 Message-ID: H, Here is the message I get when PCBDDCCreateFETIDPOperators() is called; Thanks; [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Unknown type. Check for miss-spelling or missing package: http://www.mcs.anl.gov/petsc/documentation/ins tallation.html#external [0]PETSC ERROR: Unknown Mat type given: schurcomplement [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.5.2-1067-gf685883 GIT Date: 2014-12-02 13:26:28 -0600 [0]PETSC ERROR: Unknown Name on a BDDC_ICL_DEBUG named SEMIH-PC by semih Thu Dec 04 01:02:10 2014 [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-cxx="win32fe icl" --with-fc="win32fe ifort" --with-blas -lapack-dir=/cygdrive/c/MKL/lib/intel64 --with-hypre-include=/cygdrive/c/EXTRLIBS/include/HYPRE --with-hypre-lib=/cygdri ve/c/EXTRLIBS/lib/HYPRE.lib --with-scalapack-include=/cygdrive/c/MKL/include --with-scalapack-lib="[/cygdrive/c/MKL/lib/ intel64/mkl_scalapack_lp64_dll.lib,/cygdrive/c/MKL/lib/intel64/mkl_blacs_msmpi_lp64.lib]" --with-metis-include=/cygdrive /c/EXTRLIBS/include/parametis --with-metis-lib=/cygdrive/c/EXTRLIBS/lib/metis.lib --with-parmetis-include=/cygdrive/c/EX TRLIBS/include/parametis --with-parmetis-lib="[/cygdrive/c/EXTRLIBS/lib/parmetis.lib,/cygdrive/c/EXTRLIBS/lib/metis.lib] " --with-mpi-include=/cygdrive/c/MSMPI/Inc/ --with-mpi-lib="[/cygdrive/c/MSMPI/Lib/amd64/msmpi.lib,/cygdrive/c/MSMPI/Lib /amd64/msmpifec.lib]" --with-shared-libraries --useThreads=0 --with-pcbddc --PETSC_ARCH=BDDC_ICL_DEBUG --useThreads=0 [0]PETSC ERROR: #1 MatSetType() line 63 in C:\cywgin64\home\semih\PETSCM~1\src\mat\INTERF~1\matreg.c [0]PETSC ERROR: #2 MatCreateSchurComplement() line 212 in C:\cywgin64\home\semih\PETSCM~1\src\ksp\ksp\utils\schurm.c [0]PETSC ERROR: #3 PCBDDCSetupFETIDPPCContext() line 557 in C:\cywgin64\home\semih\PETSCM~1\src\ksp\pc\impls\bddc\bddcfe tidp.c [0]PETSC ERROR: #4 PCBDDCCreateFETIDPOperators_BDDC() line 1691 in C:\cywgin64\home\semih\PETSCM~1\src\ksp\pc\impls\bddc \bddc.c [0]PETSC ERROR: #5 PCBDDCCreateFETIDPOperators() line 1737 in C:\cywgin64\home\semih\PETSCM~1\src\ksp\pc\impls\bddc\bddc -------------- next part -------------- An HTML attachment was scrubbed... URL: From ansp6066 at colorado.edu Wed Dec 3 17:13:32 2014 From: ansp6066 at colorado.edu (Andrew Spott) Date: Wed, 03 Dec 2014 15:13:32 -0800 (PST) Subject: [petsc-users] Changing block size. Message-ID: <1417648412024.1801ac31@Nodemailer> What is the easiest way to change the block size of a matrix? I have a matrix that was saved with a block size of 1, and I would like to increase it upon load to a larger block size. ?Is there a simple way of doing this? Thanks -Andrew Spott -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjm2176 at columbia.edu Wed Dec 3 17:24:55 2014 From: cjm2176 at columbia.edu (Colin McAuliffe) Date: Wed, 3 Dec 2014 18:24:55 -0500 Subject: [petsc-users] computation of Sp for fieldsplit schur preconditioner In-Reply-To: <87zjb473y1.fsf@jedbrown.org> References: <547F385E.2040502@columbi.edu> <87zjb473y1.fsf@jedbrown.org> Message-ID: Hi Luc, it looks like it is possible to lump A00 before inverting it: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/MatSchurComplementGetAinvType.html#MatSchurComplementGetAinvType I wonder if this will produce a similar improvement for your problem while avoiding the need to implement diag(inv(A00)). On Wed, Dec 3, 2014 at 5:40 PM, Jed Brown wrote: > Luc Berger-Vergiat writes: > > > Hi all, > > I would like to know if there would be an easy way of computing the Sp > > preconditioner for a fieldsplit schur complement using the following > > formula: > > Sp=A11-A10*diag(inv(A00))*A01 > > instead of > > Sp=A11-A10*inv(diag(A00))*A01 > > Not in general because inv(A00) is dense, thus not practically > computable. You can use PCFieldSplitSetSchurPre to provide your own Sp. 
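For reference, a minimal sketch of the PCFieldSplitSetSchurPre route, assuming the application has already assembled its own approximation Sp_user of the Schur complement (for instance from A11 - A10*diag(inv(A00))*A01) and that ksp is the outer solver; Sp_user and ksp are placeholder names, not objects from this thread:

  /* Sketch: tell a fieldsplit preconditioner to use a user-provided
     matrix when preconditioning the Schur complement block. */
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
  ierr = PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_USER, Sp_user);CHKERRQ(ierr);

The corresponding run-time option for selecting this preconditioning choice is -pc_fieldsplit_schur_precondition user.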
> > > I think that it would be really beneficial in my case since the > > eigenvalues of both operators are very different for my problem (see > > ev_S_diaginv for the eigenvalues of the modified Sp and ev_S for the > > eigenvalues of the current Sp). > > > > I do understand that this requires to compute a more complex inverse > > while forming Sp, but I compute this inverse using a block jacobi lu due > > to the special properties of my matrix (see jac_nonlin_nested for the > > sparsity pattern of my matrix). So the change would actually be quite > > minimal no? I am also actually debating whether I should compute the > > exact S? > > > > -- > > Best, > > Luc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Dec 3 18:40:41 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 3 Dec 2014 18:40:41 -0600 Subject: [petsc-users] Changing block size. In-Reply-To: <1417648412024.1801ac31@Nodemailer> References: <1417648412024.1801ac31@Nodemailer> Message-ID: > On Dec 3, 2014, at 5:13 PM, Andrew Spott wrote: > > What is the easiest way to change the block size of a matrix? > > I have a matrix that was saved with a block size of 1, and I would like to increase it upon load to a larger block size. Is there a simple way of doing this? Yes, declare it as a BAIJ before calling MatLoad() for example Mat A; MatCreate(comm,&A); MatSetType(A,MATBAIJ); MatLoad(A,viewer); and pass the command line option -matload_block_size 2 In the development version of PETSc you can skip the command line option and instead use MatSetBlockSize(A,2); before the call to MatLoad(). There was a bug in the release version of PETSc that prevented this from previously working. Barry > > Thanks > > -Andrew Spott > From ansp6066 at colorado.edu Wed Dec 3 19:18:02 2014 From: ansp6066 at colorado.edu (Andrew Spott) Date: Wed, 03 Dec 2014 17:18:02 -0800 (PST) Subject: [petsc-users] Changing block size. In-Reply-To: References: Message-ID: <1417655882548.3cfab06a@Nodemailer> Awesome, thanks. ?I ran into that bug and thought I was doing something wrong. -Andrew ? Andrew On Wed, Dec 3, 2014 at 5:40 PM, Barry Smith wrote: >> On Dec 3, 2014, at 5:13 PM, Andrew Spott wrote: >> >> What is the easiest way to change the block size of a matrix? >> >> I have a matrix that was saved with a block size of 1, and I would like to increase it upon load to a larger block size. Is there a simple way of doing this? > Yes, declare it as a BAIJ before calling MatLoad() for example > Mat A; > MatCreate(comm,&A); > MatSetType(A,MATBAIJ); > MatLoad(A,viewer); > and pass the command line option -matload_block_size 2 > In the development version of PETSc you can skip the command line option and instead use > MatSetBlockSize(A,2); before the call to MatLoad(). There was a bug in the release version of PETSc that prevented this from previously working. > Barry >> >> Thanks >> >> -Andrew Spott >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Wed Dec 3 19:26:48 2014 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Thu, 04 Dec 2014 10:26:48 +0900 Subject: [petsc-users] Dumping KSP solve information to a file Message-ID: <547FB858.6060201@gmail.com> I'd like to be able to obtain and process post-solve information about KSP solves, after a PETSc application has run. Specifically, I would like the information produced by -ksp_converged_reason, as well as the final residual norm, to be available as a file for post-processing. 
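The direct route described next boils down to a few post-solve calls like the following (a sketch; ksp and the output file name are placeholders):

  /* Sketch: after KSPSolve(), record convergence information in a text file. */
  PetscViewer        viewer;
  KSPConvergedReason reason;
  PetscInt           its;
  PetscReal          rnorm;
  PetscErrorCode     ierr;

  ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
  ierr = KSPGetIterationNumber(ksp, &its);CHKERRQ(ierr);
  ierr = KSPGetResidualNorm(ksp, &rnorm);CHKERRQ(ierr);
  ierr = PetscViewerASCIIOpen(PETSC_COMM_WORLD, "ksp_summary.txt", &viewer);CHKERRQ(ierr);
  ierr = PetscViewerASCIIPrintf(viewer, "converged reason %d  iterations %D  residual norm %g\n",
                                (int)reason, its, (double)rnorm);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);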
A direct approach is to modify the application source, adding my own code invoking KSPGetIterationNumber(), KSPGetConvergedReason(), KSPGetResidualNorm(), etc. to the source, and producing the required output with an ASCII viewer. However, it'd be more convenient to be able to obtain the required information without modifying the application source. An inelegant and potentially fragile approach is to provide flags like -ksp_converged_reason, -ksp_monitor, etc. and write a script to extract the required information from what is dumped to stdout. My first thought on how to do things properly is to link in my own custom KSP viewer; should that be a viable approach? Is there a simpler/better method? -Patrick From bsmith at mcs.anl.gov Wed Dec 3 20:49:53 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 3 Dec 2014 20:49:53 -0600 Subject: [petsc-users] Dumping KSP solve information to a file In-Reply-To: <547FB858.6060201@gmail.com> References: <547FB858.6060201@gmail.com> Message-ID: > On Dec 3, 2014, at 7:26 PM, Patrick Sanan wrote: > > I'd like to be able to obtain and process post-solve information about KSP solves, after a PETSc application has run. Specifically, I would like the information produced by -ksp_converged_reason, as well as the final residual norm, to be available as a file for post-processing. > > A direct approach is to modify the application source, adding my own code invoking KSPGetIterationNumber(), KSPGetConvergedReason(), KSPGetResidualNorm(), etc. to the source, and producing the required output with an ASCII viewer. > > However, it'd be more convenient to be able to obtain the required information without modifying the application source. An inelegant and potentially fragile approach is to provide flags like -ksp_converged_reason, -ksp_monitor, etc. and write a script to extract the required information from what is dumped to stdout. > > My first thought on how to do things properly is to link in my own custom KSP viewer; should that be a viable approach? Is there a simpler/better method? We have ways to add your own monitor with KSPMonitorSet() but for viewer basically you would just write your own viewer function and then call it directly. Barry > > -Patrick > From bsmith at mcs.anl.gov Wed Dec 3 21:07:47 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 3 Dec 2014 21:07:47 -0600 Subject: [petsc-users] Dumping KSP solve information to a file In-Reply-To: References: <547FB858.6060201@gmail.com> Message-ID: <16B2D330-7BE3-4CBD-AE43-A6438BE33B0E@mcs.anl.gov> > On Dec 3, 2014, at 9:04 PM, Patrick Sanan wrote: > > > > > >> Il giorno Dec 4, 2014, alle ore 11:49 AM, Barry Smith ha scritto: >> >> >>> On Dec 3, 2014, at 7:26 PM, Patrick Sanan wrote: >>> >>> I'd like to be able to obtain and process post-solve information about KSP solves, after a PETSc application has run. Specifically, I would like the information produced by -ksp_converged_reason, as well as the final residual norm, to be available as a file for post-processing. >>> >>> A direct approach is to modify the application source, adding my own code invoking KSPGetIterationNumber(), KSPGetConvergedReason(), KSPGetResidualNorm(), etc. to the source, and producing the required output with an ASCII viewer. >>> >>> However, it'd be more convenient to be able to obtain the required information without modifying the application source. An inelegant and potentially fragile approach is to provide flags like -ksp_converged_reason, -ksp_monitor, etc. 
and write a script to extract the required information from what is dumped to stdout. >>> >>> My first thought on how to do things properly is to link in my own custom KSP viewer; should that be a viable approach? Is there a simpler/better method? >> >> We have ways to add your own monitor with KSPMonitorSet() but for viewer basically you would just write your own viewer function and then call it directly. >> > Ok- thats what Im essentially doing now, but (if I understand you correctly) this still involves modifying the source of the application I want to analyze - I was wondering how easy it would be to write my own viewer, link it in when I compile my application, You can do this part. > and then supply a command line option to invoke it at runtime. We don't have a way of registering new viewers at run time. Barry > >> Barry >> >>> >>> -Patrick From hus003 at ucsd.edu Wed Dec 3 23:54:06 2014 From: hus003 at ucsd.edu (Sun, Hui) Date: Thu, 4 Dec 2014 05:54:06 +0000 Subject: [petsc-users] how to use external solvers Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A55@XMAIL-MBX-BH1.AD.UCSD.EDU> Hello, I try to use external packages such as umfpack or superlu, which are listed here: http://www.mcs.anl.gov/research/projects/petsc/documentation/linearsolvertable.html I have PETSc compiled and installed with those external packages. Presumably I think to use umfpack would be simply to specify an option in the command line arguments. However, I had a hard time finding the correct option. How can I look it up? Thank you. Best, Hui -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Dec 4 00:10:40 2014 From: jed at jedbrown.org (Jed Brown) Date: Wed, 03 Dec 2014 23:10:40 -0700 Subject: [petsc-users] how to use external solvers In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A55@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A55@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: <87d2806j33.fsf@jedbrown.org> "Sun, Hui" writes: > Hello, > > I try to use external packages such as umfpack or superlu, which are listed here: http://www.mcs.anl.gov/research/projects/petsc/documentation/linearsolvertable.html > > I have PETSc compiled and installed with those external packages. Presumably I think to use umfpack would be simply to specify an option in the command line arguments. However, I had a hard time finding the correct option. How can I look it up? Thank you. -pc_type lu -pc_factor_mat_solver_package umfpack Linked from all the implementation pages on the table above: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFactorSetMatSolverPackage.html#PCFactorSetMatSolverPackage -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From hus003 at ucsd.edu Thu Dec 4 00:22:48 2014 From: hus003 at ucsd.edu (Sun, Hui) Date: Thu, 4 Dec 2014 06:22:48 +0000 Subject: [petsc-users] how to use external solvers In-Reply-To: <87d2806j33.fsf@jedbrown.org> References: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A55@XMAIL-MBX-BH1.AD.UCSD.EDU>, <87d2806j33.fsf@jedbrown.org> Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A67@XMAIL-MBX-BH1.AD.UCSD.EDU> Thank you Jed, I get it! 
Best, Hui ________________________________________ From: Jed Brown [jed at jedbrown.org] Sent: Wednesday, December 03, 2014 10:10 PM To: Sun, Hui; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] how to use external solvers "Sun, Hui" writes: > Hello, > > I try to use external packages such as umfpack or superlu, which are listed here: http://www.mcs.anl.gov/research/projects/petsc/documentation/linearsolvertable.html > > I have PETSc compiled and installed with those external packages. Presumably I think to use umfpack would be simply to specify an option in the command line arguments. However, I had a hard time finding the correct option. How can I look it up? Thank you. -pc_type lu -pc_factor_mat_solver_package umfpack Linked from all the implementation pages on the table above: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFactorSetMatSolverPackage.html#PCFactorSetMatSolverPackage From hus003 at ucsd.edu Thu Dec 4 01:09:44 2014 From: hus003 at ucsd.edu (Sun, Hui) Date: Thu, 4 Dec 2014 07:09:44 +0000 Subject: [petsc-users] Question about preallocation and getOwnership functions Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A73@XMAIL-MBX-BH1.AD.UCSD.EDU> Hello, The following lines of code give me trouble because PETSc gives me an error message saying the object A is in a wrong state, and Must call MatXXXSetPreallocation() or MatSetUp() on argument 1 "mat" before MatGetOwnershipRange(). Here is part of the code: ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,2*nx*ny,2*nx*ny);CHKERRQ(ierr); ierr = MatSetType(A,MATAIJ);CHKERRQ(ierr); ierr = MatMPIAIJSetPreallocation(A,9,NULL,9,NULL);CHKERRQ(ierr); ierr = MatGetOwnershipRange(A, &start, &end);CHKERRQ(ierr); I think I have called MatMPIAIJSetPreallocation, so what might be causing the problem here? Best, Hui -------------- next part -------------- An HTML attachment was scrubbed... URL: From hus003 at ucsd.edu Thu Dec 4 01:46:51 2014 From: hus003 at ucsd.edu (Sun, Hui) Date: Thu, 4 Dec 2014 07:46:51 +0000 Subject: [petsc-users] Question about preallocation and getOwnership functions In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A73@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A73@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A81@XMAIL-MBX-BH1.AD.UCSD.EDU> After reading some documentation, I figured it out myself. MATAIJ can be MATSEQAIJ or MATMPIAIJ, depending on the number of processors involved. Thus, I need to include the command MatSeqAIJSetPreallocation as well, to cover both cases. Best, Hui ________________________________ From: Sun, Hui Sent: Wednesday, December 03, 2014 11:09 PM To: petsc-users at mcs.anl.gov Subject: Question about preallocation and getOwnership functions Hello, The following lines of code give me trouble because PETSc gives me an error message saying the object A is in a wrong state, and Must call MatXXXSetPreallocation() or MatSetUp() on argument 1 "mat" before MatGetOwnershipRange(). Here is part of the code: ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,2*nx*ny,2*nx*ny);CHKERRQ(ierr); ierr = MatSetType(A,MATAIJ);CHKERRQ(ierr); ierr = MatMPIAIJSetPreallocation(A,9,NULL,9,NULL);CHKERRQ(ierr); ierr = MatGetOwnershipRange(A, &start, &end);CHKERRQ(ierr); I think I have called MatMPIAIJSetPreallocation, so what might be causing the problem here? 
Best, Hui -------------- next part -------------- An HTML attachment was scrubbed... URL: From aurelia.cubaramos at epfl.ch Thu Dec 4 07:14:40 2014 From: aurelia.cubaramos at epfl.ch (Aurelia Cuba Ramos) Date: Thu, 04 Dec 2014 14:14:40 +0100 Subject: [petsc-users] Dirichlet boundary conditions for MATMPISBAIJ Message-ID: <54805E40.9040208@epfl.ch> Hi all, I recently started using PETSc and I am working with the MATMPISBAIJ matrix format. I am trying to apply Dirichlet boundary conditions by calling MatZeroRowsColumns() but this function seems to be not available for symmetric block matrices in parallel. I saw on the FAQ website that another possible way is to use MatZeroRows() and -ksp_type preonly -pc_type redistribute. But when I try calling MatZeroRows() I get again a PetscErrorCode 56, so it doesn't seem to be supported neither. I am currently using PETSc 3.4. Is MatZeroRowsColumns() avaibale for MATMPISBAIJ in PETSc 3.5 and if not, what is the fastest way to apply Dirichlet BC's for this matrix format? Many thanks, Aurelia From jed at jedbrown.org Thu Dec 4 09:35:21 2014 From: jed at jedbrown.org (Jed Brown) Date: Thu, 04 Dec 2014 08:35:21 -0700 Subject: [petsc-users] Question about preallocation and getOwnership functions In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A81@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A73@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010B5A81@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: <874mtb77ie.fsf@jedbrown.org> "Sun, Hui" writes: > After reading some documentation, I figured it out myself. MATAIJ can > be MATSEQAIJ or MATMPIAIJ, depending on the number of processors > involved. Thus, I need to include the command > MatSeqAIJSetPreallocation as well, to cover both cases. Or call MatXAIJSetPreallocation and be done with it for all *AIJ formats. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From may at bu.edu Thu Dec 4 09:43:10 2014 From: may at bu.edu (Young, Matthew, Adam) Date: Thu, 4 Dec 2014 15:43:10 +0000 Subject: [petsc-users] Question about preallocation and getOwnership functions In-Reply-To: <874mtb77ie.fsf@jedbrown.org> References: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A73@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010B5A81@XMAIL-MBX-BH1.AD.UCSD.EDU>, <874mtb77ie.fsf@jedbrown.org> Message-ID: <17A35C213185A84BB8ED54C88FBFD7128FC04381@IST-EX10MBX-4.ad.bu.edu> Is there an advantage to using MatXAIJSetPreallocation (as Jed suggested) or either of the SEQ/MPI preallocation routines (as Hui originally considered) instead of calling MatSetUp to cover all cases? ------------------------------------------- Matthew Young Graduate Student Boston University Dept. of Astronomy ------------------------------------------- ________________________________________ From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Jed Brown [jed at jedbrown.org] Sent: Thursday, December 04, 2014 10:35 AM To: Sun, Hui; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question about preallocation and getOwnership functions "Sun, Hui" writes: > After reading some documentation, I figured it out myself. MATAIJ can > be MATSEQAIJ or MATMPIAIJ, depending on the number of processors > involved. Thus, I need to include the command > MatSeqAIJSetPreallocation as well, to cover both cases. 
Or call MatXAIJSetPreallocation and be done with it for all *AIJ formats. From jed at jedbrown.org Thu Dec 4 09:50:37 2014 From: jed at jedbrown.org (Jed Brown) Date: Thu, 04 Dec 2014 08:50:37 -0700 Subject: [petsc-users] Question about preallocation and getOwnership functions In-Reply-To: <17A35C213185A84BB8ED54C88FBFD7128FC04381@IST-EX10MBX-4.ad.bu.edu> References: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A73@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010B5A81@XMAIL-MBX-BH1.AD.UCSD.EDU> <874mtb77ie.fsf@jedbrown.org> <17A35C213185A84BB8ED54C88FBFD7128FC04381@IST-EX10MBX-4.ad.bu.edu> Message-ID: <871tof76sy.fsf@jedbrown.org> "Young, Matthew, Adam" writes: > Is there an advantage to using MatXAIJSetPreallocation (as Jed > suggested) or either of the SEQ/MPI preallocation routines (as Hui > originally considered) instead of calling MatSetUp to cover all cases? If you don't preallocate, MatSetUp guesses: http://www.mcs.anl.gov/petsc/documentation/faq.html#efficient-assembly -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Thu Dec 4 13:35:00 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 4 Dec 2014 13:35:00 -0600 Subject: [petsc-users] Question about preallocation and getOwnership functions In-Reply-To: <17A35C213185A84BB8ED54C88FBFD7128FC04381@IST-EX10MBX-4.ad.bu.edu> References: <7501CC2B7BBCC44A92ECEEC316170ECB010B5A73@XMAIL-MBX-BH1.AD.UCSD.EDU> <7501CC2B7BBCC44A92ECEEC316170ECB010B5A81@XMAIL-MBX-BH1.AD.UCSD.EDU> <, > <874mtb77ie.fsf@jedbrown.org> <17A35C213185A84BB8ED54C88FBFD7128FC04381@IST-EX10MBX-4.ad.bu.edu> Message-ID: > On Dec 4, 2014, at 9:43 AM, Young, Matthew, Adam wrote: > > Is there an advantage to using MatXAIJSetPreallocation (as Jed suggested) or either of the SEQ/MPI preallocation routines (as Hui originally considered) instead of calling MatSetUp to cover all cases? Yes, from the manual page: MatSetUp - Sets up the internal matrix data structures for the later use. Collective on Mat Input Parameters: . A - the Mat context Notes: If the user has not set preallocation for this matrix then a default preallocation that is likely to be inefficient is used. > ------------------------------------------- > Matthew Young > Graduate Student > Boston University Dept. of Astronomy > ------------------------------------------- > > > ________________________________________ > From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Jed Brown [jed at jedbrown.org] > Sent: Thursday, December 04, 2014 10:35 AM > To: Sun, Hui; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] Question about preallocation and getOwnership functions > > "Sun, Hui" writes: > >> After reading some documentation, I figured it out myself. MATAIJ can >> be MATSEQAIJ or MATMPIAIJ, depending on the number of processors >> involved. Thus, I need to include the command >> MatSeqAIJSetPreallocation as well, to cover both cases. > > Or call MatXAIJSetPreallocation and be done with it for all *AIJ > formats. 
From knepley at gmail.com Thu Dec 4 15:16:24 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 4 Dec 2014 15:16:24 -0600 Subject: [petsc-users] Dirichlet boundary conditions for MATMPISBAIJ In-Reply-To: <54805E40.9040208@epfl.ch> References: <54805E40.9040208@epfl.ch> Message-ID: On Thu, Dec 4, 2014 at 7:14 AM, Aurelia Cuba Ramos < aurelia.cubaramos at epfl.ch> wrote: > Hi all, > > I recently started using PETSc and I am working with the MATMPISBAIJ > matrix format. I am trying to apply Dirichlet boundary conditions by > calling MatZeroRowsColumns() but this function seems to be not available > for symmetric block matrices in parallel. I saw on the FAQ website that > another possible way is to use MatZeroRows() and -ksp_type preonly > -pc_type redistribute. But when I try calling MatZeroRows() I get again > a PetscErrorCode 56, so it doesn't seem to be supported neither. I am > currently using PETSc 3.4. Is MatZeroRowsColumns() avaibale for > MATMPISBAIJ in PETSc 3.5 and if not, what is the fastest way to apply > Dirichlet BC's for this matrix format? > 1) SBAIJ is just an optimization to save memory. I would get everything running with BAIJ, and then if you run out of memory switch to SBAIJ 2) We have only coded that up for the sequential version. I will add it to our list. Thanks, Matt > Many thanks, > > Aurelia > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hus003 at ucsd.edu Thu Dec 4 18:49:17 2014 From: hus003 at ucsd.edu (Sun, Hui) Date: Fri, 5 Dec 2014 00:49:17 +0000 Subject: [petsc-users] Parallelization efficiency diagnose Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010B5B31@XMAIL-MBX-BH1.AD.UCSD.EDU> Hello, I try to test the efficiency of parallelization of my current installation of petsc by running the following command: make PETSC_DIR=/Users/Paul/Documents/software/petsc-install PETSC_ARCH= streams NPMAX=4 I get the following results: Number of MPI processes 1 Process 0 Huis-MacBook-Pro.local Function Rate (MB/s) Copy: 13354.1380 Scale: 13012.7268 Add: 14725.4078 Triad: 14822.7110 Number of MPI processes 2 Process 0 Huis-MacBook-Pro.local Process 1 Huis-MacBook-Pro.local Function Rate (MB/s) Copy: 14135.6610 Scale: 14071.0462 Add: 15598.0208 Triad: 15717.5890 Number of MPI processes 3 Process 0 Huis-MacBook-Pro.local Process 1 Huis-MacBook-Pro.local Process 2 Huis-MacBook-Pro.local Function Rate (MB/s) Copy: 13755.8241 Scale: 13704.7662 Add: 15312.1487 Triad: 15319.4803 Number of MPI processes 4 Process 0 Huis-MacBook-Pro.local Process 1 Huis-MacBook-Pro.local Process 2 Huis-MacBook-Pro.local Process 3 Huis-MacBook-Pro.local Function Rate (MB/s) Copy: 13769.1621 Scale: 13708.0972 Add: 15103.1783 Triad: 15133.8786 ------------------------------------------------ np speedup 1 1.0 2 1.06 3 1.03 4 1.02 Estimation of possible speedup of MPI programs based on Streams benchmark. It appears you have 1 node(s) Does this result basically says that my MacBook only have one node? However, I know my computer has 4 cores, as I type sysctl -n hw.ncpu in bash, it gives me 4. What does it really mean? 
By the way, here is my configuration for this installation: ./configure --download-fblaslapack --download-suitesparse --download-superlu_dist --download-parmetis --download-metis --download-hypre --prefix=/Users/Paul/Documents/software/petsc-install --with-mpi-dir=/Users/Paul/Documents/software/mpich-3.1.3-install Is there anything in this configuration command that causes trouble? Best, Hui -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Dec 4 19:02:42 2014 From: jed at jedbrown.org (Jed Brown) Date: Thu, 04 Dec 2014 18:02:42 -0700 Subject: [petsc-users] Parallelization efficiency diagnose In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010B5B31@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010B5B31@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: <87a93252od.fsf@jedbrown.org> "Sun, Hui" writes: > Estimation of possible speedup of MPI programs based on Streams benchmark. > > It appears you have 1 node(s) > > > Does this result basically says that my MacBook only have one node? > However, I know my computer has 4 cores, as I type sysctl -n hw.ncpu > in bash, it gives me 4. What does it really mean? By the way, here is > my configuration for this installation: Use "lstopo" from the hwloc package to see how your cores are laid out. You probably have two physical cores, each with hypethreading (4 logical cores), sharing the same memory bus. One thread saturates the memory bus, so you get no speedup by using more cores. http://www.mcs.anl.gov/petsc/documentation/faq.html#computers -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From hus003 at ucsd.edu Thu Dec 4 19:15:38 2014 From: hus003 at ucsd.edu (Sun, Hui) Date: Fri, 5 Dec 2014 01:15:38 +0000 Subject: [petsc-users] Parallelization efficiency diagnose In-Reply-To: <87a93252od.fsf@jedbrown.org> References: <7501CC2B7BBCC44A92ECEEC316170ECB010B5B31@XMAIL-MBX-BH1.AD.UCSD.EDU>, <87a93252od.fsf@jedbrown.org> Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010B5B42@XMAIL-MBX-BH1.AD.UCSD.EDU> Thank you Jed. I don't know how to use "lstopo" from the hwloc, but I looked up the cores and memory from the hardware overview from my MAC, it has Number of Processors: 1 Total Number of Cores: 2 Besides, as you said, there are 4 logical cores due to hyperthreading. However, I'm still expecting to get speed doubled because I have 2 real cores. So where is the restriction then? Best, Hui ________________________________________ From: Jed Brown [jed at jedbrown.org] Sent: Thursday, December 04, 2014 5:02 PM To: Sun, Hui; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Parallelization efficiency diagnose "Sun, Hui" writes: > Estimation of possible speedup of MPI programs based on Streams benchmark. > > It appears you have 1 node(s) > > > Does this result basically says that my MacBook only have one node? > However, I know my computer has 4 cores, as I type sysctl -n hw.ncpu > in bash, it gives me 4. What does it really mean? By the way, here is > my configuration for this installation: Use "lstopo" from the hwloc package to see how your cores are laid out. You probably have two physical cores, each with hypethreading (4 logical cores), sharing the same memory bus. One thread saturates the memory bus, so you get no speedup by using more cores. 
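(For what it is worth, the same information can be read off from the command line; lstopo comes from hwloc and must be installed, and the sysctl keys below are OS X specific:)

$ lstopo                      # map of sockets, cores, and shared caches
$ sysctl -n hw.physicalcpu    # physical cores
$ sysctl -n hw.logicalcpu     # logical (hyperthreaded) cores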
http://www.mcs.anl.gov/petsc/documentation/faq.html#computers From jed at jedbrown.org Thu Dec 4 19:51:28 2014 From: jed at jedbrown.org (Jed Brown) Date: Thu, 04 Dec 2014 18:51:28 -0700 Subject: [petsc-users] Parallelization efficiency diagnose In-Reply-To: <7501CC2B7BBCC44A92ECEEC316170ECB010B5B42@XMAIL-MBX-BH1.AD.UCSD.EDU> References: <7501CC2B7BBCC44A92ECEEC316170ECB010B5B31@XMAIL-MBX-BH1.AD.UCSD.EDU> <87a93252od.fsf@jedbrown.org> <7501CC2B7BBCC44A92ECEEC316170ECB010B5B42@XMAIL-MBX-BH1.AD.UCSD.EDU> Message-ID: <877fy650f3.fsf@jedbrown.org> "Sun, Hui" writes: > Thank you Jed. I don't know how to use "lstopo" from the hwloc, A search engine will solve that problem. > but I looked up the cores and memory from the hardware overview from > my MAC, it has > > Number of Processors: 1 > Total Number of Cores: 2 > > Besides, as you said, there are 4 logical cores due to hyperthreading. However, I'm still expecting to get speed doubled because I have 2 real cores. So where is the restriction then? Memory bandwidth, as stated in my email and the page I linked. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Thu Dec 4 21:37:31 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 4 Dec 2014 21:37:31 -0600 Subject: [petsc-users] Parallelization efficiency diagnose In-Reply-To: <877fy650f3.fsf@jedbrown.org> References: <7501CC2B7BBCC44A92ECEEC316170ECB010B5B31@XMAIL-MBX-BH1.AD.UCSD.EDU> <87a93252od.fsf@jedbrown.org> <7501CC2B7BBCC44A92ECEEC316170ECB010B5B42@XMAIL-MBX-BH1.AD.UCSD.EDU> <877fy650f3.fsf@jedbrown.org> Message-ID: I have a different MacBook Pro generation and get $ make streams NPMAX=4 cd src/benchmarks/streams; /usr/bin/make --no-print-directory streams /Users/barrysmith/Src/PETSc/arch-mpich/bin/mpicc -o MPIVersion.o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 -I/Users/barrysmith/Src/PETSc/include -I/Users/barrysmith/Src/PETSc/arch-mpich/include -I/opt/X11/include -I/opt/local/include `pwd`/MPIVersion.c Number of MPI processes 1 Processor names Barrys-MacBook-Pro-3.local Triad: 10417.1979 Rate (MB/s) Number of MPI processes 2 Processor names Barrys-MacBook-Pro-3.local Barrys-MacBook-Pro-3.local Triad: 14673.8802 Rate (MB/s) Number of MPI processes 3 Processor names Barrys-MacBook-Pro-3.local Barrys-MacBook-Pro-3.local Barrys-MacBook-Pro-3.local Triad: 14998.7656 Rate (MB/s) Number of MPI processes 4 Processor names Barrys-MacBook-Pro-3.local Barrys-MacBook-Pro-3.local Barrys-MacBook-Pro-3.local Barrys-MacBook-Pro-3.local Triad: 15001.2941 Rate (MB/s) ------------------------------------------------ np speedup 1 1.0 2 1.41 3 1.44 4 1.44 Is mine a better machine since I get a speedup of 1.44 while you get no speed up? No, the total memory bandwidth each of our machines can sustain is about Triad: 15001.2941 Rate (MB/s). My machine, which I am guessing is a little older than yours cannot utilize all that memory bandwidth with a single core. Triad: 10417.1979 Rate (MB/s) On your machine a single core can utilize all of the memory bandwidth, hence when you use the second core you get no speedup. I get speed up because the second core utilizes the extra memory bandwidth the first core did not utilize. On the other hand your machine will run PETSc programs a good bit faster on one core than mine. 
So parallelism will not give you any real benefit on your laptop, on mine it does, but in the end code will run slightly faster on your machine so your machine is better than mine. Barry > On Dec 4, 2014, at 7:51 PM, Jed Brown wrote: > > "Sun, Hui" writes: > >> Thank you Jed. I don't know how to use "lstopo" from the hwloc, > > A search engine will solve that problem. > >> but I looked up the cores and memory from the hardware overview from >> my MAC, it has >> >> Number of Processors: 1 >> Total Number of Cores: 2 >> >> Besides, as you said, there are 4 logical cores due to hyperthreading. However, I'm still expecting to get speed doubled because I have 2 real cores. So where is the restriction then? > > Memory bandwidth, as stated in my email and the page I linked. From hus003 at ucsd.edu Fri Dec 5 00:45:06 2014 From: hus003 at ucsd.edu (Sun, Hui) Date: Fri, 5 Dec 2014 06:45:06 +0000 Subject: [petsc-users] Parallelization efficiency diagnose In-Reply-To: References: <7501CC2B7BBCC44A92ECEEC316170ECB010B5B31@XMAIL-MBX-BH1.AD.UCSD.EDU> <87a93252od.fsf@jedbrown.org> <7501CC2B7BBCC44A92ECEEC316170ECB010B5B42@XMAIL-MBX-BH1.AD.UCSD.EDU> <877fy650f3.fsf@jedbrown.org>, Message-ID: <7501CC2B7BBCC44A92ECEEC316170ECB010B5B6B@XMAIL-MBX-BH1.AD.UCSD.EDU> Thank you Barry and Jed for your explanations. I think I understand it a little bit better now. Hui ________________________________________ From: Barry Smith [bsmith at mcs.anl.gov] Sent: Thursday, December 04, 2014 7:37 PM To: Jed Brown Cc: Sun, Hui; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Parallelization efficiency diagnose I have a different MacBook Pro generation and get $ make streams NPMAX=4 cd src/benchmarks/streams; /usr/bin/make --no-print-directory streams /Users/barrysmith/Src/PETSc/arch-mpich/bin/mpicc -o MPIVersion.o -c -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 -I/Users/barrysmith/Src/PETSc/include -I/Users/barrysmith/Src/PETSc/arch-mpich/include -I/opt/X11/include -I/opt/local/include `pwd`/MPIVersion.c Number of MPI processes 1 Processor names Barrys-MacBook-Pro-3.local Triad: 10417.1979 Rate (MB/s) Number of MPI processes 2 Processor names Barrys-MacBook-Pro-3.local Barrys-MacBook-Pro-3.local Triad: 14673.8802 Rate (MB/s) Number of MPI processes 3 Processor names Barrys-MacBook-Pro-3.local Barrys-MacBook-Pro-3.local Barrys-MacBook-Pro-3.local Triad: 14998.7656 Rate (MB/s) Number of MPI processes 4 Processor names Barrys-MacBook-Pro-3.local Barrys-MacBook-Pro-3.local Barrys-MacBook-Pro-3.local Barrys-MacBook-Pro-3.local Triad: 15001.2941 Rate (MB/s) ------------------------------------------------ np speedup 1 1.0 2 1.41 3 1.44 4 1.44 Is mine a better machine since I get a speedup of 1.44 while you get no speed up? No, the total memory bandwidth each of our machines can sustain is about Triad: 15001.2941 Rate (MB/s). My machine, which I am guessing is a little older than yours cannot utilize all that memory bandwidth with a single core. Triad: 10417.1979 Rate (MB/s) On your machine a single core can utilize all of the memory bandwidth, hence when you use the second core you get no speedup. I get speed up because the second core utilizes the extra memory bandwidth the first core did not utilize. On the other hand your machine will run PETSc programs a good bit faster on one core than mine. So parallelism will not give you any real benefit on your laptop, on mine it does, but in the end code will run slightly faster on your machine so your machine is better than mine. 
Barry > On Dec 4, 2014, at 7:51 PM, Jed Brown wrote: > > "Sun, Hui" writes: > >> Thank you Jed. I don't know how to use "lstopo" from the hwloc, > > A search engine will solve that problem. > >> but I looked up the cores and memory from the hardware overview from >> my MAC, it has >> >> Number of Processors: 1 >> Total Number of Cores: 2 >> >> Besides, as you said, there are 4 logical cores due to hyperthreading. However, I'm still expecting to get speed doubled because I have 2 real cores. So where is the restriction then? > > Memory bandwidth, as stated in my email and the page I linked. From aurelia.cubaramos at epfl.ch Fri Dec 5 04:15:57 2014 From: aurelia.cubaramos at epfl.ch (Aurelia Cuba Ramos) Date: Fri, 05 Dec 2014 11:15:57 +0100 Subject: [petsc-users] Dirichlet boundary conditions for MATMPISBAIJ In-Reply-To: References: <54805E40.9040208@epfl.ch> Message-ID: <548185DD.9040602@epfl.ch> Thank you Matt, I changed the format to BAIJ as you recommended. It seems to work fine now. On 04.12.2014 22:16, Matthew Knepley wrote: > On Thu, Dec 4, 2014 at 7:14 AM, Aurelia Cuba Ramos > > wrote: > > Hi all, > > I recently started using PETSc and I am working with the MATMPISBAIJ > matrix format. I am trying to apply Dirichlet boundary conditions by > calling MatZeroRowsColumns() but this function seems to be not > available > for symmetric block matrices in parallel. I saw on the FAQ website > that > another possible way is to use MatZeroRows() and -ksp_type preonly > -pc_type redistribute. But when I try calling MatZeroRows() I get > again > a PetscErrorCode 56, so it doesn't seem to be supported neither. I am > currently using PETSc 3.4. Is MatZeroRowsColumns() avaibale for > MATMPISBAIJ in PETSc 3.5 and if not, what is the fastest way to apply > Dirichlet BC's for this matrix format? > > > 1) SBAIJ is just an optimization to save memory. I would get > everything running > with BAIJ, and then if you run out of memory switch to SBAIJ > > 2) We have only coded that up for the sequential version. I will add > it to our list. > > Thanks, > > Matt > > > Many thanks, > > Aurelia > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From erica at b2bmarketing.us.com Fri Dec 5 09:15:41 2014 From: erica at b2bmarketing.us.com (Erica Martin) Date: Fri, 5 Dec 2014 07:15:41 -0800 Subject: [petsc-users] Prospective Clients from Solar & Energy Message-ID: Hello, Would you like to procure the contact list of Solar & Energy Industry Decision Makers to accelerate revenue generation activities at your firm? Our list allows you to communicate with the right audience in the market place and gain momentum for your products/services. Let me know your interest in procuring such a list so I can show you what we have in store for you. Regards, Erica Martin Demand Generation Executive Note: You were specifically sent this email based upon your company profile. If for some reason this was sent in error or you wish not to receive any further messages from us please reply with subject line as "LEAVE OUT" and you will be excluded from all future mailings. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From evanum at gmail.com Fri Dec 5 14:11:34 2014 From: evanum at gmail.com (Evan Um) Date: Fri, 5 Dec 2014 12:11:34 -0800 Subject: [petsc-users] Using MUMPS as a preconditioner for KSPSolve() Message-ID: Dear PETSC Users, I tried to use a Cholesky factor (MUMPS results) as a preconditioner for KSPSolve(). An example code is pasted below. When the code runs, the log file indicates that Job=3 (i.e. backward/forward substitution) of MUMPS is called every time inside the loop. Is there anyway to avoid JOB=3 of MUMPS and use the factor as a pure preconditioner for the CG solver inside KSPSOLVE()? On my cluster, JOB=3 shows unexpected slow performance (see Vol72, Issue 35) and should be avoided. In advance, thanks for your help. Regards, Evan Code: KSPCreate(PETSC_COMM_WORLD, &ksp_fetd_dt); KSPSetOperators(ksp_fetd_dt, A_dt, A_dt); KSPSetType (ksp_fetd_dt, KSPPREONLY); KSPGetPC(ksp_fetd_dt, &pc_fetd_dt); MatSetOption(A_dt, MAT_SPD, PETSC_TRUE); PCSetType(pc_fetd_dt, PCCHOLESKY); PCFactorSetMatSolverPackage(pc_fetd_dt, MATSOLVERMUMPS); PCFactorSetUpMatSolverPackage(pc_fetd_dt); PCFactorGetMatrix(pc_fetd_dt, &F_dt); MatMumpsSetIcntl(F_dt, 4, 1); // Turn on MUMPS's log file. KSPSetType(ksp_fetd_dt, KSPCG); KSPSetTolerances(ksp_fetd_dt, 1e-9, 1.0e-50, 1.0e10, ksp_iter); for (int i=0; i<1000; i++) { // Create a new RHS vector B_dt KSPSolve(ksp_fetd_dt,B_dt,solution); // Output solution time=time2-time1; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Dec 5 14:20:55 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 5 Dec 2014 14:20:55 -0600 Subject: [petsc-users] Using MUMPS as a preconditioner for KSPSolve() In-Reply-To: References: Message-ID: On Fri, Dec 5, 2014 at 2:11 PM, Evan Um wrote: > Dear PETSC Users, > > I tried to use a Cholesky factor (MUMPS results) as a preconditioner for > KSPSolve(). An example code is pasted below. When the code runs, the log > file indicates that Job=3 (i.e. backward/forward substitution) of MUMPS is > called every time inside the loop. Is there anyway to avoid JOB=3 of MUMPS > and use the factor as a pure preconditioner for the CG solver inside > KSPSOLVE()? On my cluster, JOB=3 shows unexpected slow performance (see > Vol72, Issue 35) and should be avoided. In advance, thanks for > What do you mean by the above sentence. When you Cholesky factor a matrix A, you get A = L^T L When you use this factorization as a preconditioner, you modify your original problem to (L^T L)^{-1} A x = (L^T L)^{-1} b When you use a Krylov method like CG, this means that at every iterate you must apply (L^T L){-1} which entails a forward and backward solve. If you mean, you only want to apply (L^T L)^{-1} and not run CG, then use -ksp_type preonly Matt > your help. > > Regards, > Evan > > > Code: > KSPCreate(PETSC_COMM_WORLD, &ksp_fetd_dt); > KSPSetOperators(ksp_fetd_dt, A_dt, A_dt); > KSPSetType (ksp_fetd_dt, KSPPREONLY); > KSPGetPC(ksp_fetd_dt, &pc_fetd_dt); > MatSetOption(A_dt, MAT_SPD, PETSC_TRUE); > PCSetType(pc_fetd_dt, PCCHOLESKY); > PCFactorSetMatSolverPackage(pc_fetd_dt, MATSOLVERMUMPS); > PCFactorSetUpMatSolverPackage(pc_fetd_dt); > PCFactorGetMatrix(pc_fetd_dt, &F_dt); > MatMumpsSetIcntl(F_dt, 4, 1); // Turn on MUMPS's log file. 
> KSPSetType(ksp_fetd_dt, KSPCG); > KSPSetTolerances(ksp_fetd_dt, 1e-9, 1.0e-50, 1.0e10, ksp_iter); > for (int i=0; i<1000; i++) { > // Create a new RHS vector B_dt > KSPSolve(ksp_fetd_dt,B_dt,solution); > // Output solution time=time2-time1; > } > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From evanum at gmail.com Fri Dec 5 14:31:49 2014 From: evanum at gmail.com (Evan Um) Date: Fri, 5 Dec 2014 12:31:49 -0800 Subject: [petsc-users] Using MUMPS as a preconditioner for KSPSolve() In-Reply-To: References: Message-ID: Dear Matt, Thanks for your quick reply. I mean avoiding MUMPS's internal back/forward solvers (JOB=3). Does KSPSOLVE() have its own back/forward routines? Evan On Fri, Dec 5, 2014 at 12:20 PM, Matthew Knepley wrote: > On Fri, Dec 5, 2014 at 2:11 PM, Evan Um wrote: > >> Dear PETSC Users, >> >> I tried to use a Cholesky factor (MUMPS results) as a preconditioner for >> KSPSolve(). An example code is pasted below. When the code runs, the log >> file indicates that Job=3 (i.e. backward/forward substitution) of MUMPS is >> called every time inside the loop. Is there anyway to avoid JOB=3 of MUMPS >> and use the factor as a pure preconditioner for the CG solver inside >> KSPSOLVE()? On my cluster, JOB=3 shows unexpected slow performance (see >> Vol72, Issue 35) and should be avoided. In advance, thanks for >> > > What do you mean by the above sentence. When you Cholesky factor a matrix > A, you get > > A = L^T L > > When you use this factorization as a preconditioner, you modify your > original problem to > > (L^T L)^{-1} A x = (L^T L)^{-1} b > > When you use a Krylov method like CG, this means that at every iterate you > must apply > (L^T L){-1} which entails a forward and backward solve. > > If you mean, you only want to apply (L^T L)^{-1} and not run CG, then use > -ksp_type preonly > > Matt > > >> your help. >> >> Regards, >> Evan >> >> >> Code: >> KSPCreate(PETSC_COMM_WORLD, &ksp_fetd_dt); >> KSPSetOperators(ksp_fetd_dt, A_dt, A_dt); >> KSPSetType (ksp_fetd_dt, KSPPREONLY); >> KSPGetPC(ksp_fetd_dt, &pc_fetd_dt); >> MatSetOption(A_dt, MAT_SPD, PETSC_TRUE); >> PCSetType(pc_fetd_dt, PCCHOLESKY); >> PCFactorSetMatSolverPackage(pc_fetd_dt, MATSOLVERMUMPS); >> PCFactorSetUpMatSolverPackage(pc_fetd_dt); >> PCFactorGetMatrix(pc_fetd_dt, &F_dt); >> MatMumpsSetIcntl(F_dt, 4, 1); // Turn on MUMPS's log file. >> KSPSetType(ksp_fetd_dt, KSPCG); >> KSPSetTolerances(ksp_fetd_dt, 1e-9, 1.0e-50, 1.0e10, ksp_iter); >> for (int i=0; i<1000; i++) { >> // Create a new RHS vector B_dt >> KSPSolve(ksp_fetd_dt,B_dt,solution); >> // Output solution time=time2-time1; >> } >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Dec 5 14:33:55 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 5 Dec 2014 14:33:55 -0600 Subject: [petsc-users] Using MUMPS as a preconditioner for KSPSolve() In-Reply-To: References: Message-ID: On Fri, Dec 5, 2014 at 2:31 PM, Evan Um wrote: > Dear Matt, > > Thanks for your quick reply. I mean avoiding MUMPS's internal back/forward > solvers (JOB=3). Does KSPSOLVE() have its own back/forward routines? 
> 1) MUMPS stores these in its own format, so code from other packages like PETSc is not useful here 2) I do not think the MUMPS has the wrong algorithm or implementation. This process is slow in parallel, and should be avoided in favor of more scalable methods. Matt > Evan > > On Fri, Dec 5, 2014 at 12:20 PM, Matthew Knepley > wrote: > >> On Fri, Dec 5, 2014 at 2:11 PM, Evan Um wrote: >> >>> Dear PETSC Users, >>> >>> I tried to use a Cholesky factor (MUMPS results) as a preconditioner for >>> KSPSolve(). An example code is pasted below. When the code runs, the log >>> file indicates that Job=3 (i.e. backward/forward substitution) of MUMPS is >>> called every time inside the loop. Is there anyway to avoid JOB=3 of MUMPS >>> and use the factor as a pure preconditioner for the CG solver inside >>> KSPSOLVE()? On my cluster, JOB=3 shows unexpected slow performance (see >>> Vol72, Issue 35) and should be avoided. In advance, thanks for >>> >> >> What do you mean by the above sentence. When you Cholesky factor a matrix >> A, you get >> >> A = L^T L >> >> When you use this factorization as a preconditioner, you modify your >> original problem to >> >> (L^T L)^{-1} A x = (L^T L)^{-1} b >> >> When you use a Krylov method like CG, this means that at every iterate >> you must apply >> (L^T L){-1} which entails a forward and backward solve. >> >> If you mean, you only want to apply (L^T L)^{-1} and not run CG, then use >> -ksp_type preonly >> >> Matt >> >> >>> your help. >>> >>> Regards, >>> Evan >>> >>> >>> Code: >>> KSPCreate(PETSC_COMM_WORLD, &ksp_fetd_dt); >>> KSPSetOperators(ksp_fetd_dt, A_dt, A_dt); >>> KSPSetType (ksp_fetd_dt, KSPPREONLY); >>> KSPGetPC(ksp_fetd_dt, &pc_fetd_dt); >>> MatSetOption(A_dt, MAT_SPD, PETSC_TRUE); >>> PCSetType(pc_fetd_dt, PCCHOLESKY); >>> PCFactorSetMatSolverPackage(pc_fetd_dt, MATSOLVERMUMPS); >>> PCFactorSetUpMatSolverPackage(pc_fetd_dt); >>> PCFactorGetMatrix(pc_fetd_dt, &F_dt); >>> MatMumpsSetIcntl(F_dt, 4, 1); // Turn on MUMPS's log file. >>> KSPSetType(ksp_fetd_dt, KSPCG); >>> KSPSetTolerances(ksp_fetd_dt, 1e-9, 1.0e-50, 1.0e10, ksp_iter); >>> for (int i=0; i<1000; i++) { >>> // Create a new RHS vector B_dt >>> KSPSolve(ksp_fetd_dt,B_dt,solution); >>> // Output solution time=time2-time1; >>> } >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jychang48 at gmail.com Fri Dec 5 20:11:32 2014 From: jychang48 at gmail.com (Justin Chang) Date: Fri, 5 Dec 2014 20:11:32 -0600 Subject: [petsc-users] MPI configure error for Mac OS X Yosemite Message-ID: Hi all, I recently upgraded my iMac to the OS X Yosemite, and when I tried installing PETSc, it gave me these strange errors when I tried installing MPICH or OpenMPI (i tried both options). I have never seen these errors before, so my guess is that it may have something to do with the recent OS upgrade. Attached is the configure log. Any help appreciated, thanks. Justin -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: configure.log Type: application/octet-stream Size: 2861844 bytes Desc: not available URL: From balay at mcs.anl.gov Fri Dec 5 20:21:41 2014 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 5 Dec 2014 20:21:41 -0600 Subject: [petsc-users] MPI configure error for Mac OS X Yosemite In-Reply-To: References: Message-ID: On Fri, 5 Dec 2014, Justin Chang wrote: > Hi all, > > I recently upgraded my iMac to the OS X Yosemite, and when I tried > installing PETSc, it gave me these strange errors when I tried installing > MPICH or OpenMPI (i tried both options). I have never seen these errors > before, so my guess is that it may have something to do with the recent OS > upgrade. > > Attached is the configure log. Any help appreciated, thanks. --download-mpich should work. for --download-openmpi - try adding to configure options: CFLAGS="" CXXFLAGS="" [-Wall appears to messup openmpi configure] Satish From francium87 at hotmail.com Sat Dec 6 07:56:05 2014 From: francium87 at hotmail.com (linjing bo) Date: Sat, 6 Dec 2014 13:56:05 +0000 Subject: [petsc-users] Manually set IS instead of calling MatNestGetISs Message-ID: The Problem in short : How to manually set IS for MATNEST instead of calling MatNestGetISs in fortran? Problems in detail: I'm trying to solve a 3D vector field in Fortran. Base on manual page, a efficient way is to use MATNEST with 3x3 block configuration. So I follow the ex70.c and use MatNestGetISs to set the Vector and other things. But no symbol is found in library. After searching the mailing list I notice the Fortran support is fixed recently in 3.5. But for capability issue ( most of the Supercomputer I can access only support optimized version 3.4.4 and older ), using the latest 3.5 is not a option. So I'm wondering is there a way to manually set IS to overcome this issue, or good suggestion other than MATNEST of solving a multiphysics problem? Some Trial: I tried to use ISCreateBlock to create a block IS with 3 blocks on vector and use it instead of MatNestGetISs, But PETSc report that the local Matrix and Vector size are not match: [6]PETSC ERROR: MatMult() line 2172 in /tmp/petsc-3.4.4/src/mat/interface/matrix.c Mat mat,Vec y: local dim 7320 7319 -------------- next part -------------- An HTML attachment was scrubbed... URL: From alpkalpalp at gmail.com Sat Dec 6 08:12:08 2014 From: alpkalpalp at gmail.com (Alp Kalpalp) Date: Sat, 6 Dec 2014 16:12:08 +0200 Subject: [petsc-users] reconfigure after pull? Message-ID: Hi, How to understand that I should reconfigure after a git pull? Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sat Dec 6 08:43:07 2014 From: jed at jedbrown.org (Jed Brown) Date: Sat, 06 Dec 2014 07:43:07 -0700 Subject: [petsc-users] reconfigure after pull? In-Reply-To: References: Message-ID: <87a9302610.fsf@jedbrown.org> Alp Kalpalp writes: > Hi, > > How to understand that I should reconfigure after a git pull? It is usually not necessary, but may be necessary if changes in config/ are relevant to you. You can see the changes to config/ with $ git pull $ git log --stat ORIG_HEAD.. -- config/ Change --stat to -p to see the full diffs instead of just a stat. When in doubt reconfigure. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From knepley at gmail.com Sat Dec 6 13:53:33 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 6 Dec 2014 13:53:33 -0600 Subject: [petsc-users] Manually set IS instead of calling MatNestGetISs In-Reply-To: References: Message-ID: MATNEST is merely an optimization, and not usually the most important one. Get everything working for PC fieldsplit first and then we can see if NEST makes sense. Thanks Matt On Dec 6, 2014 7:56 AM, "linjing bo" wrote: > The Problem in short : How to manually set IS for MATNEST instead of > calling MatNestGetISs in fortran? > > Problems in detail: I'm trying to solve a 3D vector field in Fortran. Base > on manual page, a efficient way is to use MATNEST with 3x3 block > configuration. So I follow the ex70.c and use MatNestGetISs to set the > Vector and other things. But no symbol is found in library. After searching > the mailing list I notice the Fortran support is fixed recently in 3.5. But > for capability issue ( most of the Supercomputer I can access only support > optimized version 3.4.4 and older ), using the latest 3.5 is not a option. > So I'm wondering is there a way to manually set IS to overcome this issue, > or good suggestion other than MATNEST of solving a multiphysics problem? > > Some Trial: I tried to use ISCreateBlock to create a block IS with 3 > blocks on vector and use it instead of MatNestGetISs, But PETSc report that > the local Matrix and Vector size are not match: > > [6]PETSC ERROR: MatMult() line 2172 in > /tmp/petsc-3.4.4/src/mat/interface/matrix.c Mat mat,Vec y: local dim 7320 > 7319 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Guillaume.Duclaux at geo.uib.no Sat Dec 6 15:52:26 2014 From: Guillaume.Duclaux at geo.uib.no (Guillaume Duclaux) Date: Sat, 06 Dec 2014 22:52:26 +0100 Subject: [petsc-users] MPI configure error for Mac OS X Yosemite In-Reply-To: References: Message-ID: <9e525757870a6beb6508a36913093db5@webmail.uib.no> Hi Justin, I've experienced a similar problem on my laptop (fresh install of Yosemite). I suspect it is a write permission problem with /usr/local/ (though I'm not 100% certain, as it could be an issue with gcc/clang). I'm very new to all this... I've documented a step by step install that worked for me (tested both with openmpi and mpich), feel free to try it out! Hope that helps. Cheers Guillaume PREREQUISITES: XCODE AND COMPILERS + First of all, install Xcode + The 'Command Line Tools' package is required: https://developer.apple.com/downloads/index.action?name=for%20Xcode%20-# [1] + If you need it, install gfortran (a binary is available from the gnu page: https://gcc.gnu.org/wiki/GFortranBinaries#MacOS [2]) GETTING AND INSTALLING GCC: (THE GCC INSTALLED ON MAC OS X IS ACTUALLY CLANG, I.E. APPLE C COMPILER, AND NOT GNU C COMPILER - THE ONE YOU PROBABLY WANT) That can be tricky and one possible 'simple' solution consists in installing homebrew (http://brew.sh [3]) + in a terminal: ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" + Once homebrew is installed, tap into hombrew versions: brew tap homebrew/versions The installation will do the following: ==> THE FOLLOWING DIRECTORIES WILL BE MADE GROUP WRITABLE: /usr/local/. 
/usr/local/bin /usr/local/share /usr/local/share/man /usr/local/share/man/man1 ------------------------- NOW PETSC AND FRIENDS: + Get PETSc 3.5.2 tarball: http://www.mcs.anl.gov/petsc/download/index.html [4] + Go into petsc-3.5.2/ to configure the installation (note that we ask petsc to download and install openmpi and hdf5 libraries as well): ./configure --with-cc=x86_64-apple-darwin14.0.0-gcc-4.8 --with-fc=/usr/local/gfortran/bin/gfortran --download-openmpi=1 --download-hdf5=1 --with-debugging=1 + then, follow the make all, and make test prompts. + Add the PETSC_ARCH and PETSC_DIR to your .profile: ##Setting ARCH and DIR for PETSc export PETSC_DIR=/Users/gduclaux/dev/petsc-3.5.2 export PETSC_ARCH=arch-darwin-c-opt > ---------- Forwarded message ---------- > From: SATISH BALAY > Date: 6 December 2014 at 03:21 > Subject: Re: [petsc-users] MPI configure error for Mac OS X Yosemite > To: Justin Chang > Cc: petsc-users > > On Fri, 5 Dec 2014, Justin Chang wrote: > >> Hi all, >> >> I recently upgraded my iMac to the OS X Yosemite, and when I tried >> installing PETSc, it gave me these strange errors when I tried installing >> MPICH or OpenMPI (i tried both options). I have never seen these errors >> before, so my guess is that it may have something to do with the recent OS >> upgrade. >> >> Attached is the configure log. Any help appreciated, thanks. > > --download-mpich should work. > > for --download-openmpi - try adding to configure options: CFLAGS="" CXXFLAGS="" > > [-Wall appears to messup openmpi configure] > > Satish Links: ------ [1] https://developer.apple.com/downloads/index.action?name=for%20Xcode%20-# [2] https://gcc.gnu.org/wiki/GFortranBinaries#MacOS [3] http://brew.sh/ [4] http://www.mcs.anl.gov/petsc/download/index.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From alpkalpalp at gmail.com Sat Dec 6 16:27:42 2014 From: alpkalpalp at gmail.com (Alp Kalpalp) Date: Sun, 7 Dec 2014 00:27:42 +0200 Subject: [petsc-users] Unknown Mat type given: schurcomplement for master commit gf685883 In-Reply-To: References: Message-ID: This error shows up when you have several functions that has their own PetscInitialize and PetscFinalize calls. When a single call is made to these function everything seems OK. I thing some data is not properly finalized in Petsc.. ALp On Thu, Dec 4, 2014 at 1:07 AM, Alp Kalpalp wrote: > H, > > Here is the message I get when PCBDDCCreateFETIDPOperators() is called; > > Thanks; > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Unknown type. Check for miss-spelling or missing package: > http://www.mcs.anl.gov/petsc/documentation/ins > tallation.html#external > [0]PETSC ERROR: Unknown Mat type given: schurcomplement > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [0]PETSC ERROR: Petsc Development GIT revision: v3.5.2-1067-gf685883 GIT > Date: 2014-12-02 13:26:28 -0600 > [0]PETSC ERROR: Unknown Name on a BDDC_ICL_DEBUG named SEMIH-PC by semih > Thu Dec 04 01:02:10 2014 > [0]PETSC ERROR: Configure options --with-cc="win32fe icl" > --with-cxx="win32fe icl" --with-fc="win32fe ifort" --with-blas > -lapack-dir=/cygdrive/c/MKL/lib/intel64 > --with-hypre-include=/cygdrive/c/EXTRLIBS/include/HYPRE > --with-hypre-lib=/cygdri > ve/c/EXTRLIBS/lib/HYPRE.lib > --with-scalapack-include=/cygdrive/c/MKL/include > --with-scalapack-lib="[/cygdrive/c/MKL/lib/ > intel64/mkl_scalapack_lp64_dll.lib,/cygdrive/c/MKL/lib/intel64/mkl_blacs_msmpi_lp64.lib]" > --with-metis-include=/cygdrive > /c/EXTRLIBS/include/parametis > --with-metis-lib=/cygdrive/c/EXTRLIBS/lib/metis.lib > --with-parmetis-include=/cygdrive/c/EX > TRLIBS/include/parametis > --with-parmetis-lib="[/cygdrive/c/EXTRLIBS/lib/parmetis.lib,/cygdrive/c/EXTRLIBS/lib/metis.lib] > " --with-mpi-include=/cygdrive/c/MSMPI/Inc/ > --with-mpi-lib="[/cygdrive/c/MSMPI/Lib/amd64/msmpi.lib,/cygdrive/c/MSMPI/Lib > /amd64/msmpifec.lib]" --with-shared-libraries --useThreads=0 --with-pcbddc > --PETSC_ARCH=BDDC_ICL_DEBUG --useThreads=0 > [0]PETSC ERROR: #1 MatSetType() line 63 in > C:\cywgin64\home\semih\PETSCM~1\src\mat\INTERF~1\matreg.c > [0]PETSC ERROR: #2 MatCreateSchurComplement() line 212 in > C:\cywgin64\home\semih\PETSCM~1\src\ksp\ksp\utils\schurm.c > [0]PETSC ERROR: #3 PCBDDCSetupFETIDPPCContext() line 557 in > C:\cywgin64\home\semih\PETSCM~1\src\ksp\pc\impls\bddc\bddcfe > tidp.c > [0]PETSC ERROR: #4 PCBDDCCreateFETIDPOperators_BDDC() line 1691 in > C:\cywgin64\home\semih\PETSCM~1\src\ksp\pc\impls\bddc > \bddc.c > [0]PETSC ERROR: #5 PCBDDCCreateFETIDPOperators() line 1737 in > C:\cywgin64\home\semih\PETSCM~1\src\ksp\pc\impls\bddc\bddc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sat Dec 6 16:32:24 2014 From: jed at jedbrown.org (Jed Brown) Date: Sat, 06 Dec 2014 15:32:24 -0700 Subject: [petsc-users] Unknown Mat type given: schurcomplement for master commit gf685883 In-Reply-To: References: Message-ID: <87y4qkz9xj.fsf@jedbrown.org> Alp Kalpalp writes: > This error shows up when you have several functions that has their own > PetscInitialize and PetscFinalize calls. When a single call is made to > these function everything seems OK. I thing some data is not properly > finalized in Petsc.. We need a test case. The best way is to make a failing test in .../examples/tests/ and submit a pull request. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From alpkalpalp at gmail.com Sat Dec 6 17:04:46 2014 From: alpkalpalp at gmail.com (Alp Kalpalp) Date: Sun, 7 Dec 2014 01:04:46 +0200 Subject: [petsc-users] MatZeroRowsColumnsLocalIS support for MATIS object Message-ID: In ex59 which is about pcbddc; there is a code block for applying dirichlet boundaries; if (dd.DBC_zerorows) { ierr = ComputeSpecialBoundaryIndices(dd,&dirichletIS,NULL);CHKERRQ(ierr); ierr = MatSetOption(local_mat,MAT_KEEP_NONZERO_PATTERN,PETSC_TRUE);CHKERRQ(ierr); ierr = MatZeroRowsLocalIS(*A,dirichletIS,1.0,NULL,NULL);CHKERRQ(ierr); ierr = ISDestroy(&dirichletIS);CHKERRQ(ierr); } I replaced MatZeroRowsLocalIS function with MatZeroRowsColumnsLocalIS, but got following error; "Need to provide local to global mapping to matrix first" from this line of code in MatZeroRowsColumnsLocal; if (!mat->cmap->mapping) SETERRQ(PetscObjectComm((PetscObject)mat),PETSC_ERR_ARG_WRONGSTATE,"Need to provide local to global mapping to matrix first"); As you know MatIS object is created by suppliying an ISLocalToGlobalMapping so it has this info. I think MatIS should be revised in order to be used for MatZeroRowsColumnsLocalIS, I am a newby so dont know how to fix this. Regards, Alp -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Mon Dec 8 08:26:10 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 8 Dec 2014 09:26:10 -0500 Subject: [petsc-users] petsc solver Message-ID: Hi all, I was wondering if PETSc has a point implicit method solver or similar implemented. Thanks, Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Dec 8 08:28:16 2014 From: jed at jedbrown.org (Jed Brown) Date: Mon, 08 Dec 2014 07:28:16 -0700 Subject: [petsc-users] petsc solver In-Reply-To: References: Message-ID: <87a92yz05b.fsf@jedbrown.org> paul zhang writes: > Hi all, > > I was wondering if PETSc has a point implicit method solver or similar > implemented. -pc_type pbjacobi -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From paulhuaizhang at gmail.com Mon Dec 8 08:42:53 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Mon, 8 Dec 2014 09:42:53 -0500 Subject: [petsc-users] petsc solver In-Reply-To: <87a92yz05b.fsf@jedbrown.org> References: <87a92yz05b.fsf@jedbrown.org> Message-ID: Is there some document or reference I can access to see how it works? Thanks, Paul On Mon, Dec 8, 2014 at 9:28 AM, Jed Brown wrote: > paul zhang writes: > > > Hi all, > > > > I was wondering if PETSc has a point implicit method solver or similar > > implemented. > > -pc_type pbjacobi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Mon Dec 8 08:50:32 2014 From: dave.mayhem23 at gmail.com (Dave May) Date: Mon, 8 Dec 2014 15:50:32 +0100 Subject: [petsc-users] petsc solver In-Reply-To: References: <87a92yz05b.fsf@jedbrown.org> Message-ID: I would just read the source (found here) http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/pbjacobi/pbjacobi.c.html#PCPBJACOBI The implementation supports blocks sizes from 1 through to 7. On 8 December 2014 at 15:42, paul zhang wrote: > > Is there some document or reference I can access to see how it works? 
> > Thanks, > Paul > > > On Mon, Dec 8, 2014 at 9:28 AM, Jed Brown wrote: > >> paul zhang writes: >> >> > Hi all, >> > >> > I was wondering if PETSc has a point implicit method solver or similar >> > implemented. >> >> -pc_type pbjacobi >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlesbona at gmail.com Tue Dec 9 06:03:55 2014 From: carlesbona at gmail.com (Carles Bona) Date: Tue, 9 Dec 2014 13:03:55 +0100 Subject: [petsc-users] Checkpointing / restart Message-ID: Dear all, I am trying to restart a TS calculation from a previously saved result. However, when I call TSSolve I can only provide the saved solution vector (U) as an initial condition, but I haven't found a way to set the initail time derivative (V) through a call to TSSolve. Is there a way do this? Which would be the way to restart from a checkpointed state? Many thanks, Carles -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Tue Dec 9 08:59:56 2014 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Tue, 9 Dec 2014 17:59:56 +0300 Subject: [petsc-users] Checkpointing / restart In-Reply-To: References: Message-ID: On 9 December 2014 at 15:03, Carles Bona wrote: > Dear all, > > I am trying to restart a TS calculation from a previously saved result. > However, when I call TSSolve I can only provide the saved solution vector > (U) as an initial condition, but I haven't found a way to set the initail > time derivative (V) through a call to TSSolve. > > Is there a way do this? Which would be the way to restart from a > checkpointed state? > AFAIK, there is no way. What TS type are you using? It should be possible to hack it. -- Lisandro Dalcin ============ Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor.kaust.edu.sa/ 4700 King Abdullah University of Science and Technology al-Khawarizmi Bldg (Bldg 1), Office # 4332 Thuwal 23955-6900, Kingdom of Saudi Arabia http://www.kaust.edu.sa Office Phone: +966 12 808-0459 From carlesbona at gmail.com Tue Dec 9 09:24:31 2014 From: carlesbona at gmail.com (Carles Bona) Date: Tue, 9 Dec 2014 16:24:31 +0100 Subject: [petsc-users] Checkpointing / restart In-Reply-To: References: Message-ID: TS type is alpha Carles 2014-12-09 15:59 GMT+01:00 Lisandro Dalcin : > On 9 December 2014 at 15:03, Carles Bona wrote: > > Dear all, > > > > I am trying to restart a TS calculation from a previously saved result. > > However, when I call TSSolve I can only provide the saved solution vector > > (U) as an initial condition, but I haven't found a way to set the initail > > time derivative (V) through a call to TSSolve. > > > > Is there a way do this? Which would be the way to restart from a > > checkpointed state? > > > > AFAIK, there is no way. What TS type are you using? It should be > possible to hack it. > > > > -- > Lisandro Dalcin > ============ > Research Scientist > Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) > Numerical Porous Media Center (NumPor) > King Abdullah University of Science and Technology (KAUST) > http://numpor.kaust.edu.sa/ > > 4700 King Abdullah University of Science and Technology > al-Khawarizmi Bldg (Bldg 1), Office # 4332 > Thuwal 23955-6900, Kingdom of Saudi Arabia > http://www.kaust.edu.sa > > Office Phone: +966 12 808-0459 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Carol.Brickley at awe.co.uk Tue Dec 9 09:54:25 2014 From: Carol.Brickley at awe.co.uk (Carol.Brickley at awe.co.uk) Date: Tue, 9 Dec 2014 15:54:25 +0000 Subject: [petsc-users] F90 program interfaced with petsc 3.4.3 - inconsitencies Message-ID: <201412091554.sB9FsSbv024380@msw1.awe.co.uk> Hi, I am trying to runs a F90 program interfaced with petsc 3.4.3. When the routine DMDACreate2D is called, if I run with a debugger such as ddt, it shows me that in the petsc routines the DM array (inra) is populated, however when I return to the F90 code the DM array has one value and no longer seems to be populated as a DM array. I am including finclude/petsc.h90 in the routine calling DMDACreate2D. Any ideas? Carol Dr Carol Brickley BSc,PhD,ARCS,DIC,MBCS Senior Software Engineer Engineering Applications Team DS+T, AWE Aldermaston Reading Berkshire RG7 4PR Direct: 0118 9855035 ___________________________________________________ ____________________________ The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 9 10:02:25 2014 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 9 Dec 2014 10:02:25 -0600 Subject: [petsc-users] F90 program interfaced with petsc 3.4.3 - inconsitencies In-Reply-To: <201412091554.sB9FsSbv024380@msw1.awe.co.uk> References: <201412091554.sB9FsSbv024380@msw1.awe.co.uk> Message-ID: On Tue, Dec 9, 2014 at 9:54 AM, wrote: > Hi, > > > > I am trying to runs a F90 program interfaced with petsc 3.4.3. When the > routine DMDACreate2D is called, if I run with a debugger such as ddt, it > shows me that in the petsc routines the DM array (inra) is populated, > however when I return to the F90 code the DM array has one value and no > longer seems to be populated as a DM array. I am including > finclude/petsc.h90 in the routine calling DMDACreate2D. > Lets try to eliminate variables. Can you run SNES ex5f? If so, then you just have to modify the source and build until you can run yours. Thanks, Matt > Any ideas? > > > > Carol > > > > *Dr Carol Brickley * > > *BSc,PhD,ARCS,DIC,MBCS* > > > > *Senior Software Engineer* > > *Engineering Applications Team* > > *DS+T,* > > *AWE* > > *Aldermaston* > > *Reading* > > *Berkshire* > > *RG7 4PR* > > > > *Direct: 0118 9855035* > > > > ___________________________________________________ > ____________________________ The information in this email and in any > attachment(s) is commercial in confidence. If you are not the named > addressee(s) or if you receive this email in error then any distribution, > copying or use of this communication or the information in it is strictly > prohibited. Please notify us immediately by email at admin.internet(at) > awe.co.uk, and then delete this message from your computer. While > attachments are virus checked, AWE plc does not accept any liability in > respect of any virus which is not detected. 
AWE Plc Registered in England > and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Tue Dec 9 10:47:09 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Tue, 9 Dec 2014 11:47:09 -0500 Subject: [petsc-users] a simple question Message-ID: Hello, I attempted to compute D x= b, where D is the diagonal vector of a matrix A, and b is the right hand side. so x=D^{-1} b. I was wondering if there is a function of vector that can directly compute b/D, or I have to do something else? Thanks, Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From abhyshr at mcs.anl.gov Tue Dec 9 11:02:16 2014 From: abhyshr at mcs.anl.gov (Abhyankar, Shrirang G.) Date: Tue, 9 Dec 2014 17:02:16 +0000 Subject: [petsc-users] a simple question In-Reply-To: Message-ID: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecPointwiseDivide.html From: paul zhang > Date: Tue, 9 Dec 2014 11:47:09 -0500 To: PETSc users list > Subject: [petsc-users] a simple question Hello, I attempted to compute D x= b, where D is the diagonal vector of a matrix A, and b is the right hand side. so x=D^{-1} b. I was wondering if there is a function of vector that can directly compute b/D, or I have to do something else? Thanks, Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Tue Dec 9 11:03:41 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Tue, 9 Dec 2014 12:03:41 -0500 Subject: [petsc-users] a simple question In-Reply-To: References: Message-ID: That is it. Thanks, Paul On Tue, Dec 9, 2014 at 12:02 PM, Abhyankar, Shrirang G. wrote: > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecPointwiseDivide.html > > From: paul zhang > Date: Tue, 9 Dec 2014 11:47:09 -0500 > To: PETSc users list > Subject: [petsc-users] a simple question > > Hello, > > I attempted to compute D x= b, where D is the diagonal vector of a > matrix A, and b is the right hand side. so x=D^{-1} b. I was wondering if > there is a function of vector that can directly compute b/D, or I have to > do something else? > > > Thanks, > Paul > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Wed Dec 10 01:02:55 2014 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Wed, 10 Dec 2014 10:02:55 +0300 Subject: [petsc-users] Checkpointing / restart In-Reply-To: References: Message-ID: On 9 December 2014 at 18:24, Carles Bona wrote: > TS type is alpha > I'm working on a reimplementation of TSALPHA in PetIGA, https://bitbucket.org/dalcinl/petiga/src/default/src/tsalpha1.c . Eventually, I'll update the code in PETSc with this new implementation. This new implementation features some attempts to implement time-step adaptivity. Additionally, it implements a poor man's attempt to estimate a initial derivative without requiring users to setup and solve a problem involving the mass matrix. We can experiment in this code with some extra APIs to let users specify the initial derivative. What about TSAlphaSetSolution(ts,U,V) and TSAlphaGetSolution(ts,&U,&V), where U and V are the initial solution and derivative vectors? 
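With such a pair in place, a checkpoint/restart cycle could look roughly like the sketch below. This is purely illustrative: TSAlphaSetSolution/TSAlphaGetSolution are only proposed here and do not exist in any release, ts/U/V/viewer are assumed to be created already, and the binary viewer is just one possible way to store the vectors.

  /* checkpoint (hypothetical API, see proposal above) */
  TSAlphaGetSolution(ts, &U, &V);                 /* proposed: current state and state derivative */
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "restart.bin", FILE_MODE_WRITE, &viewer);
  VecView(U, viewer);
  VecView(V, viewer);
  PetscViewerDestroy(&viewer);

  /* restart */
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "restart.bin", FILE_MODE_READ, &viewer);
  VecLoad(U, viewer);
  VecLoad(V, viewer);
  PetscViewerDestroy(&viewer);
  TSAlphaSetSolution(ts, U, V);                   /* proposed: set both initial vectors */
  TSSolve(ts, U);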
-- Lisandro Dalcin ============ Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor.kaust.edu.sa/ 4700 King Abdullah University of Science and Technology al-Khawarizmi Bldg (Bldg 1), Office # 4332 Thuwal 23955-6900, Kingdom of Saudi Arabia http://www.kaust.edu.sa Office Phone: +966 12 808-0459 From siddhesh4godbole at gmail.com Wed Dec 10 04:29:34 2014 From: siddhesh4godbole at gmail.com (siddhesh godbole) Date: Wed, 10 Dec 2014 15:59:34 +0530 Subject: [petsc-users] MatDenseRestoreArray run time error Message-ID: hello, i was trying to modify ex99 in srs/mat/examples/test which deals with LAPACKsygvx_ for eigenvalues and eigenvectors it compiles but gives the following error while executing the program [0]PETSC ERROR:*Cannot locate function MatDenseRestoreArray_C in object* [0]PETSC ERROR: Configure options --download-mpich --download-f2cblaslapack=1 [0]PETSC ERROR: *#1 MatDenseRestoreArray() line 1523 in /home/iitm/Downloads/petsc-3.5.2/src/mat/impls/dense/seq/dense.c* can you please explain to me what are theses errors? *Siddhesh M Godbole* 5th year Dual Degree, Civil Eng & Applied Mech. IIT Madras -------------- next part -------------- An HTML attachment was scrubbed... URL: From siddhesh4godbole at gmail.com Wed Dec 10 04:31:19 2014 From: siddhesh4godbole at gmail.com (siddhesh godbole) Date: Wed, 10 Dec 2014 16:01:19 +0530 Subject: [petsc-users] MatDenseRestoreArray run time error In-Reply-To: References: Message-ID: Pardon me, This is the full error message [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: Cannot locate function MatDenseRestoreArray_C in object [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 [0]PETSC ERROR: ./8D on a arch-linux2-c-debug named iitm by iitm Wed Dec 10 15:48:03 2014 [0]PETSC ERROR: Configure options --download-mpich --download-f2cblaslapack=1 [0]PETSC ERROR: #1 MatDenseRestoreArray() line 1523 in /home/iitm/Downloads/petsc-3.5.2/src/mat/impls/dense/seq/dense.c [0]PETSC ERROR: #2 main() line 181 in /home/iitm/Desktop/project/8D.c *Siddhesh M Godbole* 5th year Dual Degree, Civil Eng & Applied Mech. IIT Madras On Wed, Dec 10, 2014 at 3:59 PM, siddhesh godbole < siddhesh4godbole at gmail.com> wrote: > hello, > > i was trying to modify ex99 in srs/mat/examples/test which deals with > LAPACKsygvx_ for eigenvalues and eigenvectors > > it compiles but gives the following error while executing the program > > > > [0]PETSC ERROR:*Cannot locate function MatDenseRestoreArray_C in object* > [0]PETSC ERROR: Configure options --download-mpich > --download-f2cblaslapack=1 > [0]PETSC ERROR: *#1 MatDenseRestoreArray() line 1523 in > /home/iitm/Downloads/petsc-3.5.2/src/mat/impls/dense/seq/dense.c* > > can you please explain to me what are theses errors? > > *Siddhesh M Godbole* > > 5th year Dual Degree, > Civil Eng & Applied Mech. > IIT Madras > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ansp6066 at colorado.edu Wed Dec 10 11:20:56 2014 From: ansp6066 at colorado.edu (Andrew Spott) Date: Wed, 10 Dec 2014 09:20:56 -0800 (PST) Subject: [petsc-users] reference, copy or constant view? 
Message-ID: <1418232056610.0852fc55@Nodemailer>

In the slepc and petsc docs, is there a way to tell if a function is going to
copy, modify, or just "look" at a petsc object that is a parameter?

Specifically there are two examples that I'm curious about (I'm trying to
chase down a bug):

EPSSetOperators(EPS, Mat, Mat)

I know the first argument is changed, but are the two later arguments
modified? It appears that they are destroyed when EPSSetOperators is called
a second time (in STSetOperators), but this isn't clear in the API. After
looking at the source, I assume that if I'm running an eigensolve on the same
Operator Matrix, but the matrix values have changed, then I shouldn't do
another EPSSetOperators (or EPSSetProblemType, EPSSetTarget, EPSSetDimensions,
etc.). Is this correct?

VecDot(Vec,Vec,*PetscScalar)

I assume that the two Vectors will not be changed, however it is hard to tell
even from the source (it calls a function pointer which I don't know how to
find the source of).

ps. what does "collective on {EPS,Mat,Vec,etc.}" mean?

Thanks for the help,

-Andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jed at jedbrown.org  Wed Dec 10 11:30:07 2014
From: jed at jedbrown.org (Jed Brown)
Date: Wed, 10 Dec 2014 10:30:07 -0700
Subject: [petsc-users] reference, copy or constant view?
In-Reply-To: <1418232056610.0852fc55@Nodemailer>
References: <1418232056610.0852fc55@Nodemailer>
Message-ID: <87388ns99c.fsf@jedbrown.org>

Andrew Spott writes:
> In the slepc and petsc docs, is there a way to tell if a function is going
> to copy, modify, or just "look" at a petsc object that is a parameter?
>
> Specifically there are two examples that I'm curious about (I'm trying to
> chase down a bug):
>
> EPSSetOperators(EPS, Mat, Mat)
>
> I know the first argument is changed, but are the two later arguments
> modified? It appears that they are destroyed when EPSSetOperators is
> called a second time (in STSetOperators), but this isn't clear in the
> API. After looking at the source, I assume that if I'm running an
> eigensolve on the same Operator Matrix, but the matrix values have
> changed, then I shouldn't do another EPSSetOperators (or
> EPSSetProblemType, EPSSetTarget, EPSSetDimensions, etc.). Is this
> correct?

Actually, do call EPSSetOperators so that EPS knows the matrix has
changed. The "destroy" you see is just decrementing the reference
count (it won't drop to zero because you have an extra reference).

> VecDot(Vec,Vec,*PetscScalar)
>
> I assume that the two Vectors will not be changed, however it is hard
> to tell even from the source (it calls a function pointer which I
> don't know how to find the source of).

I recommend using GNU Global, etags, or similar (see users manual for
more). Then you can tab-complete VecDot_MPI, for example. If you don't
know the types, one option is to break in the debugger and check the
function pointer.

> ps. what does "collective on {EPS,Mat,Vec,etc.}" mean?

It's MPI terminology, indicating that all processes on the given
communicator must call the function together. Usually that means
communication happens (or may happen). We say "logically collective" if
there is no communication, but parallel consistency requires a
collective call. (We might double-check for collective calls when
running in debug mode.)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From ansp6066 at colorado.edu Wed Dec 10 11:40:59 2014 From: ansp6066 at colorado.edu (Andrew Spott) Date: Wed, 10 Dec 2014 09:40:59 -0800 (PST) Subject: [petsc-users] reference, copy or constant view? In-Reply-To: <87388ns99c.fsf@jedbrown.org> References: <87388ns99c.fsf@jedbrown.org> Message-ID: <1418233259218.2f66d38e@Nodemailer> Thanks for the help regarding etags. ?I used to have them set up, I?ll have to reconfigure them. >Actually, do call ESPSetOperators so that EPS knows?. Unfortunately this just confused me more. ?Am I reading this correctly in that doing: ? ? ? ? ? ? EPSSetOperators(eps, *A, PETSC_NULL); ? ? ? ? ? ? EPSSetProblemType(eps, EPS_HEP); ? ? ? ? ? ? EPSSetType(eps, EPSKRYLOVSCHUR); ? ? ? ? ? ? EPSSetWhichEigenpairs(eps,? EPS_TARGET_REAL); ? ? ? ? ? ? EPSSetTarget(eps, -0.903801); ? ? ? ? ? ? EPSSetDimensions(eps, 1000, PETSC_DEFAULT, PETSC_DEFAULT); ? ? ? ? ? ? EPSSetInitialSpace(eps,1,&eigenvector); ? ? ? ? ? ? EPSSolve(eps); Won?t actually change *A? ?What about VecDot? Thanks again, -Andrew On Wed, Dec 10, 2014 at 10:30 AM, Jed Brown wrote: > Andrew Spott writes: >> In the slepc and petsc docs, is there a way to tell if a function is going to copy, modify, or just ?look? at a petsc object that is a parameter? >> >> >> Specifically there are two examples that I?m curious about (I?m trying to chase down a bug): >> >> >> EPSSetOperators(EPS, Mat, Mat) >> >> >> I know the first argument is changed, but are the two later arguments >> modified? ?It appears that they are destroyed when EPSSetOperators is >> called a second time (in STSetOperators), but this isn?t clear in the >> API. ?After looking at the source, I assume that if I?m running an >> eigensolve on the same Operator Matrix, but the matrix values have >> changed, then I shouldn?t do another EPSSetOperators (or >> EPSSetProblemType, EPSSetTarget, EPSSetDimensions, etc?). ?Is this >> correct? > Actually, do call ESPSetOperators so that EPS knows the matrix has > changed. The "destroy" you see is just decrementing the reference > count (it won't drop to zero because you have an extra reference). >> VecDot(Vec,Vec,*PetscScalar) >> >> >> I assume that the two Vectors will not be changed, however it is hard >> to tell even from the source (it calls a function pointer which I >> don?t know how to find the source of). > I recommend using GNU Global, etags, or similar (see users manual for > more). Then you can tab-complete VecDot_MPI, for example. If you don't > know the types, one option is to break in the debugger and check the > function pointer. >> ps. ?what does ?collective on {EPS,Mat,Vec,etc.}? mean? > It's MPI terminology, indicating that all processes on the given > communicator must call the function together. Usually that means > communication happens (or may happen). We say "logically collective" if > there is no communication, but parallel consistency requires a > collective call. (We might double-check for collective calls when > running in debug mode.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Dec 10 11:47:58 2014 From: jed at jedbrown.org (Jed Brown) Date: Wed, 10 Dec 2014 10:47:58 -0700 Subject: [petsc-users] reference, copy or constant view? 
In-Reply-To: <1418233259218.2f66d38e@Nodemailer> References: <87388ns99c.fsf@jedbrown.org> <1418233259218.2f66d38e@Nodemailer> Message-ID: <87zjavqtv5.fsf@jedbrown.org> Andrew Spott writes: > Thanks for the help regarding etags. ?I used to have them set up, I?ll have to reconfigure them. > > > > >>Actually, do call ESPSetOperators so that EPS knows?. > > > > > > > Unfortunately this just confused me more. ?Am I reading this correctly in that doing: > > > > > > ? ? ? ? ? ? EPSSetOperators(eps, *A, PETSC_NULL); > > ? ? ? ? ? ? EPSSetProblemType(eps, EPS_HEP); > > ? ? ? ? ? ? EPSSetType(eps, EPSKRYLOVSCHUR); > > ? ? ? ? ? ? EPSSetWhichEigenpairs(eps,? EPS_TARGET_REAL); > > ? ? ? ? ? ? EPSSetTarget(eps, -0.903801); > > ? ? ? ? ? ? EPSSetDimensions(eps, 1000, PETSC_DEFAULT, PETSC_DEFAULT); > > ? ? ? ? ? ? EPSSetInitialSpace(eps,1,&eigenvector); > > ? ? ? ? ? ? EPSSolve(eps); > > > > > Won?t actually change *A? ? Correct. > What about VecDot? It does not change the vectors. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Wed Dec 10 15:58:15 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 10 Dec 2014 15:58:15 -0600 Subject: [petsc-users] MatDenseRestoreArray run time error In-Reply-To: References: Message-ID: Siddhesh, That example was unfortunately not in the test suite and hence did not work. Note that it is calling MatDenseRestoreArray() on the wrong matrix (A when it should be on A_dense). I've attached a fixed version of the example that should compile and run correctly for you (it does for me). Sorry for the inconvenience Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: ex99.c Type: application/octet-stream Size: 12450 bytes Desc: not available URL: -------------- next part -------------- > On Dec 10, 2014, at 4:31 AM, siddhesh godbole wrote: > > Pardon me, > This is the full error message > > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: No support for this operation for this object type > [0]PETSC ERROR: Cannot locate function MatDenseRestoreArray_C in object > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 > [0]PETSC ERROR: ./8D on a arch-linux2-c-debug named iitm by iitm Wed Dec 10 15:48:03 2014 > [0]PETSC ERROR: Configure options --download-mpich --download-f2cblaslapack=1 > [0]PETSC ERROR: #1 MatDenseRestoreArray() line 1523 in /home/iitm/Downloads/petsc-3.5.2/src/mat/impls/dense/seq/dense.c > [0]PETSC ERROR: #2 main() line 181 in /home/iitm/Desktop/project/8D.c > > > Siddhesh M Godbole > > 5th year Dual Degree, > Civil Eng & Applied Mech. > IIT Madras > > On Wed, Dec 10, 2014 at 3:59 PM, siddhesh godbole wrote: > hello, > > i was trying to modify ex99 in srs/mat/examples/test which deals with LAPACKsygvx_ for eigenvalues and eigenvectors > > it compiles but gives the following error while executing the program > > > > [0]PETSC ERROR:Cannot locate function MatDenseRestoreArray_C in object > [0]PETSC ERROR: Configure options --download-mpich --download-f2cblaslapack=1 > [0]PETSC ERROR: #1 MatDenseRestoreArray() line 1523 in /home/iitm/Downloads/petsc-3.5.2/src/mat/impls/dense/seq/dense.c > > can you please explain to me what are theses errors? 
> > Siddhesh M Godbole > > 5th year Dual Degree, > Civil Eng & Applied Mech. > IIT Madras > From mfadams at lbl.gov Wed Dec 10 16:32:30 2014 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 10 Dec 2014 17:32:30 -0500 Subject: [petsc-users] configure error Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 2201218 bytes Desc: not available URL: From knepley at gmail.com Wed Dec 10 16:37:20 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 10 Dec 2014 16:37:20 -0600 Subject: [petsc-users] configure error In-Reply-To: References: Message-ID: On Wed, Dec 10, 2014 at 4:32 PM, Mark Adams wrote: > > This is the error: Executing: cc -o /tmp/petsc-uxG_YZ/config.libraries/conftest -wd1572 -fast -openmp -traceback /tmp/petsc-uxG_YZ/config.libraries/conftest.o -Wl,-rpath,/autofs/na3_home1/adams/petsc_public/arch-eos-opt-intel/lib64 -L/autofs/na3_home1/adams/petsc_public/arch-eos-opt-intel/lib64 -lmetis -ldl -lstdc++ Possible ERROR while running linker: exit code 256 stderr: ipo: warning #11012: unable to find -lmetis ipo: warning #11021: unresolved METIS_PartGraphKway Can you see where libmetis.a was built? It should have been in $PETSC_ARCH/lib64 Thanks, Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Dec 10 18:43:19 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 10 Dec 2014 18:43:19 -0600 Subject: [petsc-users] configure error In-Reply-To: References: Message-ID: <0AB8C9E3-5682-404C-A444-3C43EE319306@mcs.anl.gov> Remove the entire arch-eos-opt-intel directory and start again. If something fails then email configure.log immediately, do not fiddle around trying other things. Barry Somehow the previously build library is not correct. ipo: warning #11060: /autofs/na3_home1/adams/petsc_public/arch-eos-opt-intel/lib/libmetis.a is an archive, but has no symbols (this can happen if ar is used where xiar is needed) ipo: warning #11010: file format not recognized for /autofs/na3_home1/adams/petsc_public/arch-eos-opt-intel/lib/libmetis.a > On Dec 10, 2014, at 4:32 PM, Mark Adams wrote: > > > From jychang48 at gmail.com Wed Dec 10 19:34:44 2014 From: jychang48 at gmail.com (Justin Chang) Date: Wed, 10 Dec 2014 19:34:44 -0600 Subject: [petsc-users] Speedup studies using DMPlex Message-ID: Hi all, So I am trying to run a speed-up (i.e., strong scaling) study by solving a diffusion problem much like in SNES ex12.c, and plan on using up to 1k cores on LANL's Mustang HPC system. However, it seems that DMPlexDistribute is taking an extremely long time. I am using -petscpartitioner_type parmetis on command line but it seems to make over 50% of the code execution time. Is this normal or is there a "better" way to conduct such a study? Thanks, Justin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Thu Dec 11 04:07:32 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 11 Dec 2014 04:07:32 -0600 Subject: [petsc-users] Speedup studies using DMPlex In-Reply-To: References: Message-ID: On Wed, Dec 10, 2014 at 7:34 PM, Justin Chang wrote: > Hi all, > > So I am trying to run a speed-up (i.e., strong scaling) study by solving a > diffusion problem much like in SNES ex12.c, and plan on using up to 1k > cores on LANL's Mustang HPC system. However, it seems that DMPlexDistribute > is taking an extremely long time. I am using -petscpartitioner_type > parmetis on command line but it seems to make over 50% of the code > execution time. Is this normal or is there a "better" way to conduct such a > study? > 0) What mesh are you using? The most scalable way of running now is to read and distribute a coarse mesh and use regular refinement in parallel. 1) This is pure overhead in the sense that its one-to-many communication, and its done once, so most people do not report the time. 2) I agree its too slow. There is a branch in next that completely reworks distribution. We have run it up to 8K cores on Hector and it is faster. 3) Early next year we plan to have parallel startup working, where each process reads a chunk of the mesh, and then its redistributes for load balance. Thanks, Matt > Thanks, > Justin > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc.medale at univ-amu.fr Thu Dec 11 04:38:27 2014 From: marc.medale at univ-amu.fr (Marc MEDALE) Date: Thu, 11 Dec 2014 11:38:27 +0100 Subject: [petsc-users] Troubles updating my code from PETSc-3.4 to 3.5 Using MUMPS for KSPSolve() Message-ID: Dear PETSC Users, I have just updated to PETSc-3.5 my research code that uses PETSc for a while but I'm facing an astonishing difference between PETSc-3.4 to 3.5 versions when solving a very ill conditioned algebraic system with MUMPS (4.10.0 in both cases). The only differences that arise in my fortran source code are the following: Loma1-medale% diff ../version_3.5/solvEFL_MAN_SBIF.F ../version_3.4/solvEFL_MAN_SBIF.F 336,337d335 < CALL MatSetOption(MATGLOB,MAT_KEEP_NONZERO_PATTERN, < & PETSC_TRUE,IER) 749,750c747,748 < CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_REAL, < & PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,IER) --- > CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_DOUBLE_PRECISION, > & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,IER) 909c907,908 < CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB,IER) --- > CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB, > & SAME_NONZERO_PATTERN,IER) When I run the corresponding program versions on 128 cores of our cluster with the same input data and the following command line arguments: -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_8 0 I get the following outputs: a) with PETSc-3.4p4: L2 norm of solution vector: 7.39640E-02, b) with PETSc-3.5p1: L2 norm of solution vector: 1.61325E-02 Do I have change something else in updating my code based on KSP from PETSc-3.4 to 3.5 versions? Do any default values in the PETSc-MUMPS interface have been changed from PETSc-3.4 to 3.5? Any hints or suggestions are welcome to help me to recover the right results (obtained with PETSc-3.4). Thank you very much. Marc MEDALE. 
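For the 3.4-to-3.5 update described above, a sketch of the corresponding 3.5-style calls (shown in C for brevity; ksp and A stand for KSP1 and MATGLOB in the Fortran code). In 3.5 the MatStructure argument was removed from KSPSetOperators() and the nonzero-pattern hint is no longer needed; the old SAME_PRECONDITIONER behaviour is requested through a separate call. This is standard 3.5 migration advice worth double-checking against the 3.5 changes notes, and is not claimed to explain the different L2 norms:

   /* PETSc 3.5: no MatStructure flag any more */
   ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);

   /* Only if the old SAME_PRECONDITIONER behaviour is wanted, i.e. keep the
      existing factorization even though the matrix values have changed: */
   ierr = KSPSetReusePreconditioner(ksp,PETSC_TRUE);CHKERRQ(ierr);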
From knepley at gmail.com Thu Dec 11 04:43:43 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 11 Dec 2014 04:43:43 -0600 Subject: [petsc-users] Troubles updating my code from PETSc-3.4 to 3.5 Using MUMPS for KSPSolve() In-Reply-To: References: Message-ID: On Thu, Dec 11, 2014 at 4:38 AM, Marc MEDALE wrote: > Dear PETSC Users, > > I have just updated to PETSc-3.5 my research code that uses PETSc for a > while but I'm facing an astonishing difference between PETSc-3.4 to 3.5 > versions when solving a very ill conditioned algebraic system with MUMPS > (4.10.0 in both cases). > > The only differences that arise in my fortran source code are the > following: > Loma1-medale% diff ../version_3.5/solvEFL_MAN_SBIF.F > ../version_3.4/solvEFL_MAN_SBIF.F > 336,337d335 > < CALL MatSetOption(MATGLOB,MAT_KEEP_NONZERO_PATTERN, > < & PETSC_TRUE,IER) > 749,750c747,748 > < CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_REAL, > < & PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,IER) > --- > > CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_DOUBLE_PRECISION, > > & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,IER) > 909c907,908 > < CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB,IER) > --- > > CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB, > > & SAME_NONZERO_PATTERN,IER) > > When I run the corresponding program versions on 128 cores of our cluster > with the same input data and the following command line arguments: > -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps > -mat_mumps_icntl_8 0 > > I get the following outputs: > a) with PETSc-3.4p4: > L2 norm of solution vector: 7.39640E-02, > > b) with PETSc-3.5p1: > L2 norm of solution vector: 1.61325E-02 > > Do I have change something else in updating my code based on KSP from > PETSc-3.4 to 3.5 versions? > Do any default values in the PETSc-MUMPS interface have been changed from > PETSc-3.4 to 3.5? > Any hints or suggestions are welcome to help me to recover the right > results (obtained with PETSc-3.4). > Send the output from -ksp_monitor -ksp_view for both runs. I am guessing that a MUMPS default changed between versions. Thanks, Matt > Thank you very much. > > Marc MEDALE. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jychang48 at gmail.com Thu Dec 11 04:45:35 2014 From: jychang48 at gmail.com (Justin Chang) Date: Thu, 11 Dec 2014 04:45:35 -0600 Subject: [petsc-users] Speedup studies using DMPlex In-Reply-To: References: Message-ID: I am manually creating a structured tetrahedral mesh within my code and using the DMPlexCreateFromDAG function to make a DMPlex out of it. If I go with your suggestion, do I simply call DMRefine(...) after the mesh is distributed? Because I notice that regular refinement is present in PETSc 3.5.2 SNES ex12.c but not in the PETSc developer's version (which I am using). Thanks, Justin On Thu, Dec 11, 2014 at 4:07 AM, Matthew Knepley wrote: > On Wed, Dec 10, 2014 at 7:34 PM, Justin Chang wrote: > >> Hi all, >> >> So I am trying to run a speed-up (i.e., strong scaling) study by solving >> a diffusion problem much like in SNES ex12.c, and plan on using up to 1k >> cores on LANL's Mustang HPC system. However, it seems that DMPlexDistribute >> is taking an extremely long time. I am using -petscpartitioner_type >> parmetis on command line but it seems to make over 50% of the code >> execution time. 
Is this normal or is there a "better" way to conduct such a >> study? >> > > 0) What mesh are you using? The most scalable way of running now is to > read and distribute a coarse mesh and use regular refinement in parallel. > > 1) This is pure overhead in the sense that its one-to-many communication, > and its done once, so most people do not report the time. > > 2) I agree its too slow. There is a branch in next that completely reworks > distribution. We have run it up to 8K cores on Hector and > it is faster. > > 3) Early next year we plan to have parallel startup working, where each > process reads a chunk of the mesh, and then its redistributes > for load balance. > > Thanks, > > Matt > > >> Thanks, >> Justin >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc.medale at univ-amu.fr Thu Dec 11 10:07:51 2014 From: marc.medale at univ-amu.fr (Marc MEDALE) Date: Thu, 11 Dec 2014 17:07:51 +0100 Subject: [petsc-users] Troubles updating my code from PETSc-3.4 to 3.5 Using MUMPS for KSPSolve() In-Reply-To: References: Message-ID: <767243EE-FE85-45D9-8368-2EF5DBEBBE92@univ-amu.fr> Dear Matt, the output files obtained with PETSc-3.4p4 and 3.5p1 versions using the following command line: -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_8 0 -ksp_monitor -ksp_view are attached below. If skipping flops and memory usage per core, a diff between the two output files reduces to: diff Output_3.4p4.txt Output_3.5p1.txt 14c14 < Matrix Object: 64 MPI processes --- > Mat Object: 64 MPI processes 18c18 < total: nonzeros=481059588, allocated nonzeros=481059588 --- > total: nonzeros=4.8106e+08, allocated nonzeros=4.8106e+08 457c457 < INFOG(10) (total integer space store the matrix factors after factorization): 26149876 --- > INFOG(10) (total integer space store the matrix factors after factorization): 26136333 461c461 < INFOG(14) (number of memory compress after factorization): 54 --- > INFOG(14) (number of memory compress after factorization): 48 468,469c468,469 < INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 338 < INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 19782 --- > INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 334 > INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 19779 472a473,478 > INFOG(28) (after factorization: number of null pivots encountered): 0 > INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 470143172 > INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 202, 10547 > INFOG(32) (after analysis: type of analysis done): 1 > INFOG(33) (value used for ICNTL(8)): 0 > INFOG(34) (exponent of the determinant if determinant is requested): 0 474c480 < Matrix Object: 64 MPI processes --- > Mat Object: 64 MPI processes 477c483 < total: nonzeros=63720324, allocated nonzeros=63720324 --- > total: nonzeros=6.37203e+07, allocated nonzeros=6.37203e+07 481c487 < Norme de U 1 7.37266E-02, L 1 1.00000E+00 --- > Norme de U 1 1.61172E-02, L 1 1.00000E+00 483c489 < Temps total d execution : 198.373291969299 --- > Temps 
total d execution : 216.934082031250 Which does not reveal any striking differences, except in the L2 norm of the solution vectors. I need assistance to help me to overcome this quite bizarre behavior. Thank you. Marc MEDALE ========================================================= Universit? Aix-Marseille, Polytech'Marseille, D?pt M?canique Energ?tique Laboratoire IUSTI, UMR 7343 CNRS-Universit? Aix-Marseille Technopole de Chateau-Gombert, 5 rue Enrico Fermi 13453 MARSEILLE, Cedex 13, FRANCE --------------------------------------------------------------------------------------------------- Tel : +33 (0)4.91.10.69.14 ou 38 Fax : +33 (0)4.91.10.69.69 e-mail : marc.medale at univ-amu.fr ========================================================= Le 11 d?c. 2014 ? 11:43, Matthew Knepley a ?crit : > On Thu, Dec 11, 2014 at 4:38 AM, Marc MEDALE wrote: > Dear PETSC Users, > > I have just updated to PETSc-3.5 my research code that uses PETSc for a while but I'm facing an astonishing difference between PETSc-3.4 to 3.5 versions when solving a very ill conditioned algebraic system with MUMPS (4.10.0 in both cases). > > The only differences that arise in my fortran source code are the following: > Loma1-medale% diff ../version_3.5/solvEFL_MAN_SBIF.F ../version_3.4/solvEFL_MAN_SBIF.F > 336,337d335 > < CALL MatSetOption(MATGLOB,MAT_KEEP_NONZERO_PATTERN, > < & PETSC_TRUE,IER) > 749,750c747,748 > < CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_REAL, > < & PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,IER) > --- > > CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_DOUBLE_PRECISION, > > & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,IER) > 909c907,908 > < CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB,IER) > --- > > CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB, > > & SAME_NONZERO_PATTERN,IER) > > When I run the corresponding program versions on 128 cores of our cluster with the same input data and the following command line arguments: > -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_8 0 > > I get the following outputs: > a) with PETSc-3.4p4: > L2 norm of solution vector: 7.39640E-02, > > b) with PETSc-3.5p1: > L2 norm of solution vector: 1.61325E-02 > > Do I have change something else in updating my code based on KSP from PETSc-3.4 to 3.5 versions? > Do any default values in the PETSc-MUMPS interface have been changed from PETSc-3.4 to 3.5? > Any hints or suggestions are welcome to help me to recover the right results (obtained with PETSc-3.4). > > Send the output from -ksp_monitor -ksp_view for both runs. I am guessing that a MUMPS default changed between versions. > > Thanks, > > Matt > > Thank you very much. > > Marc MEDALE. > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: Output_3.4p4.txt URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: Output_3.5p1.txt URL: -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Thu Dec 11 10:16:05 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 11 Dec 2014 10:16:05 -0600 Subject: [petsc-users] Troubles updating my code from PETSc-3.4 to 3.5 Using MUMPS for KSPSolve() In-Reply-To: <767243EE-FE85-45D9-8368-2EF5DBEBBE92@univ-amu.fr> References: <767243EE-FE85-45D9-8368-2EF5DBEBBE92@univ-amu.fr> Message-ID: On Thu, Dec 11, 2014 at 10:07 AM, Marc MEDALE wrote: > Dear Matt, > > the output files obtained with PETSc-3.4p4 and 3.5p1 versions using the > following command line: > -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps > -mat_mumps_icntl_8 0 -ksp_monitor -ksp_view > > are attached below. If skipping flops and memory usage per core, a diff > between the two output files reduces to: > diff Output_3.4p4.txt Output_3.5p1.txt > 14c14 > < Matrix Object: 64 MPI processes > --- > > Mat Object: 64 MPI processes > 18c18 > < total: nonzeros=481059588, allocated nonzeros=481059588 > --- > > total: nonzeros=4.8106e+08, allocated nonzeros=4.8106e+08 > 457c457 > < INFOG(10) (total integer space store the matrix factors > after factorization): 26149876 > --- > > INFOG(10) (total integer space store the matrix factors > after factorization): 26136333 > 461c461 > < INFOG(14) (number of memory compress after factorization): > 54 > --- > > INFOG(14) (number of memory compress after factorization): > 48 > 468,469c468,469 > < INFOG(21) (size in MB of memory effectively used during > factorization - value on the most memory consuming processor): 338 > < INFOG(22) (size in MB of memory effectively used during > factorization - sum over all processors): 19782 > --- > > INFOG(21) (size in MB of memory effectively used during > factorization - value on the most memory consuming processor): 334 > > INFOG(22) (size in MB of memory effectively used during > factorization - sum over all processors): 19779 > 472a473,478 > > INFOG(28) (after factorization: number of null pivots > encountered): 0 > > INFOG(29) (after factorization: effective number of > entries in the factors (sum over all processors)): 470143172 > > INFOG(30, 31) (after solution: size in Mbytes of memory > used during solution phase): 202, 10547 > > INFOG(32) (after analysis: type of analysis done): 1 > > INFOG(33) (value used for ICNTL(8)): 0 > > INFOG(34) (exponent of the determinant if determinant is > requested): 0 > 474c480 > < Matrix Object: 64 MPI processes > --- > > Mat Object: 64 MPI processes > 477c483 > < total: nonzeros=63720324, allocated nonzeros=63720324 > --- > > total: nonzeros=6.37203e+07, allocated nonzeros=6.37203e+07 > 481c487 > < Norme de U 1 7.37266E-02, L 1 1.00000E+00 > --- > > Norme de U 1 1.61172E-02, L 1 1.00000E+00 > 483c489 > < Temps total d execution : 198.373291969299 > --- > > Temps total d execution : 216.934082031250 > > These appear to be two different matrices with the same, or about the same structure. The factorization is proceeding differently, I am guessing due to different pivots. Can you write the matrix to a binary file using MatView() and load it into both versions so that we are certain it is the same? Thanks, Matt > Which does not reveal any striking differences, except in the L2 norm of > the solution vectors. > > I need assistance to help me to overcome this quite bizarre behavior. > > Thank you. > > Marc MEDALE > > ========================================================= > Universit? Aix-Marseille, Polytech'Marseille, D?pt M?canique Energ?tique > Laboratoire IUSTI, UMR 7343 CNRS-Universit? 
Aix-Marseille > Technopole de Chateau-Gombert, 5 rue Enrico Fermi > 13453 MARSEILLE, Cedex 13, FRANCE > > --------------------------------------------------------------------------------------------------- > Tel : +33 (0)4.91.10.69.14 ou 38 > Fax : +33 (0)4.91.10.69.69 > e-mail : marc.medale at univ-amu.fr > ========================================================= > > > > > > > > > > Le 11 d?c. 2014 ? 11:43, Matthew Knepley a ?crit : > > On Thu, Dec 11, 2014 at 4:38 AM, Marc MEDALE > wrote: > >> Dear PETSC Users, >> >> I have just updated to PETSc-3.5 my research code that uses PETSc for a >> while but I'm facing an astonishing difference between PETSc-3.4 to 3.5 >> versions when solving a very ill conditioned algebraic system with MUMPS >> (4.10.0 in both cases). >> >> The only differences that arise in my fortran source code are the >> following: >> Loma1-medale% diff ../version_3.5/solvEFL_MAN_SBIF.F >> ../version_3.4/solvEFL_MAN_SBIF.F >> 336,337d335 >> < CALL MatSetOption(MATGLOB,MAT_KEEP_NONZERO_PATTERN, >> < & PETSC_TRUE,IER) >> 749,750c747,748 >> < CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_REAL, >> < & PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,IER) >> --- >> > CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_DOUBLE_PRECISION, >> > & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,IER) >> 909c907,908 >> < CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB,IER) >> --- >> > CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB, >> > & SAME_NONZERO_PATTERN,IER) >> >> When I run the corresponding program versions on 128 cores of our cluster >> with the same input data and the following command line arguments: >> -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps >> -mat_mumps_icntl_8 0 >> >> I get the following outputs: >> a) with PETSc-3.4p4: >> L2 norm of solution vector: 7.39640E-02, >> >> b) with PETSc-3.5p1: >> L2 norm of solution vector: 1.61325E-02 >> >> Do I have change something else in updating my code based on KSP from >> PETSc-3.4 to 3.5 versions? >> Do any default values in the PETSc-MUMPS interface have been changed from >> PETSc-3.4 to 3.5? >> Any hints or suggestions are welcome to help me to recover the right >> results (obtained with PETSc-3.4). >> > > Send the output from -ksp_monitor -ksp_view for both runs. I am guessing > that a MUMPS default changed between versions. > > Thanks, > > Matt > > >> Thank you very much. >> >> Marc MEDALE. > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
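A minimal sketch of the dump-and-reload check suggested above, so that byte-identical input can be fed to both PETSc builds (the file name is illustrative and error handling is abbreviated; A stands for MATGLOB, and the right-hand side can be written and read the same way with VecView()/VecLoad()):

   PetscViewer viewer;

   /* in the existing code, once the matrix is assembled */
   ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"matglob.dat",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
   ierr = MatView(A,viewer);CHKERRQ(ierr);
   ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

   /* in a small driver compiled against either PETSc version */
   Mat B;
   ierr = MatCreate(PETSC_COMM_WORLD,&B);CHKERRQ(ierr);
   ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"matglob.dat",FILE_MODE_READ,&viewer);CHKERRQ(ierr);
   ierr = MatLoad(B,viewer);CHKERRQ(ierr);
   ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);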
URL: From bsmith at mcs.anl.gov Thu Dec 11 11:01:13 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 11 Dec 2014 11:01:13 -0600 Subject: [petsc-users] Troubles updating my code from PETSc-3.4 to 3.5 Using MUMPS for KSPSolve() In-Reply-To: <767243EE-FE85-45D9-8368-2EF5DBEBBE92@univ-amu.fr> References: <767243EE-FE85-45D9-8368-2EF5DBEBBE92@univ-amu.fr> Message-ID: <4C19AE7F-0A7A-46A8-A452-A3FFEFE46B1B@mcs.anl.gov> Please run both with -ksp_monitor -ksp_type gmres and send the output Barry > On Dec 11, 2014, at 10:07 AM, Marc MEDALE wrote: > > Dear Matt, > > the output files obtained with PETSc-3.4p4 and 3.5p1 versions using the following command line: > -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_8 0 -ksp_monitor -ksp_view > > are attached below. If skipping flops and memory usage per core, a diff between the two output files reduces to: > diff Output_3.4p4.txt Output_3.5p1.txt > 14c14 > < Matrix Object: 64 MPI processes > --- > > Mat Object: 64 MPI processes > 18c18 > < total: nonzeros=481059588, allocated nonzeros=481059588 > --- > > total: nonzeros=4.8106e+08, allocated nonzeros=4.8106e+08 > 457c457 > < INFOG(10) (total integer space store the matrix factors after factorization): 26149876 > --- > > INFOG(10) (total integer space store the matrix factors after factorization): 26136333 > 461c461 > < INFOG(14) (number of memory compress after factorization): 54 > --- > > INFOG(14) (number of memory compress after factorization): 48 > 468,469c468,469 > < INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 338 > < INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 19782 > --- > > INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 334 > > INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 19779 > 472a473,478 > > INFOG(28) (after factorization: number of null pivots encountered): 0 > > INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 470143172 > > INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 202, 10547 > > INFOG(32) (after analysis: type of analysis done): 1 > > INFOG(33) (value used for ICNTL(8)): 0 > > INFOG(34) (exponent of the determinant if determinant is requested): 0 > 474c480 > < Matrix Object: 64 MPI processes > --- > > Mat Object: 64 MPI processes > 477c483 > < total: nonzeros=63720324, allocated nonzeros=63720324 > --- > > total: nonzeros=6.37203e+07, allocated nonzeros=6.37203e+07 > 481c487 > < Norme de U 1 7.37266E-02, L 1 1.00000E+00 > --- > > Norme de U 1 1.61172E-02, L 1 1.00000E+00 > 483c489 > < Temps total d execution : 198.373291969299 > --- > > Temps total d execution : 216.934082031250 > > > Which does not reveal any striking differences, except in the L2 norm of the solution vectors. > > I need assistance to help me to overcome this quite bizarre behavior. > > Thank you. > > Marc MEDALE > > ========================================================= > Universit? Aix-Marseille, Polytech'Marseille, D?pt M?canique Energ?tique > Laboratoire IUSTI, UMR 7343 CNRS-Universit? 
Aix-Marseille > Technopole de Chateau-Gombert, 5 rue Enrico Fermi > 13453 MARSEILLE, Cedex 13, FRANCE > --------------------------------------------------------------------------------------------------- > Tel : +33 (0)4.91.10.69.14 ou 38 > Fax : +33 (0)4.91.10.69.69 > e-mail : marc.medale at univ-amu.fr > ========================================================= > > > > > > > > Le 11 d?c. 2014 ? 11:43, Matthew Knepley a ?crit : > >> On Thu, Dec 11, 2014 at 4:38 AM, Marc MEDALE wrote: >> Dear PETSC Users, >> >> I have just updated to PETSc-3.5 my research code that uses PETSc for a while but I'm facing an astonishing difference between PETSc-3.4 to 3.5 versions when solving a very ill conditioned algebraic system with MUMPS (4.10.0 in both cases). >> >> The only differences that arise in my fortran source code are the following: >> Loma1-medale% diff ../version_3.5/solvEFL_MAN_SBIF.F ../version_3.4/solvEFL_MAN_SBIF.F >> 336,337d335 >> < CALL MatSetOption(MATGLOB,MAT_KEEP_NONZERO_PATTERN, >> < & PETSC_TRUE,IER) >> 749,750c747,748 >> < CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_REAL, >> < & PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,IER) >> --- >> > CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_DOUBLE_PRECISION, >> > & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,IER) >> 909c907,908 >> < CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB,IER) >> --- >> > CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB, >> > & SAME_NONZERO_PATTERN,IER) >> >> When I run the corresponding program versions on 128 cores of our cluster with the same input data and the following command line arguments: >> -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_8 0 >> >> I get the following outputs: >> a) with PETSc-3.4p4: >> L2 norm of solution vector: 7.39640E-02, >> >> b) with PETSc-3.5p1: >> L2 norm of solution vector: 1.61325E-02 >> >> Do I have change something else in updating my code based on KSP from PETSc-3.4 to 3.5 versions? >> Do any default values in the PETSc-MUMPS interface have been changed from PETSc-3.4 to 3.5? >> Any hints or suggestions are welcome to help me to recover the right results (obtained with PETSc-3.4). >> >> Send the output from -ksp_monitor -ksp_view for both runs. I am guessing that a MUMPS default changed between versions. >> >> Thanks, >> >> Matt >> >> Thank you very much. >> >> Marc MEDALE. >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > From knepley at gmail.com Thu Dec 11 11:28:42 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 11 Dec 2014 11:28:42 -0600 Subject: [petsc-users] Speedup studies using DMPlex In-Reply-To: References: Message-ID: On Thu, Dec 11, 2014 at 4:45 AM, Justin Chang wrote: > I am manually creating a structured tetrahedral mesh within my code and > using the DMPlexCreateFromDAG function to make a DMPlex out of it. If I go > with your suggestion, do I simply call DMRefine(...) after the mesh is > distributed? Because I notice that regular refinement is present in PETSc > 3.5.2 SNES ex12.c but not in the PETSc developer's version (which I am > using). 
> In 3.5.2 the regular refinement code is in there explicitly, but in dev you can now use the regular mechanism -dm_refine <# times> Thanks, Matt > Thanks, > Justin > > On Thu, Dec 11, 2014 at 4:07 AM, Matthew Knepley > wrote: > >> On Wed, Dec 10, 2014 at 7:34 PM, Justin Chang >> wrote: >> >>> Hi all, >>> >>> So I am trying to run a speed-up (i.e., strong scaling) study by solving >>> a diffusion problem much like in SNES ex12.c, and plan on using up to 1k >>> cores on LANL's Mustang HPC system. However, it seems that DMPlexDistribute >>> is taking an extremely long time. I am using -petscpartitioner_type >>> parmetis on command line but it seems to make over 50% of the code >>> execution time. Is this normal or is there a "better" way to conduct such a >>> study? >>> >> >> 0) What mesh are you using? The most scalable way of running now is to >> read and distribute a coarse mesh and use regular refinement in parallel. >> >> 1) This is pure overhead in the sense that its one-to-many communication, >> and its done once, so most people do not report the time. >> >> 2) I agree its too slow. There is a branch in next that completely >> reworks distribution. We have run it up to 8K cores on Hector and >> it is faster. >> >> 3) Early next year we plan to have parallel startup working, where each >> process reads a chunk of the mesh, and then its redistributes >> for load balance. >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> Justin >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetroya at gmail.com Thu Dec 11 16:06:12 2014 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Thu, 11 Dec 2014 16:06:12 -0600 Subject: [petsc-users] Changing TSAdapt Message-ID: Hi I'm trying to use the same TS twice, the first time using the basic TSAdaptType, then I change it to none like this: TSAdapt adapt; TSGetAdapt(ts,&adapt); TSAdaptSetType(adapt,"none"); However, when I destroy the TS, I get this error: 0x00007ffff605c4f2 in VecDestroy (v=0x28) at /home/miguel/petsc/src/vec/vec/interface/vector.c:423 423 if (!*v) PetscFunctionReturn(0); (gdb) backtrace #0 0x00007ffff605c4f2 in VecDestroy (v=0x28) at /home/miguel/petsc/src/vec/vec/interface/vector.c:423 #1 0x00007ffff6f330a5 in TSAdaptDestroy_Basic (adapt=0xfdacd0) at /home/miguel/petsc/src/ts/adapt/impls/basic/adaptbasic.c:66 #2 0x00007ffff6f2c433 in TSAdaptDestroy (adapt=0xfccbc8) at /home/miguel/petsc/src/ts/adapt/interface/tsadapt.c:238 #3 0x00007ffff6f03093 in TSDestroy (ts=0x7fffffffdd80) at /home/miguel/petsc/src/ts/interface/ts.c:1906 It's trying to destroy the TSAdaptDestroy_Basic, but I think it was already destroyed when I changed the TSAdaptType to none, is this true? How can I effectively change the TSAdaptType without having this error? Thanks Miguel -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... 
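For the DMPlex strong-scaling thread above, a minimal sketch of the distribute-then-refine approach (development-branch style, where DMPlexDistribute() takes only the overlap and the partitioner is chosen with -petscpartitioner_type; in 3.5 the partitioner name is an extra argument. Here dm is the coarse DM already created, e.g. with DMPlexCreateFromDAG(), and error handling is abbreviated):

   DM dmDist = NULL, dmRef = NULL;

   /* distribute the small coarse mesh once */
   ierr = DMPlexDistribute(dm,0,NULL,&dmDist);CHKERRQ(ierr);
   if (dmDist) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmDist;}

   /* refine regularly in parallel; repeat (or use -dm_refine <n>) for more levels */
   ierr = DMPlexSetRefinementUniform(dm,PETSC_TRUE);CHKERRQ(ierr);
   ierr = DMRefine(dm,PETSC_COMM_WORLD,&dmRef);CHKERRQ(ierr);
   if (dmRef) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmRef;}

This keeps the one-to-many distribution cost proportional to the coarse mesh, which is the point of item 0) quoted above.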
URL: From fdkong.jd at gmail.com Thu Dec 11 21:33:59 2014 From: fdkong.jd at gmail.com (Fande Kong) Date: Thu, 11 Dec 2014 20:33:59 -0700 Subject: [petsc-users] How to call a lapack routine in the petsc? Message-ID: Hi all, How to call a Lapack routine to solve a dense linear system? Any simple example? Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From fd.kong at siat.ac.cn Thu Dec 11 21:39:16 2014 From: fd.kong at siat.ac.cn (Fande Kong) Date: Thu, 11 Dec 2014 20:39:16 -0700 Subject: [petsc-users] How to call a lapack routine in the petsc? Message-ID: Hi all, How to call a Lapack routine to solve a dense linear system? Any simple example? Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Dec 11 21:47:27 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 11 Dec 2014 21:47:27 -0600 Subject: [petsc-users] How to call a lapack routine in the petsc? In-Reply-To: References: Message-ID: <79D10C51-8BEA-4C6D-BF34-9C5A68688D75@mcs.anl.gov> > On Dec 11, 2014, at 9:33 PM, Fande Kong wrote: > > Hi all, > > How to call a Lapack routine to solve a dense linear system? Any simple example? Create the Mat with MatCreateSeqDense() then create the usual KSP and use for the solver options -pc_type lu -ksp_type preonly See src/ksp/ksp/examples/tutorials/ex30.c Barry > > Thanks, > From fdkong.jd at gmail.com Thu Dec 11 21:54:06 2014 From: fdkong.jd at gmail.com (Fande Kong) Date: Thu, 11 Dec 2014 20:54:06 -0700 Subject: [petsc-users] How to call a lapack routine in the petsc? In-Reply-To: <79D10C51-8BEA-4C6D-BF34-9C5A68688D75@mcs.anl.gov> References: <79D10C51-8BEA-4C6D-BF34-9C5A68688D75@mcs.anl.gov> Message-ID: Hi Barry, Thanks. I know how to solve a dense linear system in the petsc, but I was wondering how to call a Lapack routine in the petsc. On Thu, Dec 11, 2014 at 8:47 PM, Barry Smith wrote: > > > > On Dec 11, 2014, at 9:33 PM, Fande Kong wrote: > > > > Hi all, > > > > How to call a Lapack routine to solve a dense linear system? Any simple > example? > > Create the Mat with MatCreateSeqDense() then create the usual KSP and > use for the solver options -pc_type lu -ksp_type preonly > > See src/ksp/ksp/examples/tutorials/ex30.c > > > Barry > > > > > Thanks, > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Dec 11 21:57:22 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 11 Dec 2014 21:57:22 -0600 Subject: [petsc-users] How to call a lapack routine in the petsc? In-Reply-To: References: <79D10C51-8BEA-4C6D-BF34-9C5A68688D75@mcs.anl.gov> Message-ID: On Thu, Dec 11, 2014 at 9:54 PM, Fande Kong wrote: > > Hi Barry, > > Thanks. > > I know how to solve a dense linear system in the petsc, but I was > wondering how to call a Lapack routine in the petsc. > This is from dt.c: #include PetscBLASInt LDZ, N; ierr = PetscBLASIntCast(npoints,&N);CHKERRQ(ierr); LDZ = N; ierr = PetscFPTrapPush(PETSC_FP_TRAP_OFF);CHKERRQ(ierr); PetscStackCallBLAS("LAPACKsteqr",LAPACKsteqr_("I",&N,x,w,Z,&LDZ,work,&info)); ierr = PetscFPTrapPop();CHKERRQ(ierr); if (info) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_PLIB,"xSTEQR error"); Matt > > On Thu, Dec 11, 2014 at 8:47 PM, Barry Smith wrote: >> >> >> > On Dec 11, 2014, at 9:33 PM, Fande Kong wrote: >> > >> > Hi all, >> > >> > How to call a Lapack routine to solve a dense linear system? Any simple >> example? 
>> >> Create the Mat with MatCreateSeqDense() then create the usual KSP and >> use for the solver options -pc_type lu -ksp_type preonly >> >> See src/ksp/ksp/examples/tutorials/ex30.c >> >> >> Barry >> >> > >> > Thanks, >> > >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Dec 11 22:09:30 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 11 Dec 2014 22:09:30 -0600 Subject: [petsc-users] How to call a lapack routine in the petsc? In-Reply-To: References: <79D10C51-8BEA-4C6D-BF34-9C5A68688D75@mcs.anl.gov> Message-ID: <2BCC4FCE-A65F-4C86-8738-BBFBCDF31740@mcs.anl.gov> > On Dec 11, 2014, at 9:54 PM, Fande Kong wrote: > > Hi Barry, > > Thanks. > > I know how to solve a dense linear system in the petsc, but I was wondering how to call a Lapack routine in the petsc. You asked how to call a lapack routine to solve a dense linear system. That is exactly what I told you. For seqdense matrices PETSc lu solvers directly call the LAPACK routines to do the factorization and the solves. Sure it is possible to call LAPACK routines directly to solve a sequential dense linear system but there is no reason to do that since PETSc does it for you. For dense matrices of dimension 10 or larger the overhead of calling through PETSc is negligible so there is no good reason to call lapack directly. Barry > > > > On Thu, Dec 11, 2014 at 8:47 PM, Barry Smith wrote: > > > On Dec 11, 2014, at 9:33 PM, Fande Kong wrote: > > > > Hi all, > > > > How to call a Lapack routine to solve a dense linear system? Any simple example? > > Create the Mat with MatCreateSeqDense() then create the usual KSP and use for the solver options -pc_type lu -ksp_type preonly > > See src/ksp/ksp/examples/tutorials/ex30.c > > > Barry > > > > > Thanks, > > > From bsmith at mcs.anl.gov Thu Dec 11 22:44:22 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 11 Dec 2014 22:44:22 -0600 Subject: [petsc-users] Changing TSAdapt In-Reply-To: References: Message-ID: <9B2FA133-7F7E-439C-B4E0-BDCDD0D093A8@mcs.anl.gov> Miguel, Thanks for reporting this, you have found a bug in our code. When we changed the adapt type we did not zero out the function pointers for the old basic adaptor hence they were improperly called when the object was finally destroyed at the end. I've attached a patch. Once you apply this simply run make gnumake in the PETSc root directory, recompile your code and run it again and it should successfully end. Barry -------------- next part -------------- A non-text attachment was scrubbed... 
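For the dense-solve thread above, a minimal sketch of the SeqDense-plus-KSP suggestion; with a SeqDense matrix, -pc_type lu uses LAPACK under the hood, so no explicit LAPACK call is needed (the size n and the fill-in of A and b are illustrative, error handling abbreviated):

   Mat      A;
   Vec      x, b;
   KSP      ksp;
   PC       pc;
   PetscInt n = 10;

   ierr = MatCreateSeqDense(PETSC_COMM_SELF,n,n,NULL,&A);CHKERRQ(ierr);
   /* ... MatSetValues(A,...) then MatAssemblyBegin/End(A,MAT_FINAL_ASSEMBLY) ... */
   ierr = MatCreateVecs(A,&x,&b);CHKERRQ(ierr);
   /* ... fill b ... */

   ierr = KSPCreate(PETSC_COMM_SELF,&ksp);CHKERRQ(ierr);
   ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
   ierr = KSPSetType(ksp,KSPPREONLY);CHKERRQ(ierr);
   ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
   ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);   /* dense LU, i.e. LAPACK underneath */
   ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);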
Name: ts-adapt.patch Type: application/octet-stream Size: 821 bytes Desc: not available URL: -------------- next part -------------- > On Dec 11, 2014, at 4:06 PM, Miguel Angel Salazar de Troya wrote: > > Hi > > I'm trying to use the same TS twice, the first time using the basic TSAdaptType, then I change it to none like this: > > TSAdapt adapt; > TSGetAdapt(ts,&adapt); > TSAdaptSetType(adapt,"none"); > > However, when I destroy the TS, I get this error: > > 0x00007ffff605c4f2 in VecDestroy (v=0x28) at /home/miguel/petsc/src/vec/vec/interface/vector.c:423 > 423 if (!*v) PetscFunctionReturn(0); > (gdb) backtrace > #0 0x00007ffff605c4f2 in VecDestroy (v=0x28) at /home/miguel/petsc/src/vec/vec/interface/vector.c:423 > #1 0x00007ffff6f330a5 in TSAdaptDestroy_Basic (adapt=0xfdacd0) at /home/miguel/petsc/src/ts/adapt/impls/basic/adaptbasic.c:66 > #2 0x00007ffff6f2c433 in TSAdaptDestroy (adapt=0xfccbc8) at /home/miguel/petsc/src/ts/adapt/interface/tsadapt.c:238 > #3 0x00007ffff6f03093 in TSDestroy (ts=0x7fffffffdd80) at /home/miguel/petsc/src/ts/interface/ts.c:1906 > > > It's trying to destroy the TSAdaptDestroy_Basic, but I think it was already destroyed when I changed the TSAdaptType to none, is this true? How can I effectively change the TSAdaptType without having this error? > > Thanks > Miguel > > -- > Miguel Angel Salazar de Troya > Graduate Research Assistant > Department of Mechanical Science and Engineering > University of Illinois at Urbana-Champaign > (217) 550-2360 > salaza11 at illinois.edu > From jed at jedbrown.org Thu Dec 11 22:50:06 2014 From: jed at jedbrown.org (Jed Brown) Date: Thu, 11 Dec 2014 21:50:06 -0700 Subject: [petsc-users] Changing TSAdapt In-Reply-To: <9B2FA133-7F7E-439C-B4E0-BDCDD0D093A8@mcs.anl.gov> References: <9B2FA133-7F7E-439C-B4E0-BDCDD0D093A8@mcs.anl.gov> Message-ID: <87d27ppj41.fsf@jedbrown.org> Barry Smith writes: > Miguel, > > Thanks for reporting this, you have found a bug in our code. When we changed the adapt type we did not zero out the function pointers for the old basic adaptor hence they were improperly called when the object was finally destroyed at the end. Lisandro fixed this here. https://bitbucket.org/petsc/petsc/commits/40813bd21acd4c08b8080bc6cc1eef9949a22ac8 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Thu Dec 11 22:58:36 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 11 Dec 2014 22:58:36 -0600 Subject: [petsc-users] Changing TSAdapt In-Reply-To: <87d27ppj41.fsf@jedbrown.org> References: <9B2FA133-7F7E-439C-B4E0-BDCDD0D093A8@mcs.anl.gov> <87d27ppj41.fsf@jedbrown.org> Message-ID: <186E057A-085D-4AE2-B786-4E35AAF2E678@mcs.anl.gov> Yikes, seven days out and not yet put into maint! Meanwhile Miguel has to waste a whole day because no one told him about the fix. I won't appologize for fixing it again. Since a fix that is not accessible is not a fix. Barry > On Dec 11, 2014, at 10:50 PM, Jed Brown wrote: > > Barry Smith writes: > >> Miguel, >> >> Thanks for reporting this, you have found a bug in our code. When we changed the adapt type we did not zero out the function pointers for the old basic adaptor hence they were improperly called when the object was finally destroyed at the end. > > Lisandro fixed this here. 
> > https://bitbucket.org/petsc/petsc/commits/40813bd21acd4c08b8080bc6cc1eef9949a22ac8 From fdkong.jd at gmail.com Thu Dec 11 23:11:11 2014 From: fdkong.jd at gmail.com (Fande Kong) Date: Thu, 11 Dec 2014 22:11:11 -0700 Subject: [petsc-users] How to call a lapack routine in the petsc? In-Reply-To: <2BCC4FCE-A65F-4C86-8738-BBFBCDF31740@mcs.anl.gov> References: <79D10C51-8BEA-4C6D-BF34-9C5A68688D75@mcs.anl.gov> <2BCC4FCE-A65F-4C86-8738-BBFBCDF31740@mcs.anl.gov> Message-ID: Thanks, Got it. On Thu, Dec 11, 2014 at 9:09 PM, Barry Smith wrote: > > > > On Dec 11, 2014, at 9:54 PM, Fande Kong wrote: > > > > Hi Barry, > > > > Thanks. > > > > I know how to solve a dense linear system in the petsc, but I was > wondering how to call a Lapack routine in the petsc. > > You asked how to call a lapack routine to solve a dense linear system. > That is exactly what I told you. For seqdense matrices PETSc lu solvers > directly call the LAPACK routines to do the factorization and the solves. > > Sure it is possible to call LAPACK routines directly to solve a > sequential dense linear system but there is no reason to do that since > PETSc does it for you. For dense matrices of dimension 10 or larger the > overhead of calling through PETSc is negligible so there is no good reason > to call lapack directly. > > Barry > > > > > > > > > On Thu, Dec 11, 2014 at 8:47 PM, Barry Smith wrote: > > > > > On Dec 11, 2014, at 9:33 PM, Fande Kong wrote: > > > > > > Hi all, > > > > > > How to call a Lapack routine to solve a dense linear system? Any > simple example? > > > > Create the Mat with MatCreateSeqDense() then create the usual KSP and > use for the solver options -pc_type lu -ksp_type preonly > > > > See src/ksp/ksp/examples/tutorials/ex30.c > > > > > > Barry > > > > > > > > Thanks, > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From siddhesh4godbole at gmail.com Fri Dec 12 07:32:15 2014 From: siddhesh4godbole at gmail.com (siddhesh godbole) Date: Fri, 12 Dec 2014 19:02:15 +0530 Subject: [petsc-users] MatDenseRestoreArray run time error In-Reply-To: References: Message-ID: Hello Barry, I am stuck with MatConvert because the matrix I am feeding into it is set as MATSEQSBAIJ during formation. I learned from your documents that already bij or sbij formats cannot be called upon by MatConvert command. but the ex99.c which you have sent me in your previous mail also has MatConvert operated on A & B which are already set up as MATSEQSBIAJ. I don't understand this. Is there something I am missing to focus on? *Siddhesh M Godbole* 5th year Dual Degree, Civil Eng & Applied Mech. IIT Madras On Thu, Dec 11, 2014 at 3:28 AM, Barry Smith wrote: > > > Siddhesh, > > That example was unfortunately not in the test suite and hence did not > work. Note that it is calling MatDenseRestoreArray() on the wrong matrix (A > when it should be on A_dense). I've attached a fixed version of the example > that should compile and run correctly for you (it does for me). 
> > Sorry for the inconvenience > > Barry > > > > On Dec 10, 2014, at 4:31 AM, siddhesh godbole < > siddhesh4godbole at gmail.com> wrote: > > > > Pardon me, > > This is the full error message > > > > > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > > [0]PETSC ERROR: No support for this operation for this object type > > [0]PETSC ERROR: Cannot locate function MatDenseRestoreArray_C in object > > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > > [0]PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 > > [0]PETSC ERROR: ./8D on a arch-linux2-c-debug named iitm by iitm Wed Dec > 10 15:48:03 2014 > > [0]PETSC ERROR: Configure options --download-mpich > --download-f2cblaslapack=1 > > [0]PETSC ERROR: #1 MatDenseRestoreArray() line 1523 in > /home/iitm/Downloads/petsc-3.5.2/src/mat/impls/dense/seq/dense.c > > [0]PETSC ERROR: #2 main() line 181 in /home/iitm/Desktop/project/8D.c > > > > > > Siddhesh M Godbole > > > > 5th year Dual Degree, > > Civil Eng & Applied Mech. > > IIT Madras > > > > On Wed, Dec 10, 2014 at 3:59 PM, siddhesh godbole < > siddhesh4godbole at gmail.com> wrote: > > hello, > > > > i was trying to modify ex99 in srs/mat/examples/test which deals with > LAPACKsygvx_ for eigenvalues and eigenvectors > > > > it compiles but gives the following error while executing the program > > > > > > > > [0]PETSC ERROR:Cannot locate function MatDenseRestoreArray_C in object > > [0]PETSC ERROR: Configure options --download-mpich > --download-f2cblaslapack=1 > > [0]PETSC ERROR: #1 MatDenseRestoreArray() line 1523 in > /home/iitm/Downloads/petsc-3.5.2/src/mat/impls/dense/seq/dense.c > > > > can you please explain to me what are theses errors? > > > > Siddhesh M Godbole > > > > 5th year Dual Degree, > > Civil Eng & Applied Mech. > > IIT Madras > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Dec 12 08:43:47 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 12 Dec 2014 08:43:47 -0600 Subject: [petsc-users] MatDenseRestoreArray run time error In-Reply-To: References: Message-ID: I don't understand. If you want to use LAPACK to find eigenvalues/eigenvectors then you need to use a dense format; that is the only format that LAPACK supports. So create a SeqDense matrix instead of a seqsbaij matrix. Barry > On Dec 12, 2014, at 7:32 AM, siddhesh godbole wrote: > > Hello Barry, > > I am stuck with MatConvert because the matrix I am feeding into it is set as MATSEQSBAIJ during formation. I learned from your documents that already bij or sbij formats cannot be called upon by MatConvert command. but the ex99.c which you have sent me in your previous mail also has MatConvert operated on A & B which are already set up as MATSEQSBIAJ. > > I don't understand this. Is there something I am missing to focus on? > > > > Siddhesh M Godbole > > 5th year Dual Degree, > Civil Eng & Applied Mech. > IIT Madras > > On Thu, Dec 11, 2014 at 3:28 AM, Barry Smith wrote: > > Siddhesh, > > That example was unfortunately not in the test suite and hence did not work. Note that it is calling MatDenseRestoreArray() on the wrong matrix (A when it should be on A_dense). I've attached a fixed version of the example that should compile and run correctly for you (it does for me). 
> > Sorry for the inconvenience > > Barry > > > > On Dec 10, 2014, at 4:31 AM, siddhesh godbole wrote: > > > > Pardon me, > > This is the full error message > > > > > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > > [0]PETSC ERROR: No support for this operation for this object type > > [0]PETSC ERROR: Cannot locate function MatDenseRestoreArray_C in object > > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > > [0]PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014 > > [0]PETSC ERROR: ./8D on a arch-linux2-c-debug named iitm by iitm Wed Dec 10 15:48:03 2014 > > [0]PETSC ERROR: Configure options --download-mpich --download-f2cblaslapack=1 > > [0]PETSC ERROR: #1 MatDenseRestoreArray() line 1523 in /home/iitm/Downloads/petsc-3.5.2/src/mat/impls/dense/seq/dense.c > > [0]PETSC ERROR: #2 main() line 181 in /home/iitm/Desktop/project/8D.c > > > > > > Siddhesh M Godbole > > > > 5th year Dual Degree, > > Civil Eng & Applied Mech. > > IIT Madras > > > > On Wed, Dec 10, 2014 at 3:59 PM, siddhesh godbole wrote: > > hello, > > > > i was trying to modify ex99 in srs/mat/examples/test which deals with LAPACKsygvx_ for eigenvalues and eigenvectors > > > > it compiles but gives the following error while executing the program > > > > > > > > [0]PETSC ERROR:Cannot locate function MatDenseRestoreArray_C in object > > [0]PETSC ERROR: Configure options --download-mpich --download-f2cblaslapack=1 > > [0]PETSC ERROR: #1 MatDenseRestoreArray() line 1523 in /home/iitm/Downloads/petsc-3.5.2/src/mat/impls/dense/seq/dense.c > > > > can you please explain to me what are theses errors? > > > > Siddhesh M Godbole > > > > 5th year Dual Degree, > > Civil Eng & Applied Mech. > > IIT Madras > > > > From d.scott at ed.ac.uk Fri Dec 12 08:58:19 2014 From: d.scott at ed.ac.uk (David Scott) Date: Fri, 12 Dec 2014 14:58:19 +0000 Subject: [petsc-users] GAMG with more than one DOF Message-ID: <548B028B.6010700@ed.ac.uk> How can I get GAMG to take into account the fact that I have more than one degree of freedom per node when it is producing a coarser grid? I am dealing with a problem in structural mechanics and have an unstructued grid. The matrix is read from a file produced by another application. The components of the displacement for a node are grouped together. Thanks in advance, David -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From knepley at gmail.com Fri Dec 12 10:07:30 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 12 Dec 2014 10:07:30 -0600 Subject: [petsc-users] GAMG with more than one DOF In-Reply-To: <548B028B.6010700@ed.ac.uk> References: <548B028B.6010700@ed.ac.uk> Message-ID: On Fri, Dec 12, 2014 at 8:58 AM, David Scott wrote: > > How can I get GAMG to take into account the fact that I have more than one > degree of freedom per node when it is producing a coarser grid? I am > dealing with a problem in structural mechanics and have an unstructued > grid. The matrix is read from a file produced by another application. The > components of the displacement for a node are grouped together. > GAMG uses the block size from the matrix. Did you set that? Thanks, Matt > Thanks in advance, > > David > > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. 
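A minimal sketch of what Matt is asking about: give the loaded matrix its nodal block size so that GAMG coarsens whole nodes. The block size 3 (displacement components per node) is an assumption for illustration, and 'viewer' stands for the PetscViewer the matrix file is read from; the -matload_block_size option is an alternative way to attach the block size at load time. If nodal coordinates are available, attaching rigid-body modes with MatNullSpaceCreateRigidBody()/MatSetNearNullSpace() usually helps GAMG on elasticity as well.

Mat A;
KSP ksp;
PC  pc;

ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
ierr = MatSetBlockSize(A, 3);CHKERRQ(ierr);     /* 3 dof per node, grouped node by node */
ierr = MatLoad(A, viewer);CHKERRQ(ierr);        /* or run with -matload_block_size 3 */

ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);
ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);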
> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetroya at gmail.com Fri Dec 12 10:38:57 2014 From: salazardetroya at gmail.com (Miguel Angel Salazar de Troya) Date: Fri, 12 Dec 2014 10:38:57 -0600 Subject: [petsc-users] Changing TSAdapt In-Reply-To: <186E057A-085D-4AE2-B786-4E35AAF2E678@mcs.anl.gov> References: <9B2FA133-7F7E-439C-B4E0-BDCDD0D093A8@mcs.anl.gov> <87d27ppj41.fsf@jedbrown.org> <186E057A-085D-4AE2-B786-4E35AAF2E678@mcs.anl.gov> Message-ID: Thanks a lot for the fix. Miguel On Thu, Dec 11, 2014 at 10:58 PM, Barry Smith wrote: > > > Yikes, seven days out and not yet put into maint! Meanwhile Miguel has > to waste a whole day because no one told him about the fix. > > I won't appologize for fixing it again. Since a fix that is not > accessible is not a fix. > > Barry > > > On Dec 11, 2014, at 10:50 PM, Jed Brown wrote: > > > > Barry Smith writes: > > > >> Miguel, > >> > >> Thanks for reporting this, you have found a bug in our code. When we > changed the adapt type we did not zero out the function pointers for the > old basic adaptor hence they were improperly called when the object was > finally destroyed at the end. > > > > Lisandro fixed this here. > > > > > https://bitbucket.org/petsc/petsc/commits/40813bd21acd4c08b8080bc6cc1eef9949a22ac8 > > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11 at illinois.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sun Dec 14 12:19:14 2014 From: jed at jedbrown.org (Jed Brown) Date: Sun, 14 Dec 2014 11:19:14 -0700 Subject: [petsc-users] Checkpointing / restart In-Reply-To: References: Message-ID: <87sigim6vx.fsf@jedbrown.org> Lisandro Dalcin writes: > We can experiment in this code with some extra APIs to let users > specify the initial derivative. What about TSAlphaSetSolution(ts,U,V) > and TSAlphaGetSolution(ts,&U,&V), where U and V are the initial > solution and derivative vectors? For multistep methods, it makes sense to support specification of higher moments. This is necessary, for example, after regridding when using an adaptive spatial discretization. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From domenico_lahaye at yahoo.com Sun Dec 14 13:53:30 2014 From: domenico_lahaye at yahoo.com (domenico lahaye) Date: Sun, 14 Dec 2014 19:53:30 +0000 (UTC) Subject: [petsc-users] Implement new Krylov method within PETSc Message-ID: <1291761417.33024.1418586810263.JavaMail.yahoo@jws100123.mail.ne1.yahoo.com> Dear PETSc developers,? ??Could you pls. formulate guidelines on how to implement?a new (to PETSc) Krylov subspace?method (such as e.g. SQMR?or IDR) in PETSc or refer me to such guidelines?? ??Thanks, Domenico.?? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Sun Dec 14 14:38:14 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 14 Dec 2014 14:38:14 -0600 Subject: [petsc-users] Implement new Krylov method within PETSc In-Reply-To: <1291761417.33024.1418586810263.JavaMail.yahoo@jws100123.mail.ne1.yahoo.com> References: <1291761417.33024.1418586810263.JavaMail.yahoo@jws100123.mail.ne1.yahoo.com> Message-ID: <42860650-09C5-4CB4-9A83-E3B20B892619@mcs.anl.gov> Domenico, Please take a look at src/ksp/ksp/impls/cg/cg.c it contains comments at the top and in within the file on how one can copy the cg implementation and organize it for another Krylov method. Barry src/ksp/pc/impls/jacobi.c does a similar thing for preconditioners > On Dec 14, 2014, at 1:53 PM, domenico lahaye wrote: > > Dear PETSc developers, > > Could you pls. formulate guidelines on how to implement > a new (to PETSc) Krylov subspace method (such as e.g. SQMR > or IDR) in PETSc or refer me to such guidelines? > > Thanks, Domenico. > > From domenico_lahaye at yahoo.com Sun Dec 14 14:52:20 2014 From: domenico_lahaye at yahoo.com (domenico lahaye) Date: Sun, 14 Dec 2014 20:52:20 +0000 (UTC) Subject: [petsc-users] Implement new Krylov method within PETSc In-Reply-To: <42860650-09C5-4CB4-9A83-E3B20B892619@mcs.anl.gov> References: <42860650-09C5-4CB4-9A83-E3B20B892619@mcs.anl.gov> Message-ID: <1874064447.328975.1418590340221.JavaMail.yahoo@jws10031.mail.ne1.yahoo.com> ?Thanks. Domenico.? From: Barry Smith To: domenico lahaye Cc: Petsc-users List Sent: Sunday, December 14, 2014 9:38 PM Subject: Re: [petsc-users] Implement new Krylov method within PETSc ? Domenico, ? ? Please take a look at src/ksp/ksp/impls/cg/cg.c it contains comments at the top and in within the file on how one can copy the cg implementation and organize it for another Krylov method. ? Barry ? src/ksp/pc/impls/jacobi.c does a similar thing for preconditioners > On Dec 14, 2014, at 1:53 PM, domenico lahaye wrote: > > Dear PETSc developers, > >? Could you pls. formulate guidelines on how to implement > a new (to PETSc) Krylov subspace method (such as e.g. SQMR > or IDR) in PETSc or refer me to such guidelines? > >? Thanks, Domenico. >? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglists at xgm.de Mon Dec 15 07:12:17 2014 From: mailinglists at xgm.de (Florian Lindner) Date: Mon, 15 Dec 2014 14:12:17 +0100 Subject: [petsc-users] Hang with Internal Error: Ring ids do not match Message-ID: <1566789.fOW3KWVRd3@asaru> Hello, since our application has two possible entry paths, petsc could be initialized at different positions. When used as a library (in contrast to a standalone executable), the code looks like: PetscErrorCode ierr; std::cout << "Petsc before PetscInitializeNoArguments()" << std::endl; ierr = PetscInitializeNoArguments(); CHKERRV(ierr); std::cout << "Petsc after PetscInitializeNoArguments()" << std::endl; It never get's to "Petsc after..." 
and instead hangs and prints this error message: Internal Error: invalid error code 609e0e (Ring ids do not match) in MPIR_Allreduce_impl:712 [0]PETSC ERROR: #1 PetscWorldIsSingleHost() line 99 in /data2/scratch/lindner/petsc/src/sys/utils/pdisplay.c [0]PETSC ERROR: #2 PetscSetDisplay() line 123 in /data2/scratch/lindner/petsc/src/sys/utils/pdisplay.c [0]PETSC ERROR: #3 PetscOptionsCheckInitial_Private() line 324 in /data2/scratch/lindner/petsc/src/sys/objects/init.c [0]PETSC ERROR: #4 PetscInitialize() line 881 in /data2/scratch/lindner/petsc/src/sys/objects/pinit.c [0]PETSC ERROR: #5 SolverInterfaceImpl() line 120 in src/precice/impl/SolverInterfaceImpl.cpp Which is somehow incomprehensible for me... What could be the cause for that? Thanks, Florian From knepley at gmail.com Mon Dec 15 07:52:14 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 15 Dec 2014 07:52:14 -0600 Subject: [petsc-users] Hang with Internal Error: Ring ids do not match In-Reply-To: <1566789.fOW3KWVRd3@asaru> References: <1566789.fOW3KWVRd3@asaru> Message-ID: On Mon, Dec 15, 2014 at 7:12 AM, Florian Lindner wrote: > > Hello, > > since our application has two possible entry paths, petsc could be > initialized at different positions. When used as a library (in contrast to > a standalone executable), the code looks like: > > PetscErrorCode ierr; > std::cout << "Petsc before PetscInitializeNoArguments()" << std::endl; > ierr = PetscInitializeNoArguments(); CHKERRV(ierr); > std::cout << "Petsc after PetscInitializeNoArguments()" << std::endl; > > It never get's to "Petsc after..." and instead hangs and prints this error > message: > > Internal Error: invalid error code 609e0e (Ring ids do not match) in > MPIR_Allreduce_impl:712 > [0]PETSC ERROR: #1 PetscWorldIsSingleHost() line 99 in > /data2/scratch/lindner/petsc/src/sys/utils/pdisplay.c > [0]PETSC ERROR: #2 PetscSetDisplay() line 123 in > /data2/scratch/lindner/petsc/src/sys/utils/pdisplay.c > [0]PETSC ERROR: #3 PetscOptionsCheckInitial_Private() line 324 in > /data2/scratch/lindner/petsc/src/sys/objects/init.c > [0]PETSC ERROR: #4 PetscInitialize() line 881 in > /data2/scratch/lindner/petsc/src/sys/objects/pinit.c > [0]PETSC ERROR: #5 SolverInterfaceImpl() line 120 in > src/precice/impl/SolverInterfaceImpl.cpp > > Which is somehow incomprehensible for me... What could be the cause for > that? > It means all procs did not callPetscInitialize(). We call MPI_Allreduce(), which needs all procs in MPI_COMM_WORLD. Thanks, Matt > Thanks, > Florian > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Dec 15 11:30:43 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 15 Dec 2014 11:30:43 -0600 Subject: [petsc-users] Hang with Internal Error: Ring ids do not match In-Reply-To: References: <1566789.fOW3KWVRd3@asaru> Message-ID: <8A6F275D-046E-4D98-82F1-761AFE9F2E01@mcs.anl.gov> Are you setting PETSC_COMM_WORLD before these calls? Perhaps on some processes but not on others? Barry > On Dec 15, 2014, at 7:52 AM, Matthew Knepley wrote: > > On Mon, Dec 15, 2014 at 7:12 AM, Florian Lindner wrote: > Hello, > > since our application has two possible entry paths, petsc could be initialized at different positions. 
When used as a library (in contrast to a standalone executable), the code looks like: > > PetscErrorCode ierr; > std::cout << "Petsc before PetscInitializeNoArguments()" << std::endl; > ierr = PetscInitializeNoArguments(); CHKERRV(ierr); > std::cout << "Petsc after PetscInitializeNoArguments()" << std::endl; > > It never get's to "Petsc after..." and instead hangs and prints this error message: > > Internal Error: invalid error code 609e0e (Ring ids do not match) in MPIR_Allreduce_impl:712 > [0]PETSC ERROR: #1 PetscWorldIsSingleHost() line 99 in /data2/scratch/lindner/petsc/src/sys/utils/pdisplay.c > [0]PETSC ERROR: #2 PetscSetDisplay() line 123 in /data2/scratch/lindner/petsc/src/sys/utils/pdisplay.c > [0]PETSC ERROR: #3 PetscOptionsCheckInitial_Private() line 324 in /data2/scratch/lindner/petsc/src/sys/objects/init.c > [0]PETSC ERROR: #4 PetscInitialize() line 881 in /data2/scratch/lindner/petsc/src/sys/objects/pinit.c > [0]PETSC ERROR: #5 SolverInterfaceImpl() line 120 in src/precice/impl/SolverInterfaceImpl.cpp > > Which is somehow incomprehensible for me... What could be the cause for that? > > It means all procs did not callPetscInitialize(). We call MPI_Allreduce(), which needs all procs in MPI_COMM_WORLD. > > Thanks, > > Matt > > Thanks, > Florian > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From dalcinl at gmail.com Mon Dec 15 14:46:17 2014 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Mon, 15 Dec 2014 17:46:17 -0300 Subject: [petsc-users] Changing TSAdapt In-Reply-To: <9B2FA133-7F7E-439C-B4E0-BDCDD0D093A8@mcs.anl.gov> References: <9B2FA133-7F7E-439C-B4E0-BDCDD0D093A8@mcs.anl.gov> Message-ID: On 12 December 2014 at 01:44, Barry Smith wrote: > > Miguel, > > Thanks for reporting this, you have found a bug in our code. When we changed the adapt type we did not zero out the function pointers for the old basic adaptor hence they were improperly called when the object was finally destroyed at the end. > > I've attached a patch. Once you apply this simply run > > make gnumake > > in the PETSc root directory, recompile your code and run it again and it should successfully end. > > Barry > Your patch also clears any user-specified adapt->ops->checkstage. Perhaps you should revert your commit and merge my PR instead? https://bitbucket.org/petsc/petsc/pull-request/228/fixes-for-ts-tsadapt-and-tsalpha/diff -- Lisandro Dalcin ============ Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor.kaust.edu.sa/ 4700 King Abdullah University of Science and Technology al-Khawarizmi Bldg (Bldg 1), Office # 4332 Thuwal 23955-6900, Kingdom of Saudi Arabia http://www.kaust.edu.sa Office Phone: +966 12 808-0459 From bsmith at mcs.anl.gov Mon Dec 15 16:20:00 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 15 Dec 2014 16:20:00 -0600 Subject: [petsc-users] Changing TSAdapt In-Reply-To: References: <9B2FA133-7F7E-439C-B4E0-BDCDD0D093A8@mcs.anl.gov> Message-ID: <71C3E1CB-2F7B-41E1-AA03-70E10B4CD87F@mcs.anl.gov> I have no idea how to revert something that is in maint, master, etc. Jed will need to fix this mess. I have no idea how to do it. I have no objection to someone taking out my fix and putting in yours, but it needs to go into maint in less than six months. 
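Referring back to Florian's initialization hang a few messages above, a minimal sketch of library-style initialization that respects both points Matt and Barry raise: every rank of the application's communicator has to reach PetscInitialize, and if PETSC_COMM_WORLD is replaced it must be set to the same communicator on all of those ranks before the call. The function and communicator names below are made up for illustration:

/* called collectively on 'comm' by every rank that will use PETSc */
PetscErrorCode MyLibraryInit(MPI_Comm comm)
{
  PetscBool      initialized;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscInitialized(&initialized);CHKERRQ(ierr);
  if (!initialized) {
    PETSC_COMM_WORLD = comm;                 /* same comm on every rank, set before init */
    ierr = PetscInitializeNoArguments();CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}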
Barry > On Dec 15, 2014, at 2:46 PM, Lisandro Dalcin wrote: > > On 12 December 2014 at 01:44, Barry Smith wrote: >> >> Miguel, >> >> Thanks for reporting this, you have found a bug in our code. When we changed the adapt type we did not zero out the function pointers for the old basic adaptor hence they were improperly called when the object was finally destroyed at the end. >> >> I've attached a patch. Once you apply this simply run >> >> make gnumake >> >> in the PETSc root directory, recompile your code and run it again and it should successfully end. >> >> Barry >> > > Your patch also clears any user-specified adapt->ops->checkstage. > Perhaps you should revert your commit and merge my PR instead? > > https://bitbucket.org/petsc/petsc/pull-request/228/fixes-for-ts-tsadapt-and-tsalpha/diff > > > > -- > Lisandro Dalcin > ============ > Research Scientist > Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) > Numerical Porous Media Center (NumPor) > King Abdullah University of Science and Technology (KAUST) > http://numpor.kaust.edu.sa/ > > 4700 King Abdullah University of Science and Technology > al-Khawarizmi Bldg (Bldg 1), Office # 4332 > Thuwal 23955-6900, Kingdom of Saudi Arabia > http://www.kaust.edu.sa > > Office Phone: +966 12 808-0459 From abhyshr at mcs.anl.gov Mon Dec 15 20:33:33 2014 From: abhyshr at mcs.anl.gov (Abhyankar, Shrirang G.) Date: Tue, 16 Dec 2014 02:33:33 +0000 Subject: [petsc-users] DMPlex and MatSetValuesLocal Message-ID: Matt, Does MatSetValuesLocal work with a matrix that is created with DMPlex? Well, actually I am using DMNetwork. I am getting the following error because ISLocalToGlobalMapping mat->rmap->mapping and mat->cmap->mapping are not set on the matrix. Perhaps I am not setting up something correctly? Shri [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Null argument, when expecting valid pointer [0]PETSC ERROR: Null Object: Parameter # 1 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.5.2-1134-g7fbfed6 GIT Date: 2014-12-13 14:24:34 -0600 [0]PETSC ERROR: ./DYN on a debug-master named Shrirangs-MacBook-Pro.local by Shri Mon Dec 15 20:11:18 2014 [0]PETSC ERROR: Configure options --download-chaco --download-metis --download-parmetis --download-superlu_dist PETSC_ARCH=debug-master [0]PETSC ERROR: #1 ISLocalToGlobalMappingApply() line 396 in /Users/Shri/packages/petsc/src/vec/is/utils/isltog.c [0]PETSC ERROR: #2 MatSetValuesLocal() line 2017 in /Users/Shri/packages/petsc/src/mat/interface/matrix.c [0]PETSC ERROR: #3 DYNIJacobian() line 282 in /Users/Shri/Documents/tsopf-code/src/dyn/dyn.c [0]PETSC ERROR: #4 TSComputeIJacobian() line 763 in /Users/Shri/packages/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #5 SNESTSFormJacobian_Theta() line 320 in /Users/Shri/packages/petsc/src/ts/impls/implicit/theta/theta.c [0]PETSC ERROR: #6 SNESTSFormJacobian() line 3552 in /Users/Shri/packages/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #7 SNESComputeJacobian() line 2193 in /Users/Shri/packages/petsc/src/snes/interface/snes.c [0]PETSC ERROR: #8 SNESSolve_NEWTONLS() line 230 in /Users/Shri/packages/petsc/src/snes/impls/ls/ls.c [0]PETSC ERROR: #9 SNESSolve() line 3743 in /Users/Shri/packages/petsc/src/snes/interface/snes.c [0]PETSC ERROR: #10 TSStep_Theta() line 195 in /Users/Shri/packages/petsc/src/ts/impls/implicit/theta/theta.c [0]PETSC ERROR: #11 TSStep() line 2628 in /Users/Shri/packages/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #12 TSSolve() line 2745 in /Users/Shri/packages/petsc/src/ts/interface/ts.c [0]PETSC ERROR: #13 DYNSolve() line 620 in /Users/Shri/Documents/tsopf-code/src/dyn/dyn.c [0]PETSC ERROR: #14 main() line 35 in /Users/Shri/Documents/tsopf-code/applications/dyn-main.c [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_WORLD, 85) - process 0 Shri From marc.medale at univ-amu.fr Tue Dec 16 02:55:14 2014 From: marc.medale at univ-amu.fr (Marc MEDALE) Date: Tue, 16 Dec 2014 09:55:14 +0100 Subject: [petsc-users] Troubles updating my code from PETSc-3.4 to 3.5 Using MUMPS for KSPSolve() In-Reply-To: <4C19AE7F-0A7A-46A8-A452-A3FFEFE46B1B@mcs.anl.gov> References: <767243EE-FE85-45D9-8368-2EF5DBEBBE92@univ-amu.fr> <4C19AE7F-0A7A-46A8-A452-A3FFEFE46B1B@mcs.anl.gov> Message-ID: <9F0BE70F-6948-4249-BA43-3E897250EF29@univ-amu.fr> Dear Barry and Matt, Since the outputs from ksp_view and kip_monitor where not very helpful, I come back to you with results from more detailed tests on the solution a very ill-conditionned algebraic system solved in parallel with KSPSolve and MUMPS direct solver. 1) I have dumped into binary files both the assembled matrix and rhs computed with the two versions of my research code (PETSc-3.4p4 and 3.5p1). 
The respective files are:: Mat_bin_3.4p4, RHS_bin_3.4p4; Mat_bin_3.5p1, RHS_bin_3.5p1; 2) To prevent from any question refering to a possible bug in my own code upgrade I have run /src/ksp/ksp/examples/tutorials/ex10 (slightly modified to compute the L2 norm of the solution vector, attached to this e-mail) on 40 cores with the two PETSc versions and the combination of Mat and Rhs, with the following command line options: -f0 Mat_bin_3.5p1 -f1 Mat_bin_3.4p4 -rhs RHS_bin_3.4p4 -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_8 0 -mat_type mpiaij -vec_type mpi -options_left and : -f0 Mat_bin_3.5p1 -f1 Mat_bin_3.4p4 -rhs RHS_bin_3.5p1 -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_8 0 -mat_type mpiaij -vec_type mpi -options_left 3) Results provided below compare the outputs obtained by a diff on respective output files (all the four output files are attached to this e-mail): a) ex10-PETSs-3.5p1, running with the various binary matrix files and rhs files: diff Test_ex10-3.5p1_rhs-3.5p1.out Test_ex10-3.5p1_rhs-3.4p4.out 2c2 < Residual norm 1.66855e-08 --- > Residual norm 1.66813e-08 5c5 < Residual norm 1.6675e-08 --- > Residual norm 1.66699e-08 16c16 < -rhs RHS_bin_3.5p1 --- > -rhs RHS_bin_3.4p4 b) ex10-PETSs-3.5p1 versus ex10-PETSs-3.4p4, with the various binary matrix files and rhs files: diff Test_ex10-3.5p1_rhs-3.5p1.out Test_ex10-3.4p4_rhs-3.5p1.out 2,3c2,3 < Residual norm 1.66855e-08 < Solution norm 0.0161289 --- > Residual norm 2.89642e-08 > Solution norm 0.0731946 5,6c5,6 < Residual norm 1.6675e-08 < Solution norm 0.0161289 --- > Residual norm 2.89849e-08 > Solution norm 0.0732001 4) Analysis: - Test a) and its symmetric (undertaken with ex10-PETSc-3.4p1 demonstrate that the two matrices and two Rhs computed with the two PETSc versions are identical: they produce the same solution vector and comparable residuals, up to the numerical accuracy when solving such ill-conditionned algebraic systems (condition number of order of 1e9, that the reason I use the MUMPS direct solver); - Test b) and its symmetric (undertaken with the rhs computed with PETSc-3.4p4) show that a very different solution vector (more than 4 times difference in the L2 norm) is obtained when solving the algebraic system with ex10-3.5p1 and ex10-3.4p4, both with MUMPS-4.10.0 and the same command line options, whereas the residuals are quite different but only twice. The first two lines below refer to the former calculation and the last two lines refer to the latter one: < Residual norm 1.66855e-08 < Solution norm 0.0161289 --- > Residual norm 2.89642e-08 > Solution norm 0.0731946 5) Questions: - Do any default values in the PETSc-MUMPS interface have been changed from PETSc-3.4 to 3.5? - What is going wrong? If you have some time to play on your side with the binary files (matrices and rhs), I would be pleased to provide them to you, just let me know where to drop them. Their weight is approx 775 Mo for each mat and 16 Mo for each rhs. Thank you for you help to overcome this crazy problem. Best regards. Marc MEDALE Le 11 d?c. 2014 ? 18:01, Barry Smith a ?crit : > > Please run both with -ksp_monitor -ksp_type gmres and send the output > > Barry > >> On Dec 11, 2014, at 10:07 AM, Marc MEDALE wrote: >> >> Dear Matt, >> >> the output files obtained with PETSc-3.4p4 and 3.5p1 versions using the following command line: >> -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_8 0 -ksp_monitor -ksp_view >> >> are attached below. 
If skipping flops and memory usage per core, a diff between the two output files reduces to: >> diff Output_3.4p4.txt Output_3.5p1.txt >> 14c14 >> < Matrix Object: 64 MPI processes >> --- >>> Mat Object: 64 MPI processes >> 18c18 >> < total: nonzeros=481059588, allocated nonzeros=481059588 >> --- >>> total: nonzeros=4.8106e+08, allocated nonzeros=4.8106e+08 >> 457c457 >> < INFOG(10) (total integer space store the matrix factors after factorization): 26149876 >> --- >>> INFOG(10) (total integer space store the matrix factors after factorization): 26136333 >> 461c461 >> < INFOG(14) (number of memory compress after factorization): 54 >> --- >>> INFOG(14) (number of memory compress after factorization): 48 >> 468,469c468,469 >> < INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 338 >> < INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 19782 >> --- >>> INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 334 >>> INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 19779 >> 472a473,478 >>> INFOG(28) (after factorization: number of null pivots encountered): 0 >>> INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 470143172 >>> INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 202, 10547 >>> INFOG(32) (after analysis: type of analysis done): 1 >>> INFOG(33) (value used for ICNTL(8)): 0 >>> INFOG(34) (exponent of the determinant if determinant is requested): 0 >> 474c480 >> < Matrix Object: 64 MPI processes >> --- >>> Mat Object: 64 MPI processes >> 477c483 >> < total: nonzeros=63720324, allocated nonzeros=63720324 >> --- >>> total: nonzeros=6.37203e+07, allocated nonzeros=6.37203e+07 >> 481c487 >> < Norme de U 1 7.37266E-02, L 1 1.00000E+00 >> --- >>> Norme de U 1 1.61172E-02, L 1 1.00000E+00 >> 483c489 >> < Temps total d execution : 198.373291969299 >> --- >>> Temps total d execution : 216.934082031250 >> >> >> Which does not reveal any striking differences, except in the L2 norm of the solution vectors. >> >> I need assistance to help me to overcome this quite bizarre behavior. >> >> Thank you. >> >> Marc MEDALE >> >> ========================================================= >> Universit? Aix-Marseille, Polytech'Marseille, D?pt M?canique Energ?tique >> Laboratoire IUSTI, UMR 7343 CNRS-Universit? Aix-Marseille >> Technopole de Chateau-Gombert, 5 rue Enrico Fermi >> 13453 MARSEILLE, Cedex 13, FRANCE >> --------------------------------------------------------------------------------------------------- >> Tel : +33 (0)4.91.10.69.14 ou 38 >> Fax : +33 (0)4.91.10.69.69 >> e-mail : marc.medale at univ-amu.fr >> ========================================================= >> >> >> >> >> >> >> >> Le 11 d?c. 2014 ? 11:43, Matthew Knepley a ?crit : >> >>> On Thu, Dec 11, 2014 at 4:38 AM, Marc MEDALE wrote: >>> Dear PETSC Users, >>> >>> I have just updated to PETSc-3.5 my research code that uses PETSc for a while but I'm facing an astonishing difference between PETSc-3.4 to 3.5 versions when solving a very ill conditioned algebraic system with MUMPS (4.10.0 in both cases). 
>>> >>> The only differences that arise in my fortran source code are the following: >>> Loma1-medale% diff ../version_3.5/solvEFL_MAN_SBIF.F ../version_3.4/solvEFL_MAN_SBIF.F >>> 336,337d335 >>> < CALL MatSetOption(MATGLOB,MAT_KEEP_NONZERO_PATTERN, >>> < & PETSC_TRUE,IER) >>> 749,750c747,748 >>> < CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_REAL, >>> < & PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,IER) >>> --- >>>> CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_DOUBLE_PRECISION, >>>> & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,IER) >>> 909c907,908 >>> < CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB,IER) >>> --- >>>> CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB, >>>> & SAME_NONZERO_PATTERN,IER) >>> >>> When I run the corresponding program versions on 128 cores of our cluster with the same input data and the following command line arguments: >>> -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_8 0 >>> >>> I get the following outputs: >>> a) with PETSc-3.4p4: >>> L2 norm of solution vector: 7.39640E-02, >>> >>> b) with PETSc-3.5p1: >>> L2 norm of solution vector: 1.61325E-02 >>> >>> Do I have change something else in updating my code based on KSP from PETSc-3.4 to 3.5 versions? >>> Do any default values in the PETSc-MUMPS interface have been changed from PETSc-3.4 to 3.5? >>> Any hints or suggestions are welcome to help me to recover the right results (obtained with PETSc-3.4). >>> >>> Send the output from -ksp_monitor -ksp_view for both runs. I am guessing that a MUMPS default changed between versions. >>> >>> Thanks, >>> >>> Matt >>> >>> Thank you very much. >>> >>> Marc MEDALE. >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> -- Norbert Wiener >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ex10.c Type: application/octet-stream Size: 18867 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Test_ex10-3.4p4_rhs-3.4p4.out Type: application/octet-stream Size: 434 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Test_ex10-3.4p4_rhs-3.5p1.out Type: application/octet-stream Size: 434 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Test_ex10-3.5p1_rhs-3.4p4.out Type: application/octet-stream Size: 434 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Test_ex10-3.5p1_rhs-3.5p1.out Type: application/octet-stream Size: 433 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... 
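One way to rule out a change of MUMPS defaults between the two builds compared above is to pin the MUMPS controls explicitly on both sides, either on the command line (-mat_mumps_icntl_7, -mat_mumps_icntl_14, -mat_mumps_cntl_1, ...) or in code. A sketch of the in-code route, following the pattern of the PETSc MUMPS examples; A, b and x are assumed to be the matrix and vectors from the run above, and the particular control values are placeholders, not recommendations:

KSP ksp;
PC  pc;
Mat F;                                                     /* the MUMPS factor matrix */

ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);
ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
ierr = PCFactorSetMatSolverPackage(pc, MATSOLVERMUMPS);CHKERRQ(ierr);
ierr = PCFactorSetUpMatSolverPackage(pc);CHKERRQ(ierr);    /* creates the factor matrix */
ierr = PCFactorGetMatrix(pc, &F);CHKERRQ(ierr);
ierr = MatMumpsSetIcntl(F, 7, 5);CHKERRQ(ierr);            /* fix the ordering used by both builds */
ierr = MatMumpsSetIcntl(F, 14, 30);CHKERRQ(ierr);          /* workspace increase, percent */
ierr = MatMumpsSetCntl(F, 1, 0.01);CHKERRQ(ierr);          /* relative pivoting threshold */
ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);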
URL: From d.scott at ed.ac.uk Tue Dec 16 09:12:38 2014 From: d.scott at ed.ac.uk (David Scott) Date: Tue, 16 Dec 2014 15:12:38 +0000 Subject: [petsc-users] GAMG with more than one DOF In-Reply-To: References: <548B028B.6010700@ed.ac.uk> Message-ID: <54904BE6.7030607@ed.ac.uk> On 12/12/2014 16:07, Matthew Knepley wrote: > On Fri, Dec 12, 2014 at 8:58 AM, David Scott > wrote: > > How can I get GAMG to take into account the fact that I have more > than one degree of freedom per node when it is producing a coarser > grid? I am dealing with a problem in structural mechanics and have > an unstructued grid. The matrix is read from a file produced by > another application. The components of the displacement for a node > are grouped together. > > > GAMG uses the block size from the matrix. Did you set that? > > Thanks, > > Matt > I did set the block size but I had a strange result from one of my runs and I wanted to check that I had done the right thing. I had seen the performance improvement that I expected for a 10^6 x 10^6 matrix but not for a 10^7 x 10^7 matrix (for which the performance was worse). I have subsequently tried a matrix of intermediate size for which I did see a performance improvement. I'll try to find out what is happening with the largest matrix. Thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: not available URL: From souza.michael at gmail.com Tue Dec 16 13:02:12 2014 From: souza.michael at gmail.com (Michael Souza) Date: Tue, 16 Dec 2014 16:02:12 -0300 Subject: [petsc-users] Memory leak when destroying an IS created using ISConcatenate Message-ID: There is a memory leak when destroying an IS object created with ISConcatenate function. The leak can be reproduced with code below. Cheers, Michael Souza ------------------------------------------------------------------ static char help[] = "Memory leak in ISConcatenate function\n\n"; #include #include "matblock.h" int main(int argc, char **args) { PetscErrorCode ierr; IS isa, isb, isc; ierr = PetscInitialize(&argc, &args, (char *) 0, help); CHKERRQ(ierr); ierr = ISCreateStride(PETSC_COMM_WORLD,2,0,1,&isa); CHKERRQ(ierr); ierr = ISCreateStride(PETSC_COMM_WORLD,2,2,1,&isb); CHKERRQ(ierr); IS isarray[] = {isa, isb}; ierr = ISConcatenate(PETSC_COMM_WORLD,2,isarray,&isc); CHKERRQ(ierr); ierr = ISDestroy(&isa); CHKERRQ(ierr); ierr = ISDestroy(&isb); CHKERRQ(ierr); ierr = ISDestroy(&isc); CHKERRQ(ierr); ierr = PetscFinalize(); CHKERRQ(ierr); PetscFunctionReturn(0); } ------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Dec 16 14:10:37 2014 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 16 Dec 2014 14:10:37 -0600 Subject: [petsc-users] Memory leak when destroying an IS created using ISConcatenate In-Reply-To: References: Message-ID: Thanks for the report. The following patch should fix it. 
Satish ----------- diff --git a/src/vec/is/is/utils/isdiff.c b/src/vec/is/is/utils/isdiff.c index 6660fff..eb18ccf 100644 --- a/src/vec/is/is/utils/isdiff.c +++ b/src/vec/is/is/utils/isdiff.c @@ -362,6 +362,7 @@ PetscErrorCode ISConcatenate(MPI_Comm comm, PetscInt len, const IS islist[], IS ierr = ISGetLocalSize(islist[i], &n);CHKERRQ(ierr); ierr = ISGetIndices(islist[i], &iidx);CHKERRQ(ierr); ierr = PetscMemcpy(idx+N,iidx, sizeof(PetscInt)*n);CHKERRQ(ierr); + ierr = ISRestoreIndices(islist[i], &iidx);CHKERRQ(ierr); N += n; } ierr = ISCreateGeneral(comm, N, idx, PETSC_OWN_POINTER, isout);CHKERRQ(ierr); On Tue, 16 Dec 2014, Michael Souza wrote: > There is a memory leak when destroying an IS object created with > ISConcatenate function. > > The leak can be reproduced with code below. > > Cheers, > Michael Souza > ------------------------------------------------------------------ > static char help[] = "Memory leak in ISConcatenate function\n\n"; > #include > #include "matblock.h" > int main(int argc, char **args) { > PetscErrorCode ierr; > IS isa, isb, isc; > > ierr = PetscInitialize(&argc, &args, (char *) 0, help); CHKERRQ(ierr); > > ierr = ISCreateStride(PETSC_COMM_WORLD,2,0,1,&isa); CHKERRQ(ierr); > ierr = ISCreateStride(PETSC_COMM_WORLD,2,2,1,&isb); CHKERRQ(ierr); > > IS isarray[] = {isa, isb}; > ierr = ISConcatenate(PETSC_COMM_WORLD,2,isarray,&isc); CHKERRQ(ierr); > > ierr = ISDestroy(&isa); CHKERRQ(ierr); > ierr = ISDestroy(&isb); CHKERRQ(ierr); > ierr = ISDestroy(&isc); CHKERRQ(ierr); > > ierr = PetscFinalize(); CHKERRQ(ierr); > PetscFunctionReturn(0); > } > ------------------------------------------------------------------ > From paulhuaizhang at gmail.com Wed Dec 17 08:58:17 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Wed, 17 Dec 2014 09:58:17 -0500 Subject: [petsc-users] with hdf5 dir Message-ID: Hi All, I attempted to include hdf5 dir in my configuration, but failed. --with-hdf5-dir=$HDF5PATH I tried HDF5PATH=/share/cluster/SLES9/x86_64/apps/hdf5/1.8.3, but it shows UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- --with-hdf5-dir=/share/cluster/SLES9/x86_64/apps/hdf5/1.8.3 did not work ******************************************************************************* As I run $module show hdf5, I got *-------------------------------------------------------------------* */usr/share/Modules/modulefiles/hdf5/1.8.3/icc/default:* *module-whatis load HDF5 environment * *prepend-path PATH /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/bin * *append-path LD_LIBRARY_PATH /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/lib * *prepend-path MANPATH /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/share/man * *setenv HDF_INCL /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/include * *-------------------------------------------------------------------* What is my HDF5PATH? Thanks, Paul Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at uky.edu Wed Dec 17 09:05:40 2014 From: paulhuaizhang at uky.edu (UK) Date: Wed, 17 Dec 2014 10:05:40 -0500 Subject: [petsc-users] PETSc and some external libraries configured with CMake? 
Message-ID: Hi All, Does anyone have the experience of using cmake to get some out-of-source libraries to work with PETSc? Your help is highly appreciated it. Thanks, Paul Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Dec 17 09:19:24 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 17 Dec 2014 09:19:24 -0600 Subject: [petsc-users] with hdf5 dir In-Reply-To: References: Message-ID: On Wed, Dec 17, 2014 at 8:58 AM, paul zhang wrote: > > Hi All, > > I attempted to include hdf5 dir in my configuration, but failed. > > --with-hdf5-dir=$HDF5PATH > > I tried HDF5PATH=/share/cluster/SLES9/x86_64/apps/hdf5/1.8.3, but it shows > When you get a failure like this, you have to send configure.log Thanks, Matt > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > > ------------------------------------------------------------------------------- > --with-hdf5-dir=/share/cluster/SLES9/x86_64/apps/hdf5/1.8.3 did not work > > ******************************************************************************* > > > As I run $module show hdf5, I got > > *-------------------------------------------------------------------* > */usr/share/Modules/modulefiles/hdf5/1.8.3/icc/default:* > > *module-whatis load HDF5 environment * > *prepend-path PATH /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/bin * > *append-path LD_LIBRARY_PATH > /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/lib * > *prepend-path MANPATH > /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/share/man * > *setenv HDF_INCL /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/include * > *-------------------------------------------------------------------* > > What is my HDF5PATH? > > > Thanks, > Paul > > > Huaibao (Paul) Zhang > *Gas Surface Interactions Lab* > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > *Office*: 216 Ralph G. Anderson Building > *Web*:gsil.engineering.uky.edu > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulhuaizhang at gmail.com Wed Dec 17 09:30:33 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Wed, 17 Dec 2014 10:30:33 -0500 Subject: [petsc-users] with hdf5 dir In-Reply-To: References: Message-ID: ?? Attached. Thanks, Paul Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Wed, Dec 17, 2014 at 10:19 AM, Matthew Knepley wrote: > > On Wed, Dec 17, 2014 at 8:58 AM, paul zhang > wrote: >> >> Hi All, >> >> I attempted to include hdf5 dir in my configuration, but failed. 
>> >> --with-hdf5-dir=$HDF5PATH >> >> I tried HDF5PATH=/share/cluster/SLES9/x86_64/apps/hdf5/1.8.3, but it >> shows >> > > When you get a failure like this, you have to send configure.log > > Thanks, > > Matt > > >> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for >> details): >> >> ------------------------------------------------------------------------------- >> --with-hdf5-dir=/share/cluster/SLES9/x86_64/apps/hdf5/1.8.3 did not work >> >> ******************************************************************************* >> >> >> As I run $module show hdf5, I got >> >> *-------------------------------------------------------------------* >> */usr/share/Modules/modulefiles/hdf5/1.8.3/icc/default:* >> >> *module-whatis load HDF5 environment * >> *prepend-path PATH /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/bin * >> *append-path LD_LIBRARY_PATH >> /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/lib * >> *prepend-path MANPATH >> /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/share/man * >> *setenv HDF_INCL /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/include * >> *-------------------------------------------------------------------* >> >> What is my HDF5PATH? >> >> >> Thanks, >> Paul >> >> >> Huaibao (Paul) Zhang >> *Gas Surface Interactions Lab* >> Department of Mechanical Engineering >> University of Kentucky, >> Lexington, >> KY, 40506-0503 >> *Office*: 216 Ralph G. Anderson Building >> *Web*:gsil.engineering.uky.edu >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 2445585 bytes Desc: not available URL: From knepley at gmail.com Wed Dec 17 09:36:16 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 17 Dec 2014 09:36:16 -0600 Subject: [petsc-users] with hdf5 dir In-Reply-To: References: Message-ID: On Wed, Dec 17, 2014 at 9:30 AM, paul zhang wrote: > > ?? > Attached. > You HDF5 installation is incomplete. It is missing libhdf5_hl.a Thanks, Matt > Thanks, > Paul > > Huaibao (Paul) Zhang > *Gas Surface Interactions Lab* > Department of Mechanical Engineering > University of Kentucky, > Lexington, > KY, 40506-0503 > *Office*: 216 Ralph G. Anderson Building > *Web*:gsil.engineering.uky.edu > > On Wed, Dec 17, 2014 at 10:19 AM, Matthew Knepley > wrote: >> >> On Wed, Dec 17, 2014 at 8:58 AM, paul zhang >> wrote: >>> >>> Hi All, >>> >>> I attempted to include hdf5 dir in my configuration, but failed. 
>>> >>> --with-hdf5-dir=$HDF5PATH >>> >>> I tried HDF5PATH=/share/cluster/SLES9/x86_64/apps/hdf5/1.8.3, but it >>> shows >>> >> >> When you get a failure like this, you have to send configure.log >> >> Thanks, >> >> Matt >> >> >>> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log >>> for details): >>> >>> ------------------------------------------------------------------------------- >>> --with-hdf5-dir=/share/cluster/SLES9/x86_64/apps/hdf5/1.8.3 did not work >>> >>> ******************************************************************************* >>> >>> >>> As I run $module show hdf5, I got >>> >>> *-------------------------------------------------------------------* >>> */usr/share/Modules/modulefiles/hdf5/1.8.3/icc/default:* >>> >>> *module-whatis load HDF5 environment * >>> *prepend-path PATH /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/bin * >>> *append-path LD_LIBRARY_PATH >>> /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/lib * >>> *prepend-path MANPATH >>> /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/share/man * >>> *setenv HDF_INCL >>> /share/cluster/SLES9/x86_64/apps/hdf5/1.8.3/icc/include * >>> *-------------------------------------------------------------------* >>> >>> What is my HDF5PATH? >>> >>> >>> Thanks, >>> Paul >>> >>> >>> Huaibao (Paul) Zhang >>> *Gas Surface Interactions Lab* >>> Department of Mechanical Engineering >>> University of Kentucky, >>> Lexington, >>> KY, 40506-0503 >>> *Office*: 216 Ralph G. Anderson Building >>> *Web*:gsil.engineering.uky.edu >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Dec 17 10:49:24 2014 From: jed at jedbrown.org (Jed Brown) Date: Wed, 17 Dec 2014 08:49:24 -0800 Subject: [petsc-users] PETSc and some external libraries configured with CMake? In-Reply-To: References: Message-ID: <87egrykyqz.fsf@jedbrown.org> UK writes: > Hi All, > > Does anyone have the experience of using cmake to get some out-of-source > libraries to work with PETSc? Your help is highly appreciated it. Can you be more precise? You built some package and you want PETSc to use it, or you want to use CMake for a package that depends on PETSc, or something else? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From paulhuaizhang at gmail.com Wed Dec 17 11:36:05 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Wed, 17 Dec 2014 12:36:05 -0500 Subject: [petsc-users] PETSc and some external libraries configured with CMake? In-Reply-To: <87egrykyqz.fsf@jedbrown.org> References: <87egrykyqz.fsf@jedbrown.org> Message-ID: Jed, I want to use CMake for a package that dependents on PETSc. It seems work this morning. Thanks, Paul Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. 
Anderson Building *Web*:gsil.engineering.uky.edu On Wed, Dec 17, 2014 at 11:49 AM, Jed Brown wrote: > > UK writes: > > > Hi All, > > > > Does anyone have the experience of using cmake to get some out-of-source > > libraries to work with PETSc? Your help is highly appreciated it. > > Can you be more precise? You built some package and you want PETSc to > use it, or you want to use CMake for a package that depends on PETSc, or > something else? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckhuangf at gmail.com Wed Dec 17 15:59:11 2014 From: ckhuangf at gmail.com (Chung-Kan Huang) Date: Wed, 17 Dec 2014 15:59:11 -0600 Subject: [petsc-users] Adaptive implicit method with PETSC Message-ID: Hi, I am trying to find a best way to implement adaptive implicit method (AIM) using PETSc for the linear solver. I am solving conservation equations so if I have a N cells problem then I will end of solving a N*M by N*M linear system matrix for each cell has M equations (or variables) when I solve them fully implicitly. For such system I can use blocked matrix (BAIJ) In case each cell has its implicitness level varing from 1 to M so each sub block will have different size range from 1 by to M by M so BAIJ will be very inefficient in both storage and computations I supposed. So I am actually thinking if I should reconstruct matrix and using point matrix (AIJ) when implicitness changes but will this also too inefficient for constructing a new matrix? So what is the best way for me to implement AIM? Kan -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Dec 17 16:08:46 2014 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 17 Dec 2014 16:08:46 -0600 Subject: [petsc-users] Adaptive implicit method with PETSC In-Reply-To: References: Message-ID: On Wed, Dec 17, 2014 at 3:59 PM, Chung-Kan Huang wrote: > > Hi, > > I am trying to find a best way to implement adaptive implicit method > (AIM) using PETSc for the linear solver. > > I am solving conservation equations so if I have a N cells problem then I > will end of solving a N*M by N*M linear system matrix for each cell has M > equations (or variables) when I solve them fully implicitly. For such > system I can use blocked matrix (BAIJ) > In case each cell has its implicitness level varing from 1 to M so each > sub block will have different size range from 1 by to M by M so BAIJ will > be very inefficient in both storage and computations I supposed. > > So I am actually thinking if I should reconstruct matrix and using point > matrix (AIJ) when implicitness changes but will this also too inefficient > for constructing a new matrix? > This should be alright with AIJ since we have an "inodes" mechanism which looks like adaptive block size. Thanks, Matt > > So what is the best way for me to implement AIM? > > Kan > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
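A sketch of the AIJ route for the adaptive-implicit layout discussed above: preallocate row by row, letting each cell contribute as many rows as it currently has implicit unknowns. ncells, nimpl[] (implicitness level per cell) and nneigh[] (number of neighbouring cells in the stencil) are placeholder names, and the per-row count is only a rough upper bound; a parallel code would use MatMPIAIJSetPreallocation() with separate diagonal/off-diagonal counts instead.

Mat       A;
PetscInt  c, i, k, nrows = 0, *nnz;

for (c = 0; c < ncells; c++) nrows += nimpl[c];

ierr = PetscMalloc1(nrows, &nnz);CHKERRQ(ierr);
for (c = 0, k = 0; c < ncells; c++) {
  PetscInt rowlen = nimpl[c] * (1 + nneigh[c]);          /* own block plus one block per neighbour */
  for (i = 0; i < nimpl[c]; i++) nnz[k++] = rowlen;
}

ierr = MatCreate(PETSC_COMM_SELF, &A);CHKERRQ(ierr);
ierr = MatSetSizes(A, nrows, nrows, nrows, nrows);CHKERRQ(ierr);
ierr = MatSetType(A, MATSEQAIJ);CHKERRQ(ierr);
ierr = MatSeqAIJSetPreallocation(A, 0, nnz);CHKERRQ(ierr);
ierr = PetscFree(nnz);CHKERRQ(ierr);
/* ... MatSetValues() per cell, MatAssemblyBegin/End(), rebuild when the implicitness changes ... */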
URL: From bsmith at mcs.anl.gov Wed Dec 17 17:00:17 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 17 Dec 2014 17:00:17 -0600 Subject: [petsc-users] Adaptive implicit method with PETSC In-Reply-To: References: Message-ID: <2C8484C2-CC98-4060-916E-E9104CEB3A9A@mcs.anl.gov> AIJ > On Dec 17, 2014, at 3:59 PM, Chung-Kan Huang wrote: > > Hi, > > I am trying to find a best way to implement adaptive implicit method (AIM) using PETSc for the linear solver. > > I am solving conservation equations so if I have a N cells problem then I will end of solving a N*M by N*M linear system matrix for each cell has M equations (or variables) when I solve them fully implicitly. For such system I can use blocked matrix (BAIJ) > In case each cell has its implicitness level varing from 1 to M so each sub block will have different size range from 1 by to M by M so BAIJ will be very inefficient in both storage and computations I supposed. > > So I am actually thinking if I should reconstruct matrix and using point matrix (AIJ) when implicitness changes but will this also too inefficient for constructing a new matrix? > > So what is the best way for me to implement AIM? > > Kan > From stm8086 at yahoo.com Thu Dec 18 02:36:06 2014 From: stm8086 at yahoo.com (Steena M) Date: Thu, 18 Dec 2014 00:36:06 -0800 Subject: [petsc-users] MPIBAIJ MatMult and non conforming object sizes error on some matrices Message-ID: <1418891766.53368.YahooMailBasic@web125402.mail.ne1.yahoo.com> Hello, I am loading symmetric sparse matrices (source: Florida sparse matrix database) in binary format, converting it to MPIBAIJ format for executing parallel MatMult. The following piece of code seems to work for most matrices but aborts with a "non conforming object sizes " error for some matrices. For instance, MPIBAIJ MatMult() on a sparse matrix of size 19366x19366 with block size 2 using two MPI ranks: [1]PETSC ERROR: Nonconforming object sizes! [1]PETSC ERROR: Mat mat,Vec y: local dim 9682 9683! Another instance, matrix thermomech_TK with dimensions 102158*102158 with block size 2 using two MPI ranks: [1]PETSC ERROR: Nonconforming object sizes! [1]PETSC ERROR: Mat mat,Vec y: local dim 51078 51079! thermomech_TK completes a clean execution with block size 7 without errors. Does this mean that depending on the sparsity pattern of a matrix, compatible block sizes will not always work? On a different note: For unsymmetric matrices in MATMULT, in addition to setting "-mat_nonsym", is there a different recommended technique to load matrices? 
================ PETSc code for the symmetric case is as follows: static char help[] = "Parallel SpMV--reads binary matrix file"; #include #undef __FUNCT__ #define __FUNCT__ "main" int main(int argc,char **args) { Vec x,y; Mat A; PetscViewer fd; int rank, global_row_size, global_col_size,ierr, fd1; PetscBool PetscPreLoad = PETSC_FALSE; PetscInt fileheader[4]; char filein[PETSC_MAX_PATH_LEN] ; /*binary .dat matrix file */ PetscScalar one = 1.0; PetscScalar zero = 0.0; PetscInt bs; int m, n,M,N, total_ranks; PetscInitialize(&argc,&args,(char *)0,help); MPI_Comm_rank(PETSC_COMM_WORLD,&rank); MPI_Comm_size(MPI_COMM_WORLD,&total_ranks); PetscPrintf (PETSC_COMM_WORLD,"Total ranks is %d", total_ranks); int local_size = m/total_ranks; ierr = PetscOptionsGetString(NULL,"-fin",filein,PETSC_MAX_PATH_LEN,NULL); //filename from command prompt ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,filein,FILE_MODE_READ,&fd); //Send it to the petscviewer /*Matrix creating and loading*/ ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); ierr = MatSetType(A, MATMPIBAIJ); ierr = MatLoad(A,fd);CHKERRQ(ierr); ierr = PetscViewerDestroy(&fd);CHKERRQ(ierr); /* Vector setup */ ierr = MatGetSize(A,&m,&n); ierr = VecCreate(PETSC_COMM_WORLD,&x);CHKERRQ(ierr); ierr = VecSetType(x,VECMPI); ierr = VecSetSizes(x,m/total_ranks,m);CHKERRQ(ierr); //Force local size instead of PETSC_DECIDE ierr = VecSetFromOptions(x);CHKERRQ(ierr); ierr = VecSetType(x,VECMPI); ierr = VecCreate(PETSC_COMM_WORLD,&y);CHKERRQ(ierr); ierr = VecSetSizes(y,m/total_ranks,m);CHKERRQ(ierr); //Force local size instead of PETSC_DECIDE ierr = VecSetFromOptions(y);CHKERRQ(ierr); ierr = VecSet(x,one);CHKERRQ(ierr); ierr = VecSet(y,zero); CHKERRQ(ierr); /* SpMV*/ ierr = MatMult(A,x,y);CHKERRQ(ierr); ierr = VecDestroy(&x);CHKERRQ(ierr); ierr = VecDestroy(&y);CHKERRQ(ierr); ierr = MatDestroy(&A);CHKERRQ(ierr); ierr = PetscFinalize(); return 0; } From bsmith at mcs.anl.gov Thu Dec 18 03:35:12 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 18 Dec 2014 03:35:12 -0600 Subject: [petsc-users] MPIBAIJ MatMult and non conforming object sizes error on some matrices In-Reply-To: <1418891766.53368.YahooMailBasic@web125402.mail.ne1.yahoo.com> References: <1418891766.53368.YahooMailBasic@web125402.mail.ne1.yahoo.com> Message-ID: You are setting the local vector size per process without regard to the local size of the matrix ierr = VecSetSizes(x,m/total_ranks,m);CHKERRQ(ierr); //Force local size instead of PETSC_DECIDE ierr = VecSetFromOptions(x);CHKERRQ(ierr); hence the parallel layout of the vector does not match that of the matrix, The easiest fix is to use MatCreateVecs() (previously called MatGetVecs()) to get an empty vector with the right layout to match that matrix. Barry > On Dec 18, 2014, at 2:36 AM, Steena M wrote: > > Hello, > > I am loading symmetric sparse matrices (source: Florida sparse matrix database) in binary format, converting it to MPIBAIJ format for executing parallel MatMult. The following piece of code seems to work for most matrices but aborts with a "non conforming object sizes " error for some matrices. For instance, MPIBAIJ MatMult() on a sparse matrix of size 19366x19366 with block size 2 using two MPI ranks: > > [1]PETSC ERROR: Nonconforming object sizes! > [1]PETSC ERROR: Mat mat,Vec y: local dim 9682 9683! > > Another instance, matrix thermomech_TK with dimensions 102158*102158 with block size 2 using two MPI ranks: > > [1]PETSC ERROR: Nonconforming object sizes! > [1]PETSC ERROR: Mat mat,Vec y: local dim 51078 51079! 
> > thermomech_TK completes a clean execution with block size 7 without errors. Does this mean that depending on the sparsity pattern of a matrix, compatible block sizes will not always work? > > On a different note: For unsymmetric matrices in MATMULT, in addition to setting "-mat_nonsym", is there a different recommended technique to load matrices? > > ================ > > PETSc code for the symmetric case is as follows: > > static char help[] = "Parallel SpMV--reads binary matrix file"; > > #include > > > #undef __FUNCT__ > #define __FUNCT__ "main" > int main(int argc,char **args) > { > > Vec x,y; > Mat A; > PetscViewer fd; > int rank, global_row_size, global_col_size,ierr, fd1; > PetscBool PetscPreLoad = PETSC_FALSE; > PetscInt fileheader[4]; > char filein[PETSC_MAX_PATH_LEN] ; /*binary .dat matrix file */ > PetscScalar one = 1.0; > PetscScalar zero = 0.0; > PetscInt bs; > int m, n,M,N, total_ranks; > > PetscInitialize(&argc,&args,(char *)0,help); > MPI_Comm_rank(PETSC_COMM_WORLD,&rank); > MPI_Comm_size(MPI_COMM_WORLD,&total_ranks); > PetscPrintf (PETSC_COMM_WORLD,"Total ranks is %d", total_ranks); > int local_size = m/total_ranks; > > ierr = PetscOptionsGetString(NULL,"-fin",filein,PETSC_MAX_PATH_LEN,NULL); //filename from command prompt > ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,filein,FILE_MODE_READ,&fd); //Send it to the petscviewer > > /*Matrix creating and loading*/ > ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); > ierr = MatSetType(A, MATMPIBAIJ); > ierr = MatLoad(A,fd);CHKERRQ(ierr); > ierr = PetscViewerDestroy(&fd);CHKERRQ(ierr); > > > > /* Vector setup */ > ierr = MatGetSize(A,&m,&n); > > > > ierr = VecCreate(PETSC_COMM_WORLD,&x);CHKERRQ(ierr); > ierr = VecSetType(x,VECMPI); > ierr = VecSetSizes(x,m/total_ranks,m);CHKERRQ(ierr); //Force local size instead of PETSC_DECIDE > ierr = VecSetFromOptions(x);CHKERRQ(ierr); > > ierr = VecSetType(x,VECMPI); > ierr = VecCreate(PETSC_COMM_WORLD,&y);CHKERRQ(ierr); > ierr = VecSetSizes(y,m/total_ranks,m);CHKERRQ(ierr); //Force local size instead of PETSC_DECIDE > ierr = VecSetFromOptions(y);CHKERRQ(ierr); > > ierr = VecSet(x,one);CHKERRQ(ierr); > ierr = VecSet(y,zero); CHKERRQ(ierr); > > > > /* SpMV*/ > ierr = MatMult(A,x,y);CHKERRQ(ierr); > > ierr = VecDestroy(&x);CHKERRQ(ierr); > ierr = VecDestroy(&y);CHKERRQ(ierr); > ierr = MatDestroy(&A);CHKERRQ(ierr); > > ierr = PetscFinalize(); > return 0; > > } > From hgbk2008 at gmail.com Thu Dec 18 04:42:52 2014 From: hgbk2008 at gmail.com (Hoang Giang Bui) Date: Thu, 18 Dec 2014 11:42:52 +0100 Subject: [petsc-users] assemble the matrix on proc 0 Message-ID: <5492AFAC.2050306@gmail.com> Hello I want to assemble petsc matrix from csr matrix on proc 0. I did that like: ierr = MatSetType(A, MATMPIAIJ); ierr = MatMPIAIJSetPreallocation(A, PETSC_DEFAULT, PETSC_NULL, PETSC_DEFAULT, PETSC_NULL); on proc 0, I assemble the matrix for(Ii = Istart; Ii < Iend; ++Ii) { int nz = ia[Ii + 1] - ia[Ii]; ierr = MatSetValues(A, 1, &Ii, nz, &ja[ia[Ii]], &v[ia[Ii]], INSERT_VALUES); } the other proc also called this code, but input matrix only exists in proc 0. The matrix print out correctly: MatView(A, PETSC_VIEWER_STDOUT_WORLD); row 0: (0, 1) (3, 6) row 1: (1, 10.5) row 2: (2, 0.015) row 3: (1, 250.5) (3, -280) (4, 33.32) row 4: (4, 12) However, when solved by ksp, it created an error: [3]PETSC ERROR: Invalid argument [3]PETSC ERROR: Must be square matrix, rows 0 columns 1 What should be wrong in this case? 
Regards, Bui From bsmith at mcs.anl.gov Thu Dec 18 07:17:40 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 18 Dec 2014 07:17:40 -0600 Subject: [petsc-users] assemble the matrix on proc 0 In-Reply-To: <5492AFAC.2050306@gmail.com> References: <5492AFAC.2050306@gmail.com> Message-ID: <4CC88DF9-15CC-42AE-A848-78D5AD561AE4@mcs.anl.gov> Likely when you set the sizes for the matrix you did not set the local size properly on each process. You need to set the local size to be the complete matrix size on process 0 and 0 on all the other processes. Barry > On Dec 18, 2014, at 4:42 AM, Hoang Giang Bui wrote: > > Hello > > I want to assemble petsc matrix from csr matrix on proc 0. I did that like: > > ierr = MatSetType(A, MATMPIAIJ); > ierr = MatMPIAIJSetPreallocation(A, PETSC_DEFAULT, PETSC_NULL, PETSC_DEFAULT, PETSC_NULL); > > on proc 0, I assemble the matrix > > for(Ii = Istart; Ii < Iend; ++Ii) > { > int nz = ia[Ii + 1] - ia[Ii]; > ierr = MatSetValues(A, 1, &Ii, nz, &ja[ia[Ii]], &v[ia[Ii]], INSERT_VALUES); > } > > the other proc also called this code, but input matrix only exists in proc 0. > > The matrix print out correctly: > MatView(A, PETSC_VIEWER_STDOUT_WORLD); > row 0: (0, 1) (3, 6) > row 1: (1, 10.5) > row 2: (2, 0.015) > row 3: (1, 250.5) (3, -280) (4, 33.32) > row 4: (4, 12) > > However, when solved by ksp, it created an error: > [3]PETSC ERROR: Invalid argument > [3]PETSC ERROR: Must be square matrix, rows 0 columns 1 > > What should be wrong in this case? > > Regards, > Bui > From hgbk2008 at gmail.com Thu Dec 18 08:03:40 2014 From: hgbk2008 at gmail.com (Hoang Giang Bui) Date: Thu, 18 Dec 2014 15:03:40 +0100 Subject: [petsc-users] assemble the matrix on proc 0 In-Reply-To: <4CC88DF9-15CC-42AE-A848-78D5AD561AE4@mcs.anl.gov> References: <5492AFAC.2050306@gmail.com> <4CC88DF9-15CC-42AE-A848-78D5AD561AE4@mcs.anl.gov> Message-ID: <5492DEBC.90802@gmail.com> Yes, I did it, like: int my_size; if(rank == 0) my_size = n; else my_size = 0; PetscErrorCode ierr; Mat A; ierr = MatCreate(PETSC_COMM_WORLD, &A); ierr = MatSetSizes(A, my_size, PETSC_DECIDE, n, n); Regards, Bui On 12/18/2014 02:17 PM, Barry Smith wrote: > Likely when you set the sizes for the matrix you did not set the local size properly on each process. You need to set the local size to be the complete matrix size on process 0 and 0 on all the other processes. > > Barry > >> On Dec 18, 2014, at 4:42 AM, Hoang Giang Bui wrote: >> >> Hello >> >> I want to assemble petsc matrix from csr matrix on proc 0. I did that like: >> >> ierr = MatSetType(A, MATMPIAIJ); >> ierr = MatMPIAIJSetPreallocation(A, PETSC_DEFAULT, PETSC_NULL, PETSC_DEFAULT, PETSC_NULL); >> >> on proc 0, I assemble the matrix >> >> for(Ii = Istart; Ii < Iend; ++Ii) >> { >> int nz = ia[Ii + 1] - ia[Ii]; >> ierr = MatSetValues(A, 1, &Ii, nz, &ja[ia[Ii]], &v[ia[Ii]], INSERT_VALUES); >> } >> >> the other proc also called this code, but input matrix only exists in proc 0. >> >> The matrix print out correctly: >> MatView(A, PETSC_VIEWER_STDOUT_WORLD); >> row 0: (0, 1) (3, 6) >> row 1: (1, 10.5) >> row 2: (2, 0.015) >> row 3: (1, 250.5) (3, -280) (4, 33.32) >> row 4: (4, 12) >> >> However, when solved by ksp, it created an error: >> [3]PETSC ERROR: Invalid argument >> [3]PETSC ERROR: Must be square matrix, rows 0 columns 1 >> >> What should be wrong in this case? 
>> >> Regards, >> Bui >> From knepley at gmail.com Thu Dec 18 08:21:03 2014 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 18 Dec 2014 08:21:03 -0600 Subject: [petsc-users] assemble the matrix on proc 0 In-Reply-To: <5492DEBC.90802@gmail.com> References: <5492AFAC.2050306@gmail.com> <4CC88DF9-15CC-42AE-A848-78D5AD561AE4@mcs.anl.gov> <5492DEBC.90802@gmail.com> Message-ID: On Thu, Dec 18, 2014 at 8:03 AM, Hoang Giang Bui wrote: > > Yes, I did it, like: > > int my_size; > if(rank == 0) > my_size = n; > else > my_size = 0; > PetscErrorCode ierr; > Mat A; > ierr = MatCreate(PETSC_COMM_WORLD, &A); > ierr = MatSetSizes(A, my_size, PETSC_DECIDE, n, n); > View the matrix first ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); There are a lot of possible mistakes to make. This can help sort it out. Thanks, Matt > Regards, > Bui > > > On 12/18/2014 02:17 PM, Barry Smith wrote: > >> Likely when you set the sizes for the matrix you did not set the >> local size properly on each process. You need to set the local size to be >> the complete matrix size on process 0 and 0 on all the other processes. >> >> Barry >> >> On Dec 18, 2014, at 4:42 AM, Hoang Giang Bui wrote: >>> >>> Hello >>> >>> I want to assemble petsc matrix from csr matrix on proc 0. I did that >>> like: >>> >>> ierr = MatSetType(A, MATMPIAIJ); >>> ierr = MatMPIAIJSetPreallocation(A, PETSC_DEFAULT, PETSC_NULL, >>> PETSC_DEFAULT, PETSC_NULL); >>> >>> on proc 0, I assemble the matrix >>> >>> for(Ii = Istart; Ii < Iend; ++Ii) >>> { >>> int nz = ia[Ii + 1] - ia[Ii]; >>> ierr = MatSetValues(A, 1, &Ii, nz, &ja[ia[Ii]], &v[ia[Ii]], >>> INSERT_VALUES); >>> } >>> >>> the other proc also called this code, but input matrix only exists in >>> proc 0. >>> >>> The matrix print out correctly: >>> MatView(A, PETSC_VIEWER_STDOUT_WORLD); >>> row 0: (0, 1) (3, 6) >>> row 1: (1, 10.5) >>> row 2: (2, 0.015) >>> row 3: (1, 250.5) (3, -280) (4, 33.32) >>> row 4: (4, 12) >>> >>> However, when solved by ksp, it created an error: >>> [3]PETSC ERROR: Invalid argument >>> [3]PETSC ERROR: Must be square matrix, rows 0 columns 1 >>> >>> What should be wrong in this case? >>> >>> Regards, >>> Bui >>> >>> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Dec 18 11:31:16 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 18 Dec 2014 11:31:16 -0600 Subject: [petsc-users] assemble the matrix on proc 0 In-Reply-To: References: <5492AFAC.2050306@gmail.com> <4CC88DF9-15CC-42AE-A848-78D5AD561AE4@mcs.anl.gov> <5492DEBC.90802@gmail.com> Message-ID: > ierr = MatSetSizes(A, my_size, PETSC_DECIDE, n, n); should be > ierr = MatSetSizes(A, my_size, my_size, n, n); > On Dec 18, 2014, at 8:21 AM, Matthew Knepley wrote: > > On Thu, Dec 18, 2014 at 8:03 AM, Hoang Giang Bui wrote: > Yes, I did it, like: > > int my_size; > if(rank == 0) > my_size = n; > else > my_size = 0; > PetscErrorCode ierr; > Mat A; > ierr = MatCreate(PETSC_COMM_WORLD, &A); > ierr = MatSetSizes(A, my_size, PETSC_DECIDE, n, n); > > View the matrix first > > ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); > > There are a lot of possible mistakes to make. This can help sort it out. 
> > Thanks, > > Matt > > Regards, > Bui > > > On 12/18/2014 02:17 PM, Barry Smith wrote: > Likely when you set the sizes for the matrix you did not set the local size properly on each process. You need to set the local size to be the complete matrix size on process 0 and 0 on all the other processes. > > Barry > > On Dec 18, 2014, at 4:42 AM, Hoang Giang Bui wrote: > > Hello > > I want to assemble petsc matrix from csr matrix on proc 0. I did that like: > > ierr = MatSetType(A, MATMPIAIJ); > ierr = MatMPIAIJSetPreallocation(A, PETSC_DEFAULT, PETSC_NULL, PETSC_DEFAULT, PETSC_NULL); > > on proc 0, I assemble the matrix > > for(Ii = Istart; Ii < Iend; ++Ii) > { > int nz = ia[Ii + 1] - ia[Ii]; > ierr = MatSetValues(A, 1, &Ii, nz, &ja[ia[Ii]], &v[ia[Ii]], INSERT_VALUES); > } > > the other proc also called this code, but input matrix only exists in proc 0. > > The matrix print out correctly: > MatView(A, PETSC_VIEWER_STDOUT_WORLD); > row 0: (0, 1) (3, 6) > row 1: (1, 10.5) > row 2: (2, 0.015) > row 3: (1, 250.5) (3, -280) (4, 33.32) > row 4: (4, 12) > > However, when solved by ksp, it created an error: > [3]PETSC ERROR: Invalid argument > [3]PETSC ERROR: Must be square matrix, rows 0 columns 1 > > What should be wrong in this case? > > Regards, > Bui > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From hgbk2008 at gmail.com Thu Dec 18 11:40:17 2014 From: hgbk2008 at gmail.com (Hoang Giang Bui) Date: Thu, 18 Dec 2014 18:40:17 +0100 Subject: [petsc-users] assemble the matrix on proc 0 In-Reply-To: References: <5492AFAC.2050306@gmail.com> <4CC88DF9-15CC-42AE-A848-78D5AD561AE4@mcs.anl.gov> <5492DEBC.90802@gmail.com> Message-ID: <54931181.8000905@gmail.com> THis is the correct one. Thank you Bui On 12/18/2014 06:31 PM, Barry Smith wrote: >> ierr = MatSetSizes(A, my_size, PETSC_DECIDE, n, n); > should be > >> ierr = MatSetSizes(A, my_size, my_size, n, n); > >> On Dec 18, 2014, at 8:21 AM, Matthew Knepley wrote: >> >> On Thu, Dec 18, 2014 at 8:03 AM, Hoang Giang Bui wrote: >> Yes, I did it, like: >> >> int my_size; >> if(rank == 0) >> my_size = n; >> else >> my_size = 0; >> PetscErrorCode ierr; >> Mat A; >> ierr = MatCreate(PETSC_COMM_WORLD, &A); >> ierr = MatSetSizes(A, my_size, PETSC_DECIDE, n, n); >> >> View the matrix first >> >> ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); >> >> There are a lot of possible mistakes to make. This can help sort it out. >> >> Thanks, >> >> Matt >> >> Regards, >> Bui >> >> >> On 12/18/2014 02:17 PM, Barry Smith wrote: >> Likely when you set the sizes for the matrix you did not set the local size properly on each process. You need to set the local size to be the complete matrix size on process 0 and 0 on all the other processes. >> >> Barry >> >> On Dec 18, 2014, at 4:42 AM, Hoang Giang Bui wrote: >> >> Hello >> >> I want to assemble petsc matrix from csr matrix on proc 0. I did that like: >> >> ierr = MatSetType(A, MATMPIAIJ); >> ierr = MatMPIAIJSetPreallocation(A, PETSC_DEFAULT, PETSC_NULL, PETSC_DEFAULT, PETSC_NULL); >> >> on proc 0, I assemble the matrix >> >> for(Ii = Istart; Ii < Iend; ++Ii) >> { >> int nz = ia[Ii + 1] - ia[Ii]; >> ierr = MatSetValues(A, 1, &Ii, nz, &ja[ia[Ii]], &v[ia[Ii]], INSERT_VALUES); >> } >> >> the other proc also called this code, but input matrix only exists in proc 0. 
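Putting that correction together with the earlier snippets, a compact sketch of the sizing for a square n-by-n system assembled entirely on rank 0 (names as in this thread):

    PetscInt my_size = (rank == 0) ? n : 0;
    ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
    ierr = MatSetSizes(A,my_size,my_size,n,n);CHKERRQ(ierr);   /* local rows AND local columns */
    ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);
    ierr = MatMPIAIJSetPreallocation(A,PETSC_DEFAULT,NULL,PETSC_DEFAULT,NULL);CHKERRQ(ierr);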
>> >> The matrix print out correctly: >> MatView(A, PETSC_VIEWER_STDOUT_WORLD); >> row 0: (0, 1) (3, 6) >> row 1: (1, 10.5) >> row 2: (2, 0.015) >> row 3: (1, 250.5) (3, -280) (4, 33.32) >> row 4: (4, 12) >> >> However, when solved by ksp, it created an error: >> [3]PETSC ERROR: Invalid argument >> [3]PETSC ERROR: Must be square matrix, rows 0 columns 1 >> >> What should be wrong in this case? >> >> Regards, >> Bui >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener From bsmith at mcs.anl.gov Thu Dec 18 17:29:19 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 18 Dec 2014 17:29:19 -0600 Subject: [petsc-users] Troubles updating my code from PETSc-3.4 to 3.5 Using MUMPS for KSPSolve() In-Reply-To: References: Message-ID: <43F19065-655C-46D0-A978-1AE1B175CEAC@mcs.anl.gov> Marc, I have played around with your matrix using several "direct" solvers; all of them produced residual norms of around 10^-8 except Matlab which produced a residual norm of 10^-5 and a warning that the recond of the matrix was 10^-16 I also solved with the PETSc LU factorization in quad precision and got a residual norm of 10^-26. UMFPACK ran out of memory. Attached I plotted the solution (as a 1d vector for all the solvers), as you can see all the answers are very different. Note that the figures are black inside the "envelop" of the solution because intermediate values of the vector are dense (ie. at a very fine scale the solution is oscillating a great deal). Interesting when I ran with MUMPS on 1 and 2 processors using PETSc 3.5p2 (actually the maint branch) I get a solution very near your "old" solution. IThe matrix has 8,000 rows of the identity (1 on the diagonal) and the rest with entries of -10^8. Don't put those trivial rows into the matrix and scale the matrix so it has positive diagonal entires. I've done this and it doesn't help the solver but it is still the right thing to do. I do not know if the quad precision solution is "accurate" but I am pretty confident that all the other answers are equally valid and equally worthless. How do you know the "old" mumps solution is correct and the new mumps solution wrong? Can you try with the maint branch of PETSc and see if you get the solutions you want? Barry > On Dec 11, 2014, at 4:38 AM, Marc MEDALE wrote: > > Dear PETSC Users, > > I have just updated to PETSc-3.5 my research code that uses PETSc for a while but I'm facing an astonishing difference between PETSc-3.4 to 3.5 versions when solving a very ill conditioned algebraic system with MUMPS (4.10.0 in both cases). 
> > The only differences that arise in my fortran source code are the following: > Loma1-medale% diff ../version_3.5/solvEFL_MAN_SBIF.F ../version_3.4/solvEFL_MAN_SBIF.F > 336,337d335 > < CALL MatSetOption(MATGLOB,MAT_KEEP_NONZERO_PATTERN, > < & PETSC_TRUE,IER) > 749,750c747,748 > < CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_REAL, > < & PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,IER) > --- >> CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_DOUBLE_PRECISION, >> & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,IER) > 909c907,908 > < CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB,IER) > --- >> CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB, >> & SAME_NONZERO_PATTERN,IER) > > When I run the corresponding program versions on 128 cores of our cluster with the same input data and the following command line arguments: > -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_8 0 > > I get the following outputs: > a) with PETSc-3.4p4: > L2 norm of solution vector: 7.39640E-02, > > b) with PETSc-3.5p1: > L2 norm of solution vector: 1.61325E-02 > > Do I have change something else in updating my code based on KSP from PETSc-3.4 to 3.5 versions? > Do any default values in the PETSc-MUMPS interface have been changed from PETSc-3.4 to 3.5? > Any hints or suggestions are welcome to help me to recover the right results (obtained with PETSc-3.4). > > Thank you very much. > > Marc MEDALE. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Untitled.png Type: image/png Size: 1650014 bytes Desc: not available URL: From marc.medale at univ-amu.fr Fri Dec 19 05:39:04 2014 From: marc.medale at univ-amu.fr (Marc MEDALE) Date: Fri, 19 Dec 2014 12:39:04 +0100 Subject: [petsc-users] Troubles updating my code from PETSc-3.4 to 3.5 Using MUMPS for KSPSolve() In-Reply-To: <43F19065-655C-46D0-A978-1AE1B175CEAC@mcs.anl.gov> References: <43F19065-655C-46D0-A978-1AE1B175CEAC@mcs.anl.gov> Message-ID: <2046A0C9-1D64-4CBA-911A-D8A960C038D6@univ-amu.fr> Barry, First of all I would like to thank you very much for the time you have spent on this problem. > Attached I plotted the solution (as a 1d vector for all the solvers), as you can see all the answers are very different. Secondly I am well aware of most of the many limits in solving very ill-conditioned systems, however the reason that I keep going with that is I want to take advantage of direct solvers for computing bifurcation diagrams in incompressible fluid flow problems with a method that transforms the original non-linear problem into a set of linear ones, with the same operator but different rhs (further details can be found in [1]). [1] B. Cochelin and M. Medale. Power series analysis as a major breakthrough to improve the efficiency of Asymptotic Numerical Method in the vicinity of bifurcations. J. Comput. Phys., Vol. 236, 594-607, 2013. > Interesting when I ran with MUMPS on 1 and 2 processors using PETSc 3.5p2 (actually the maint branch) I get a solution very near your "old" solution. I do not know if the quad precision solution is "accurate" but I am pretty confident that all the other answers are equally valid and equally worthless. 
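The residual-norm comparison quoted above is straightforward to reproduce; a small sketch (not from the original posts), assuming A, x and b hold the assembled matrix, the computed solution and the right-hand side:

    Vec       r;
    PetscReal rnorm;
    ierr = VecDuplicate(b,&r);CHKERRQ(ierr);
    ierr = MatMult(A,x,r);CHKERRQ(ierr);        /* r = A*x     */
    ierr = VecAYPX(r,-1.0,b);CHKERRQ(ierr);     /* r = b - A*x */
    ierr = VecNorm(r,NORM_2,&rnorm);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD,"||b - A x|| = %g\n",(double)rnorm);CHKERRQ(ierr);
    ierr = VecDestroy(&r);CHKERRQ(ierr);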
I agree with you that except quad precision all other solutions can be worthless, but in our method it is only in the vicinity of the bifurcation point that the algebraic system degenerates and becomes close to singular, so I have to know what is the reliability limit to switch to an augmented system that recovers a more convenient condition number, but at more than twice the original cost. > How do you know the "old" mumps solution is correct and the new mumps solution wrong? After any changes or upgrade to new PETSc versions I use to run tests which compare the outputs between previous release and the new one. And I was very surprised to see such huge differences in outputs, for components that haven't been told to be changed in the changes_3.5 log file (neither KSP nor MUMPS were told to be changed in the changes from PETSc-3.4 to 3.5). So, as longtime PETSc user (and aficionado), I would greatly appreciate you give me some hints about the following fundamental question: what are the changes that have been done in PETSc to change in such a way the solution computed by the MUMPS solver throughout the PETSc interface? Thank you again for your time and pertinent explanations, as usual. Have a good day. Marc MEDALE Le 19 d?c. 2014 ? 00:29, Barry Smith a ?crit : > > Marc, > > I have played around with your matrix using several "direct" solvers; all of them produced residual norms of around 10^-8 except Matlab which produced a residual norm of 10^-5 and a warning that the recond of the matrix was 10^-16 > I also solved with the PETSc LU factorization in quad precision and got a residual norm of 10^-26. UMFPACK ran out of memory. > > Attached I plotted the solution (as a 1d vector for all the solvers), as you can see all the answers are very different. Note that the figures are black inside the "envelop" of the solution because intermediate values of the vector are dense (ie. at a very fine scale the solution is oscillating a great deal). Interesting when I ran with MUMPS on 1 and 2 processors using PETSc 3.5p2 (actually the maint branch) I get a solution very near your "old" solution. > > IThe matrix has 8,000 rows of the identity (1 on the diagonal) and the rest with entries of -10^8. Don't put those trivial rows into the matrix and scale the matrix so it has positive diagonal entires. I've done this and it doesn't help the solver but it is still the right thing to do. > > > > I do not know if the quad precision solution is "accurate" but I am pretty confident that all the other answers are equally valid and equally worthless. How do you know the "old" mumps solution is correct and the new mumps solution wrong? > > Can you try with the maint branch of PETSc and see if you get the solutions you want? > > > Barry > > > >> On Dec 11, 2014, at 4:38 AM, Marc MEDALE wrote: >> >> Dear PETSC Users, >> >> I have just updated to PETSc-3.5 my research code that uses PETSc for a while but I'm facing an astonishing difference between PETSc-3.4 to 3.5 versions when solving a very ill conditioned algebraic system with MUMPS (4.10.0 in both cases). 
>> >> The only differences that arise in my fortran source code are the following: >> Loma1-medale% diff ../version_3.5/solvEFL_MAN_SBIF.F ../version_3.4/solvEFL_MAN_SBIF.F >> 336,337d335 >> < CALL MatSetOption(MATGLOB,MAT_KEEP_NONZERO_PATTERN, >> < & PETSC_TRUE,IER) >> 749,750c747,748 >> < CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_REAL, >> < & PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,IER) >> --- >>> CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_DOUBLE_PRECISION, >>> & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,IER) >> 909c907,908 >> < CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB,IER) >> --- >>> CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB, >>> & SAME_NONZERO_PATTERN,IER) >> >> When I run the corresponding program versions on 128 cores of our cluster with the same input data and the following command line arguments: >> -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_8 0 >> >> I get the following outputs: >> a) with PETSc-3.4p4: >> L2 norm of solution vector: 7.39640E-02, >> >> b) with PETSc-3.5p1: >> L2 norm of solution vector: 1.61325E-02 >> >> Do I have change something else in updating my code based on KSP from PETSc-3.4 to 3.5 versions? >> Do any default values in the PETSc-MUMPS interface have been changed from PETSc-3.4 to 3.5? >> Any hints or suggestions are welcome to help me to recover the right results (obtained with PETSc-3.4). >> >> Thank you very much. >> >> Marc MEDALE. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gilles.steiner at epfl.ch Fri Dec 19 08:14:10 2014 From: gilles.steiner at epfl.ch (Gilles Steiner) Date: Fri, 19 Dec 2014 15:14:10 +0100 Subject: [petsc-users] Trying to apply FieldSplitPC by reading bloc matrix Message-ID: <549432B2.6080309@epfl.ch> Hello Petsc Users, I have an issue trying to use FiledSplitPC in parallel. My goal : I want to get a linear system from petsc binary files and solve this in parallel with the FieldSplitPC. The problem I want to solve is an FE approximation of the Stokes equations. 
Skipping the details, my code looks like : // Reading the four blocs UU, UP, PU and PP for(int i=0; i < 4; ++i) { string name = matrix + to_string(i) + ".petscbin"; PetscViewer PETSC_matreader; PetscViewerBinaryOpen(PETSC_COMM_WORLD, name.c_str(), FILE_MODE_READ, &PETSC_matreader); MatCreate(PETSC_COMM_WORLD,&PETSC_subA[i]); MatLoad(PETSC_subA[i],PETSC_matreader); PetscViewerDestroy(&PETSC_matreader); } // Reading the RHS vector and duplicating it to create the solution vector PetscViewerBinaryOpen(PETSC_COMM_WORLD, rhs.c_str(), FILE_MODE_READ, &PETSC_vecreader); VecCreate(PETSC_COMM_WORLD,&PETSC_rhs); VecLoad(PETSC_rhs,PETSC_vecreader); PetscViewerDestroy(&PETSC_vecreader); VecDuplicate(PETSC_rhs,&PETSC_sol); // Create global matrixwith MatCreateNest MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, PETSC_subA, &PETSC_A); MatNestGetISs(PETSC_A, PETSC_isg, NULL); // Setting up the ksp and precond KSPCreate(PETSC_COMM_WORLD,&PETSC_ksp); KSPSetOperators(PETSC_ksp,PETSC_A,PETSC_A); KSPSetFromOptions(PETSC_ksp); KSPGetPC(PETSC_ksp, &PETSC_pc); PCSetType(PETSC_pc, PCFIELDSPLIT); PCFieldSplitSetIS(PETSC_pc, "0", PETSC_isg[0]); PCFieldSplitSetIS(PETSC_pc, "1", PETSC_isg[1]); PCSetFromOptions(PETSC_pc); // Solving the system and writing back the solution in rhs file KSPSolve(PETSC_ksp,PETSC_rhs,PETSC_sol); PetscViewer PETSC_vecwriter; PetscViewerBinaryOpen(PETSC_COMM_WORLD, rhs.c_str(), FILE_MODE_WRITE, &PETSC_vecwriter); VecView(PETSC_sol,PETSC_vecwriter); PetscViewerDestroy(&PETSC_vecwriter); When I run it with 1 proc, everything works fine and I get the correct solution : 0 KSP preconditioned resid norm 1.271697253018e+03 true resid norm 2.400000000000e+01 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 5.009545069728e+01 true resid norm 9.166803391041e-02 ||r(i)||/||b|| 3.819501412934e-03 2 KSP preconditioned resid norm 6.460631387766e+00 true resid norm 4.995542253831e-02 ||r(i)||/||b|| 2.081475939096e-03 3 KSP preconditioned resid norm 1.155895209298e+00 true resid norm 1.515734830704e-02 ||r(i)||/||b|| 6.315561794600e-04 4 KSP preconditioned resid norm 7.407384739634e-02 true resid norm 9.992802256200e-03 ||r(i)||/||b|| 4.163667606750e-04 5 KSP preconditioned resid norm 1.574456882990e-02 true resid norm 9.994876664681e-03 ||r(i)||/||b|| 4.164531943617e-04 6 KSP preconditioned resid norm 2.383022349902e-03 true resid norm 9.990760645581e-03 ||r(i)||/||b|| 4.162816935659e-04 7 KSP preconditioned resid norm 6.175379834254e-04 true resid norm 9.990821066459e-03 ||r(i)||/||b|| 4.162842111025e-04 8 KSP preconditioned resid norm 6.867982689960e-05 true resid norm 9.990532094790e-03 ||r(i)||/||b|| 4.162721706163e-04 9 KSP preconditioned resid norm 1.041091257246e-05 true resid norm 9.990558069113e-03 ||r(i)||/||b|| 4.162732528797e-04 10 KSP preconditioned resid norm 1.447793722489e-06 true resid norm 9.990557786778e-03 ||r(i)||/||b|| 4.162732411158e-04 11 KSP preconditioned resid norm 2.139317335854e-07 true resid norm 9.990557262754e-03 ||r(i)||/||b|| 4.162732192814e-04 12 KSP preconditioned resid norm 4.383129810322e-08 true resid norm 9.990557306920e-03 ||r(i)||/||b|| 4.162732211217e-04 13 KSP preconditioned resid norm 3.351461304399e-09 true resid norm 9.990557311707e-03 ||r(i)||/||b|| 4.162732213211e-04 14 KSP preconditioned resid norm 5.169032607321e-10 true resid norm 9.990557312817e-03 ||r(i)||/||b|| 4.162732213674e-04 [14:49:10::INFO ] System Solved. Final tolerance reached is 5.16903e-10 in 14 iterations. 
But if I do it with 2 procs, the resolution seems fine but the solution is wrong : 0 KSP preconditioned resid norm 1.247694088756e+03 true resid norm 2.400000000000e+01 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 4.481954484303e+01 true resid norm 5.277507840772e-01 ||r(i)||/||b|| 2.198961600321e-02 2 KSP preconditioned resid norm 1.110647693456e+01 true resid norm 4.005558168981e-02 ||r(i)||/||b|| 1.668982570409e-03 3 KSP preconditioned resid norm 1.220368027409e+00 true resid norm 1.877650834971e-02 ||r(i)||/||b|| 7.823545145714e-04 4 KSP preconditioned resid norm 2.834261749922e-01 true resid norm 1.613967205264e-02 ||r(i)||/||b|| 6.724863355265e-04 5 KSP preconditioned resid norm 4.215090288154e-02 true resid norm 1.562561614611e-02 ||r(i)||/||b|| 6.510673394212e-04 6 KSP preconditioned resid norm 1.209476134754e-02 true resid norm 1.563808960492e-02 ||r(i)||/||b|| 6.515870668718e-04 7 KSP preconditioned resid norm 2.038835108629e-03 true resid norm 1.564163643064e-02 ||r(i)||/||b|| 6.517348512765e-04 8 KSP preconditioned resid norm 1.928844666836e-04 true resid norm 1.564072761376e-02 ||r(i)||/||b|| 6.516969839065e-04 9 KSP preconditioned resid norm 3.138911950605e-05 true resid norm 1.564047323377e-02 ||r(i)||/||b|| 6.516863847403e-04 10 KSP preconditioned resid norm 4.950062975470e-06 true resid norm 1.564048216528e-02 ||r(i)||/||b|| 6.516867568865e-04 11 KSP preconditioned resid norm 7.677242244159e-07 true resid norm 1.564049253364e-02 ||r(i)||/||b|| 6.516871889019e-04 12 KSP preconditioned resid norm 1.870521888617e-07 true resid norm 1.564049269566e-02 ||r(i)||/||b|| 6.516871956526e-04 13 KSP preconditioned resid norm 3.077235724319e-08 true resid norm 1.564049264800e-02 ||r(i)||/||b|| 6.516871936666e-04 14 KSP preconditioned resid norm 6.584409191524e-09 true resid norm 1.564049264183e-02 ||r(i)||/||b|| 6.516871934095e-04 15 KSP preconditioned resid norm 1.091619359913e-09 true resid norm 1.564049263170e-02 ||r(i)||/||b|| 6.516871929874e-04 [15:10:58::INFO ] System Solved. Final tolerance reached is 1.09162e-09 in 15 iterations. Any idea of what is wrong with this ? Is it the code or the base concept ? Thank you. Gilles From mailinglists at xgm.de Fri Dec 19 09:05:33 2014 From: mailinglists at xgm.de (Florian Lindner) Date: Fri, 19 Dec 2014 16:05:33 +0100 Subject: [petsc-users] Why is PETSC_COMM_WORLD changed? Message-ID: <1767358.e8nFK7Wd2s@asaru> Hello, I have a piece of code that looks like that: // PETSC_COMM_WORLD = MPI_COMM_WORLD; PetscBool petscIsInitialized; PetscInitialized(&petscIsInitialized); if (not petscIsInitialized) { PetscErrorCode ierr; std::cout << "PETSC == WORLD: " << (PETSC_COMM_WORLD == MPI_COMM_WORLD) << std::endl; std::cout << "PETSC_COMM_WORLD: " << PETSC_COMM_WORLD << std::endl; std::cout << "Petsc before PetscInitializeNoArguments()" << std::endl; ierr = PetscInitializeNoArguments(); CHKERRV(ierr); std::cout << "Petsc after PetscInitializeNoArguments()" << std::endl; } PETSC_COMM_WORLD is touched nowhere else in our source, I promise, having grepped through right now. The code runs fine like that, but when I uncomment the first line it does not anymore. As far as I know PETSC_COMM_WORLD equals to MPI_COMM_WORLD unless changed, but when I run it prints PETSC == WORLD: 0 PETSC_COMM_WORLD: 67108864 The first line uncommented gives: PETSC == WORLD: 1 PETSC_COMM_WORLD: 1140850688 (and an error in my program: Attempting to use an MPI routine before initializing MPICH) Just trying to understand what's going on... 
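For reference, the pattern that works (the replies further down explain why): MPI has to be initialized before PETSC_COMM_WORLD is preset, and the variable only holds a valid communicator once PetscInitialize() has run. A minimal sketch, with my_sub_comm as a placeholder communicator:

    MPI_Init(NULL,NULL);                 /* MPI must be up before presetting ...          */
    PETSC_COMM_WORLD = my_sub_comm;      /* ... a communicator (my_sub_comm is a          */
                                         /*     placeholder); omit this line to keep      */
                                         /*     the default MPI_COMM_WORLD                */
    ierr = PetscInitializeNoArguments(); CHKERRV(ierr);
    /* only from here on does PETSC_COMM_WORLD hold a valid communicator to compare or use */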
Thanks and have a nice weekend! Florian From lawrence.mitchell at imperial.ac.uk Fri Dec 19 09:10:36 2014 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Fri, 19 Dec 2014 15:10:36 +0000 Subject: [petsc-users] Confusion with MatGetLocalSubMatrix Message-ID: <33DAE0D3-4F42-47E2-B8F0-5364AA00F51D@imperial.ac.uk> Dear petsc-users, I'm trying to setup matrices and data structures for use with MatGetLocalSubMatrix, but I'm rather lost in a world of ISes and block sizes. I have the layouts and so forth correct where all my fields have block size 1, but am struggling to follow how things fit together for block size > 1. I have a Taylor-Hood discretisation so a P2 velocity space with block size 2, and a P1 pressure space with block size 1. On each process, I build the full local to global mapping for both fields. This has block size 1. Then I create strided ISes to define the local blocks for each field, and set the block size on each of them (2 for the velocity space, 1 for the pressure). Aside, when I do ISCreateStride for an IS with a block size > 1, do I provide all the indices, or just the blocked indices? Should I be using ISCreateBlock for block size > 1 instead? Calling MatGetLocalSubMatrix results in an error: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Petsc has generated inconsistent data [0]PETSC ERROR: Blocksize of localtoglobalmapping 1 must match that of layout 2 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.5.2-679-gfd874b2 GIT Date: 2014-10-14 23:50:11 -0500 [0]PETSC ERROR: foo.py on a arch-darwin-c-dbg named yam-laptop.local by lmitche1 Fri Dec 19 14:50:37 2014 [0]PETSC ERROR: Configure options --download-chaco=1 --download-ctetgen=1 --download-exodusii=1 --download-hypre=1 --download-metis=1 --download-ml --download-parmetis=1 --download-triangle=1 --with-c2html=0 --with-debugging=1 --with-hdf5-dir=/usr/local --with-hdf5=1 --with-netcdf-dir=/usr/local --with-netcdf=1 --with-shared-libraries=1 PETSC_ARCH=arch-darwin-c-dbg [0]PETSC ERROR: #1 PetscLayoutSetBlockSize() line 438 in /Users/lmitche1/Documents/work/mapdes/petsc/src/vec/is/utils/pmap.c [0]PETSC ERROR: #2 MatCreateLocalRef() line 259 in /Users/lmitche1/Documents/work/mapdes/petsc/src/mat/impls/localref/mlocalref.c [0]PETSC ERROR: #3 MatGetLocalSubMatrix() line 9480 in /Users/lmitche1/Documents/work/mapdes/petsc/src/mat/interface/matrix.c Should I therefore not set the block size on this IS? This seems wrong to me, since it's the only way to forward that to the local created sub matrix, such that I can do matsetvaluesblockedlocal on it. As an aside, I note that the local sub matrix only supports blocked insertion for square blocks, but I think matsetvaluesblocked now works for non-square blocks: should this check be relaxed appropriately? Cheers, Lawrence -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From mfadams at lbl.gov Fri Dec 19 09:20:43 2014 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 19 Dec 2014 10:20:43 -0500 Subject: [petsc-users] Richardson: damping factor=0 Message-ID: Richardson damping factor does not seem to do anything. I set it to zero and the solve was fine. Am I missing something? 
Thanks, Mark Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (mg_levels_3_) 1024 MPI processes type: richardson Richardson: damping factor=0 maximum iterations=4 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_3_) 1024 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix followed by preconditioner matrix: Mat Object: 1024 MPI processes type: shell rows=33427584, cols=33427584 Mat Object: 1024 MPI processes type: mpiaij rows=33427584, cols=33427584 total: nonzeros=1.7272e+09, allocated nonzeros=3.4544e+09 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Fri Dec 19 09:23:02 2014 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 19 Dec 2014 10:23:02 -0500 Subject: [petsc-users] Richardson: damping factor=0 In-Reply-To: References: Message-ID: Oh, SOR probably grabs the iteration ... Never mind. On Fri, Dec 19, 2014 at 10:20 AM, Mark Adams wrote: > > Richardson damping factor does not seem to do anything. I set it to zero > and the solve was fine. > > Am I missing something? > > Thanks, > Mark > > Down solver (pre-smoother) on level 3 ------------------------------- > KSP Object: (mg_levels_3_) 1024 MPI processes > type: richardson > Richardson: damping factor=0 > maximum iterations=4 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_3_) 1024 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, > omega = 1 > linear system matrix followed by preconditioner matrix: > Mat Object: 1024 MPI processes > type: shell > rows=33427584, cols=33427584 > Mat Object: 1024 MPI processes > type: mpiaij > rows=33427584, cols=33427584 > total: nonzeros=1.7272e+09, allocated nonzeros=3.4544e+09 > total number of mallocs used during MatSetValues calls =0 > not using I-node (on process 0) routines > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Dec 19 11:04:33 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 19 Dec 2014 11:04:33 -0600 Subject: [petsc-users] Richardson: damping factor=0 In-Reply-To: References: Message-ID: <25A59CA8-FF35-478E-92C5-52942CDEB542@mcs.anl.gov> Mark, Thanks for pointing this out. Perhaps this is a bug, since all the PCApplyRichardson_*() have an (implicit) damping of 1 perhaps we should error out when the damping is not one and one attempts to use a PCApplyRichardson_*(). The current code is questionable since people get a different result then they expect with no explanation why. Barry > On Dec 19, 2014, at 9:23 AM, Mark Adams wrote: > > Oh, SOR probably grabs the iteration ... > > Never mind. > > On Fri, Dec 19, 2014 at 10:20 AM, Mark Adams wrote: > Richardson damping factor does not seem to do anything. I set it to zero and the solve was fine. > > Am I missing something? 
> > Thanks, > Mark > > Down solver (pre-smoother) on level 3 ------------------------------- > KSP Object: (mg_levels_3_) 1024 MPI processes > type: richardson > Richardson: damping factor=0 > maximum iterations=4 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_3_) 1024 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 > linear system matrix followed by preconditioner matrix: > Mat Object: 1024 MPI processes > type: shell > rows=33427584, cols=33427584 > Mat Object: 1024 MPI processes > type: mpiaij > rows=33427584, cols=33427584 > total: nonzeros=1.7272e+09, allocated nonzeros=3.4544e+09 > total number of mallocs used during MatSetValues calls =0 > not using I-node (on process 0) routines > From jed at jedbrown.org Fri Dec 19 11:12:38 2014 From: jed at jedbrown.org (Jed Brown) Date: Fri, 19 Dec 2014 09:12:38 -0800 Subject: [petsc-users] Confusion with MatGetLocalSubMatrix In-Reply-To: <33DAE0D3-4F42-47E2-B8F0-5364AA00F51D@imperial.ac.uk> References: <33DAE0D3-4F42-47E2-B8F0-5364AA00F51D@imperial.ac.uk> Message-ID: <87tx0rimwp.fsf@jedbrown.org> Lawrence Mitchell writes: > Dear petsc-users, > > I'm trying to setup matrices and data structures for use with MatGetLocalSubMatrix, but I'm rather lost in a world of ISes and block sizes. I have the layouts and so forth correct where all my fields have block size 1, but am struggling to follow how things fit together for block size > 1. > > I have a Taylor-Hood discretisation so a P2 velocity space with block size 2, and a P1 pressure space with block size 1. > > On each process, I build the full local to global mapping for both fields. This has block size 1. How are you ordering the fields on each process? > Then I create strided ISes to define the local blocks for each field, and set the block size on each of them (2 for the velocity space, 1 for the pressure). Aside, when I do ISCreateStride for an IS with a block size > 1, do I provide all the indices, or just the blocked indices? Should I be using ISCreateBlock for block size > 1 instead? ISSTRIDE has no concept of block size and can't be used to describe blocked index sets. Use ISBLOCK instead. > Calling MatGetLocalSubMatrix results in an error: > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Petsc has generated inconsistent data > [0]PETSC ERROR: Blocksize of localtoglobalmapping 1 must match that of layout 2 Hmm, I'm concerned that this might not work right after some recent changes to ISLocalToGlobalMapping. Can you reproduce this with some code I can run? > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Development GIT revision: v3.5.2-679-gfd874b2 GIT Date: 2014-10-14 23:50:11 -0500 > [0]PETSC ERROR: foo.py on a arch-darwin-c-dbg named yam-laptop.local by lmitche1 Fri Dec 19 14:50:37 2014 > [0]PETSC ERROR: Configure options --download-chaco=1 --download-ctetgen=1 --download-exodusii=1 --download-hypre=1 --download-metis=1 --download-ml --download-parmetis=1 --download-triangle=1 --with-c2html=0 --with-debugging=1 --with-hdf5-dir=/usr/local --with-hdf5=1 --with-netcdf-dir=/usr/local --with-netcdf=1 --with-shared-libraries=1 PETSC_ARCH=arch-darwin-c-dbg > [0]PETSC ERROR: #1 PetscLayoutSetBlockSize() line 438 in /Users/lmitche1/Documents/work/mapdes/petsc/src/vec/is/utils/pmap.c > [0]PETSC ERROR: #2 MatCreateLocalRef() line 259 in /Users/lmitche1/Documents/work/mapdes/petsc/src/mat/impls/localref/mlocalref.c > [0]PETSC ERROR: #3 MatGetLocalSubMatrix() line 9480 in /Users/lmitche1/Documents/work/mapdes/petsc/src/mat/interface/matrix.c > > Should I therefore not set the block size on this IS? This seems wrong to me, since it's the only way to forward that to the local created sub matrix, such that I can do matsetvaluesblockedlocal on it. As an aside, I note that the local sub matrix only supports blocked insertion for square blocks, but I think matsetvaluesblocked now works for non-square blocks: should this check be relaxed appropriately? > > > Cheers, > > Lawrence -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Fri Dec 19 11:25:30 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 19 Dec 2014 11:25:30 -0600 Subject: [petsc-users] Troubles updating my code from PETSc-3.4 to 3.5 Using MUMPS for KSPSolve() In-Reply-To: <2046A0C9-1D64-4CBA-911A-D8A960C038D6@univ-amu.fr> References: <43F19065-655C-46D0-A978-1AE1B175CEAC@mcs.anl.gov> <2046A0C9-1D64-4CBA-911A-D8A960C038D6@univ-amu.fr> Message-ID: <00EEA146-A314-4740-8822-458512534CDA@mcs.anl.gov> Marc, The "correct", but time consuming way, to determine exactly what code change changed your result is to use git bisection with the two endpoints of maint-3.4 and maint. You'll need to read up on running it but essentially you configure and run with maint-3.4 confirm your one result, then configure and run with maint and confirm the result is something else. Then git will bisect through all the changes giving you new places to configure and run until it finds the exact change set that changed the results. But this will be time consuming because to really check you should rerun configure and make all each time. It will eventually find the change that changed your solution. Before doing that I checked a few things. Both maint-3.4 and maint use the same versions of the external packages: 'http://ftp.mcs.anl.gov/pub/petsc/externalpackages/MUMPS_4.10.0-p3.tar.gz'] 'http://ftp.mcs.anl.gov/pub/petsc/externalpackages/parmetis-4.0.2-p5.tar.gz' 'https://gforge.inria.fr/frs/download.php/31832/scotch_6.0.0_esmumps.tar.gz', http://www.netlib.org/scalapack/scalapack-2.0.2.tgz', I also checked out out the PETSc interface to mumps: called mumps.c for both maint and maint-3.4 and did a diff (all attached) it is mostly just adding options but perhaps it changes something. -------------- next part -------------- A non-text attachment was scrubbed... 
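In practice the checks Barry suggests here can be driven entirely from the command line, e.g. (the executable name is a placeholder for whatever runs the code above):

    ./stokes_test -ksp_monitor_true_residual -ksp_converged_reason -ksp_pc_side right

and then, where the individual blocks permit a factorization, direct solves on each split:

    ./stokes_test -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type lu \
                  -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type lu

The -fieldsplit_0_ and -fieldsplit_1_ prefixes follow from the names "0" and "1" passed to PCFieldSplitSetIS() in the code above.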
Name: mumps.c Type: application/octet-stream Size: 75025 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mumps.c-3.4 Type: application/octet-stream Size: 64792 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mumps.c-diff Type: application/octet-stream Size: 16670 bytes Desc: not available URL: -------------- next part -------------- I suggest starting with maint-3.4 and changing the one mumps.c file to the new one (mumps.c in the attachment) and see if your application gives the old or new result. This will tell if something changed in mumps.c to make the solution change. We did not consciously change anything (that I can remember or am aware of) that would change the parameters MUMPS is called with but .... Good luck and let us know how it goes, Barry > On Dec 19, 2014, at 5:39 AM, Marc MEDALE wrote: > > > After any changes or upgrade to new PETSc versions I use to run tests which compare the outputs between previous release and the new one. And I was very surprised to see such huge differences in outputs, for components that haven't been told to be changed in the changes_3.5 log file (neither KSP nor MUMPS were told to be changed in the changes from PETSc-3.4 to 3.5). > > So, as longtime PETSc user (and aficionado), I would greatly appreciate you give me some hints about the following fundamental question: what are the changes that have been done in PETSc to change in such a way the solution computed by the MUMPS solver throughout the PETSc interface? > > Thank you again for your time and pertinent explanations, as usual. Have a good day. > > Marc MEDALE > > > > > > Le 19 d?c. 2014 ? 00:29, Barry Smith a ?crit : > >> >> Marc, >> >> I have played around with your matrix using several "direct" solvers; all of them produced residual norms of around 10^-8 except Matlab which produced a residual norm of 10^-5 and a warning that the recond of the matrix was 10^-16 >> I also solved with the PETSc LU factorization in quad precision and got a residual norm of 10^-26. UMFPACK ran out of memory. >> >> Attached I plotted the solution (as a 1d vector for all the solvers), as you can see all the answers are very different. Note that the figures are black inside the "envelop" of the solution because intermediate values of the vector are dense (ie. at a very fine scale the solution is oscillating a great deal). Interesting when I ran with MUMPS on 1 and 2 processors using PETSc 3.5p2 (actually the maint branch) I get a solution very near your "old" solution. >> >> IThe matrix has 8,000 rows of the identity (1 on the diagonal) and the rest with entries of -10^8. Don't put those trivial rows into the matrix and scale the matrix so it has positive diagonal entires. I've done this and it doesn't help the solver but it is still the right thing to do. >> >> >> >> I do not know if the quad precision solution is "accurate" but I am pretty confident that all the other answers are equally valid and equally worthless. How do you know the "old" mumps solution is correct and the new mumps solution wrong? >> >> Can you try with the maint branch of PETSc and see if you get the solutions you want? 
>> >> >> Barry >> >> >> >>> On Dec 11, 2014, at 4:38 AM, Marc MEDALE wrote: >>> >>> Dear PETSC Users, >>> >>> I have just updated to PETSc-3.5 my research code that uses PETSc for a while but I'm facing an astonishing difference between PETSc-3.4 to 3.5 versions when solving a very ill conditioned algebraic system with MUMPS (4.10.0 in both cases). >>> >>> The only differences that arise in my fortran source code are the following: >>> Loma1-medale% diff ../version_3.5/solvEFL_MAN_SBIF.F ../version_3.4/solvEFL_MAN_SBIF.F >>> 336,337d335 >>> < CALL MatSetOption(MATGLOB,MAT_KEEP_NONZERO_PATTERN, >>> < & PETSC_TRUE,IER) >>> 749,750c747,748 >>> < CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_REAL, >>> < & PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,IER) >>> --- >>>> CALL KSPSetTolerances(KSP1,TOL,PETSC_DEFAULT_DOUBLE_PRECISION, >>>> & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,IER) >>> 909c907,908 >>> < CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB,IER) >>> --- >>>> CALL KSPSetOperators(KSP1,MATGLOB,MATGLOB, >>>> & SAME_NONZERO_PATTERN,IER) >>> >>> When I run the corresponding program versions on 128 cores of our cluster with the same input data and the following command line arguments: >>> -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps -mat_mumps_icntl_8 0 >>> >>> I get the following outputs: >>> a) with PETSc-3.4p4: >>> L2 norm of solution vector: 7.39640E-02, >>> >>> b) with PETSc-3.5p1: >>> L2 norm of solution vector: 1.61325E-02 >>> >>> Do I have change something else in updating my code based on KSP from PETSc-3.4 to 3.5 versions? >>> Do any default values in the PETSc-MUMPS interface have been changed from PETSc-3.4 to 3.5? >>> Any hints or suggestions are welcome to help me to recover the right results (obtained with PETSc-3.4). >>> >>> Thank you very much. >>> >>> Marc MEDALE. >> > From jed at jedbrown.org Fri Dec 19 11:30:49 2014 From: jed at jedbrown.org (Jed Brown) Date: Fri, 19 Dec 2014 09:30:49 -0800 Subject: [petsc-users] Why is PETSC_COMM_WORLD changed? In-Reply-To: <1767358.e8nFK7Wd2s@asaru> References: <1767358.e8nFK7Wd2s@asaru> Message-ID: <87r3vvim2e.fsf@jedbrown.org> Florian Lindner writes: > Hello, > > I have a piece of code that looks like that: > > // PETSC_COMM_WORLD = MPI_COMM_WORLD; > PetscBool petscIsInitialized; > PetscInitialized(&petscIsInitialized); > > if (not petscIsInitialized) { > PetscErrorCode ierr; > > std::cout << "PETSC == WORLD: " << (PETSC_COMM_WORLD == MPI_COMM_WORLD) << std::endl; > std::cout << "PETSC_COMM_WORLD: " << PETSC_COMM_WORLD << std::endl; > std::cout << "Petsc before PetscInitializeNoArguments()" << std::endl; > ierr = PetscInitializeNoArguments(); CHKERRV(ierr); > std::cout << "Petsc after PetscInitializeNoArguments()" << std::endl; > } > > PETSC_COMM_WORLD is touched nowhere else in our source, I promise, having grepped through right now. > > The code runs fine like that, but when I uncomment the first line it does not anymore. > > As far as I know PETSC_COMM_WORLD equals to MPI_COMM_WORLD unless > changed, src/sys/objects/pinit.c: /* user may set this BEFORE calling PetscInitialize() */ MPI_Comm PETSC_COMM_WORLD = MPI_COMM_NULL; (and later) if (PETSC_COMM_WORLD == MPI_COMM_NULL) PETSC_COMM_WORLD = MPI_COMM_WORLD; You can set PETSC_COMM_WORLD before PetscInitialize, but its value is not valid/useful before PetscInitialize. 
> but when I run it prints > > PETSC == WORLD: 0 > PETSC_COMM_WORLD: 67108864 > > The first line uncommented gives: > > PETSC == WORLD: 1 > PETSC_COMM_WORLD: 1140850688 > > (and an error in my program: Attempting to use an MPI routine before initializing MPICH) > > Just trying to understand what's going on... > > Thanks and have a nice weekend! > > Florian -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Fri Dec 19 11:43:43 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 19 Dec 2014 11:43:43 -0600 Subject: [petsc-users] Why is PETSC_COMM_WORLD changed? In-Reply-To: <87r3vvim2e.fsf@jedbrown.org> References: <1767358.e8nFK7Wd2s@asaru> <87r3vvim2e.fsf@jedbrown.org> Message-ID: > You can set PETSC_COMM_WORLD before PetscInitialize, but its value is > not valid/useful before PetscInitialize. I have added text to the manual page for PETSC_COMM_WORLD to make this clear. > On Dec 19, 2014, at 11:30 AM, Jed Brown wrote: > > Florian Lindner writes: > >> Hello, >> >> I have a piece of code that looks like that: >> >> // PETSC_COMM_WORLD = MPI_COMM_WORLD; >> PetscBool petscIsInitialized; >> PetscInitialized(&petscIsInitialized); >> >> if (not petscIsInitialized) { >> PetscErrorCode ierr; >> >> std::cout << "PETSC == WORLD: " << (PETSC_COMM_WORLD == MPI_COMM_WORLD) << std::endl; >> std::cout << "PETSC_COMM_WORLD: " << PETSC_COMM_WORLD << std::endl; >> std::cout << "Petsc before PetscInitializeNoArguments()" << std::endl; >> ierr = PetscInitializeNoArguments(); CHKERRV(ierr); >> std::cout << "Petsc after PetscInitializeNoArguments()" << std::endl; >> } >> >> PETSC_COMM_WORLD is touched nowhere else in our source, I promise, having grepped through right now. >> >> The code runs fine like that, but when I uncomment the first line it does not anymore. >> >> As far as I know PETSC_COMM_WORLD equals to MPI_COMM_WORLD unless >> changed, > > src/sys/objects/pinit.c: > > /* user may set this BEFORE calling PetscInitialize() */ > MPI_Comm PETSC_COMM_WORLD = MPI_COMM_NULL; > > (and later) > > if (PETSC_COMM_WORLD == MPI_COMM_NULL) PETSC_COMM_WORLD = MPI_COMM_WORLD; > > You can set PETSC_COMM_WORLD before PetscInitialize, but its value is > not valid/useful before PetscInitialize. > >> but when I run it prints >> >> PETSC == WORLD: 0 >> PETSC_COMM_WORLD: 67108864 >> >> The first line uncommented gives: >> >> PETSC == WORLD: 1 >> PETSC_COMM_WORLD: 1140850688 >> >> (and an error in my program: Attempting to use an MPI routine before initializing MPICH) >> >> Just trying to understand what's going on... >> >> Thanks and have a nice weekend! >> >> Florian From lawrence.mitchell at imperial.ac.uk Fri Dec 19 11:57:55 2014 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Fri, 19 Dec 2014 17:57:55 +0000 Subject: [petsc-users] Confusion with MatGetLocalSubMatrix In-Reply-To: <87tx0rimwp.fsf@jedbrown.org> References: <33DAE0D3-4F42-47E2-B8F0-5364AA00F51D@imperial.ac.uk> <87tx0rimwp.fsf@jedbrown.org> Message-ID: <43E77A82-CEEB-43EB-8BB7-A28D8EFF4C9D@imperial.ac.uk> On 19 Dec 2014, at 17:12, Jed Brown wrote: > Lawrence Mitchell writes: > >> Dear petsc-users, >> >> I'm trying to setup matrices and data structures for use with MatGetLocalSubMatrix, but I'm rather lost in a world of ISes and block sizes. I have the layouts and so forth correct where all my fields have block size 1, but am struggling to follow how things fit together for block size > 1. 
>> >> I have a Taylor-Hood discretisation so a P2 velocity space with block size 2, and a P1 pressure space with block size 1. >> >> On each process, I build the full local to global mapping for both fields. This has block size 1. > > How are you ordering the fields on each process? field_0_proc_0, field_1_proc_0, ..., field_N_proc_0; field_1_proc_0, ..., ...; ... field_N_proc_P > >> Then I create strided ISes to define the local blocks for each field, and set the block size on each of them (2 for the velocity space, 1 for the pressure). Aside, when I do ISCreateStride for an IS with a block size > 1, do I provide all the indices, or just the blocked indices? Should I be using ISCreateBlock for block size > 1 instead? > > ISSTRIDE has no concept of block size and can't be used to describe > blocked index sets. Use ISBLOCK instead. What is ISSetBlockSize for then? Just hanging information on the IS for use elsewhere? >> Calling MatGetLocalSubMatrix results in an error: >> >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Petsc has generated inconsistent data >> [0]PETSC ERROR: Blocksize of localtoglobalmapping 1 must match that of layout 2 > > Hmm, I'm concerned that this might not work right after some recent > changes to ISLocalToGlobalMapping. Can you reproduce this with some > code I can run? I'll try and put something together. Thanks, Lawrence -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jed at jedbrown.org Fri Dec 19 12:01:38 2014 From: jed at jedbrown.org (Jed Brown) Date: Fri, 19 Dec 2014 10:01:38 -0800 Subject: [petsc-users] Confusion with MatGetLocalSubMatrix In-Reply-To: <43E77A82-CEEB-43EB-8BB7-A28D8EFF4C9D@imperial.ac.uk> References: <33DAE0D3-4F42-47E2-B8F0-5364AA00F51D@imperial.ac.uk> <87tx0rimwp.fsf@jedbrown.org> <43E77A82-CEEB-43EB-8BB7-A28D8EFF4C9D@imperial.ac.uk> Message-ID: <87a92jikn1.fsf@jedbrown.org> Lawrence Mitchell writes: > What is ISSetBlockSize for then? Just hanging information on the IS for use elsewhere? The index set would need to be contiguous: static PetscErrorCode ISSetBlockSize_Stride(IS is,PetscInt bs) { IS_Stride *sub = (IS_Stride*)is->data; PetscErrorCode ierr; PetscFunctionBegin; if (sub->step != 1 && bs != 1) SETERRQ2(PetscObjectComm((PetscObject)is),PETSC_ERR_ARG_SIZ,"ISSTRIDE has stride %D, cannot be blocked of size %D",sub->step,bs); ierr = PetscLayoutSetBlockSize(is->map, bs);CHKERRQ(ierr); PetscFunctionReturn(0); } -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From lawrence.mitchell at imperial.ac.uk Fri Dec 19 12:19:16 2014 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Fri, 19 Dec 2014 18:19:16 +0000 Subject: [petsc-users] Confusion with MatGetLocalSubMatrix In-Reply-To: <87tx0rimwp.fsf@jedbrown.org> References: <33DAE0D3-4F42-47E2-B8F0-5364AA00F51D@imperial.ac.uk> <87tx0rimwp.fsf@jedbrown.org> Message-ID: <0B2304C7-7802-40A2-8765-7209EF93A6E3@imperial.ac.uk> ... 
>> Calling MatGetLocalSubMatrix results in an error: >> >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Petsc has generated inconsistent data >> [0]PETSC ERROR: Blocksize of localtoglobalmapping 1 must match that of layout 2 > > Hmm, I'm concerned that this might not work right after some recent > changes to ISLocalToGlobalMapping. Can you reproduce this with some > code I can run? Here we go: #include int main(int argc, char **argv) { PetscErrorCode ierr; DM v; DM p; DM pack; PetscSection vsec; PetscSection psec; PetscInt rbs, cbs; PetscInt srbs, scbs; Mat mat; Mat submat; IS *ises; MPI_Comm c; PetscInitialize(&argc, &argv, NULL, NULL); c = PETSC_COMM_WORLD; ierr = DMDACreate1d(c, DM_BOUNDARY_NONE, 10, 2, 1, NULL, &v); CHKERRQ(ierr); ierr = DMDACreate1d(c, DM_BOUNDARY_NONE, 10, 1, 1, NULL, &p); CHKERRQ(ierr); ierr = DMSetFromOptions(v); CHKERRQ(ierr); ierr = DMSetFromOptions(p); CHKERRQ(ierr); ierr = DMCreateMatrix(v, &mat); CHKERRQ(ierr); ierr = MatGetBlockSizes(mat, &rbs, &cbs); CHKERRQ(ierr); ierr = PetscPrintf(c, "Global Mat block size (%d, %d)\n", rbs, cbs); CHKERRQ(ierr); ierr = DMCompositeCreate(c, &pack); CHKERRQ(ierr); ierr = DMCompositeAddDM(pack, v); CHKERRQ(ierr); ierr = DMCompositeAddDM(pack, p); CHKERRQ(ierr); ierr = DMSetFromOptions(pack); CHKERRQ(ierr); ierr = DMCompositeGetLocalISs(pack, &ises); CHKERRQ(ierr); ierr = DMCreateMatrix(pack, &mat); CHKERRQ(ierr); ierr = MatGetBlockSizes(mat, &rbs, &cbs); CHKERRQ(ierr); ierr = PetscPrintf(c, "Global Mat block size (%d, %d)\n", rbs, cbs); CHKERRQ(ierr); ierr = MatGetLocalSubMatrix(mat, ises[0], ises[0], &submat); CHKERRQ(ierr); ierr = MatGetBlockSizes(submat, &srbs, &scbs); CHKERRQ(ierr); ierr = PetscPrintf(c, "Local Mat block size (%d, %d)\n", srbs, scbs); CHKERRQ(ierr); ierr = MatDestroy(&submat); CHKERRQ(ierr); ierr = MatDestroy(&mat); CHKERRQ(ierr); ierr = PetscSectionDestroy(&vsec); CHKERRQ(ierr); ierr = PetscSectionDestroy(&psec); CHKERRQ(ierr); ierr = DMDestroy(&pack); CHKERRQ(ierr); ierr = DMDestroy(&v); CHKERRQ(ierr); ierr = DMDestroy(&p); CHKERRQ(ierr); ierr = ISDestroy(&(ises[0])); CHKERRQ(ierr); ierr = ISDestroy(&(ises[1])); CHKERRQ(ierr); ierr = PetscFree(ises); CHKERRQ(ierr); PetscFinalize(); return 0; } Cheers, Lawrence -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From lawrence.mitchell at imperial.ac.uk Fri Dec 19 12:20:46 2014 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Fri, 19 Dec 2014 18:20:46 +0000 Subject: [petsc-users] Confusion with MatGetLocalSubMatrix In-Reply-To: <87a92jikn1.fsf@jedbrown.org> References: <33DAE0D3-4F42-47E2-B8F0-5364AA00F51D@imperial.ac.uk> <87tx0rimwp.fsf@jedbrown.org> <43E77A82-CEEB-43EB-8BB7-A28D8EFF4C9D@imperial.ac.uk> <87a92jikn1.fsf@jedbrown.org> Message-ID: On 19 Dec 2014, at 18:01, Jed Brown wrote: > Lawrence Mitchell writes: >> What is ISSetBlockSize for then? Just hanging information on the IS for use elsewhere? > > The index set would need to be contiguous: So given my field layout, I think I do have a contiguous set, but this is probably an aside. Cheers, Lawrence -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From lawrence.mitchell at imperial.ac.uk Fri Dec 19 12:29:08 2014 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Fri, 19 Dec 2014 18:29:08 +0000 Subject: [petsc-users] Confusion with MatGetLocalSubMatrix In-Reply-To: <0B2304C7-7802-40A2-8765-7209EF93A6E3@imperial.ac.uk> References: <33DAE0D3-4F42-47E2-B8F0-5364AA00F51D@imperial.ac.uk> <87tx0rimwp.fsf@jedbrown.org> <0B2304C7-7802-40A2-8765-7209EF93A6E3@imperial.ac.uk> Message-ID: <3561DF00-A718-4464-B880-52D91E77BE6C@imperial.ac.uk> On 19 Dec 2014, at 18:19, Lawrence Mitchell wrote: > ... > >>> Calling MatGetLocalSubMatrix results in an error: >>> >>> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >>> [0]PETSC ERROR: Petsc has generated inconsistent data >>> [0]PETSC ERROR: Blocksize of localtoglobalmapping 1 must match that of layout 2 >> >> Hmm, I'm concerned that this might not work right after some recent >> changes to ISLocalToGlobalMapping. Can you reproduce this with some >> code I can run? ... Here's one that doesn't try to destroy sections that don't exist, and endeavours to print block sizes of all the sub matrices. I expect to see (2, 2), (2, 1), (1, 2) and (1, 1) I think. Cheers, Lawrence #include int main(int argc, char **argv) { PetscErrorCode ierr; DM v; DM p; DM pack; PetscInt rbs, cbs; PetscInt srbs, scbs; PetscInt i, j; Mat mat; Mat submat; IS *ises; MPI_Comm c; PetscInitialize(&argc, &argv, NULL, NULL); c = PETSC_COMM_WORLD; ierr = DMDACreate1d(c, DM_BOUNDARY_NONE, 10, 2, 1, NULL, &v); CHKERRQ(ierr); ierr = DMDACreate1d(c, DM_BOUNDARY_NONE, 10, 1, 1, NULL, &p); CHKERRQ(ierr); ierr = DMSetFromOptions(v); CHKERRQ(ierr); ierr = DMSetFromOptions(p); CHKERRQ(ierr); ierr = DMCreateMatrix(v, &mat); CHKERRQ(ierr); ierr = MatGetBlockSizes(mat, &rbs, &cbs); CHKERRQ(ierr); ierr = PetscPrintf(c, "Global Mat block size (%d, %d)\n", rbs, cbs); CHKERRQ(ierr); ierr = MatDestroy(&mat); CHKERRQ(ierr); ierr = DMCompositeCreate(c, &pack); CHKERRQ(ierr); ierr = DMCompositeAddDM(pack, v); CHKERRQ(ierr); ierr = DMCompositeAddDM(pack, p); CHKERRQ(ierr); ierr = DMSetFromOptions(pack); CHKERRQ(ierr); ierr = DMCompositeGetLocalISs(pack, &ises); CHKERRQ(ierr); ierr = DMCreateMatrix(pack, &mat); CHKERRQ(ierr); ierr = MatGetBlockSizes(mat, &rbs, &cbs); CHKERRQ(ierr); ierr = PetscPrintf(c, "Global Mat block size (%d, %d)\n", rbs, cbs); CHKERRQ(ierr); for (i=0; i < 2; i++ ) { for (j=0; j < 2; j++ ) { ierr = MatGetLocalSubMatrix(mat, ises[i], ises[j], &submat); CHKERRQ(ierr); ierr = MatGetBlockSizes(submat, &srbs, &scbs); CHKERRQ(ierr); ierr = PetscPrintf(c, "Local (%d, %d) block has block size (%d, %d)\n", i, j, srbs, scbs); CHKERRQ(ierr); ierr = MatDestroy(&submat); CHKERRQ(ierr); } } ierr = MatDestroy(&mat); CHKERRQ(ierr); ierr = DMDestroy(&pack); CHKERRQ(ierr); ierr = DMDestroy(&v); CHKERRQ(ierr); ierr = DMDestroy(&p); CHKERRQ(ierr); ierr = ISDestroy(&(ises[0])); CHKERRQ(ierr); ierr = ISDestroy(&(ises[1])); CHKERRQ(ierr); ierr = PetscFree(ises); CHKERRQ(ierr); PetscFinalize(); return 0; } -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jed at jedbrown.org Fri Dec 19 18:08:51 2014 From: jed at jedbrown.org (Jed Brown) Date: Fri, 19 Dec 2014 16:08:51 -0800 Subject: [petsc-users] Trying to apply FieldSplitPC by reading bloc matrix In-Reply-To: <549432B2.6080309@epfl.ch> References: <549432B2.6080309@epfl.ch> Message-ID: <87388bi3n0.fsf@jedbrown.org> Gilles Steiner writes: > Hello Petsc Users, > > I have an issue trying to use FiledSplitPC in parallel. > > My goal : I want to get a linear system from petsc binary files and > solve this in parallel with the FieldSplitPC. > > The problem I want to solve is an FE approximation of the Stokes equations. What FE discretization? We don't recommend using files as part of your workflow, but if you're just experimenting, you could start with src/ksp/ksp/examples/tests/ex11.c which solves a Q1-P0 Stokes problem From Underworld by reading the blocks in as matrices. So start there and let us know how it goes. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Fri Dec 19 18:23:41 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 19 Dec 2014 18:23:41 -0600 Subject: [petsc-users] Trying to apply FieldSplitPC by reading bloc matrix In-Reply-To: <549432B2.6080309@epfl.ch> References: <549432B2.6080309@epfl.ch> Message-ID: <51940104-BF4B-4616-90CB-E2F044CFAB81@mcs.anl.gov> In both cases the preconditioned residual is decreasing nicely but the unpreconditioned residual is not decreasing, so something is wrong even in the sequential case! > 14 KSP preconditioned resid norm 5.169032607321e-10 true resid norm 9.990557312817e-03 ||r(i)||/||b|| 4.162732213674e-04 So you need to go back to the sequential case and see what is going on. Don't even touch the parallel case until the true residual is converging properly for the sequential. First try running with -ksp_pc_side right and watch the residuals. Next run with direct solvers everywhere you can and see what happens. Barry > On Dec 19, 2014, at 8:14 AM, Gilles Steiner wrote: > > Hello Petsc Users, > > I have an issue trying to use FiledSplitPC in parallel. > > My goal : I want to get a linear system from petsc binary files and solve this in parallel with the FieldSplitPC. > > The problem I want to solve is an FE approximation of the Stokes equations. 
> > Skipping the details, my code looks like : > > // Reading the four blocs UU, UP, PU and PP > for(int i=0; i < 4; ++i) > { > string name = matrix + to_string(i) + ".petscbin"; > PetscViewer PETSC_matreader; > PetscViewerBinaryOpen(PETSC_COMM_WORLD, name.c_str(), FILE_MODE_READ, &PETSC_matreader); > MatCreate(PETSC_COMM_WORLD,&PETSC_subA[i]); > MatLoad(PETSC_subA[i],PETSC_matreader); > PetscViewerDestroy(&PETSC_matreader); > } > > // Reading the RHS vector and duplicating it to create the solution vector > PetscViewerBinaryOpen(PETSC_COMM_WORLD, rhs.c_str(), FILE_MODE_READ, &PETSC_vecreader); > VecCreate(PETSC_COMM_WORLD,&PETSC_rhs); > VecLoad(PETSC_rhs,PETSC_vecreader); > PetscViewerDestroy(&PETSC_vecreader); > VecDuplicate(PETSC_rhs,&PETSC_sol); > > // Create global matrixwith MatCreateNest > MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, PETSC_subA, &PETSC_A); > MatNestGetISs(PETSC_A, PETSC_isg, NULL); > > // Setting up the ksp and precond > KSPCreate(PETSC_COMM_WORLD,&PETSC_ksp); > KSPSetOperators(PETSC_ksp,PETSC_A,PETSC_A); > KSPSetFromOptions(PETSC_ksp); > > KSPGetPC(PETSC_ksp, &PETSC_pc); > PCSetType(PETSC_pc, PCFIELDSPLIT); > PCFieldSplitSetIS(PETSC_pc, "0", PETSC_isg[0]); > PCFieldSplitSetIS(PETSC_pc, "1", PETSC_isg[1]); > PCSetFromOptions(PETSC_pc); > > // Solving the system and writing back the solution in rhs file > KSPSolve(PETSC_ksp,PETSC_rhs,PETSC_sol); > > PetscViewer PETSC_vecwriter; > PetscViewerBinaryOpen(PETSC_COMM_WORLD, rhs.c_str(), FILE_MODE_WRITE, &PETSC_vecwriter); > VecView(PETSC_sol,PETSC_vecwriter); > PetscViewerDestroy(&PETSC_vecwriter); > > When I run it with 1 proc, everything works fine and I get the correct solution : > 0 KSP preconditioned resid norm 1.271697253018e+03 true resid norm 2.400000000000e+01 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 5.009545069728e+01 true resid norm 9.166803391041e-02 ||r(i)||/||b|| 3.819501412934e-03 > 2 KSP preconditioned resid norm 6.460631387766e+00 true resid norm 4.995542253831e-02 ||r(i)||/||b|| 2.081475939096e-03 > 3 KSP preconditioned resid norm 1.155895209298e+00 true resid norm 1.515734830704e-02 ||r(i)||/||b|| 6.315561794600e-04 > 4 KSP preconditioned resid norm 7.407384739634e-02 true resid norm 9.992802256200e-03 ||r(i)||/||b|| 4.163667606750e-04 > 5 KSP preconditioned resid norm 1.574456882990e-02 true resid norm 9.994876664681e-03 ||r(i)||/||b|| 4.164531943617e-04 > 6 KSP preconditioned resid norm 2.383022349902e-03 true resid norm 9.990760645581e-03 ||r(i)||/||b|| 4.162816935659e-04 > 7 KSP preconditioned resid norm 6.175379834254e-04 true resid norm 9.990821066459e-03 ||r(i)||/||b|| 4.162842111025e-04 > 8 KSP preconditioned resid norm 6.867982689960e-05 true resid norm 9.990532094790e-03 ||r(i)||/||b|| 4.162721706163e-04 > 9 KSP preconditioned resid norm 1.041091257246e-05 true resid norm 9.990558069113e-03 ||r(i)||/||b|| 4.162732528797e-04 > 10 KSP preconditioned resid norm 1.447793722489e-06 true resid norm 9.990557786778e-03 ||r(i)||/||b|| 4.162732411158e-04 > 11 KSP preconditioned resid norm 2.139317335854e-07 true resid norm 9.990557262754e-03 ||r(i)||/||b|| 4.162732192814e-04 > 12 KSP preconditioned resid norm 4.383129810322e-08 true resid norm 9.990557306920e-03 ||r(i)||/||b|| 4.162732211217e-04 > 13 KSP preconditioned resid norm 3.351461304399e-09 true resid norm 9.990557311707e-03 ||r(i)||/||b|| 4.162732213211e-04 > 14 KSP preconditioned resid norm 5.169032607321e-10 true resid norm 9.990557312817e-03 ||r(i)||/||b|| 4.162732213674e-04 > > [14:49:10::INFO ] System 
Solved. Final tolerance reached is 5.16903e-10 in 14 iterations. > > But if I do it with 2 procs, the resolution seems fine but the solution is wrong : > 0 KSP preconditioned resid norm 1.247694088756e+03 true resid norm 2.400000000000e+01 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 4.481954484303e+01 true resid norm 5.277507840772e-01 ||r(i)||/||b|| 2.198961600321e-02 > 2 KSP preconditioned resid norm 1.110647693456e+01 true resid norm 4.005558168981e-02 ||r(i)||/||b|| 1.668982570409e-03 > 3 KSP preconditioned resid norm 1.220368027409e+00 true resid norm 1.877650834971e-02 ||r(i)||/||b|| 7.823545145714e-04 > 4 KSP preconditioned resid norm 2.834261749922e-01 true resid norm 1.613967205264e-02 ||r(i)||/||b|| 6.724863355265e-04 > 5 KSP preconditioned resid norm 4.215090288154e-02 true resid norm 1.562561614611e-02 ||r(i)||/||b|| 6.510673394212e-04 > 6 KSP preconditioned resid norm 1.209476134754e-02 true resid norm 1.563808960492e-02 ||r(i)||/||b|| 6.515870668718e-04 > 7 KSP preconditioned resid norm 2.038835108629e-03 true resid norm 1.564163643064e-02 ||r(i)||/||b|| 6.517348512765e-04 > 8 KSP preconditioned resid norm 1.928844666836e-04 true resid norm 1.564072761376e-02 ||r(i)||/||b|| 6.516969839065e-04 > 9 KSP preconditioned resid norm 3.138911950605e-05 true resid norm 1.564047323377e-02 ||r(i)||/||b|| 6.516863847403e-04 > 10 KSP preconditioned resid norm 4.950062975470e-06 true resid norm 1.564048216528e-02 ||r(i)||/||b|| 6.516867568865e-04 > 11 KSP preconditioned resid norm 7.677242244159e-07 true resid norm 1.564049253364e-02 ||r(i)||/||b|| 6.516871889019e-04 > 12 KSP preconditioned resid norm 1.870521888617e-07 true resid norm 1.564049269566e-02 ||r(i)||/||b|| 6.516871956526e-04 > 13 KSP preconditioned resid norm 3.077235724319e-08 true resid norm 1.564049264800e-02 ||r(i)||/||b|| 6.516871936666e-04 > 14 KSP preconditioned resid norm 6.584409191524e-09 true resid norm 1.564049264183e-02 ||r(i)||/||b|| 6.516871934095e-04 > 15 KSP preconditioned resid norm 1.091619359913e-09 true resid norm 1.564049263170e-02 ||r(i)||/||b|| 6.516871929874e-04 > > [15:10:58::INFO ] System Solved. Final tolerance reached is 1.09162e-09 in 15 iterations. > > Any idea of what is wrong with this ? Is it the code or the base concept ? > > Thank you. > Gilles > From knepley at gmail.com Fri Dec 19 19:29:09 2014 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 19 Dec 2014 19:29:09 -0600 Subject: [petsc-users] Trying to apply FieldSplitPC by reading bloc matrix In-Reply-To: <549432B2.6080309@epfl.ch> References: <549432B2.6080309@epfl.ch> Message-ID: On Dec 19, 2014 6:14 AM, "Gilles Steiner" wrote: > > Hello Petsc Users, > > I have an issue trying to use FiledSplitPC in parallel. > > My goal : I want to get a linear system from petsc binary files and solve this in parallel with the FieldSplitPC. > > The problem I want to solve is an FE approximation of the Stokes equations. 
> > Skipping the details, my code looks like : > > // Reading the four blocs UU, UP, PU and PP > for(int i=0; i < 4; ++i) > { > string name = matrix + to_string(i) + ".petscbin"; > PetscViewer PETSC_matreader; > PetscViewerBinaryOpen(PETSC_COMM_WORLD, name.c_str(), FILE_MODE_READ, &PETSC_matreader); > MatCreate(PETSC_COMM_WORLD,&PETSC_subA[i]); > MatLoad(PETSC_subA[i],PETSC_matreader); > PetscViewerDestroy(&PETSC_matreader); > } > > // Reading the RHS vector and duplicating it to create the solution vector > PetscViewerBinaryOpen(PETSC_COMM_WORLD, rhs.c_str(), FILE_MODE_READ, &PETSC_vecreader); > VecCreate(PETSC_COMM_WORLD,&PETSC_rhs); > VecLoad(PETSC_rhs,PETSC_vecreader); > PetscViewerDestroy(&PETSC_vecreader); > VecDuplicate(PETSC_rhs,&PETSC_sol); > > // Create global matrixwith MatCreateNest > MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, PETSC_subA, &PETSC_A); > MatNestGetISs(PETSC_A, PETSC_isg, NULL); > > // Setting up the ksp and precond > KSPCreate(PETSC_COMM_WORLD,&PETSC_ksp); > KSPSetOperators(PETSC_ksp,PETSC_A,PETSC_A); > KSPSetFromOptions(PETSC_ksp); > > KSPGetPC(PETSC_ksp, &PETSC_pc); > PCSetType(PETSC_pc, PCFIELDSPLIT); > PCFieldSplitSetIS(PETSC_pc, "0", PETSC_isg[0]); > PCFieldSplitSetIS(PETSC_pc, "1", PETSC_isg[1]); > PCSetFromOptions(PETSC_pc); > > // Solving the system and writing back the solution in rhs file > KSPSolve(PETSC_ksp,PETSC_rhs,PETSC_sol); > > PetscViewer PETSC_vecwriter; > PetscViewerBinaryOpen(PETSC_COMM_WORLD, rhs.c_str(), FILE_MODE_WRITE, &PETSC_vecwriter); > VecView(PETSC_sol,PETSC_vecwriter); > PetscViewerDestroy(&PETSC_vecwriter); > > When I run it with 1 proc, everything works fine and I get the correct solution : > When you get this behavior with true residuals, it very often arises from failure to account for a pressure null space. Do you have one? 
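If you do (e.g. enclosed flow with all-Dirichlet velocity boundary conditions), the pressure is only determined up to a constant and the solver has to be told about that. A rough, untested sketch reusing the objects from your code (PETSC_A, PETSC_rhs, PETSC_ksp and the pressure index set PETSC_isg[1] are yours; nullvec, p and nsp are just placeholder names):

    // Build a vector that is 0 on the velocity dofs and constant on the pressure dofs,
    // then register it as the null space so the Krylov method can project it out.
    Vec nullvec, p;
    MatNullSpace nsp;
    VecDuplicate(PETSC_rhs, &nullvec);
    VecSet(nullvec, 0.0);
    VecGetSubVector(nullvec, PETSC_isg[1], &p);
    VecSet(p, 1.0);
    VecRestoreSubVector(nullvec, PETSC_isg[1], &p);
    VecNormalize(nullvec, NULL);
    MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &nullvec, &nsp);
    KSPSetNullSpace(PETSC_ksp, nsp);
    MatNullSpaceDestroy(&nsp);
    VecDestroy(&nullvec);

Set it up before KSPSolve(); otherwise the parallel and sequential runs can converge to solutions that differ by an arbitrary constant in the pressure.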
Matt 0 KSP preconditioned resid norm 1.271697253018e+03 true resid norm 2.400000000000e+01 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 5.009545069728e+01 true resid norm 9.166803391041e-02 ||r(i)||/||b|| 3.819501412934e-03 > 2 KSP preconditioned resid norm 6.460631387766e+00 true resid norm 4.995542253831e-02 ||r(i)||/||b|| 2.081475939096e-03 > 3 KSP preconditioned resid norm 1.155895209298e+00 true resid norm 1.515734830704e-02 ||r(i)||/||b|| 6.315561794600e-04 > 4 KSP preconditioned resid norm 7.407384739634e-02 true resid norm 9.992802256200e-03 ||r(i)||/||b|| 4.163667606750e-04 > 5 KSP preconditioned resid norm 1.574456882990e-02 true resid norm 9.994876664681e-03 ||r(i)||/||b|| 4.164531943617e-04 > 6 KSP preconditioned resid norm 2.383022349902e-03 true resid norm 9.990760645581e-03 ||r(i)||/||b|| 4.162816935659e-04 > 7 KSP preconditioned resid norm 6.175379834254e-04 true resid norm 9.990821066459e-03 ||r(i)||/||b|| 4.162842111025e-04 > 8 KSP preconditioned resid norm 6.867982689960e-05 true resid norm 9.990532094790e-03 ||r(i)||/||b|| 4.162721706163e-04 > 9 KSP preconditioned resid norm 1.041091257246e-05 true resid norm 9.990558069113e-03 ||r(i)||/||b|| 4.162732528797e-04 > 10 KSP preconditioned resid norm 1.447793722489e-06 true resid norm 9.990557786778e-03 ||r(i)||/||b|| 4.162732411158e-04 > 11 KSP preconditioned resid norm 2.139317335854e-07 true resid norm 9.990557262754e-03 ||r(i)||/||b|| 4.162732192814e-04 > 12 KSP preconditioned resid norm 4.383129810322e-08 true resid norm 9.990557306920e-03 ||r(i)||/||b|| 4.162732211217e-04 > 13 KSP preconditioned resid norm 3.351461304399e-09 true resid norm 9.990557311707e-03 ||r(i)||/||b|| 4.162732213211e-04 > 14 KSP preconditioned resid norm 5.169032607321e-10 true resid norm 9.990557312817e-03 ||r(i)||/||b|| 4.162732213674e-04 > > [14:49:10::INFO ] System Solved. Final tolerance reached is 5.16903e-10 in 14 iterations. 
> > But if I do it with 2 procs, the resolution seems fine but the solution is wrong : > 0 KSP preconditioned resid norm 1.247694088756e+03 true resid norm 2.400000000000e+01 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 4.481954484303e+01 true resid norm 5.277507840772e-01 ||r(i)||/||b|| 2.198961600321e-02 > 2 KSP preconditioned resid norm 1.110647693456e+01 true resid norm 4.005558168981e-02 ||r(i)||/||b|| 1.668982570409e-03 > 3 KSP preconditioned resid norm 1.220368027409e+00 true resid norm 1.877650834971e-02 ||r(i)||/||b|| 7.823545145714e-04 > 4 KSP preconditioned resid norm 2.834261749922e-01 true resid norm 1.613967205264e-02 ||r(i)||/||b|| 6.724863355265e-04 > 5 KSP preconditioned resid norm 4.215090288154e-02 true resid norm 1.562561614611e-02 ||r(i)||/||b|| 6.510673394212e-04 > 6 KSP preconditioned resid norm 1.209476134754e-02 true resid norm 1.563808960492e-02 ||r(i)||/||b|| 6.515870668718e-04 > 7 KSP preconditioned resid norm 2.038835108629e-03 true resid norm 1.564163643064e-02 ||r(i)||/||b|| 6.517348512765e-04 > 8 KSP preconditioned resid norm 1.928844666836e-04 true resid norm 1.564072761376e-02 ||r(i)||/||b|| 6.516969839065e-04 > 9 KSP preconditioned resid norm 3.138911950605e-05 true resid norm 1.564047323377e-02 ||r(i)||/||b|| 6.516863847403e-04 > 10 KSP preconditioned resid norm 4.950062975470e-06 true resid norm 1.564048216528e-02 ||r(i)||/||b|| 6.516867568865e-04 > 11 KSP preconditioned resid norm 7.677242244159e-07 true resid norm 1.564049253364e-02 ||r(i)||/||b|| 6.516871889019e-04 > 12 KSP preconditioned resid norm 1.870521888617e-07 true resid norm 1.564049269566e-02 ||r(i)||/||b|| 6.516871956526e-04 > 13 KSP preconditioned resid norm 3.077235724319e-08 true resid norm 1.564049264800e-02 ||r(i)||/||b|| 6.516871936666e-04 > 14 KSP preconditioned resid norm 6.584409191524e-09 true resid norm 1.564049264183e-02 ||r(i)||/||b|| 6.516871934095e-04 > 15 KSP preconditioned resid norm 1.091619359913e-09 true resid norm 1.564049263170e-02 ||r(i)||/||b|| 6.516871929874e-04 > > [15:10:58::INFO ] System Solved. Final tolerance reached is 1.09162e-09 in 15 iterations. > > Any idea of what is wrong with this ? Is it the code or the base concept ? > > Thank you. > Gilles > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri Dec 19 21:05:14 2014 From: jed at jedbrown.org (Jed Brown) Date: Fri, 19 Dec 2014 20:05:14 -0700 Subject: [petsc-users] PETSc and some external libraries configured with CMake? In-Reply-To: References: <87egrykyqz.fsf@jedbrown.org> Message-ID: <87h9wrggwl.fsf@jedbrown.org> paul zhang writes: > Jed, > > I want to use CMake for a package that dependents on PETSc. It seems work > this morning. Sounds like you're set. You can use my FindPETSc.cmake if you want. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From paulhuaizhang at gmail.com Sat Dec 20 13:16:32 2014 From: paulhuaizhang at gmail.com (paul zhang) Date: Sat, 20 Dec 2014 14:16:32 -0500 Subject: [petsc-users] PETSc and some external libraries configured with CMake? In-Reply-To: <87h9wrggwl.fsf@jedbrown.org> References: <87egrykyqz.fsf@jedbrown.org> <87h9wrggwl.fsf@jedbrown.org> Message-ID: Jed, Thanks for your consistent help Bro. I wish I could help you or the community back in some way. I am developing a CFD code on the cluster of my university. 
My code calls for PETSc to solve the linear solver, and it uses ParMetis and CGNS libraries. I once configured the whole code (my source code, the needed libraries above, etc) with CMake, back then the PETSc version is 3.1-p8. After I updated the PETSc to the newest, it never worked with me. It got me a couple of weeks to figure out what is the real reason. One of the reason I guess is that I have to use the mpi compiler from the cluster. There is nothing wrong with the installation guide on the PETSc website. The issue is it does not work with me (maybe only with me). The other is I have add the valgrind include dir and lib to my previous CMakeList, since it is necessary for the new PETSc. The FindPETSc.cmake you wrote works pretty good if I just want to use PETSc. Well, it is my problem (or maybe fault) again since I used old fashioned cmake and I have to call some other libraries besides PETSc. Attached is my script to install PETSc. Maybe it helps to the others. #!/bin/sh export PETSC_DIR=`pwd` export PETSC_ARCH=linux-gnu-intel MKLPATH=/share/cluster/RHEL6.2/x86_64/apps/intel/ict/composer_xe_2013.0.079/mkl/lib/intel64 VALGRINDPATH=/share/cluster/RHEL6.2/x86_64/apps/valgrind/3.9.0 HDF5PATH=/share/cluster/SLES9/x86_64/apps/hdf5/1.8.3 ./configure --configModules=PETSc.Configure --optionsModule=PETSc.compilerOptions --with-blas-lapack-dir=$MKLPATH -lf2clapack -lf2cblas --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --with-debugging=1 --with-precision=double --with-shared-libraries --with-x=no --with-x11=no --with-mpi=1 --with-mpi-dir="/share/cluster/RHEL6.2/x86_64/apps/openmpi/1.6.2" --with-valgrind-dir=$VALGRINDPATH --download-metis --download-parmetis --with-hdf5-dir=$HDF5PATH Best, Paul Huaibao (Paul) Zhang *Gas Surface Interactions Lab* Department of Mechanical Engineering University of Kentucky, Lexington, KY, 40506-0503 *Office*: 216 Ralph G. Anderson Building *Web*:gsil.engineering.uky.edu On Fri, Dec 19, 2014 at 10:05 PM, Jed Brown wrote: > paul zhang writes: > > > Jed, > > > > I want to use CMake for a package that dependents on PETSc. It seems work > > this morning. > > Sounds like you're set. You can use my FindPETSc.cmake if you want. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sat Dec 20 22:20:19 2014 From: jed at jedbrown.org (Jed Brown) Date: Sat, 20 Dec 2014 21:20:19 -0700 Subject: [petsc-users] PETSc and some external libraries configured with CMake? In-Reply-To: References: <87egrykyqz.fsf@jedbrown.org> <87h9wrggwl.fsf@jedbrown.org> Message-ID: <87egrtzla4.fsf@jedbrown.org> paul zhang writes: > Jed, > > Thanks for your consistent help Bro. I wish I could help you or the > community back in some way. > > I am developing a CFD code on the cluster of my university. My code calls > for PETSc to solve the linear solver, and it uses ParMetis and CGNS > libraries. I once configured the whole code (my source code, the needed > libraries above, etc) with CMake, back then the PETSc version is 3.1-p8. > After I updated the PETSc to the newest, it never worked with me. It got me > a couple of weeks to figure out what is the real reason. One of the reason > I guess is that I have to use the mpi compiler from the cluster. If you have an MPI installation available, you should use that instead of having PETSc download a new one (which may not use your network optimally). > There is nothing wrong with the installation guide on the PETSc > website. The issue is it does not work with me (maybe only with > me). 
The other is I have add the valgrind include dir and lib to my
> previous CMakeList, since it is necessary for the new PETSc.

Not necessary unless PETSc configure found it, but FindPETSc.cmake will
propagate that information.

> The FindPETSc.cmake you wrote works pretty good if I just want to use
> PETSc. Well, it is my problem (or maybe fault) again since I used
> old fashioned cmake and I have to call some other libraries besides PETSc.

The intent is to use FindPETSc.cmake to determine how to compile and link
with PETSc, but you would use other methods (typically FindXXX.cmake or
XXXConfig.cmake) to find other packages. Personally, I think CMake is more
hassle than it's worth, but many projects use it, so I maintain
FindPETSc.cmake to make it easier for them.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 818 bytes
Desc: not available
URL:

From Metal-Gear-Rex at web.de  Sun Dec 21 05:09:22 2014
From: Metal-Gear-Rex at web.de (Christoph Pohl)
Date: Sun, 21 Dec 2014 12:09:22 +0100
Subject: [petsc-users] MatAXPY not working as expected
Message-ID:

Dear fellow PETSc users,

I've been trying to subtract one matrix from another, i.e. Y = Y - X, using
`call MatAXPY(Y, minus_one, X, SAME_NONZERO_PATTERN, ierr)`. In my example,
some entries come out correct (e.g. at [0,0]), some come out wrong
(e.g. at [29,29]). Here is the complete code, in Fortran:

program main
  implicit none
#include "finclude/petscsys.h"
#include "finclude/petscvec.h"
#include "finclude/petscmat.h"
#include "finclude/petscviewer.h"

  Mat :: X, Y
  PetscReal :: minus_one = -1.d0
  PetscErrorCode :: ierr
  PetscViewer :: X_viewer, Y_viewer

  call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr)

  call MatCreate(PETSC_COMM_WORLD,X,ierr); CHKERRQ(ierr)
  call MatCreate(PETSC_COMM_WORLD,Y,ierr); CHKERRQ(ierr)

  call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat1.dat", FILE_MODE_READ, X_viewer, ierr); CHKERRQ(ierr)
  call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat2.dat", FILE_MODE_READ, Y_viewer, ierr); CHKERRQ(ierr)

  call MatLoad(X, X_viewer, ierr); CHKERRQ(ierr)
  call MatLoad(Y, Y_viewer, ierr); CHKERRQ(ierr)

  call PetscViewerDestroy(X_viewer, ierr); CHKERRQ(ierr)
  call PetscViewerDestroy(Y_viewer, ierr); CHKERRQ(ierr)

  call MatView(X, PETSC_VIEWER_STDOUT_SELF, ierr); CHKERRQ(ierr)
  call MatView(Y, PETSC_VIEWER_STDOUT_SELF, ierr); CHKERRQ(ierr)

  call MatAXPY(Y, minus_one, X, SAME_NONZERO_PATTERN, ierr); CHKERRQ(ierr)

  call MatView(Y, PETSC_VIEWER_STDOUT_SELF, ierr); CHKERRQ(ierr)

  call MatDestroy(X, ierr); CHKERRQ(ierr)
  call MatDestroy(Y, ierr); CHKERRQ(ierr)

  call PetscFinalize(ierr); CHKERRQ(ierr)

end program
-------------- next part --------------
A non-text attachment was scrubbed...
Name: mat1.dat
Type: application/x-ns-proxy-autoconfig
Size: 3460 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: mat2.dat
Type: application/x-ns-proxy-autoconfig
Size: 3484 bytes
Desc: not available
URL:

From jed at jedbrown.org  Sun Dec 21 10:54:27 2014
From: jed at jedbrown.org (Jed Brown)
Date: Sun, 21 Dec 2014 09:54:27 -0700
Subject: [petsc-users] MatAXPY not working as expected
In-Reply-To:
References:
Message-ID: <87tx0px7ss.fsf@jedbrown.org>

Christoph Pohl writes:

> Dear fellow PETSc users,
>
> I've been trying to subtract one matrix from another, i.e. Y = Y - X, using
> `call MatAXPY(Y, minus_one, X, SAME_NONZERO_PATTERN, ierr)`.
You can only use SAME_NONZERO_PATTERN pattern if the matrices have the same nonzero pattern. These matrices don't even have the same number of nonzeros and there are quite a few entries in different places (neither is a subset of the other). Use DIFFERENT_NONZERO_PATTERN in cases like this. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Sun Dec 21 18:07:37 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 21 Dec 2014 18:07:37 -0600 Subject: [petsc-users] PETSc conference and tutorials June 15 - 18, 2015 at ANL Message-ID: <11C02030-90CA-49DD-BE67-9C06E81AEF53@mcs.anl.gov> We are excited to invite you to the PETSc 20th anniversary conference and tutorial from June 15 to June 18 2014 at Argonne National Laboratory. The conference announcement can be found at http://www.mcs.anl.gov/petsc-20 and in the attached PDE document. Thanks to the generosity of the Computing, Environment, and Life Sciences (CELS) directorate of Argonne, there are no registration fees, and we can provide some travel support for students, post-docs and those with limited travel budgets. Please register soon so we can include you in the program. Hope to see you there, Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: petsc-20.pdf Type: application/pdf Size: 2548842 bytes Desc: not available URL: From bsmith at mcs.anl.gov Sun Dec 21 22:10:51 2014 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 21 Dec 2014 22:10:51 -0600 Subject: [petsc-users] Richardson: damping factor=0 In-Reply-To: References: Message-ID: <794B4641-A671-42F0-B7C2-A62E390A569D@mcs.anl.gov> Fixed in branches barry/fix-damp-not-one-pcapplyrichardson and next > On Dec 19, 2014, at 9:23 AM, Mark Adams wrote: > > Oh, SOR probably grabs the iteration ... > > Never mind. > > On Fri, Dec 19, 2014 at 10:20 AM, Mark Adams wrote: > Richardson damping factor does not seem to do anything. I set it to zero and the solve was fine. > > Am I missing something? > > Thanks, > Mark > > Down solver (pre-smoother) on level 3 ------------------------------- > KSP Object: (mg_levels_3_) 1024 MPI processes > type: richardson > Richardson: damping factor=0 > maximum iterations=4 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_3_) 1024 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 > linear system matrix followed by preconditioner matrix: > Mat Object: 1024 MPI processes > type: shell > rows=33427584, cols=33427584 > Mat Object: 1024 MPI processes > type: mpiaij > rows=33427584, cols=33427584 > total: nonzeros=1.7272e+09, allocated nonzeros=3.4544e+09 > total number of mallocs used during MatSetValues calls =0 > not using I-node (on process 0) routines > From knepley at gmail.com Mon Dec 22 16:54:57 2014 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 22 Dec 2014 16:54:57 -0600 Subject: [petsc-users] DMPlex and MatSetValuesLocal In-Reply-To: References: Message-ID: On Mon, Dec 15, 2014 at 8:33 PM, Abhyankar, Shrirang G. wrote: > > Matt, > Does MatSetValuesLocal work with a matrix that is created with DMPlex? > Well, actually I am using DMNetwork. I am getting the following error > because ISLocalToGlobalMapping mat->rmap->mapping and mat->cmap->mapping > are not set on the matrix. 
Perhaps I am not setting up something correctly? > You are definitely right. I am not setting any L2G mapping. Do you want to add it? I think you would just call DMGetLocalToGlobalMapping() at the end of DMCreateMatrix_Plex() and set it. Personally, it would seem better for the Mat to create it on the fly when someone uses that interface, but that seems to mix levels too much. Matt > Shri > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Null argument, when expecting valid pointer > [0]PETSC ERROR: Null Object: Parameter # 1 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Development GIT revision: v3.5.2-1134-g7fbfed6 GIT > Date: 2014-12-13 14:24:34 -0600 > [0]PETSC ERROR: ./DYN on a debug-master named Shrirangs-MacBook-Pro.local > by Shri Mon Dec 15 20:11:18 2014 > [0]PETSC ERROR: Configure options --download-chaco --download-metis > --download-parmetis --download-superlu_dist PETSC_ARCH=debug-master > [0]PETSC ERROR: #1 ISLocalToGlobalMappingApply() line 396 in > /Users/Shri/packages/petsc/src/vec/is/utils/isltog.c > [0]PETSC ERROR: #2 MatSetValuesLocal() line 2017 in > /Users/Shri/packages/petsc/src/mat/interface/matrix.c > [0]PETSC ERROR: #3 DYNIJacobian() line 282 in > /Users/Shri/Documents/tsopf-code/src/dyn/dyn.c > [0]PETSC ERROR: #4 TSComputeIJacobian() line 763 in > /Users/Shri/packages/petsc/src/ts/interface/ts.c > [0]PETSC ERROR: #5 SNESTSFormJacobian_Theta() line 320 in > /Users/Shri/packages/petsc/src/ts/impls/implicit/theta/theta.c > [0]PETSC ERROR: #6 SNESTSFormJacobian() line 3552 in > /Users/Shri/packages/petsc/src/ts/interface/ts.c > [0]PETSC ERROR: #7 SNESComputeJacobian() line 2193 in > /Users/Shri/packages/petsc/src/snes/interface/snes.c > [0]PETSC ERROR: #8 SNESSolve_NEWTONLS() line 230 in > /Users/Shri/packages/petsc/src/snes/impls/ls/ls.c > [0]PETSC ERROR: #9 SNESSolve() line 3743 in > /Users/Shri/packages/petsc/src/snes/interface/snes.c > [0]PETSC ERROR: #10 TSStep_Theta() line 195 in > /Users/Shri/packages/petsc/src/ts/impls/implicit/theta/theta.c > [0]PETSC ERROR: #11 TSStep() line 2628 in > /Users/Shri/packages/petsc/src/ts/interface/ts.c > [0]PETSC ERROR: #12 TSSolve() line 2745 in > /Users/Shri/packages/petsc/src/ts/interface/ts.c > [0]PETSC ERROR: #13 DYNSolve() line 620 in > /Users/Shri/Documents/tsopf-code/src/dyn/dyn.c > [0]PETSC ERROR: #14 main() line 35 in > /Users/Shri/Documents/tsopf-code/applications/dyn-main.c > [0]PETSC ERROR: ----------------End of Error Message -------send entire > error message to petsc-maint at mcs.anl.gov---------- > application called MPI_Abort(MPI_COMM_WORLD, 85) - process 0 > > Shri > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaymanwani07 at gmail.com Thu Dec 25 02:27:29 2014 From: ajaymanwani07 at gmail.com (Ajay Manwani) Date: Thu, 25 Dec 2014 13:57:29 +0530 Subject: [petsc-users] Precision Problem in computing Eigenvalue Message-ID: Hello, I am trying to solve Schroginger equation with open boundary conditions. The problem boils down to a complex polynomial eigenvalue equation of order two with A0,A1,A2 matrices of the size 300 x 300. Here, A0 and A2 are tridiagonal matrices while A1 has only single element at with imaginary value. 
The complex eigenvalues are such that real part is around 1 while imaginary part is in the range of 1e-12 to 1e-15. PEP solver gives out 600 complex eigenvalues having 300 complex conjugate pairs. However problem is that the real part of two states is same while complex part of the eigenvalues differs by a order of one.( i mean ~e-14 and ~e-15) The solver gives eigenvalues of the pair the same when the imaginary values are more than 1e-10. I tried with -pep_tol 1e-18. -pep_max_it 10000. However, answer does not change for eigenvalues with imaginary part of the order of ~1e-15. other options -pep_smallest imaginary and changing the solver types also does not seem to work. Is there some way one can improve answers I mean more precise value of very small imaginary part of eigenvalue? We have tried -st_type sinvert -st_transform as well as -pep_target and -rg_type ellipse(region filtering) but there still the problem persists. Kindly suggest any way to improve precision for imaginary values below (1e-10) Regards, Ajay Manwani -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Thu Dec 25 03:42:41 2014 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 25 Dec 2014 10:42:41 +0100 Subject: [petsc-users] Precision Problem in computing Eigenvalue In-Reply-To: References: Message-ID: <16086B17-E176-45AB-8987-7AF5BEA485FF@dsic.upv.es> El 25/12/2014, a las 09:27, Ajay Manwani escribi?: > Hello, > > I am trying to solve Schroginger equation with open boundary conditions. The problem boils down to a complex polynomial eigenvalue equation of order two with A0,A1,A2 matrices of the size 300 x 300. > Here, A0 and A2 are tridiagonal matrices while A1 has only single element at with imaginary value. > The complex eigenvalues are such that real part is around 1 while imaginary part is in the range of 1e-12 to 1e-15. > > PEP solver gives out 600 complex eigenvalues having 300 complex conjugate pairs. > However problem is that the real part of two states is same while complex part of the eigenvalues differs by a order of one.( i mean ~e-14 and ~e-15) > > The solver gives eigenvalues of the pair the same when the imaginary values are more than 1e-10. > > I tried with -pep_tol 1e-18. -pep_max_it 10000. However, answer does not change for eigenvalues with imaginary part of the order of ~1e-15. > > other options -pep_smallest imaginary and changing the solver types also does not seem to work. > > Is there some way one can improve answers I mean more precise value of very small imaginary part of eigenvalue? > > We have tried -st_type sinvert -st_transform as well as -pep_target and -rg_type ellipse(region filtering) but there still the problem persists. > Kindly suggest any way to improve precision for imaginary values below (1e-10) > > Regards, > Ajay Manwani > You are close to the machine precision, so I would not be surprised of such small differences. You could try in quad precision to get higher relative accuracy (let me know if problems arise). I would try sinvert on target=1 with different types of scaling, EPSSetScale(). We are also in the process of adjusting the convergence criteria, which may affect you, so if you want send us your matrices and we will give them a try (send them to slepc-maint). 
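If it helps for sending them, the matrices can be dumped with the binary viewer; a minimal sketch (A0 stands for whichever Mat you want to write out, and the file name is arbitrary):

    PetscViewer viewer;
    ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"A0.petscbin",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
    ierr = MatView(A0,viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

We can then read the files back on our side with MatLoad().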
Jose From alpkalpalp at gmail.com Sat Dec 27 04:00:00 2014 From: alpkalpalp at gmail.com (Alp Kalpalp) Date: Sat, 27 Dec 2014 12:00:00 +0200 Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping Message-ID: Hi, I implemented a newmark time stepping algorithm without using TS structure. I am following ex59 about PCBDDC. ComputeMatrix ComputeKSPBDDC for // a time loop { Compute RHS KSPSolve GatherResults MoveToNextTS } However, when I watch the iteration counts of KSPSolve they do not decrease signicantly..Decrease was around 5%. So I have some problems; 1-) I guess currently, factorization is not taking place for each time step in my code. Ok this is expected. But I wonder whether Kspsolve stores the Krylov subspace vectors and reuse them for the next time step. 2-) PCBDDC uses KSPCG and AFAIK petsc doesnot have preconditioned conjugate projected gradient (PCPG). Is it possible to simulate PCPG iteration in some way? Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sun Dec 28 08:28:07 2014 From: mfadams at lbl.gov (Mark Adams) Date: Sun, 28 Dec 2014 09:28:07 -0500 Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping In-Reply-To: References: Message-ID: On Sat, Dec 27, 2014 at 5:00 AM, Alp Kalpalp wrote: > Hi, > > I implemented a newmark time stepping algorithm without using TS > structure. I am following ex59 about PCBDDC. > > use TS > ComputeMatrix > ComputeKSPBDDC > > for // a time loop > { > Compute RHS > KSPSolve > GatherResults > MoveToNextTS > } > > > However, when I watch the iteration counts of KSPSolve they do not > decrease signicantly..Decrease was around 5%. > > So I have some problems; > > 1-) I guess currently, factorization is not taking place for each time > step in my code. Ok this is expected. But I wonder whether Kspsolve stores > the Krylov subspace vectors and reuse them for the next time step. > > No. Certainly not by default. > 2-) PCBDDC uses KSPCG and AFAIK petsc doesnot have preconditioned > conjugate projected gradient (PCPG). Is it possible to simulate PCPG > iteration in some way? > > KSP has a PC object and all methods use it AFAIK. > Thanks, > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alpkalpalp at gmail.com Sun Dec 28 10:54:25 2014 From: alpkalpalp at gmail.com (Alp Kalpalp) Date: Sun, 28 Dec 2014 18:54:25 +0200 Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping In-Reply-To: References: Message-ID: Hi, Thank you Mark. Let me clarify my questions; 1-)How to implement or activate a Reorthogonalization procedure for KSPCG.. As you know, search directions can be found more rapidly (with less numer of iterations) by using previous successive directions 2-) How to implement or activate a projection space over CG. A sample projection can be; P = I - G*((G'*G)\G'). I need to insert project,scale,precondition,re-scae,re-project steps during each KSPCG iteration. How can I utilize this? Thanks again and merry christmas to all On Sun, Dec 28, 2014 at 4:28 PM, Mark Adams wrote: > > > On Sat, Dec 27, 2014 at 5:00 AM, Alp Kalpalp wrote: > >> Hi, >> >> I implemented a newmark time stepping algorithm without using TS >> structure. I am following ex59 about PCBDDC. 
>> >> > use TS > > >> ComputeMatrix >> ComputeKSPBDDC >> >> for // a time loop >> { >> Compute RHS >> KSPSolve >> GatherResults >> MoveToNextTS >> } >> >> >> However, when I watch the iteration counts of KSPSolve they do not >> decrease signicantly..Decrease was around 5%. >> >> So I have some problems; >> >> 1-) I guess currently, factorization is not taking place for each time >> step in my code. Ok this is expected. But I wonder whether Kspsolve stores >> the Krylov subspace vectors and reuse them for the next time step. >> >> > No. Certainly not by default. > > >> 2-) PCBDDC uses KSPCG and AFAIK petsc doesnot have preconditioned >> conjugate projected gradient (PCPG). Is it possible to simulate PCPG >> iteration in some way? >> >> > KSP has a PC object and all methods use it AFAIK. > > >> Thanks, >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Sun Dec 28 11:02:33 2014 From: u.tabak at tudelft.nl (Umut Tabak) Date: Sun, 28 Dec 2014 18:02:33 +0100 Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping In-Reply-To: References: Message-ID: <54A037A9.6020308@tudelft.nl> On 12/28/2014 05:54 PM, Alp Kalpalp wrote: > Hi, > > Thank you Mark. > > Let me clarify my questions; > > 1-)How to implement or activate a Reorthogonalization procedure for > KSPCG.. > As you know, search directions can be found more rapidly (with less > numer of iterations) by using previous successive directions Without answering the PETSc related questions, interesting discussion, indeed, but at the cost of purging the previous directions(which means explicit orthogonalizations with respect to these vectors also), so I am not sure if you can gain something with this, cost wise... > > 2-) How to implement or activate a projection space over CG. A sample > projection can be; > P = I - G*((G'*G)\G'). > I need to insert project,scale,precondition,re-scae,re-project steps > during each KSPCG iteration. How can I utilize this? > Just a side note, I had previous experience on this that these kinds of practice increase the cost more... BR, Umut -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Dec 28 11:08:40 2014 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 28 Dec 2014 11:08:40 -0600 Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping In-Reply-To: <54A037A9.6020308@tudelft.nl> References: <54A037A9.6020308@tudelft.nl> Message-ID: On Sun, Dec 28, 2014 at 11:02 AM, Umut Tabak wrote: > On 12/28/2014 05:54 PM, Alp Kalpalp wrote: > > Hi, > > Thank you Mark. > > Let me clarify my questions; > > 1-)How to implement or activate a Reorthogonalization procedure for > KSPCG.. > As you know, search directions can be found more rapidly (with less > numer of iterations) by using previous successive directions > > Without answering the PETSc related questions, interesting discussion, > > indeed, but at the cost of purging the previous directions(which means > explicit orthogonalizations with respect to these vectors also), so I am > not sure if you can gain something with this, cost wise... > This has been proposed many times, but it has never been shown to work. I have tried every variant I could find and it did not work. You can try LGMRES, which is the closest one to working in my opinion. There is definitely no theoretical relation between Krylov directions from subsequent solves unless the operator is identical. 
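If you want to experiment with it, switching is a one-line change (sketch; ksp is whatever KSP object you already have, and the same thing can be selected with -ksp_type lgmres on the command line):

    ierr = KSPSetType(ksp, KSPLGMRES);CHKERRQ(ierr);

Also note that when the matrix really does not change between time steps, PETSc already reuses the preconditioner setup across the repeated KSPSolve() calls, so only the Krylov iteration itself is redone each step.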
Matt 2-) How to implement or activate a projection space over CG. A sample > projection can be; > P = I - G*((G'*G)\G'). > I need to insert project,scale,precondition,re-scae,re-project steps > during each KSPCG iteration. How can I utilize this? > > Just a side note, I had previous experience on this that these kinds of > practice increase the cost more... > BR, > Umut > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From alpkalpalp at gmail.com Sun Dec 28 11:24:53 2014 From: alpkalpalp at gmail.com (Alp Kalpalp) Date: Sun, 28 Dec 2014 19:24:53 +0200 Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping In-Reply-To: References: <54A037A9.6020308@tudelft.nl> Message-ID: Thanks for the answers, Please forgive me, I forgot to say that my stiffness matrix is not changing during time steps. I could not remember directly but just after a google search..I just hit this http://web.stanford.edu/group/frg/publications/recent/FETI-stoch.pdf please look around eq37 My problem is not related to this random paper I found. But, I think I can find several others that shows the enhancing power of orthogonalization with successive directions when the system's behaviour is not changing rapidly. In my current sample case a gradually increasing force is applied to a linear system. Since I use FETIDP, preconditioned conjugate projected gradient (PCPG) is crucial in order to select any generalized inverse for the system. So, any suggestions on how to complete these tasks? For example anyway of obtaining search direction from KSPCG? or how to implement a projection space? Is it posible or too difficuly to code a variant of a KSPCG that meets my requirements? On Sun, Dec 28, 2014 at 7:08 PM, Matthew Knepley wrote: > On Sun, Dec 28, 2014 at 11:02 AM, Umut Tabak wrote: > >> On 12/28/2014 05:54 PM, Alp Kalpalp wrote: >> >> Hi, >> >> Thank you Mark. >> >> Let me clarify my questions; >> >> 1-)How to implement or activate a Reorthogonalization procedure for >> KSPCG.. >> As you know, search directions can be found more rapidly (with less >> numer of iterations) by using previous successive directions >> >> Without answering the PETSc related questions, interesting discussion, >> >> indeed, but at the cost of purging the previous directions(which means >> explicit orthogonalizations with respect to these vectors also), so I am >> not sure if you can gain something with this, cost wise... >> > > This has been proposed many times, but it has never been shown to work. I > have tried every variant I could > find and it did not work. You can try LGMRES, which is the closest one to > working in my opinion. There is > definitely no theoretical relation between Krylov directions from > subsequent solves unless the operator is > identical. > > Matt > > 2-) How to implement or activate a projection space over CG. A sample >> projection can be; >> P = I - G*((G'*G)\G'). >> I need to insert project,scale,precondition,re-scae,re-project steps >> during each KSPCG iteration. How can I utilize this? >> >> Just a side note, I had previous experience on this that these kinds >> of practice increase the cost more... 
>> BR, >> Umut >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sun Dec 28 12:12:34 2014 From: jed at jedbrown.org (Jed Brown) Date: Sun, 28 Dec 2014 11:12:34 -0700 Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping In-Reply-To: References: <54A037A9.6020308@tudelft.nl> Message-ID: <874msfod7x.fsf@jedbrown.org> Alp Kalpalp writes: > Thanks for the answers, > > Please forgive me, I forgot to say that my stiffness matrix is not changing > during time steps. I could not remember directly but just after a google > search..I just hit this > > http://web.stanford.edu/group/frg/publications/recent/FETI-stoch.pdf > > please look around eq37 > > My problem is not related to this random paper I found. But, I think I can > find several others that shows the enhancing power of orthogonalization > with successive directions when the system's behaviour is not changing > rapidly. In my current sample case a gradually increasing force is applied > to a linear system. Is the force moving or just increasing? The idea with this class of methods is that you add a Galerkin coarse correction where the basis functions approximate some low-frequency eigenvectors of the system. The projection is relatively expensive in parallel, but could save iterations. It's not "scalable" for many outlier eigenvalues because the projection space would get too big as the problem size is increased, but when combined with a decent-but-not-too-good preconditioner, could improve performance. You can implement this in the general case using PCCOMPOSITE+PCGALERKIN or with PCMG. You can also use KSPDGMRES or KSPAGMRES (man page missing in the release -- I reactivated, but look at the code until it regenerates), which attempt to automatically build a space. http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/KSP/KSPDGMRES.html You could implement a deflated CG in a similar way if you want to automatically extract the deflation vectors from the CG iteration. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From u.tabak at tudelft.nl Sun Dec 28 12:34:59 2014 From: u.tabak at tudelft.nl (Umut Tabak) Date: Sun, 28 Dec 2014 19:34:59 +0100 Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping In-Reply-To: <874msfod7x.fsf@jedbrown.org> References: <54A037A9.6020308@tudelft.nl> <874msfod7x.fsf@jedbrown.org> Message-ID: <54A04D53.3080901@tudelft.nl> On 12/28/2014 07:12 PM, Jed Brown wrote: > Alp Kalpalp writes: > > Is the force moving or just increasing? The idea with this class of > methods is that you add a Galerkin coarse correction where the basis > functions approximate some low-frequency eigenvectors of the system. > The projection is relatively expensive in parallel, but could save > iterations. It's not "scalable" for many outlier eigenvalues because > the projection space would get too big as the problem size is increased, > but when combined with a decent-but-not-too-good preconditioner, could > improve performance. Well, I am following the discussion since it is also interesting for me. 
I am not sure if it will help for future, but let me give an overview from my research experience: As a poor engineer, my experience was with iterative methods on ill-conditioned problems, all the FE stiffness matrices are ill conditioned due to the multiplication of the gradient operators, especially the ones for thin structures encountered in vibroacoustics, or shell like structures for instance, and on these problems I could never get successful results with my tries even with the projections w.r.t. the previous vectors. Tim Davies has some kind of nice guidelines when to use iterative methods, I do not remember the links now but you should be able to find them easily, after reading them and trying more I did not continue more on that... Preconditioner side: my experience was that one should be really lucky to get a good preconditioner which is really really rare, as mentioned, especially for ill-conditioned problems, almost impossible. If my condition number estimate is above, say, 1e4 1e5, I do not expect much from iterative methods, for SPD problems of course, my 2 cents, without theoretical details of Krylov subspace methods. BR, Umut > > You can implement this in the general case using PCCOMPOSITE+PCGALERKIN > or with PCMG. You can also use KSPDGMRES or KSPAGMRES (man page missing > in the release -- I reactivated, but look at the code until it > regenerates), which attempt to automatically build a space. > > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/KSP/KSPDGMRES.html > > You could implement a deflated CG in a similar way if you want to > automatically extract the deflation vectors from the CG iteration. From jed at jedbrown.org Sun Dec 28 12:48:18 2014 From: jed at jedbrown.org (Jed Brown) Date: Sun, 28 Dec 2014 11:48:18 -0700 Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping In-Reply-To: <54A04D53.3080901@tudelft.nl> References: <54A037A9.6020308@tudelft.nl> <874msfod7x.fsf@jedbrown.org> <54A04D53.3080901@tudelft.nl> Message-ID: <871tnjobkd.fsf@jedbrown.org> Umut Tabak writes: > Preconditioner side: my experience was that one should be really lucky > to get a good preconditioner which is really really rare, as mentioned, > especially for ill-conditioned problems, almost impossible. If my > condition number estimate is above, say, 1e4 1e5, I do not expect much > from iterative methods, Ill-conditioning is a red herring. For example, FMG can solve well-behaved problems with 1e12 condition number in one cycle (about 5 "work units"). OTOH, very well-conditioned problems with eigenvalues encircling the origin converge extremely slowly (these are nonsymmetric). Anyway, some SPD industrial problems see poor performance with AMG, BDDC, and similar otherwise-scalable methods due to discretization or physical features that elude the heuristics used to produce good coarse spaces. Sometimes these problems can be formulated in more solver-friendly ways. Other times, custom methods would be needed. Or the methods could converge well, but only with high grid complexity (coarse spaces that do not decay in size fast enough). -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From alpkalpalp at gmail.com Sun Dec 28 17:15:51 2014 From: alpkalpalp at gmail.com (Alp Kalpalp) Date: Mon, 29 Dec 2014 01:15:51 +0200 Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping In-Reply-To: <871tnjobkd.fsf@jedbrown.org> References: <54A037A9.6020308@tudelft.nl> <874msfod7x.fsf@jedbrown.org> <54A04D53.3080901@tudelft.nl> <871tnjobkd.fsf@jedbrown.org> Message-ID: In FETI, system is replaced with a coarse problem of dual variables (drastically smaller coarse problem size) and by using a projector more well-conditioned system is obtained. Condition number is limited to 1+log(H/h)^2. As the literature suggests, I need to apply projector on PCG. I tested KSPDGMRES and it seems CG is more successful. So it seems my only way is to implement my own variant of KSPCG. May I just copy the files and definitions related to KSPCG and rename all as KSPPCPG. And then I can make the orthogonolization implementation similar to KSPDGMRES.. Jed, please warn me if this is a really hard task? I dont want to put myself into a long journey of implementation :) best regards, On Sun, Dec 28, 2014 at 8:48 PM, Jed Brown wrote: > Umut Tabak writes: > > Preconditioner side: my experience was that one should be really lucky > > to get a good preconditioner which is really really rare, as mentioned, > > especially for ill-conditioned problems, almost impossible. If my > > condition number estimate is above, say, 1e4 1e5, I do not expect much > > from iterative methods, > > Ill-conditioning is a red herring. For example, FMG can solve > well-behaved problems with 1e12 condition number in one cycle (about 5 > "work units"). OTOH, very well-conditioned problems with eigenvalues > encircling the origin converge extremely slowly (these are > nonsymmetric). Anyway, some SPD industrial problems see poor > performance with AMG, BDDC, and similar otherwise-scalable methods due > to discretization or physical features that elude the heuristics used to > produce good coarse spaces. Sometimes these problems can be formulated > in more solver-friendly ways. Other times, custom methods would be > needed. Or the methods could converge well, but only with high grid > complexity (coarse spaces that do not decay in size fast enough). > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sun Dec 28 17:48:01 2014 From: jed at jedbrown.org (Jed Brown) Date: Sun, 28 Dec 2014 16:48:01 -0700 Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping In-Reply-To: References: <54A037A9.6020308@tudelft.nl> <874msfod7x.fsf@jedbrown.org> <54A04D53.3080901@tudelft.nl> <871tnjobkd.fsf@jedbrown.org> Message-ID: <87y4prmj4e.fsf@jedbrown.org> Alp Kalpalp writes: > In FETI, system is replaced with a coarse problem of dual variables Not a coarse problem (those are removed in the standard FETI-DP formulation), but the Lagrange multipliers defining an "interface" problem. BDDC is a primal version with the same eigenvalues, so for most purposes, you can use the two interchangeable. PCBDDC can do both. > (drastically smaller coarse problem size) Only if the subdomains are big enough. In 3D with modest-size subdomains, the interface problem is often about half the size of the original problem. > and by using a projector more well-conditioned system is > obtained. Condition number is limited to 1+log(H/h)^2. 
From bsmith at mcs.anl.gov Sun Dec 28 18:33:32 2014
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Sun, 28 Dec 2014 18:33:32 -0600
Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping
In-Reply-To:
References: <54A037A9.6020308@tudelft.nl>
Message-ID:

Take a look at KSPFischerGuessCreate() and the material it points to. From the command line you can run, for example,

  -ksp_fischer_guess 1,20

This method "works" by saving information about Krylov directions and then projecting those directions out of the NEXT linear solve at the beginning of the new linear solve (constructing a "better" initial guess); hence it does not remove these directions at each KSP iteration, just once for each new linear solve. It can be used with the preconditioned conjugate gradient method. There is a tiny community of people who claim this helps significantly on their problems; we'd love to hear about your experience.

  Barry

> On Dec 28, 2014, at 11:24 AM, Alp Kalpalp wrote:
>
> Thanks for the answers,
>
> Please forgive me, I forgot to say that my stiffness matrix is not changing during the time steps. I could not remember it directly, but just after a google search I hit this:
>
> http://web.stanford.edu/group/frg/publications/recent/FETI-stoch.pdf
>
> please look around eq. 37
>
> My problem is not related to this random paper I found. But I think I can find several others that show the enhancing power of orthogonalization against successive directions when the system's behaviour is not changing rapidly. In my current sample case a gradually increasing force is applied to a linear system.
>
> Since I use FETI-DP, the preconditioned conjugate projected gradient (PCPG) is crucial in order to select any generalized inverse for the system.
>
> So, any suggestions on how to complete these tasks?
>
> For example, any way of obtaining the search directions from KSPCG?
>
> or
>
> how to implement a projection space?
>
> Is it possible, or too difficult, to code a variant of KSPCG that meets my requirements?
>
> On Sun, Dec 28, 2014 at 7:08 PM, Matthew Knepley wrote:
> On Sun, Dec 28, 2014 at 11:02 AM, Umut Tabak wrote:
> On 12/28/2014 05:54 PM, Alp Kalpalp wrote:
>> Hi,
>>
>> Thank you Mark.
>>
>> Let me clarify my questions;
>>
>> 1-) How to implement or activate a reorthogonalization procedure for KSPCG?
>> As you know, search directions can be found more rapidly (with fewer iterations) by using the previous successive directions.
>
> Without answering the PETSc related questions, interesting discussion, indeed, but at the cost of purging the previous directions (which also means explicit orthogonalizations with respect to these vectors), so I am not sure if you can gain something with this, cost wise...
>
> This has been proposed many times, but it has never been shown to work. I have tried every variant I could find and it did not work. You can try LGMRES, which is the closest one to working in my opinion. There is definitely no theoretical relation between Krylov directions from subsequent solves unless the operator is identical.
>
>    Matt
>
>> 2-) How to implement or activate a projection space over CG. A sample projection can be
>> P = I - G*((G'*G)\G').
>> I need to insert project, scale, precondition, re-scale, re-project steps during each KSPCG iteration. How can I utilize this?
>>
> Just a side note, I had previous experience with this: these kinds of practices increase the cost more...
> BR,
> Umut
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
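
(A minimal sketch of how this might be wired up, assuming the petsc-3.5 interface; A, b, x and the loop bounds are placeholders, and the exact signature should be checked against the KSPSetUseFischerGuess / KSPFischerGuessCreate man pages:)

    KSP ksp;
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetType(ksp, KSPCG);
    KSPSetOperators(ksp, A, A);          /* the matrix stays fixed over the time steps */
    KSPSetUseFischerGuess(ksp, 1, 20);   /* model 1, store up to 20 old directions     */
    KSPSetFromOptions(ksp);              /* or: -ksp_type cg -ksp_fischer_guess 1,20   */

    for (step = 0; step < nsteps; step++) {
      /* update only the right-hand side b for the new load level */
      KSPSolve(ksp, b, x);
    }

The stored directions are used only to construct a better initial guess for the next solve, exactly as described above; the CG iteration itself is unchanged.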
From alpkalpalp at gmail.com Mon Dec 29 13:13:43 2014
From: alpkalpalp at gmail.com (Alp Kalpalp)
Date: Mon, 29 Dec 2014 21:13:43 +0200
Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping
In-Reply-To:
References: <54A037A9.6020308@tudelft.nl>
Message-ID:

Thanks for the answers.

Jed, PCPG stands for Preconditioned Conjugate Projected Gradient. Since all the FETI literature suggests PCPG, I am planning to stick with it. My plan is to extend your KSPCG algorithm with optional application of a projection space and re-orthogonalizations.

Best regards,
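
(For the projection P = I - G*((G'*G)\G') mentioned earlier in the thread, one rough way to apply it with existing PETSc calls is sketched below. All names are illustrative: G is assumed to be a tall matrix with few columns, and ksp_GtG is assumed to be a KSP already set up with G'G as its operator, built once, e.g. with MatTransposeMatMult. This only illustrates the algebra; it is not an existing PETSc interface:)

    #include <petscksp.h>

    /* overwrite x with P x = x - G (G'G)^{-1} G' x */
    static PetscErrorCode ApplyProjection(Mat G, KSP ksp_GtG, Vec x,
                                          Vec tmp_small, Vec tmp_small2, Vec tmp_big)
    {
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = MatMultTranspose(G, x, tmp_small);CHKERRQ(ierr);        /* G' x              */
      ierr = KSPSolve(ksp_GtG, tmp_small, tmp_small2);CHKERRQ(ierr); /* (G'G)^{-1} G' x   */
      ierr = MatMult(G, tmp_small2, tmp_big);CHKERRQ(ierr);          /* G (G'G)^{-1} G' x */
      ierr = VecAXPY(x, -1.0, tmp_big);CHKERRQ(ierr);                /* x <- x - G(...)   */
      PetscFunctionReturn(0);
    }

Inside a modified KSPCG this would be applied to the residual (and, depending on the PCPG variant, to the preconditioned direction) at the project and re-project steps of the loop.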
On Mon, Dec 29, 2014 at 2:33 AM, Barry Smith wrote:
>
> Take a look at KSPFischerGuessCreate() and the material it points to. From the command line you can run, for example,
>
>   -ksp_fischer_guess 1,20
>
> This method "works" by saving information about Krylov directions and then projecting those directions out of the NEXT linear solve at the beginning of the new linear solve (constructing a "better" initial guess); hence it does not remove these directions at each KSP iteration, just once for each new linear solve. It can be used with the preconditioned conjugate gradient method. There is a tiny community of people who claim this helps significantly on their problems; we'd love to hear about your experience.
>
>   Barry
>
> > On Dec 28, 2014, at 11:24 AM, Alp Kalpalp wrote:
> >
> > Thanks for the answers,
> >
> > Please forgive me, I forgot to say that my stiffness matrix is not changing during the time steps. I could not remember it directly, but just after a google search I hit this:
> >
> > http://web.stanford.edu/group/frg/publications/recent/FETI-stoch.pdf
> >
> > please look around eq. 37
> >
> > My problem is not related to this random paper I found. But I think I can find several others that show the enhancing power of orthogonalization against successive directions when the system's behaviour is not changing rapidly. In my current sample case a gradually increasing force is applied to a linear system.
> >
> > Since I use FETI-DP, the preconditioned conjugate projected gradient (PCPG) is crucial in order to select any generalized inverse for the system.
> >
> > So, any suggestions on how to complete these tasks?
> >
> > For example, any way of obtaining the search directions from KSPCG?
> >
> > or
> >
> > how to implement a projection space?
> >
> > Is it possible, or too difficult, to code a variant of KSPCG that meets my requirements?
> >
> > On Sun, Dec 28, 2014 at 7:08 PM, Matthew Knepley wrote:
> > On Sun, Dec 28, 2014 at 11:02 AM, Umut Tabak wrote:
> > On 12/28/2014 05:54 PM, Alp Kalpalp wrote:
> >> Hi,
> >>
> >> Thank you Mark.
> >>
> >> Let me clarify my questions;
> >>
> >> 1-) How to implement or activate a reorthogonalization procedure for KSPCG?
> >> As you know, search directions can be found more rapidly (with fewer iterations) by using the previous successive directions.
> >
> > Without answering the PETSc related questions, interesting discussion, indeed, but at the cost of purging the previous directions (which also means explicit orthogonalizations with respect to these vectors), so I am not sure if you can gain something with this, cost wise...
> >
> > This has been proposed many times, but it has never been shown to work. I have tried every variant I could find and it did not work. You can try LGMRES, which is the closest one to working in my opinion. There is definitely no theoretical relation between Krylov directions from subsequent solves unless the operator is identical.
> >
> >    Matt
> >
> >> 2-) How to implement or activate a projection space over CG. A sample projection can be
> >> P = I - G*((G'*G)\G').
> >> I need to insert project, scale, precondition, re-scale, re-project steps during each KSPCG iteration. How can I utilize this?
> >>
> > Just a side note, I had previous experience with this: these kinds of practices increase the cost more...
> > BR,
> > Umut
> >
> > --
> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> > -- Norbert Wiener

From jed at jedbrown.org Mon Dec 29 13:31:32 2014
From: jed at jedbrown.org (Jed Brown)
Date: Mon, 29 Dec 2014 12:31:32 -0700
Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping
In-Reply-To:
References: <54A037A9.6020308@tudelft.nl>
Message-ID: <87sifyxnfv.fsf@jedbrown.org>

Alp Kalpalp writes:
> Thanks for the answers.
>
> Jed, PCPG stands for Preconditioned

Acronyms like PCG were outdated before I was born, much like "digital computer". All Krylov methods in PETSc are preconditioned; let's not contribute to meaningless acronyms.

> Conjugate Projected Gradient. Since all the FETI literature suggests PCPG, I am planning to stick with it. My plan is to extend your KSPCG algorithm with optional application of a projection space and re-orthogonalizations.

Great.
From alpkalpalp at gmail.com Mon Dec 29 14:00:26 2014
From: alpkalpalp at gmail.com (Alp Kalpalp)
Date: Mon, 29 Dec 2014 22:00:26 +0200
Subject: [petsc-users] no decrease in iteration counts of KSPCG during time stepping
In-Reply-To: <87sifyxnfv.fsf@jedbrown.org>
References: <54A037A9.6020308@tudelft.nl> <87sifyxnfv.fsf@jedbrown.org>
Message-ID:

Maybe the acronym becomes meaningless with time, but AFAIK it is still the acronym used in recent FETI papers, even by the method's developers. Sorry to say it, but for some people FETI is the only way to beat direct solvers.

On Mon, Dec 29, 2014 at 9:31 PM, Jed Brown wrote:
> Alp Kalpalp writes:
> > Thanks for the answers.
> >
> > Jed, PCPG stands for Preconditioned
>
> Acronyms like PCG were outdated before I was born, much like "digital computer". All Krylov methods in PETSc are preconditioned; let's not contribute to meaningless acronyms.
>
> > Conjugate Projected Gradient. Since all the FETI literature suggests PCPG, I am planning to stick with it. My plan is to extend your KSPCG algorithm with optional application of a projection space and re-orthogonalizations.
>
> Great.

From zonexo at gmail.com Wed Dec 31 23:20:01 2014
From: zonexo at gmail.com (TAY wee-beng)
Date: Thu, 01 Jan 2015 13:20:01 +0800
Subject: [petsc-users] Out of memory and parallel issues
Message-ID: <54A4D901.9050007@gmail.com>

Hi,

I used to run my CFD code with 96 procs, on a grid of size 231 x 461 x 368. I used MPI and partitioned my grid in the z direction. Hence, with 96 procs (8 nodes, 12 procs each), each proc has a size of 231 x 461 x 3 or 231 x 461 x 4. It worked fine.

Now I modified the code and added some more routines which increase the fixed memory requirement per proc. The grid size is still the same, but the code aborts while solving the Poisson eqn, saying:

Out of memory trying to allocate XXX bytes

I'm using PETSc with HYPRE BoomerAMG to solve the linear Poisson eqn. I am guessing that the amount of memory per proc is now smaller because the added routines use some memory, so there is less memory available for solving the Poisson eqn.

I'm now changing to KSPBCGS, but it seems to take forever. When I abort it, the error msg is:

Out of memory. This could be due to allocating
[10]PETSC ERROR: too large an object or bleeding by not properly
[10]PETSC ERROR: destroying unneeded objects.
[10]PETSC ERROR: Memory allocated 0 Memory used by process 4028370944
[10]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info.

I can't use more procs because some procs would then have a size of 231 x 461 x 2 (or even 1). This would give an error, since I need to reference the nearby values along the z direction.

So what options do I have? I'm thinking of these at the moment:

1. Remove as much fixed overhead memory per proc as possible, so that there's enough memory for each proc.

2. Re-partition my grid in the x,y directions or in the x,y,z directions, so I will not end up with extremely skewed grid dimensions per proc.

Btw, does having extremely skewed grid dimensions affect the performance in solving the linear eqn? Are there other feasible options?

--
Thank you.

Yours sincerely,

TAY wee-beng
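
(Regarding option 2 above: PETSc codes usually avoid hand-rolled slab partitioning by letting a DMDA pick the 3D process decomposition. A rough C sketch, assuming petsc-3.5 names, one degree of freedom per grid point and a one-cell star stencil; error checking is omitted, the grid sizes are taken from the message above, and everything else is illustrative:)

    #include <petscdmda.h>

    DM  da;
    Vec x;

    /* 231 x 461 x 368 grid; PETSC_DECIDE lets PETSc choose the process grid in
       x, y and z, so no direction is squeezed down to a thickness of 1-2 cells */
    DMDACreate3d(PETSC_COMM_WORLD,
                 DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                 DMDA_STENCIL_STAR,
                 231, 461, 368,
                 PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                 1,   /* dof per grid point */
                 1,   /* stencil width: access to neighbouring values in all directions */
                 NULL, NULL, NULL, &da);
    DMCreateGlobalVector(da, &x);
    /* ghost values come from DMGlobalToLocalBegin/End instead of a manual z exchange */

Whether more cubic subdomains help the solver itself depends on the preconditioner, but they do reduce the ghost-layer surface per process (and hence communication volume and ghost-copy memory) compared with very thin slabs.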