From flyaway1212 at gmail.com Wed Oct 1 14:36:39 2008 From: flyaway1212 at gmail.com (Victor Prosolin) Date: Wed, 01 Oct 2008 13:36:39 -0600 Subject: PETSc build system (on linux) In-Reply-To: References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> Message-ID: <48E3D147.5040107@gmail.com> Hi. This question is probably more suited for developers rather than users, but I hope the developers read this as well. I have been learning PETSc for the last several months because it's used in another project I am working on. Since I am just learning I rebuild the library quite often so I have a question about the build system. If configure.py generates makefiles why does it rebuild the whole thing every time I type "make" even if I didn't make any changes in configuration? Sincerely, Victor Prosolin. From knepley at gmail.com Wed Oct 1 14:40:08 2008 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 1 Oct 2008 14:40:08 -0500 Subject: PETSc build system (on linux) In-Reply-To: <48E3D147.5040107@gmail.com> References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> <48E3D147.5040107@gmail.com> Message-ID: At the top level, there are no dependencies, it just remakes everything. If you want to remake a specific directory cd dir; make or a subtree cd rootdir; make tree If you want to rebuild everything in that subtree cd rootdir; make ACTION=libfast tree Matt On Wed, Oct 1, 2008 at 2:36 PM, Victor Prosolin wrote: > Hi. > This question is probably more suited for developers rather than users, > but I hope the developers read this as well. > I have been learning PETSc for the last several months because it's used > in another project I am working on. Since I am just learning I rebuild > the library quite often so I have a question about the build system. If > configure.py generates makefiles why does it rebuild the whole thing > every time I type "make" even if I didn't make any changes in > configuration? > > Sincerely, > Victor Prosolin. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From bui at calcreek.com Thu Oct 2 22:41:59 2008 From: bui at calcreek.com (Thuc Bui) Date: Thu, 2 Oct 2008 20:41:59 -0700 Subject: Build Petsc DLL's with Visual Studio C++ 2003 compiler In-Reply-To: References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> <48E3D147.5040107@gmail.com> Message-ID: <9655CB22851448798E80AADB44D21119@aphrodite> Dear All, I attempt to build Petsc DLL libraries with configure.py using first with the option --with-shared=1, which is ignored as indicated in configure.log, and with --with-dynamic=1, which the script crashes. Is it possible to build Petsc DLL's? Does any one know how to do this? I would like to reduce the size of my executables since I have several using Petsc, and each of them is now huge comparing with that use the "old" Sparskit solver. 
Thanks a lot in advance for your help, Thuc Bui From balay at mcs.anl.gov Thu Oct 2 23:10:53 2008 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 2 Oct 2008 23:10:53 -0500 (CDT) Subject: Build Petsc DLL's with Visual Studio C++ 2003 compiler In-Reply-To: <9655CB22851448798E80AADB44D21119@aphrodite> References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> <48E3D147.5040107@gmail.com> <9655CB22851448798E80AADB44D21119@aphrodite> Message-ID: On Thu, 2 Oct 2008, Thuc Bui wrote: > Dear All, > > I attempt to build Petsc DLL libraries with configure.py using first with > the option --with-shared=1, which is ignored as indicated in configure.log, > and with --with-dynamic=1, which the script crashes. > > Is it possible to build Petsc DLL's? Does any one know how to do this? I > would like to reduce the size of my executables since I have several using > Petsc, and each of them is now huge comparing with that use the "old" > Sparskit solver. Sorry - currently we don't have a mechanism to build dlls on windows. So the above options [shared and dynamic] don't work on windows. Satish From chetan at ices.utexas.edu Fri Oct 3 01:21:15 2008 From: chetan at ices.utexas.edu (Chetan Jhurani) Date: Fri, 3 Oct 2008 01:21:15 -0500 Subject: Build Petsc DLL's with Visual Studio C++ 2003 compiler In-Reply-To: References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> <48E3D147.5040107@gmail.com> <9655CB22851448798E80AADB44D21119@aphrodite> Message-ID: <7119DFF445C34B2A8A1280E9C7514EB1@spiff> On Thu, 2 Oct 2008, Satish Balay wrote: > On Thu, 2 Oct 2008, Thuc Bui wrote: > > > I attempt to build Petsc DLL libraries with configure.py using first with > > the option --with-shared=1, which is ignored as indicated in configure.log, > > and with --with-dynamic=1, which the script crashes. > > > > Is it possible to build Petsc DLL's? Does any one know how to do this? I > > would like to reduce the size of my executables since I have several using > > Petsc, and each of them is now huge comparing with that use the "old" > > Sparskit solver. > > Sorry - currently we don't have a mechanism to build dlls on > windows. So the above options [shared and dynamic] don't work on > windows. I have created a set of visual studio project/solution files. They've been tried with petsc-2.3.2-p7. There are 4 configurations -- Debug, Release, DebugDLL, and ReleaseDLL. The first two create a static lib, and the last two create a DLL (one output file per config). This results in a petsc.dll of size 3 MB in release mode. Debug mode is 8.5 MB. You can get them here - http://www.ices.utexas.edu/~chetan/petsc/ Place the files in petsc-2.3.2-p7/src directory. Run petsc config script, rename the $config directories in bmake directory to appropriate name (Debug, Release, DebugDLL, ReleaseDLL). You'll have to run petsc configure ONCE and copy directories. Change PETSC_ARCH_NAME and PETSC_NAME in petscconf.h in each directory. Preprocessor macros like PETSC_DLL_EXPORT are defined in the vcproj file. Warnings: - I'm sure I've missed some steps that I no longer remember. - Not all the files in petsc/src are compiled. Most are. - The output (objs, libs, dlls) go into non-standard directories, so change them. - Some env variables, like $(MPI_DIR), are used in include/link paths. - blas.lib and lapack.lib are assumed to exist in link path. 
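For context on the PETSC_DLL_EXPORT-style preprocessor macros mentioned above: they normally follow the standard Windows export/import pattern sketched below. This is purely illustrative; the guard names and expansions here are assumptions, not the actual contents of the PETSc 2.3.x headers.

/* Illustration only: guard names are hypothetical, not the real PETSc
 * headers.  When building the DLL itself the symbol is exported, when
 * building client code it is imported, and a static build leaves the
 * declaration unchanged. */
#if defined(BUILDING_PETSC_DLL)          /* e.g. set in the DLL vcproj */
#define PETSC_DLL_EXPORT __declspec(dllexport)
#elif defined(USING_PETSC_DLL)           /* e.g. set in client projects */
#define PETSC_DLL_EXPORT __declspec(dllimport)
#else
#define PETSC_DLL_EXPORT                 /* static library build */
#endif

/* A declaration in a header would then look like: */
PETSC_DLL_EXPORT int SomeExportedFunction(void);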
Chetan From bui at calcreek.com Fri Oct 3 12:15:02 2008 From: bui at calcreek.com (Thuc Bui) Date: Fri, 3 Oct 2008 10:15:02 -0700 Subject: Build Petsc DLL's with Visual Studio C++ 2003 compiler In-Reply-To: <7119DFF445C34B2A8A1280E9C7514EB1@spiff> References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> <48E3D147.5040107@gmail.com> <9655CB22851448798E80AADB44D21119@aphrodite> <7119DFF445C34B2A8A1280E9C7514EB1@spiff> Message-ID: Thank you Satish and Chetan very much for your answers. I will try out Chetan's project/solution files this weekend. Before doing so, I will need to re-run configure.py only once per Chetan's instruction then copy and rename the build directories. I will set the configuration option --with-shared=0. If this is incorrect, please let me know. Thanks, Thuc -----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Chetan Jhurani Sent: Thursday, October 02, 2008 11:21 PM To: petsc-users at mcs.anl.gov Subject: RE: Build Petsc DLL's with Visual Studio C++ 2003 compiler On Thu, 2 Oct 2008, Satish Balay wrote: > On Thu, 2 Oct 2008, Thuc Bui wrote: > > > I attempt to build Petsc DLL libraries with configure.py using first with > > the option --with-shared=1, which is ignored as indicated in configure.log, > > and with --with-dynamic=1, which the script crashes. > > > > Is it possible to build Petsc DLL's? Does any one know how to do this? I > > would like to reduce the size of my executables since I have several using > > Petsc, and each of them is now huge comparing with that use the "old" > > Sparskit solver. > > Sorry - currently we don't have a mechanism to build dlls on > windows. So the above options [shared and dynamic] don't work on > windows. I have created a set of visual studio project/solution files. They've been tried with petsc-2.3.2-p7. There are 4 configurations -- Debug, Release, DebugDLL, and ReleaseDLL. The first two create a static lib, and the last two create a DLL (one output file per config). This results in a petsc.dll of size 3 MB in release mode. Debug mode is 8.5 MB. You can get them here - http://www.ices.utexas.edu/~chetan/petsc/ Place the files in petsc-2.3.2-p7/src directory. Run petsc config script, rename the $config directories in bmake directory to appropriate name (Debug, Release, DebugDLL, ReleaseDLL). You'll have to run petsc configure ONCE and copy directories. Change PETSC_ARCH_NAME and PETSC_NAME in petscconf.h in each directory. Preprocessor macros like PETSC_DLL_EXPORT are defined in the vcproj file. Warnings: - I'm sure I've missed some steps that I no longer remember. - Not all the files in petsc/src are compiled. Most are. - The output (objs, libs, dlls) go into non-standard directories, so change them. - Some env variables, like $(MPI_DIR), are used in include/link paths. - blas.lib and lapack.lib are assumed to exist in link path. 
Chetan From chetan at ices.utexas.edu Fri Oct 3 12:42:40 2008 From: chetan at ices.utexas.edu (Chetan Jhurani) Date: Fri, 3 Oct 2008 12:42:40 -0500 Subject: Build Petsc DLL's with Visual Studio C++ 2003 compiler In-Reply-To: References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> <48E3D147.5040107@gmail.com> <9655CB22851448798E80AADB44D21119@aphrodite> <7119DFF445C34B2A8A1280E9C7514EB1@spiff> Message-ID: <3B7CA6EC2BA54FEAB501BB3AF010AA83@spiff> > From: Thuc Bui > > Thank you Satish and Chetan very much for your answers. I will try out > Chetan's project/solution files this weekend. Before doing so, I will need > to re-run configure.py only once per Chetan's instruction then copy and > rename the build directories. I will set the configuration option > --with-shared=0. If this is incorrect, please let me know. The options I had used are written below. Didn't know then that with-dynamic=1 and with-shared=1 don't work on Windows. I was building under the assumption of a normal petsc build, and not for the purpose of using a visual studio project later on (hence the --download-f-blas-lapack). Actually, I remember now why I made this into a separate project. My main exe was using multiple libraries (including petsc). Petsc used the C multithreaded static run time library. Some other one used C multithreaded dynamic run time library. I didn't want to play with the petsc build system on windows. It was extremely slow. I believe it took hours on my machine because process spawing is slower. Or perhaps the python interpreter was slow. So I just took the src tree and created a vcproj out of it and used the appropriate run-time library. Now it takes just 4 minutes to build the full petsc tree (approx 725 object files). I digress. But that was a justification, and some info if you need to know which runtime lib is being used. If you have problems about vcproj/sln files do let me know. I'm sure this will be a bumpy ride and unlikely to be "official supported" either. I forgot to mention the project contains just the C files. No fortran file is compiled. The configuration options -- --with-shared=1 --with-dynamic=1 --download-f-blas-lapack -with-clanguage=C++ -with-debugging=yes --> Debug mode, could be changed. --with-gnu-compilers=0 --with-cc=cl --with-cxx=cl --with-fc=ifort --with-mpi=1 Chetan > Thanks, > Thuc > > -----Original Message----- > From: owner-petsc-users at mcs.anl.gov > [mailto:owner-petsc-users at mcs.anl.gov] > On Behalf Of Chetan Jhurani > Sent: Thursday, October 02, 2008 11:21 PM > To: petsc-users at mcs.anl.gov > Subject: RE: Build Petsc DLL's with Visual Studio C++ 2003 compiler > > > On Thu, 2 Oct 2008, Satish Balay wrote: > > On Thu, 2 Oct 2008, Thuc Bui wrote: > > > > > I attempt to build Petsc DLL libraries with configure.py > using first > with > > > the option --with-shared=1, which is ignored as indicated in > configure.log, > > > and with --with-dynamic=1, which the script crashes. > > > > > > Is it possible to build Petsc DLL's? Does any one know > how to do this? I > > > would like to reduce the size of my executables since I > have several > using > > > Petsc, and each of them is now huge comparing with that > use the "old" > > > Sparskit solver. > > > > Sorry - currently we don't have a mechanism to build dlls on > > windows. So the above options [shared and dynamic] don't work on > > windows. > > > I have created a set of visual studio project/solution files. 
> They've been > tried with petsc-2.3.2-p7. There are 4 configurations -- > Debug, Release, > DebugDLL, and ReleaseDLL. The first two create a static lib, > and the last > two create a DLL (one output file per config). > > This results in a petsc.dll of size 3 MB in release mode. > Debug mode is 8.5 > MB. > > You can get them here - http://www.ices.utexas.edu/~chetan/petsc/ > > Place the files in petsc-2.3.2-p7/src directory. Run petsc > config script, > rename the $config directories in bmake directory to appropriate name > (Debug, > Release, DebugDLL, ReleaseDLL). You'll have to run petsc > configure ONCE and > copy directories. Change PETSC_ARCH_NAME and PETSC_NAME in > petscconf.h > in each directory. Preprocessor macros like PETSC_DLL_EXPORT > are defined > in the vcproj file. > > Warnings: > > - I'm sure I've missed some steps that I no longer remember. > - Not all the files in petsc/src are compiled. Most are. > - The output (objs, libs, dlls) go into non-standard > directories, so change > them. > - Some env variables, like $(MPI_DIR), are used in include/link paths. > - blas.lib and lapack.lib are assumed to exist in link path. > > Chetan > > > From bui at calcreek.com Fri Oct 3 15:44:58 2008 From: bui at calcreek.com (Thuc Bui) Date: Fri, 3 Oct 2008 13:44:58 -0700 Subject: Build Petsc DLL's with Visual Studio C++ 2003 compiler In-Reply-To: <3B7CA6EC2BA54FEAB501BB3AF010AA83@spiff> References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> <48E3D147.5040107@gmail.com> <9655CB22851448798E80AADB44D21119@aphrodite> <7119DFF445C34B2A8A1280E9C7514EB1@spiff> <3B7CA6EC2BA54FEAB501BB3AF010AA83@spiff> Message-ID: Thanks a lot Chetan, for taking your time to provide insightful information. I do build Petsc without Fortran. I will let you know how it goes. Thanks again for your help. Thuc -----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Chetan Jhurani Sent: Friday, October 03, 2008 10:43 AM To: petsc-users at mcs.anl.gov Subject: RE: Build Petsc DLL's with Visual Studio C++ 2003 compiler > From: Thuc Bui > > Thank you Satish and Chetan very much for your answers. I will try out > Chetan's project/solution files this weekend. Before doing so, I will need > to re-run configure.py only once per Chetan's instruction then copy and > rename the build directories. I will set the configuration option > --with-shared=0. If this is incorrect, please let me know. The options I had used are written below. Didn't know then that with-dynamic=1 and with-shared=1 don't work on Windows. I was building under the assumption of a normal petsc build, and not for the purpose of using a visual studio project later on (hence the --download-f-blas-lapack). Actually, I remember now why I made this into a separate project. My main exe was using multiple libraries (including petsc). Petsc used the C multithreaded static run time library. Some other one used C multithreaded dynamic run time library. I didn't want to play with the petsc build system on windows. It was extremely slow. I believe it took hours on my machine because process spawing is slower. Or perhaps the python interpreter was slow. So I just took the src tree and created a vcproj out of it and used the appropriate run-time library. Now it takes just 4 minutes to build the full petsc tree (approx 725 object files). I digress. 
But that was a justification, and some info if you need to know which runtime lib is being used. If you have problems about vcproj/sln files do let me know. I'm sure this will be a bumpy ride and unlikely to be "official supported" either. I forgot to mention the project contains just the C files. No fortran file is compiled. The configuration options -- --with-shared=1 --with-dynamic=1 --download-f-blas-lapack -with-clanguage=C++ -with-debugging=yes --> Debug mode, could be changed. --with-gnu-compilers=0 --with-cc=cl --with-cxx=cl --with-fc=ifort --with-mpi=1 Chetan > Thanks, > Thuc > > -----Original Message----- > From: owner-petsc-users at mcs.anl.gov > [mailto:owner-petsc-users at mcs.anl.gov] > On Behalf Of Chetan Jhurani > Sent: Thursday, October 02, 2008 11:21 PM > To: petsc-users at mcs.anl.gov > Subject: RE: Build Petsc DLL's with Visual Studio C++ 2003 compiler > > > On Thu, 2 Oct 2008, Satish Balay wrote: > > On Thu, 2 Oct 2008, Thuc Bui wrote: > > > > > I attempt to build Petsc DLL libraries with configure.py > using first > with > > > the option --with-shared=1, which is ignored as indicated in > configure.log, > > > and with --with-dynamic=1, which the script crashes. > > > > > > Is it possible to build Petsc DLL's? Does any one know > how to do this? I > > > would like to reduce the size of my executables since I > have several > using > > > Petsc, and each of them is now huge comparing with that > use the "old" > > > Sparskit solver. > > > > Sorry - currently we don't have a mechanism to build dlls on > > windows. So the above options [shared and dynamic] don't work on > > windows. > > > I have created a set of visual studio project/solution files. > They've been > tried with petsc-2.3.2-p7. There are 4 configurations -- > Debug, Release, > DebugDLL, and ReleaseDLL. The first two create a static lib, > and the last > two create a DLL (one output file per config). > > This results in a petsc.dll of size 3 MB in release mode. > Debug mode is 8.5 > MB. > > You can get them here - http://www.ices.utexas.edu/~chetan/petsc/ > > Place the files in petsc-2.3.2-p7/src directory. Run petsc > config script, > rename the $config directories in bmake directory to appropriate name > (Debug, > Release, DebugDLL, ReleaseDLL). You'll have to run petsc > configure ONCE and > copy directories. Change PETSC_ARCH_NAME and PETSC_NAME in > petscconf.h > in each directory. Preprocessor macros like PETSC_DLL_EXPORT > are defined > in the vcproj file. > > Warnings: > > - I'm sure I've missed some steps that I no longer remember. > - Not all the files in petsc/src are compiled. Most are. > - The output (objs, libs, dlls) go into non-standard > directories, so change > them. > - Some env variables, like $(MPI_DIR), are used in include/link paths. > - blas.lib and lapack.lib are assumed to exist in link path. > > Chetan > > > From bhatiamanav at gmail.com Sat Oct 4 17:58:37 2008 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Sat, 4 Oct 2008 18:58:37 -0400 Subject: matrix inverse and multiplication Message-ID: Hi I have a problem with two sparse matrices: A and B where I need to calculate the following: C = A^{-1} B What is the best way to do this? Is there a better way to do it than to calculate the columns of C by a linear solution with individual columns of B as rhs vectors? Also, is it possible to have a-priori knowledge of sparsity pattern of C for two sparse matrices A and B? 
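A rough sketch of the MatMatSolve() route suggested in the reply that follows, assuming the petsc-dev factorization interface of that time (the exact call and constant names differ between releases, so treat this as illustrative only). B and C are dense; A can stay sparse.

/* Sketch only: C = A^{-1} B via an LU factorization of A followed by
 * MatMatSolve().  Interface names are the petsc-dev ones of the time
 * and may not match released versions exactly. */
Mat           A, B, C, F;     /* A sparse; B, C dense (e.g. MATSEQDENSE) */
IS            rowperm, colperm;
MatFactorInfo info;

MatFactorInfoInitialize(&info);
MatGetOrdering(A, MATORDERING_NATURAL, &rowperm, &colperm);
MatGetFactor(A, MAT_SOLVER_PETSC, MAT_FACTOR_LU, &F);
MatLUFactorSymbolic(F, A, rowperm, colperm, &info);
MatLUFactorNumeric(F, A, &info);

MatDuplicate(B, MAT_DO_NOT_COPY_VALUES, &C);
MatMatSolve(F, B, C);         /* one triangular solve per column of B */

Either way the result C comes out dense, which is why B is supplied as a dense matrix here.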
Regards, Manav From bsmith at mcs.anl.gov Sat Oct 4 18:11:16 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 4 Oct 2008 18:11:16 -0500 Subject: matrix inverse and multiplication In-Reply-To: References: Message-ID: <29FDD2DA-61A2-4657-A849-2C0CF1FB72AF@mcs.anl.gov> In petsc-dev http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html use MatMatSolve() after doing a MatLUFactorNumeric(). C will always be dense, hence you input for B a SeqDense matrix, not a sparse matrix. A can be sparse. Barry On Oct 4, 2008, at 5:58 PM, Manav Bhatia wrote: > Hi > > I have a problem with two sparse matrices: A and B where I need to > calculate the following: > > C = A^{-1} B > > What is the best way to do this? Is there a better way to do it > than to calculate the columns of C by a linear solution with > individual columns of B as rhs vectors? > > Also, is it possible to have a-priori knowledge of sparsity > pattern of C for two sparse matrices A and B? > > Regards, > Manav > > From bui at calcreek.com Sun Oct 5 13:57:46 2008 From: bui at calcreek.com (Thuc Bui) Date: Sun, 5 Oct 2008 11:57:46 -0700 Subject: Build Petsc DLL's with Visual Studio C++ 2003 compiler In-Reply-To: References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> <48E3D147.5040107@gmail.com> <9655CB22851448798E80AADB44D21119@aphrodite> <7119DFF445C34B2A8A1280E9C7514EB1@spiff> <3B7CA6EC2BA54FEAB501BB3AF010AA83@spiff> Message-ID: Hi Chetan, I was able to build Petsc-2.3.3-p15 for the Debug and Release builds using your vcproj file albeit having to remove and/or copy several files to different locations in the build directory tree. When I got to the ReleaseDLL build, I had to edit several header files to add to the function and variable declarations with PETSCMAT_DLLEXPORT, PETSCVEC_DLLEXPORT, etc. to get the codes compiled. Unfortunately, when time to link, the compiler wants the file petsc.def. I believe this file is the Visual Studio DLL interface file, and it is specified in the vcproj for the ReleaseDLL and DebugDLL configurations. By any chance do you have this file available? Thanks a lot for your help, Thuc -----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Thuc Bui Sent: Friday, October 03, 2008 1:45 PM To: petsc-users at mcs.anl.gov Subject: RE: Build Petsc DLL's with Visual Studio C++ 2003 compiler Thanks a lot Chetan, for taking your time to provide insightful information. I do build Petsc without Fortran. I will let you know how it goes. Thanks again for your help. Thuc -----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Chetan Jhurani Sent: Friday, October 03, 2008 10:43 AM To: petsc-users at mcs.anl.gov Subject: RE: Build Petsc DLL's with Visual Studio C++ 2003 compiler > From: Thuc Bui > > Thank you Satish and Chetan very much for your answers. I will try out > Chetan's project/solution files this weekend. Before doing so, I will need > to re-run configure.py only once per Chetan's instruction then copy and > rename the build directories. I will set the configuration option > --with-shared=0. If this is incorrect, please let me know. The options I had used are written below. Didn't know then that with-dynamic=1 and with-shared=1 don't work on Windows. 
I was building under the assumption of a normal petsc build, and not for the purpose of using a visual studio project later on (hence the --download-f-blas-lapack). Actually, I remember now why I made this into a separate project. My main exe was using multiple libraries (including petsc). Petsc used the C multithreaded static run time library. Some other one used C multithreaded dynamic run time library. I didn't want to play with the petsc build system on windows. It was extremely slow. I believe it took hours on my machine because process spawing is slower. Or perhaps the python interpreter was slow. So I just took the src tree and created a vcproj out of it and used the appropriate run-time library. Now it takes just 4 minutes to build the full petsc tree (approx 725 object files). I digress. But that was a justification, and some info if you need to know which runtime lib is being used. If you have problems about vcproj/sln files do let me know. I'm sure this will be a bumpy ride and unlikely to be "official supported" either. I forgot to mention the project contains just the C files. No fortran file is compiled. The configuration options -- --with-shared=1 --with-dynamic=1 --download-f-blas-lapack -with-clanguage=C++ -with-debugging=yes --> Debug mode, could be changed. --with-gnu-compilers=0 --with-cc=cl --with-cxx=cl --with-fc=ifort --with-mpi=1 Chetan > Thanks, > Thuc > > -----Original Message----- > From: owner-petsc-users at mcs.anl.gov > [mailto:owner-petsc-users at mcs.anl.gov] > On Behalf Of Chetan Jhurani > Sent: Thursday, October 02, 2008 11:21 PM > To: petsc-users at mcs.anl.gov > Subject: RE: Build Petsc DLL's with Visual Studio C++ 2003 compiler > > > On Thu, 2 Oct 2008, Satish Balay wrote: > > On Thu, 2 Oct 2008, Thuc Bui wrote: > > > > > I attempt to build Petsc DLL libraries with configure.py > using first > with > > > the option --with-shared=1, which is ignored as indicated in > configure.log, > > > and with --with-dynamic=1, which the script crashes. > > > > > > Is it possible to build Petsc DLL's? Does any one know > how to do this? I > > > would like to reduce the size of my executables since I > have several > using > > > Petsc, and each of them is now huge comparing with that > use the "old" > > > Sparskit solver. > > > > Sorry - currently we don't have a mechanism to build dlls on > > windows. So the above options [shared and dynamic] don't work on > > windows. > > > I have created a set of visual studio project/solution files. > They've been > tried with petsc-2.3.2-p7. There are 4 configurations -- > Debug, Release, > DebugDLL, and ReleaseDLL. The first two create a static lib, > and the last > two create a DLL (one output file per config). > > This results in a petsc.dll of size 3 MB in release mode. > Debug mode is 8.5 > MB. > > You can get them here - http://www.ices.utexas.edu/~chetan/petsc/ > > Place the files in petsc-2.3.2-p7/src directory. Run petsc > config script, > rename the $config directories in bmake directory to appropriate name > (Debug, > Release, DebugDLL, ReleaseDLL). You'll have to run petsc > configure ONCE and > copy directories. Change PETSC_ARCH_NAME and PETSC_NAME in > petscconf.h > in each directory. Preprocessor macros like PETSC_DLL_EXPORT > are defined > in the vcproj file. > > Warnings: > > - I'm sure I've missed some steps that I no longer remember. > - Not all the files in petsc/src are compiled. Most are. > - The output (objs, libs, dlls) go into non-standard > directories, so change > them. 
> - Some env variables, like $(MPI_DIR), are used in include/link paths. > - blas.lib and lapack.lib are assumed to exist in link path. > > Chetan > > > From chetan at ices.utexas.edu Sun Oct 5 14:20:04 2008 From: chetan at ices.utexas.edu (Chetan Jhurani) Date: Sun, 5 Oct 2008 14:20:04 -0500 Subject: Build Petsc DLL's with Visual Studio C++ 2003 compiler In-Reply-To: References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> <48E3D147.5040107@gmail.com> <9655CB22851448798E80AADB44D21119@aphrodite> <7119DFF445C34B2A8A1280E9C7514EB1@spiff> <3B7CA6EC2BA54FEAB501BB3AF010AA83@spiff> Message-ID: > From: Thuc Bui > > I was able to build Petsc-2.3.3-p15 for the Debug and Release builds using > your vcproj file albeit having to remove and/or copy several files to > different locations in the build directory tree. I guess this is because of structural differences between 2.3.3-p15 and 2.3.2-p7. > When I got to the ReleaseDLL build, I had to edit several header files to > add to the function and variable declarations with PETSCMAT_DLLEXPORT, > PETSCVEC_DLLEXPORT, etc. to get the codes compiled. Unfortunately, when time > to link, the compiler wants the file petsc.def. I believe this file is the > Visual Studio DLL interface file, and it is specified in the vcproj for the > ReleaseDLL and DebugDLL configurations. By any chance do you have this file > available? My mistake. You don't need it. Just remove the petsc.def entry from Project Properties -> Linker -> Input (for both ReleaseDLL and DebugDLL) and it should link fine. That file is not really needed to link since exports are done via compile time declarations. I had a default petsc.def file in my source tree when I was testing symbol exports and hence it worked here. Chetan From Hung.V.Nguyen at usace.army.mil Thu Oct 9 09:04:38 2008 From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS) Date: Thu, 9 Oct 2008 09:04:38 -0500 Subject: question Message-ID: All, I am looking for an example code that read A (in csr format) and b. Then it builds A and b petsc format and solves Ax = b. I found an example below, but it seems that it doesn't work. If you have similar like an example below or let me know where is a problem, I would appreciate very much. Thanks, -Hung --- /* Purpose: Test a sparse matrix solver. */ #include #include "petscksp.h" int main(int argc,char **args) { /* My sample sparse matrix A */ /* 11.0 0 0 14.0 0 21.0 22.0 0 24.0 0 31.0 0 33. 34.0 35.0 0 0 43. 44.0 0 0 0 0 0 55. */ const int sizeMat=5; // Matrix is 5 by 5. int i,j; int nonZero=12; double val[] ={11., 14.,21., 22., 24., 31., 33., 34., 35., 43., 44., 55.}; int col_ind[]={0, 3, 0, 1, 3, 0, 2, 3, 4, 2, 3, 4}; int row_ptr[]={0, 2, 5, 9, 11, 12}; double knB[]={2.0, 0.0, 1.0, 1.0, 2.0}; double answer[]={-0.0348308, -0.152452, -0.150927, 0.170224, 0.0363636}; // calculate row_index, vector_index and number of nonzero per row: int nZperRow[]={3,4,2,1}; int row_ind[]={0,0, 1,1,1, 2,2,2,2, 3,3, 4}; int vec_ind[]={0,1,2,3,4}; double initX[]={9.,9.,9.,9.,9.}; /* PetSc codes start. */ printf("\n*** PetSC Testing phase. ***\n"); /* Create variables of PetSc */ Vec x,b,u; /* approx solution, RHS, exact solution */ /*a linear system, Ax = b. 
*/ Mat A; /* linear system matrix */ KSP ksp; /* linear solver context */ PetscInt Istart,Iend; /* Index for local matrix of each processor */ PetscInt istart,iend; /* Index for local vector of each processor */ PetscViewer viewer; PetscMPIInt rank; PetscErrorCode ierr; PetscTruth flg; static char help[] = "Parallel vector layout.\n\n"; /* Initialization of PetSc */ PetscInitialize(&argc,&args,(char*)0,help); MPI_Comm_rank(PETSC_COMM_WORLD,&rank); /* Create parallel matrix, specifying only its global dimensions. : When using MatCreate(), the matrix format can be specified at runtime. Also, the parallel partitioning of the matrix is determined by PETSc at runtime. Performance tuning note: For problems of substantial size, preallocation of matrix memory is crucial for attaining good performance. See the matrix chapter of the users manual for details. - Allocates memory for a sparse parallel matrix in AIJ format (the default parallel PETSc format: Compressed Sparse Row). */ ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,sizeMat,sizeMat);CHKERRQ(ierr); ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr); ierr = MatSetFromOptions(A);CHKERRQ(ierr); /* Currently, all PETSc parallel matrix formats are partitioned by contiguous chunks of rows across the processors. Determine which rows of the matrix are locally owned. */ ierr = MatGetOwnershipRange(A,&Istart,&Iend);CHKERRQ(ierr); printf(" Rank= %d, Istart_row= %d, Iend_row+1 = %d \n", rank, Istart, Iend); /* ierr = MatMPIAIJSetPreallocationCSR(A,row_ptr,col_ind,PETSC_NULL);CHKERRQ(ierr) ; // Standard format, CSR ierr = MatSeqAIJSetPreallocation(A,0,nZperRow);CHKERRQ(ierr); // Defining the number of nonzero for each row. */ ierr = MatMPIAIJSetPreallocationCSR(A,row_ptr,col_ind,PETSC_NULL);CHKERRQ(ierr) ; // Standard format, CSR ierr = MatSeqAIJSetPreallocation(A,0,nZperRow);CHKERRQ(ierr); // Defining the number of nonzero for each row. /* Set matrix elements in parallel. - Each processor needs to insert only elements that it owns locally (but any non-local elements will be sent to the appropriate processor during matrix assembly). - Always specify global rows and columns of matrix entries. */ /* Method 1: Efficient method. */ for (i=row_ptr[Istart]; i References: Message-ID: <3CB9DEDE-AF58-4CC3-850B-0C728CAC224D@mcs.anl.gov> You can use the utilities: MatCreateSeqAIJWithArrays() or MatCreateMPIAIWithArrays() they handle all the details for you. Barry On Oct 9, 2008, at 9:04 AM, Nguyen, Hung V ERDC-ITL-MS wrote: > All, > > I am looking for an example code that read A (in csr format) and b. > Then it > builds A and b petsc format and solves Ax = b. > > I found an example below, but it seems that it doesn't work. > > If you have similar like an example below or let me know where is a > problem, > I would appreciate very much. > > Thanks, > > -Hung > --- > /* > Purpose: Test a sparse matrix solver. > */ > #include > #include "petscksp.h" > > int main(int argc,char **args) > { > /* My sample sparse matrix A */ > > /* > 11.0 0 0 14.0 0 > 21.0 22.0 0 24.0 0 > 31.0 0 33. 34.0 35.0 > 0 0 43. 44.0 0 > 0 0 0 0 55. > */ > > > const int sizeMat=5; // Matrix is 5 by 5. 
> int i,j; > int nonZero=12; > double val[] ={11., 14.,21., 22., 24., 31., 33., 34., 35., > 43., 44., > 55.}; > int col_ind[]={0, 3, 0, 1, 3, 0, 2, 3, 4, 2, 3, 4}; > int row_ptr[]={0, 2, 5, 9, 11, 12}; > double knB[]={2.0, 0.0, 1.0, 1.0, 2.0}; > double answer[]={-0.0348308, -0.152452, -0.150927, 0.170224, > 0.0363636}; > > // calculate row_index, vector_index and number of nonzero per row: > > int nZperRow[]={3,4,2,1}; > int row_ind[]={0,0, 1,1,1, 2,2,2,2, 3,3, 4}; > int vec_ind[]={0,1,2,3,4}; > double initX[]={9.,9.,9.,9.,9.}; > > > > /* > PetSc codes start. > */ > printf("\n*** PetSC Testing phase. ***\n"); > /* Create variables of PetSc */ > Vec x,b,u; /* approx solution, RHS, exact solution > */ /*a > linear system, Ax = b. */ > Mat A; /* linear system matrix */ > KSP ksp; /* linear solver context */ > PetscInt Istart,Iend; /* Index for local matrix of > each > processor */ > PetscInt istart,iend; /* Index for local vector of > each > processor */ > PetscViewer viewer; > PetscMPIInt rank; > PetscErrorCode ierr; > PetscTruth flg; > > static char help[] = "Parallel vector layout.\n\n"; > /* Initialization of PetSc */ > PetscInitialize(&argc,&args,(char*)0,help); > MPI_Comm_rank(PETSC_COMM_WORLD,&rank); > > /* > Create parallel matrix, specifying only its global > dimensions. : > When using MatCreate(), the matrix format can be > specified at > runtime. Also, the parallel partitioning of the > matrix is > determined by PETSc at runtime. > > Performance tuning note: For problems of substantial > size, > preallocation of matrix memory is crucial for > attaining good > performance. See the matrix chapter of the users > manual for > details. > - Allocates memory for a sparse parallel matrix in > AIJ format > > (the default parallel PETSc format: Compressed Sparse > Row). > */ > ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); > ierr = > MatSetSizes > (A,PETSC_DECIDE,PETSC_DECIDE,sizeMat,sizeMat);CHKERRQ(ierr); > ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr); > ierr = MatSetFromOptions(A);CHKERRQ(ierr); > > /* > Currently, all PETSc parallel matrix formats are > partitioned > by > contiguous chunks of rows across the processors. > Determine > which > rows of the matrix are locally owned. > */ > ierr = MatGetOwnershipRange(A,&Istart,&Iend);CHKERRQ(ierr); > printf(" Rank= %d, Istart_row= %d, Iend_row+1 = %d \n", rank, > Istart, > Iend); > > /* > ierr = > MatMPIAIJSetPreallocationCSR > (A,row_ptr,col_ind,PETSC_NULL);CHKERRQ(ierr) > ; > // Standard format, CSR > ierr = > MatSeqAIJSetPreallocation(A,0,nZperRow);CHKERRQ(ierr); > // Defining the number of nonzero for each row. > */ > > ierr = > MatMPIAIJSetPreallocationCSR > (A,row_ptr,col_ind,PETSC_NULL);CHKERRQ(ierr) > ; > // Standard format, CSR > ierr = MatSeqAIJSetPreallocation(A, > 0,nZperRow);CHKERRQ(ierr); // > Defining the number of nonzero for each row. > > /* > Set matrix elements in parallel. > - Each processor needs to insert only elements that > it owns > locally (but any non-local elements will be > sent to > the > appropriate processor during matrix assembly). > - Always specify global rows and columns of matrix > entries. > */ > /* Method 1: Efficient method. 
*/ > for (i=row_ptr[Istart]; i { > //ierr = > MatSetValue > (A,row_ind[i],col_ind[i],val[i],INSERT_VALUES);CHKERRQ(ierr); > ierr = > MatSetValues(A,1,&(row_ind[i]), > 1,&(col_ind[i]),&(val[i]),INSERT_VALUES); > CHKER > RQ(ierr); > } > > ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > > /* > Visaualize a matrix. Set a viewer's style. > To see a dense matrix, use the following two lines: > Line1: viewer = PETSC_VIEWER_STDOUT_(PETSC_COMM_WORLD); > Line2: ierr = > PetscViewerSetFormat(viewer,PETSC_VIEWER_ASCII_DENSE); > */ > ierr = MatView(A,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); > > /* > Create parallel vectors. > */ > ierr = VecCreate(PETSC_COMM_WORLD,&u);CHKERRQ(ierr); > ierr = VecSetSizes(u,PETSC_DECIDE,sizeMat);CHKERRQ(ierr); > ierr = VecSetFromOptions(u);CHKERRQ(ierr); > ierr = VecDuplicate(u,&b);CHKERRQ(ierr); > ierr = VecDuplicate(b,&x);CHKERRQ(ierr); > /* > PETSc parallel vectors are partitioned by > contiguous chunks of rows across the processors. > Determine > which vector are locally owned. > */ > VecGetOwnershipRange(b,&istart,&iend); > /* > Insert vector values > */ > VecSetValues(u,sizeMat,vec_ind,answer,INSERT_VALUES); > VecSetValues(x,sizeMat,vec_ind,initX,INSERT_VALUES); > VecSetValues(b,sizeMat,vec_ind,knB,INSERT_VALUES); > /* > Assemble vector, using the 2-step process: > VecAssemblyBegin(), VecAssemblyEnd() > Computations can be done while messages are in > transition > by placing code between these two statements. > */ > VecAssemblyBegin(u); VecAssemblyEnd(u); > VecAssemblyBegin(x); VecAssemblyEnd(x); > VecAssemblyBegin(b); VecAssemblyEnd(b); > /* > View the exact solution vector if desired > */ > if(rank==0) printf("Vector u: \n"); > flg = 1; > if (flg) {ierr = > VecView(u,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);} > if(rank==0) printf("Vector x: \n"); > if (flg) {ierr = > VecView(x,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);} > if(rank==0) printf("Vector b: \n"); > if (flg) {ierr = > VecView(b,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);} > > /* > Create the linear solver and set various options > */ > KSPCreate(PETSC_COMM_WORLD,&ksp); > KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN); > KSPSetInitialGuessNonzero(ksp,PETSC_TRUE); > KSPSetFromOptions(ksp); > > /* > Solve the linear system > */ > KSPSolve(ksp,b,x); > > if(rank==0) printf("Solved Vector x: \n"); > if (flg) {ierr = > VecView(x,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);} > > /* > Free work space. All PETSc objects should be > destroyed when > they are no longer needed. > */ > ierr = KSPDestroy(ksp);CHKERRQ(ierr); > ierr = MatDestroy(A);CHKERRQ(ierr); > ierr = VecDestroy(u);CHKERRQ(ierr); ierr = > VecDestroy(x);CHKERRQ(ierr); > ierr = VecDestroy(b);CHKERRQ(ierr); > > ierr = PetscFinalize();CHKERRQ(ierr); > > printf("\n"); > return 0; > } > From mafunk at nmsu.edu Thu Oct 9 13:22:59 2008 From: mafunk at nmsu.edu (Matt Funk) Date: Thu, 9 Oct 2008 12:22:59 -0600 Subject: analyze preconditioned operator? Message-ID: <200810091223.00078.mafunk@nmsu.edu> Hi, i am using PETSC and on top of it also SLEPC to do some matrix analysis. In Slepc i simply pass it the Petsc matrix to analyze it. However, is there anyway to analyze the preconditioned matrix at all? The only ways i can think of are either by getting the preconditioned matrix back out of PETSC (i.e. some sort of pointer to it if it is stored at all), or by Slepc applying the preconditioner (i tried to contact the developers but no one has responded to that question so far)? 
I looked around a little bit but i could not find anything on this. Does anyone have experience with this? thanks matt From dalcinl at gmail.com Thu Oct 9 13:37:54 2008 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Thu, 9 Oct 2008 15:37:54 -0300 Subject: analyze preconditioned operator? In-Reply-To: <200810091223.00078.mafunk@nmsu.edu> References: <200810091223.00078.mafunk@nmsu.edu> Message-ID: Take a look at KSPComputeExplicitOperator() routine. Note that it could be rather slow if your large matrices. Additionally, it returns a dense matrix in the uniprocessor case. It this trick does not fit your need, then perhaps you could use a shell matrix having the actual Mat and a PC, but that approach is a bit harder to setup. On Thu, Oct 9, 2008 at 3:22 PM, Matt Funk wrote: > Hi, > > i am using PETSC and on top of it also SLEPC to do some matrix analysis. > In Slepc i simply pass it the Petsc matrix to analyze it. > However, is there anyway to analyze the preconditioned matrix at all? > > The only ways i can think of are either by getting the preconditioned matrix > back out of PETSC (i.e. some sort of pointer to it if it is stored at all), > or by Slepc applying the preconditioner (i tried to contact the developers > but no one has responded to that question so far)? > > I looked around a little bit but i could not find anything on this. > Does anyone have experience with this? > > thanks > matt > > -- Lisandro Dalc?n --------------- Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) PTLC - G?emes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From jed at 59A2.org Thu Oct 9 14:02:45 2008 From: jed at 59A2.org (Jed Brown) Date: Thu, 9 Oct 2008 21:02:45 +0200 Subject: analyze preconditioned operator? In-Reply-To: <200810091223.00078.mafunk@nmsu.edu> References: <200810091223.00078.mafunk@nmsu.edu> Message-ID: <383ade90810091202r7a67fbf8wa80f3bbbaa86f8b7@mail.gmail.com> On Thu, Oct 9, 2008 at 20:22, Matt Funk wrote: > i am using PETSC and on top of it also SLEPC to do some matrix analysis. > In Slepc i simply pass it the Petsc matrix to analyze it. > However, is there anyway to analyze the preconditioned matrix at all? You can certainly write a MatShell which applies B = P^-1 A and hand this to SLEPc with no preconditioner. This seems like a common thing so maybe SLEPc has an option to use the preconditioned operator. Jed From Hung.V.Nguyen at usace.army.mil Thu Oct 9 15:21:22 2008 From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS) Date: Thu, 9 Oct 2008 15:21:22 -0500 Subject: question In-Reply-To: <3CB9DEDE-AF58-4CC3-850B-0C728CAC224D@mcs.anl.gov> References: <3CB9DEDE-AF58-4CC3-850B-0C728CAC224D@mcs.anl.gov> Message-ID: I am having trouble to use MatCreateMPIAIWithArrays(). Do you have any example of using this function? Thanks -----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Barry Smith Sent: Thursday, October 09, 2008 9:24 AM To: petsc-users at mcs.anl.gov Subject: Re: question You can use the utilities: MatCreateSeqAIJWithArrays() or MatCreateMPIAIWithArrays() they handle all the details for you. Barry On Oct 9, 2008, at 9:04 AM, Nguyen, Hung V ERDC-ITL-MS wrote: > All, > > I am looking for an example code that read A (in csr format) and b. > Then it > builds A and b petsc format and solves Ax = b. 
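A minimal sketch of the sequential routine suggested above, using CSR arrays like those in the quoted example (variable names and setup are illustrative; the parallel variant, MatCreateMPIAIJWithArrays(), takes the locally owned block of rows instead):

/* Hypothetical sketch: wrap existing CSR arrays in a PETSc Mat without
 * copying them, then solve Ax = b with a KSP.  Sequential case only. */
PetscInt    n         = 5;
PetscInt    row_ptr[] = {0, 2, 5, 9, 11, 12};
PetscInt    col_ind[] = {0, 3, 0, 1, 3, 0, 2, 3, 4, 2, 3, 4};
PetscScalar val[]     = {11, 14, 21, 22, 24, 31, 33, 34, 35, 43, 44, 55};
Mat         A;
Vec         b, x;
KSP         ksp;

MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, n, n, row_ptr, col_ind, val, &A);
VecCreateSeq(PETSC_COMM_SELF, n, &b);
VecDuplicate(b, &x);
/* ... fill b with VecSetValues() + VecAssemblyBegin()/VecAssemblyEnd() ... */
KSPCreate(PETSC_COMM_SELF, &ksp);
KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);
KSPSetFromOptions(ksp);
KSPSolve(ksp, b, x);

Note that the CSR arrays are used in place rather than copied, so they must stay allocated until the Mat is destroyed.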
> > I found an example below, but it seems that it doesn't work. > > If you have similar like an example below or let me know where is a > problem, I would appreciate very much. > > Thanks, > > -Hung > --- > /* > Purpose: Test a sparse matrix solver. > */ > #include > #include "petscksp.h" > > int main(int argc,char **args) > { > /* My sample sparse matrix A */ > > /* > 11.0 0 0 14.0 0 > 21.0 22.0 0 24.0 0 > 31.0 0 33. 34.0 35.0 > 0 0 43. 44.0 0 > 0 0 0 0 55. > */ > > > const int sizeMat=5; // Matrix is 5 by 5. > int i,j; > int nonZero=12; > double val[] ={11., 14.,21., 22., 24., 31., 33., 34., 35., 43., > 44., 55.}; > int col_ind[]={0, 3, 0, 1, 3, 0, 2, 3, 4, 2, 3, 4}; > int row_ptr[]={0, 2, 5, 9, 11, 12}; > double knB[]={2.0, 0.0, 1.0, 1.0, 2.0}; > double answer[]={-0.0348308, -0.152452, -0.150927, 0.170224, > 0.0363636}; > > // calculate row_index, vector_index and number of nonzero per row: > > int nZperRow[]={3,4,2,1}; > int row_ind[]={0,0, 1,1,1, 2,2,2,2, 3,3, 4}; > int vec_ind[]={0,1,2,3,4}; > double initX[]={9.,9.,9.,9.,9.}; > > > > /* > PetSc codes start. > */ > printf("\n*** PetSC Testing phase. ***\n"); > /* Create variables of PetSc */ > Vec x,b,u; /* approx solution, RHS, exact solution > */ /*a > linear system, Ax = b. */ > Mat A; /* linear system matrix */ > KSP ksp; /* linear solver context */ > PetscInt Istart,Iend; /* Index for local matrix of > each > processor */ > PetscInt istart,iend; /* Index for local vector of > each > processor */ > PetscViewer viewer; > PetscMPIInt rank; > PetscErrorCode ierr; > PetscTruth flg; > > static char help[] = "Parallel vector layout.\n\n"; > /* Initialization of PetSc */ > PetscInitialize(&argc,&args,(char*)0,help); > MPI_Comm_rank(PETSC_COMM_WORLD,&rank); > > /* > Create parallel matrix, specifying only its global > dimensions. : > When using MatCreate(), the matrix format can be > specified at > runtime. Also, the parallel partitioning of the matrix > is > determined by PETSc at runtime. > > Performance tuning note: For problems of substantial > size, > preallocation of matrix memory is crucial for attaining > good > performance. See the matrix chapter of the users manual > for details. > - Allocates memory for a sparse parallel matrix in AIJ > format > > (the default parallel PETSc format: Compressed Sparse > Row). > */ > ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); > ierr = > MatSetSizes > (A,PETSC_DECIDE,PETSC_DECIDE,sizeMat,sizeMat);CHKERRQ(ierr); > ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr); > ierr = MatSetFromOptions(A);CHKERRQ(ierr); > > /* > Currently, all PETSc parallel matrix formats are > partitioned by > contiguous chunks of rows across the processors. > Determine > which > rows of the matrix are locally owned. > */ > ierr = MatGetOwnershipRange(A,&Istart,&Iend);CHKERRQ(ierr); > printf(" Rank= %d, Istart_row= %d, Iend_row+1 = %d \n", rank, > Istart, Iend); > > /* > ierr = > MatMPIAIJSetPreallocationCSR > (A,row_ptr,col_ind,PETSC_NULL);CHKERRQ(ierr) > ; > // Standard format, CSR > ierr = > MatSeqAIJSetPreallocation(A,0,nZperRow);CHKERRQ(ierr); > // Defining the number of nonzero for each row. > */ > > ierr = > MatMPIAIJSetPreallocationCSR > (A,row_ptr,col_ind,PETSC_NULL);CHKERRQ(ierr) > ; > // Standard format, CSR > ierr = MatSeqAIJSetPreallocation(A, 0,nZperRow);CHKERRQ(ierr); > // Defining the number of nonzero for each row. > > /* > Set matrix elements in parallel. 
> - Each processor needs to insert only elements that it > owns > locally (but any non-local elements will be > sent to the > appropriate processor during matrix assembly). > - Always specify global rows and columns of matrix > entries. > */ > /* Method 1: Efficient method. */ > for (i=row_ptr[Istart]; i { > //ierr = > MatSetValue > (A,row_ind[i],col_ind[i],val[i],INSERT_VALUES);CHKERRQ(ierr); > ierr = > MatSetValues(A,1,&(row_ind[i]), > 1,&(col_ind[i]),&(val[i]),INSERT_VALUES); > CHKER > RQ(ierr); > } > > ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > > /* > Visaualize a matrix. Set a viewer's style. > To see a dense matrix, use the following two lines: > Line1: viewer = PETSC_VIEWER_STDOUT_(PETSC_COMM_WORLD); > Line2: ierr = > PetscViewerSetFormat(viewer,PETSC_VIEWER_ASCII_DENSE); > */ > ierr = MatView(A,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); > > /* > Create parallel vectors. > */ > ierr = VecCreate(PETSC_COMM_WORLD,&u);CHKERRQ(ierr); > ierr = VecSetSizes(u,PETSC_DECIDE,sizeMat);CHKERRQ(ierr); > ierr = VecSetFromOptions(u);CHKERRQ(ierr); > ierr = VecDuplicate(u,&b);CHKERRQ(ierr); > ierr = VecDuplicate(b,&x);CHKERRQ(ierr); > /* > PETSc parallel vectors are partitioned by > contiguous chunks of rows across the processors. > Determine > which vector are locally owned. > */ > VecGetOwnershipRange(b,&istart,&iend); > /* > Insert vector values > */ > VecSetValues(u,sizeMat,vec_ind,answer,INSERT_VALUES); > VecSetValues(x,sizeMat,vec_ind,initX,INSERT_VALUES); > VecSetValues(b,sizeMat,vec_ind,knB,INSERT_VALUES); > /* > Assemble vector, using the 2-step process: > VecAssemblyBegin(), VecAssemblyEnd() > Computations can be done while messages are in > transition > by placing code between these two statements. > */ > VecAssemblyBegin(u); VecAssemblyEnd(u); > VecAssemblyBegin(x); VecAssemblyEnd(x); > VecAssemblyBegin(b); VecAssemblyEnd(b); > /* > View the exact solution vector if desired > */ > if(rank==0) printf("Vector u: \n"); > flg = 1; > if (flg) {ierr = > VecView(u,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);} > if(rank==0) printf("Vector x: \n"); > if (flg) {ierr = > VecView(x,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);} > if(rank==0) printf("Vector b: \n"); > if (flg) {ierr = > VecView(b,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);} > > /* > Create the linear solver and set various options > */ > KSPCreate(PETSC_COMM_WORLD,&ksp); > KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN); > KSPSetInitialGuessNonzero(ksp,PETSC_TRUE); > KSPSetFromOptions(ksp); > > /* > Solve the linear system > */ > KSPSolve(ksp,b,x); > > if(rank==0) printf("Solved Vector x: \n"); > if (flg) {ierr = > VecView(x,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);} > > /* > Free work space. All PETSc objects should be destroyed > when they are no longer needed. > */ > ierr = KSPDestroy(ksp);CHKERRQ(ierr); > ierr = MatDestroy(A);CHKERRQ(ierr); > ierr = VecDestroy(u);CHKERRQ(ierr); ierr = > VecDestroy(x);CHKERRQ(ierr); > ierr = VecDestroy(b);CHKERRQ(ierr); > > ierr = PetscFinalize();CHKERRQ(ierr); > > printf("\n"); > return 0; > } > From knepley at gmail.com Thu Oct 9 15:27:25 2008 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 9 Oct 2008 15:27:25 -0500 Subject: question In-Reply-To: References: <3CB9DEDE-AF58-4CC3-850B-0C728CAC224D@mcs.anl.gov> Message-ID: On Thu, Oct 9, 2008 at 3:21 PM, Nguyen, Hung V ERDC-ITL-MS wrote: > I am having trouble to use MatCreateMPIAIWithArrays(). Do you have any > example of using this function? 
I would recommend writing a code that uses MatSetValues first, which you can use to check your calls to MatCreateMPIAIJWithArrays(). What error are you getting? Matt > Thanks > > > -----Original Message----- > From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On > Behalf Of Barry Smith > Sent: Thursday, October 09, 2008 9:24 AM > To: petsc-users at mcs.anl.gov > Subject: Re: question > > > You can use the utilities: MatCreateSeqAIJWithArrays() or > MatCreateMPIAIWithArrays() they > handle all the details for you. > > > > Barry > > On Oct 9, 2008, at 9:04 AM, Nguyen, Hung V ERDC-ITL-MS wrote: > >> All, >> >> I am looking for an example code that read A (in csr format) and b. >> Then it >> builds A and b petsc format and solves Ax = b. >> >> I found an example below, but it seems that it doesn't work. >> >> If you have similar like an example below or let me know where is a >> problem, I would appreciate very much. >> >> Thanks, >> >> -Hung >> --- >> /* >> Purpose: Test a sparse matrix solver. >> */ >> #include >> #include "petscksp.h" >> >> int main(int argc,char **args) >> { >> /* My sample sparse matrix A */ >> >> /* >> 11.0 0 0 14.0 0 >> 21.0 22.0 0 24.0 0 >> 31.0 0 33. 34.0 35.0 >> 0 0 43. 44.0 0 >> 0 0 0 0 55. >> */ >> >> >> const int sizeMat=5; // Matrix is 5 by 5. >> int i,j; >> int nonZero=12; >> double val[] ={11., 14.,21., 22., 24., 31., 33., 34., 35., 43., >> 44., 55.}; >> int col_ind[]={0, 3, 0, 1, 3, 0, 2, 3, 4, 2, 3, 4}; >> int row_ptr[]={0, 2, 5, 9, 11, 12}; >> double knB[]={2.0, 0.0, 1.0, 1.0, 2.0}; >> double answer[]={-0.0348308, -0.152452, -0.150927, 0.170224, >> 0.0363636}; >> >> // calculate row_index, vector_index and number of nonzero per row: >> >> int nZperRow[]={3,4,2,1}; >> int row_ind[]={0,0, 1,1,1, 2,2,2,2, 3,3, 4}; >> int vec_ind[]={0,1,2,3,4}; >> double initX[]={9.,9.,9.,9.,9.}; >> >> >> >> /* >> PetSc codes start. >> */ >> printf("\n*** PetSC Testing phase. ***\n"); >> /* Create variables of PetSc */ >> Vec x,b,u; /* approx solution, RHS, exact solution >> */ /*a >> linear system, Ax = b. */ >> Mat A; /* linear system matrix */ >> KSP ksp; /* linear solver context */ >> PetscInt Istart,Iend; /* Index for local matrix of >> each >> processor */ >> PetscInt istart,iend; /* Index for local vector of >> each >> processor */ >> PetscViewer viewer; >> PetscMPIInt rank; >> PetscErrorCode ierr; >> PetscTruth flg; >> >> static char help[] = "Parallel vector layout.\n\n"; >> /* Initialization of PetSc */ >> PetscInitialize(&argc,&args,(char*)0,help); >> MPI_Comm_rank(PETSC_COMM_WORLD,&rank); >> >> /* >> Create parallel matrix, specifying only its global >> dimensions. : >> When using MatCreate(), the matrix format can be >> specified at >> runtime. Also, the parallel partitioning of the matrix >> is >> determined by PETSc at runtime. >> >> Performance tuning note: For problems of substantial >> size, >> preallocation of matrix memory is crucial for attaining >> good >> performance. See the matrix chapter of the users manual >> for details. >> - Allocates memory for a sparse parallel matrix in AIJ >> format >> >> (the default parallel PETSc format: Compressed Sparse >> Row). 
>> */ >> ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); >> ierr = >> MatSetSizes >> (A,PETSC_DECIDE,PETSC_DECIDE,sizeMat,sizeMat);CHKERRQ(ierr); >> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr); >> ierr = MatSetFromOptions(A);CHKERRQ(ierr); >> >> /* >> Currently, all PETSc parallel matrix formats are >> partitioned by >> contiguous chunks of rows across the processors. >> Determine >> which >> rows of the matrix are locally owned. >> */ >> ierr = MatGetOwnershipRange(A,&Istart,&Iend);CHKERRQ(ierr); >> printf(" Rank= %d, Istart_row= %d, Iend_row+1 = %d \n", rank, >> Istart, Iend); >> >> /* >> ierr = >> MatMPIAIJSetPreallocationCSR >> (A,row_ptr,col_ind,PETSC_NULL);CHKERRQ(ierr) >> ; >> // Standard format, CSR >> ierr = >> MatSeqAIJSetPreallocation(A,0,nZperRow);CHKERRQ(ierr); >> // Defining the number of nonzero for each row. >> */ >> >> ierr = >> MatMPIAIJSetPreallocationCSR >> (A,row_ptr,col_ind,PETSC_NULL);CHKERRQ(ierr) >> ; >> // Standard format, CSR >> ierr = MatSeqAIJSetPreallocation(A, 0,nZperRow);CHKERRQ(ierr); >> // Defining the number of nonzero for each row. >> >> /* >> Set matrix elements in parallel. >> - Each processor needs to insert only elements that it >> owns >> locally (but any non-local elements will be >> sent to the >> appropriate processor during matrix assembly). >> - Always specify global rows and columns of matrix >> entries. >> */ >> /* Method 1: Efficient method. */ >> for (i=row_ptr[Istart]; i> { >> //ierr = >> MatSetValue >> (A,row_ind[i],col_ind[i],val[i],INSERT_VALUES);CHKERRQ(ierr); >> ierr = >> MatSetValues(A,1,&(row_ind[i]), >> 1,&(col_ind[i]),&(val[i]),INSERT_VALUES); >> CHKER >> RQ(ierr); >> } >> >> ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); >> ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); >> >> /* >> Visaualize a matrix. Set a viewer's style. >> To see a dense matrix, use the following two lines: >> Line1: viewer = PETSC_VIEWER_STDOUT_(PETSC_COMM_WORLD); >> Line2: ierr = >> PetscViewerSetFormat(viewer,PETSC_VIEWER_ASCII_DENSE); >> */ >> ierr = MatView(A,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); >> >> /* >> Create parallel vectors. >> */ >> ierr = VecCreate(PETSC_COMM_WORLD,&u);CHKERRQ(ierr); >> ierr = VecSetSizes(u,PETSC_DECIDE,sizeMat);CHKERRQ(ierr); >> ierr = VecSetFromOptions(u);CHKERRQ(ierr); >> ierr = VecDuplicate(u,&b);CHKERRQ(ierr); >> ierr = VecDuplicate(b,&x);CHKERRQ(ierr); >> /* >> PETSc parallel vectors are partitioned by >> contiguous chunks of rows across the processors. >> Determine >> which vector are locally owned. >> */ >> VecGetOwnershipRange(b,&istart,&iend); >> /* >> Insert vector values >> */ >> VecSetValues(u,sizeMat,vec_ind,answer,INSERT_VALUES); >> VecSetValues(x,sizeMat,vec_ind,initX,INSERT_VALUES); >> VecSetValues(b,sizeMat,vec_ind,knB,INSERT_VALUES); >> /* >> Assemble vector, using the 2-step process: >> VecAssemblyBegin(), VecAssemblyEnd() >> Computations can be done while messages are in >> transition >> by placing code between these two statements. 
>> */ >> VecAssemblyBegin(u); VecAssemblyEnd(u); >> VecAssemblyBegin(x); VecAssemblyEnd(x); >> VecAssemblyBegin(b); VecAssemblyEnd(b); >> /* >> View the exact solution vector if desired >> */ >> if(rank==0) printf("Vector u: \n"); >> flg = 1; >> if (flg) {ierr = >> VecView(u,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);} >> if(rank==0) printf("Vector x: \n"); >> if (flg) {ierr = >> VecView(x,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);} >> if(rank==0) printf("Vector b: \n"); >> if (flg) {ierr = >> VecView(b,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);} >> >> /* >> Create the linear solver and set various options >> */ >> KSPCreate(PETSC_COMM_WORLD,&ksp); >> KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN); >> KSPSetInitialGuessNonzero(ksp,PETSC_TRUE); >> KSPSetFromOptions(ksp); >> >> /* >> Solve the linear system >> */ >> KSPSolve(ksp,b,x); >> >> if(rank==0) printf("Solved Vector x: \n"); >> if (flg) {ierr = >> VecView(x,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);} >> >> /* >> Free work space. All PETSc objects should be destroyed >> when they are no longer needed. >> */ >> ierr = KSPDestroy(ksp);CHKERRQ(ierr); >> ierr = MatDestroy(A);CHKERRQ(ierr); >> ierr = VecDestroy(u);CHKERRQ(ierr); ierr = >> VecDestroy(x);CHKERRQ(ierr); >> ierr = VecDestroy(b);CHKERRQ(ierr); >> >> ierr = PetscFinalize();CHKERRQ(ierr); >> >> printf("\n"); >> return 0; >> } >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From mafunk at nmsu.edu Thu Oct 9 15:33:59 2008 From: mafunk at nmsu.edu (Matt Funk) Date: Thu, 9 Oct 2008 14:33:59 -0600 Subject: analyze preconditioned operator? In-Reply-To: References: <200810091223.00078.mafunk@nmsu.edu> Message-ID: <200810091433.59906.mafunk@nmsu.edu> I think this is what i want (I hope i can run it though (in terms of memory)). I had not noticed this function before. I'll give it a shot. thanks matt On Thursday 09 October 2008, you wrote: > Take a look at KSPComputeExplicitOperator() routine. Note that it > could be rather slow if your large matrices. Additionally, it returns > a dense matrix in the uniprocessor case. > > It this trick does not fit your need, then perhaps you could use a > shell matrix having the actual Mat and a PC, but that approach is a > bit harder to setup. > > On Thu, Oct 9, 2008 at 3:22 PM, Matt Funk wrote: > > Hi, > > > > i am using PETSC and on top of it also SLEPC to do some matrix analysis. > > In Slepc i simply pass it the Petsc matrix to analyze it. > > However, is there anyway to analyze the preconditioned matrix at all? > > > > The only ways i can think of are either by getting the preconditioned > > matrix back out of PETSC (i.e. some sort of pointer to it if it is stored > > at all), or by Slepc applying the preconditioner (i tried to contact the > > developers but no one has responded to that question so far)? > > > > I looked around a little bit but i could not find anything on this. > > Does anyone have experience with this? > > > > thanks > > matt From mafunk at nmsu.edu Thu Oct 9 15:38:00 2008 From: mafunk at nmsu.edu (Matt Funk) Date: Thu, 9 Oct 2008 14:38:00 -0600 Subject: analyze preconditioned operator? 
In-Reply-To: <383ade90810091202r7a67fbf8wa80f3bbbaa86f8b7@mail.gmail.com> References: <200810091223.00078.mafunk@nmsu.edu> <383ade90810091202r7a67fbf8wa80f3bbbaa86f8b7@mail.gmail.com> Message-ID: <200810091438.01119.mafunk@nmsu.edu> Hi Jed, what you suggest seems like it might be even more efficient for sparse matrices (which i am using) then using the KSPComputeExplicitOperator() routine. However, what kind of call do i make to apply the preconditioner to the matrix. I have not really ever written a MatShell but it seems like there should still be some PetscFunction that allows me to apply the preconditioner. Or am i misunderstanding you ... ? matt ps: also, did you mean to write B=PA where A is my original preconditioner? On Thursday 09 October 2008, Jed Brown wrote: > On Thu, Oct 9, 2008 at 20:22, Matt Funk wrote: > > i am using PETSC and on top of it also SLEPC to do some matrix analysis. > > In Slepc i simply pass it the Petsc matrix to analyze it. > > However, is there anyway to analyze the preconditioned matrix at all? > > You can certainly write a MatShell which applies B = P^-1 A and hand > this to SLEPc with no preconditioner. This seems like a common thing > so maybe SLEPc has an option to use the preconditioned operator. > > Jed From knepley at gmail.com Thu Oct 9 15:51:42 2008 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 9 Oct 2008 15:51:42 -0500 Subject: analyze preconditioned operator? In-Reply-To: <200810091438.01119.mafunk@nmsu.edu> References: <200810091223.00078.mafunk@nmsu.edu> <383ade90810091202r7a67fbf8wa80f3bbbaa86f8b7@mail.gmail.com> <200810091438.01119.mafunk@nmsu.edu> Message-ID: PCApply(). Matt On Thu, Oct 9, 2008 at 3:38 PM, Matt Funk wrote: > Hi Jed, > > what you suggest seems like it might be even more efficient for sparse > matrices (which i am using) then using the KSPComputeExplicitOperator() > routine. > > However, what kind of call do i make to apply the preconditioner to the > matrix. I have not really ever written a MatShell but it seems like there > should still be some PetscFunction that allows me to apply the > preconditioner. > > Or am i misunderstanding you ... ? > > matt > > ps: also, did you mean to write B=PA where A is my original preconditioner? > > On Thursday 09 October 2008, Jed Brown wrote: >> On Thu, Oct 9, 2008 at 20:22, Matt Funk wrote: >> > i am using PETSC and on top of it also SLEPC to do some matrix analysis. >> > In Slepc i simply pass it the Petsc matrix to analyze it. >> > However, is there anyway to analyze the preconditioned matrix at all? >> >> You can certainly write a MatShell which applies B = P^-1 A and hand >> this to SLEPc with no preconditioner. This seems like a common thing >> so maybe SLEPc has an option to use the preconditioned operator. >> >> Jed > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From mafunk at nmsu.edu Thu Oct 9 16:32:01 2008 From: mafunk at nmsu.edu (Matt Funk) Date: Thu, 9 Oct 2008 15:32:01 -0600 Subject: analyze preconditioned operator? In-Reply-To: References: <200810091223.00078.mafunk@nmsu.edu> <200810091438.01119.mafunk@nmsu.edu> Message-ID: <200810091532.01512.mafunk@nmsu.edu> mmhh, i think i am missing something. Doesn't PCApply() apply the preconditioner to a vector? So how would that work (easily) with a matrix? matt On Thursday 09 October 2008, Matthew Knepley wrote: > CApply(). 
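For context, PCApply() really does operate on a single vector, not on a matrix: there is no PETSc call that applies a preconditioner to a Mat as a whole. A minimal sketch of the vector-level call, assuming ksp, x, y and ierr are declared and set up as in the earlier examples (these names are placeholders, not taken from this thread):

    PC pc;
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);   /* the PC attached to the KSP      */
    ierr = PCSetUp(pc);CHKERRQ(ierr);         /* build/factor it if not done yet */
    ierr = PCApply(pc,x,y);CHKERRQ(ierr);     /* y = P^{-1} x                    */

Presenting the preconditioned operator P^{-1}A to SLEPc therefore amounts to wrapping this per-vector call inside a shell matrix, which is what the following replies describe.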
From bsmith at mcs.anl.gov Thu Oct 9 16:35:02 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 9 Oct 2008 16:35:02 -0500 Subject: analyze preconditioned operator? In-Reply-To: <200810091532.01512.mafunk@nmsu.edu> References: <200810091223.00078.mafunk@nmsu.edu> <200810091438.01119.mafunk@nmsu.edu> <200810091532.01512.mafunk@nmsu.edu> Message-ID: <55785035-E157-4493-BA1C-92751623C6DD@mcs.anl.gov> You are not missing something. If you want an application of a preconditioner to look like a matrix-vector product for SLEPc then you need to wrap the PCApply inside a MatShell() (it may sound scary but is easy). Barry On Oct 9, 2008, at 4:32 PM, Matt Funk wrote: > mmhh, > > i think i am missing something. Doesn't PCApply() apply the > preconditioner to > a vector? So how would that work (easily) with a matrix? > > matt > > On Thursday 09 October 2008, Matthew Knepley wrote: >> CApply(). > > From knepley at gmail.com Thu Oct 9 16:36:46 2008 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 9 Oct 2008 16:36:46 -0500 Subject: analyze preconditioned operator? In-Reply-To: <200810091532.01512.mafunk@nmsu.edu> References: <200810091223.00078.mafunk@nmsu.edu> <200810091438.01119.mafunk@nmsu.edu> <200810091532.01512.mafunk@nmsu.edu> Message-ID: On Thu, Oct 9, 2008 at 4:32 PM, Matt Funk wrote: > mmhh, > > i think i am missing something. Doesn't PCApply() apply the preconditioner to > a vector? So how would that work (easily) with a matrix? You do not apply it to the matrix. Here is a skeleton (maybe has mistakes) void myApply(Mat A, Vec x, Vec y) { MatShellGetContext(A, &ctx); MatMult(ctx->M, x, ctx->work); PCApply(ctx->pc, ctx->work, y); } MatShellSetOperation(A, MATOP_MULT, myApply) Matt > matt > > On Thursday 09 October 2008, Matthew Knepley wrote: >> CApply(). -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From mafunk at nmsu.edu Thu Oct 9 17:25:22 2008 From: mafunk at nmsu.edu (Matt Funk) Date: Thu, 9 Oct 2008 16:25:22 -0600 Subject: analyze preconditioned operator? In-Reply-To: References: <200810091223.00078.mafunk@nmsu.edu> <200810091532.01512.mafunk@nmsu.edu> Message-ID: <200810091625.22873.mafunk@nmsu.edu> Hi Matt, so, the basic idea with this code is to apply the pc to each column vector of the matrix? Is that right? Also, in your example: when is myApply actually invoked? I also looked at the example listed under the MatShellSetOperation reference page. Is the function then actually internally called when MatShellSetOperation is called, or when KSPSetOperators is called or KSPSolve? The reason i am asking is that if it is called when KSPSolve called then there is a problem because for the analysis i never call KSPSolve directly. thanks matt On Thursday 09 October 2008, Matthew Knepley wrote: > On Thu, Oct 9, 2008 at 4:32 PM, Matt Funk wrote: > > mmhh, > > > > i think i am missing something. Doesn't PCApply() apply the > > preconditioner to a vector? So how would that work (easily) with a > > matrix? > > You do not apply it to the matrix. Here is a skeleton (maybe has mistakes) > > void myApply(Mat A, Vec x, Vec y) { > MatShellGetContext(A, &ctx); > MatMult(ctx->M, x, ctx->work); > PCApply(ctx->pc, ctx->work, y); > } > > MatShellSetOperation(A, MATOP_MULT, myApply) > > Matt > > > matt > > > > On Thursday 09 October 2008, Matthew Knepley wrote: > >> CApply(). 
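To make the skeleton above concrete (and to anticipate the "what is ctx" question that comes up later in this thread), here is one way it could be fleshed out. This is only a sketch: it assumes the assembled matrix A and the preconditioner pc already exist and have been set up, and the names MatPrecondCtx, MyMult, S and work are invented for illustration, not part of PETSc or of the original post.

    typedef struct {
      Mat A;      /* the original assembled matrix                 */
      PC  pc;     /* the preconditioner to apply after the MatMult */
      Vec work;   /* scratch vector holding A*x                    */
    } MatPrecondCtx;

    /* y = P^{-1} A x */
    PetscErrorCode MyMult(Mat S,Vec x,Vec y)
    {
      MatPrecondCtx  *ctx;
      PetscErrorCode ierr;

      ierr = MatShellGetContext(S,(void**)&ctx);CHKERRQ(ierr);
      ierr = MatMult(ctx->A,x,ctx->work);CHKERRQ(ierr);   /* work = A x       */
      ierr = PCApply(ctx->pc,ctx->work,y);CHKERRQ(ierr);  /* y = P^{-1} work  */
      return 0;
    }

    /* Setup: wrap A and pc in a shell matrix S with the same sizes as A,
       then hand S (not A) to SLEPc. The ctx struct must stay alive as
       long as S is in use. */
    MatPrecondCtx ctx;
    Mat           S;
    PetscInt      m,n,Mg,Ng;

    ctx.A  = A;
    ctx.pc = pc;
    ierr = MatGetVecs(A,PETSC_NULL,&ctx.work);CHKERRQ(ierr); /* vector to hold A*x */
    ierr = MatGetLocalSize(A,&m,&n);CHKERRQ(ierr);
    ierr = MatGetSize(A,&Mg,&Ng);CHKERRQ(ierr);
    ierr = MatCreateShell(PETSC_COMM_WORLD,m,n,Mg,Ng,&ctx,&S);CHKERRQ(ierr);
    ierr = MatShellSetOperation(S,MATOP_MULT,(void(*)(void))MyMult);CHKERRQ(ierr);

Nothing else needs to be implemented for the eigenvalue analysis itself: whenever SLEPc (or anything else) calls MatMult() on S, MyMult() runs, which is the behaviour discussed in the messages that follow.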
From knepley at gmail.com Thu Oct 9 17:39:00 2008 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 9 Oct 2008 17:39:00 -0500 Subject: analyze preconditioned operator? In-Reply-To: <200810091625.22873.mafunk@nmsu.edu> References: <200810091223.00078.mafunk@nmsu.edu> <200810091532.01512.mafunk@nmsu.edu> <200810091625.22873.mafunk@nmsu.edu> Message-ID: On Thu, Oct 9, 2008 at 5:25 PM, Matt Funk wrote: > Hi Matt, > > so, the basic idea with this code is to apply the pc to each column vector of > the matrix? Is that right? No, that is what KSPGetExplicitOperator() does. This applies the operator in a matrix-free way. > Also, in your example: when is myApply actually invoked? I also looked at the > example listed under the MatShellSetOperation reference page. When MatMult(A) is called. Matt > Is the function then actually internally called when MatShellSetOperation is > called, or when KSPSetOperators is called or KSPSolve? > > The reason i am asking is that if it is called when KSPSolve called then there > is a problem because for the analysis i never call KSPSolve directly. > > thanks > matt > > > On Thursday 09 October 2008, Matthew Knepley wrote: >> On Thu, Oct 9, 2008 at 4:32 PM, Matt Funk wrote: >> > mmhh, >> > >> > i think i am missing something. Doesn't PCApply() apply the >> > preconditioner to a vector? So how would that work (easily) with a >> > matrix? >> >> You do not apply it to the matrix. Here is a skeleton (maybe has mistakes) >> >> void myApply(Mat A, Vec x, Vec y) { >> MatShellGetContext(A, &ctx); >> MatMult(ctx->M, x, ctx->work); >> PCApply(ctx->pc, ctx->work, y); >> } >> >> MatShellSetOperation(A, MATOP_MULT, myApply) >> >> Matt >> >> > matt >> > >> > On Thursday 09 October 2008, Matthew Knepley wrote: >> >> CApply(). > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From mafunk at nmsu.edu Thu Oct 9 17:58:41 2008 From: mafunk at nmsu.edu (Matt Funk) Date: Thu, 9 Oct 2008 16:58:41 -0600 Subject: analyze preconditioned operator? In-Reply-To: References: <200810091223.00078.mafunk@nmsu.edu> <200810091625.22873.mafunk@nmsu.edu> Message-ID: <200810091658.42081.mafunk@nmsu.edu> So then, is it then correct to say that this code "registers" this extra operation (i.e. applying the preconditioner) with the matrix context such that whenever a matrix operation involving this matrix is invoked (like MatMult for example) the pc-applying fcn (i.e. myApply() ) is called first? matt ps: sorry for being a little slow on the uptake here ... On Thursday 09 October 2008, Matthew Knepley wrote: > On Thu, Oct 9, 2008 at 5:25 PM, Matt Funk wrote: > > Hi Matt, > > > > so, the basic idea with this code is to apply the pc to each column > > vector of the matrix? Is that right? > > No, that is what KSPGetExplicitOperator() does. This applies the operator > in a matrix-free way. > > > Also, in your example: when is myApply actually invoked? I also looked at > > the example listed under the MatShellSetOperation reference page. > > When MatMult(A) is called. > > Matt > > > Is the function then actually internally called when MatShellSetOperation > > is called, or when KSPSetOperators is called or KSPSolve? > > > > The reason i am asking is that if it is called when KSPSolve called then > > there is a problem because for the analysis i never call KSPSolve > > directly. 
> > > > thanks > > matt > > > > On Thursday 09 October 2008, Matthew Knepley wrote: > >> On Thu, Oct 9, 2008 at 4:32 PM, Matt Funk wrote: > >> > mmhh, > >> > > >> > i think i am missing something. Doesn't PCApply() apply the > >> > preconditioner to a vector? So how would that work (easily) with a > >> > matrix? > >> > >> You do not apply it to the matrix. Here is a skeleton (maybe has > >> mistakes) > >> > >> void myApply(Mat A, Vec x, Vec y) { > >> MatShellGetContext(A, &ctx); > >> MatMult(ctx->M, x, ctx->work); > >> PCApply(ctx->pc, ctx->work, y); > >> } > >> > >> MatShellSetOperation(A, MATOP_MULT, myApply) > >> > >> Matt > >> > >> > matt > >> > > >> > On Thursday 09 October 2008, Matthew Knepley wrote: > >> >> CApply(). From knepley at gmail.com Thu Oct 9 18:46:26 2008 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 9 Oct 2008 18:46:26 -0500 Subject: analyze preconditioned operator? In-Reply-To: <200810091658.42081.mafunk@nmsu.edu> References: <200810091223.00078.mafunk@nmsu.edu> <200810091625.22873.mafunk@nmsu.edu> <200810091658.42081.mafunk@nmsu.edu> Message-ID: On Thu, Oct 9, 2008 at 5:58 PM, Matt Funk wrote: > So then, > > is it then correct to say that this code "registers" this extra operation > (i.e. applying the preconditioner) with the matrix context such that whenever > a matrix operation involving this matrix is invoked (like MatMult for > example) the pc-applying fcn (i.e. myApply() ) is called first? The idea here is to replace a given matrix A, which you are passing to SLEPc, with another matrix M, which is a shell matrix. When MatMult() is called on M, we call MatMult on A, and then PCApply on the result. Matt > matt > > ps: sorry for being a little slow on the uptake here ... > > > On Thursday 09 October 2008, Matthew Knepley wrote: >> On Thu, Oct 9, 2008 at 5:25 PM, Matt Funk wrote: >> > Hi Matt, >> > >> > so, the basic idea with this code is to apply the pc to each column >> > vector of the matrix? Is that right? >> >> No, that is what KSPGetExplicitOperator() does. This applies the operator >> in a matrix-free way. >> >> > Also, in your example: when is myApply actually invoked? I also looked at >> > the example listed under the MatShellSetOperation reference page. >> >> When MatMult(A) is called. >> >> Matt >> >> > Is the function then actually internally called when MatShellSetOperation >> > is called, or when KSPSetOperators is called or KSPSolve? >> > >> > The reason i am asking is that if it is called when KSPSolve called then >> > there is a problem because for the analysis i never call KSPSolve >> > directly. >> > >> > thanks >> > matt >> > >> > On Thursday 09 October 2008, Matthew Knepley wrote: >> >> On Thu, Oct 9, 2008 at 4:32 PM, Matt Funk wrote: >> >> > mmhh, >> >> > >> >> > i think i am missing something. Doesn't PCApply() apply the >> >> > preconditioner to a vector? So how would that work (easily) with a >> >> > matrix? >> >> >> >> You do not apply it to the matrix. Here is a skeleton (maybe has >> >> mistakes) >> >> >> >> void myApply(Mat A, Vec x, Vec y) { >> >> MatShellGetContext(A, &ctx); >> >> MatMult(ctx->M, x, ctx->work); >> >> PCApply(ctx->pc, ctx->work, y); >> >> } >> >> >> >> MatShellSetOperation(A, MATOP_MULT, myApply) >> >> >> >> Matt >> >> >> >> > matt >> >> > >> >> > On Thursday 09 October 2008, Matthew Knepley wrote: >> >> >> CApply(). > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From mafunk at nmsu.edu Fri Oct 10 15:11:02 2008 From: mafunk at nmsu.edu (Matt Funk) Date: Fri, 10 Oct 2008 14:11:02 -0600 Subject: analyze preconditioned operator? In-Reply-To: References: <200810091223.00078.mafunk@nmsu.edu> <200810091658.42081.mafunk@nmsu.edu> Message-ID: <200810101411.02514.mafunk@nmsu.edu> Ok, thanks, i think i finally got the main idea (the emphasize here being on 'i think' which might not mean much ...) One other question though: What do i declare ctx to be: PetscObject * ctx ? Another question related to KSPComputeExplicitOperator: I was trying to use KSPComputeExplicitOperator and i am having issues with it as well. My matrix is a sparse matrix of length ((2*73)^3) with a maximum of ~30 entries per row. I believe that in order to use KSPComputeExplicitOperator i need to set up a matrix, allocate the memory for it and then pass it to the routine. >From what i gather on the reference page this matrix needs to be of type MatDense? (if so then i cannot use KSPComputeExplicitOperator due to the size of the resulting matrix) However, it said on the website that when multiple procs are used it uses a sparse format. So why cannot i not use a sparse format in serial? thanks matt On Thursday 09 October 2008, Matthew Knepley wrote: > On Thu, Oct 9, 2008 at 5:58 PM, Matt Funk wrote: > > So then, > > > > is it then correct to say that this code "registers" this extra operation > > (i.e. applying the preconditioner) with the matrix context such that > > whenever a matrix operation involving this matrix is invoked (like > > MatMult for example) the pc-applying fcn (i.e. myApply() ) is called > > first? > > The idea here is to replace a given matrix A, which you are passing to > SLEPc, with another matrix M, which is a shell matrix. When MatMult() is > called on M, we call MatMult on A, and then PCApply on the result. > > Matt > > > matt > > > > ps: sorry for being a little slow on the uptake here ... > > > > On Thursday 09 October 2008, Matthew Knepley wrote: > >> On Thu, Oct 9, 2008 at 5:25 PM, Matt Funk wrote: > >> > Hi Matt, > >> > > >> > so, the basic idea with this code is to apply the pc to each column > >> > vector of the matrix? Is that right? > >> > >> No, that is what KSPGetExplicitOperator() does. This applies the > >> operator in a matrix-free way. > >> > >> > Also, in your example: when is myApply actually invoked? I also looked > >> > at the example listed under the MatShellSetOperation reference page. > >> > >> When MatMult(A) is called. > >> > >> Matt > >> > >> > Is the function then actually internally called when > >> > MatShellSetOperation is called, or when KSPSetOperators is called or > >> > KSPSolve? > >> > > >> > The reason i am asking is that if it is called when KSPSolve called > >> > then there is a problem because for the analysis i never call KSPSolve > >> > directly. > >> > > >> > thanks > >> > matt > >> > > >> > On Thursday 09 October 2008, Matthew Knepley wrote: > >> >> On Thu, Oct 9, 2008 at 4:32 PM, Matt Funk wrote: > >> >> > mmhh, > >> >> > > >> >> > i think i am missing something. Doesn't PCApply() apply the > >> >> > preconditioner to a vector? So how would that work (easily) with a > >> >> > matrix? > >> >> > >> >> You do not apply it to the matrix. 
Here is a skeleton (maybe has > >> >> mistakes) > >> >> > >> >> void myApply(Mat A, Vec x, Vec y) { > >> >> MatShellGetContext(A, &ctx); > >> >> MatMult(ctx->M, x, ctx->work); > >> >> PCApply(ctx->pc, ctx->work, y); > >> >> } > >> >> > >> >> MatShellSetOperation(A, MATOP_MULT, myApply) > >> >> > >> >> Matt > >> >> > >> >> > matt > >> >> > > >> >> > On Thursday 09 October 2008, Matthew Knepley wrote: > >> >> >> CApply(). From knepley at gmail.com Fri Oct 10 15:18:44 2008 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 10 Oct 2008 15:18:44 -0500 Subject: analyze preconditioned operator? In-Reply-To: <200810101411.02514.mafunk@nmsu.edu> References: <200810091223.00078.mafunk@nmsu.edu> <200810091658.42081.mafunk@nmsu.edu> <200810101411.02514.mafunk@nmsu.edu> Message-ID: On Fri, Oct 10, 2008 at 3:11 PM, Matt Funk wrote: > Ok, > > thanks, i think i finally got the main idea (the emphasize here being on 'i > think' which might not mean much ...) > > One other question though: > What do i declare ctx to be: PetscObject * ctx ? No, you declare a struct which holds a) The original matrix b) The preconditioner c) A work vector or you would not be able to code the apply method. > Another question related to KSPComputeExplicitOperator: > > I was trying to use KSPComputeExplicitOperator and i am having issues with it > as well. My matrix is a sparse matrix of length ((2*73)^3) with a maximum of > ~30 entries per row. This is way way way too big to form the operator explicitly. > I believe that in order to use KSPComputeExplicitOperator i need to set up a > matrix, allocate the memory for it and then pass it to the routine. > From what i gather on the reference page this matrix needs to be of type > MatDense? (if so then i cannot use KSPComputeExplicitOperator due to the size > of the resulting matrix) > However, it said on the website that when multiple procs are used it uses a > sparse format. > So why cannot i not use a sparse format in serial? 1) This operator is in general not sparse at all 2) We only use a sparse format in parllel because matMPIDense was having problems It is instructive to look at what the code is doing. The function is very short and you can get to it right from the link on the manpage. Matt > thanks > matt > > > On Thursday 09 October 2008, Matthew Knepley wrote: >> On Thu, Oct 9, 2008 at 5:58 PM, Matt Funk wrote: >> > So then, >> > >> > is it then correct to say that this code "registers" this extra operation >> > (i.e. applying the preconditioner) with the matrix context such that >> > whenever a matrix operation involving this matrix is invoked (like >> > MatMult for example) the pc-applying fcn (i.e. myApply() ) is called >> > first? >> >> The idea here is to replace a given matrix A, which you are passing to >> SLEPc, with another matrix M, which is a shell matrix. When MatMult() is >> called on M, we call MatMult on A, and then PCApply on the result. >> >> Matt >> >> > matt >> > >> > ps: sorry for being a little slow on the uptake here ... >> > >> > On Thursday 09 October 2008, Matthew Knepley wrote: >> >> On Thu, Oct 9, 2008 at 5:25 PM, Matt Funk wrote: >> >> > Hi Matt, >> >> > >> >> > so, the basic idea with this code is to apply the pc to each column >> >> > vector of the matrix? Is that right? >> >> >> >> No, that is what KSPGetExplicitOperator() does. This applies the >> >> operator in a matrix-free way. >> >> >> >> > Also, in your example: when is myApply actually invoked? 
I also looked >> >> > at the example listed under the MatShellSetOperation reference page. >> >> >> >> When MatMult(A) is called. >> >> >> >> Matt >> >> >> >> > Is the function then actually internally called when >> >> > MatShellSetOperation is called, or when KSPSetOperators is called or >> >> > KSPSolve? >> >> > >> >> > The reason i am asking is that if it is called when KSPSolve called >> >> > then there is a problem because for the analysis i never call KSPSolve >> >> > directly. >> >> > >> >> > thanks >> >> > matt >> >> > >> >> > On Thursday 09 October 2008, Matthew Knepley wrote: >> >> >> On Thu, Oct 9, 2008 at 4:32 PM, Matt Funk wrote: >> >> >> > mmhh, >> >> >> > >> >> >> > i think i am missing something. Doesn't PCApply() apply the >> >> >> > preconditioner to a vector? So how would that work (easily) with a >> >> >> > matrix? >> >> >> >> >> >> You do not apply it to the matrix. Here is a skeleton (maybe has >> >> >> mistakes) >> >> >> >> >> >> void myApply(Mat A, Vec x, Vec y) { >> >> >> MatShellGetContext(A, &ctx); >> >> >> MatMult(ctx->M, x, ctx->work); >> >> >> PCApply(ctx->pc, ctx->work, y); >> >> >> } >> >> >> >> >> >> MatShellSetOperation(A, MATOP_MULT, myApply) >> >> >> >> >> >> Matt >> >> >> >> >> >> > matt >> >> >> > >> >> >> > On Thursday 09 October 2008, Matthew Knepley wrote: >> >> >> >> CApply(). > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From bui at calcreek.com Sat Oct 11 18:03:00 2008 From: bui at calcreek.com (Thuc Bui) Date: Sat, 11 Oct 2008 16:03:00 -0700 Subject: Does Petsc built with MPICH2 work in a single processor box? In-Reply-To: References: <200810091223.00078.mafunk@nmsu.edu> <200810091658.42081.mafunk@nmsu.edu> <200810101411.02514.mafunk@nmsu.edu> Message-ID: <9842D9E7CA6E49DEB3A49DE1B7DCD80F@aphrodite> Hi all, I am able to build Petsc-2.3.3-p15 with MPICH2 under Windows and make it a DLL. It works great with my app in a dual core laptop. However, when the same executable runs on a uniprocessor windows box, it gives me the following errors: ... [0] Error creating mpiexec process...2 [0] launchMpiexecProcess failed Fatal error in MPI_Init: Other MPI error, error stack: MPIR_Init_thread(294): Initialization failed MPID_Init(82)........: channel initialization failed MPID_Init(383).......: PMI_Get_id returned 1 ... Are these errors due to PatscInitialize() failing to initialize MPI on a single processor box? If this is the case, is there a way in PetscInitialize or else where to turn off MPI without having to recompile Petsc with the option --with-mpi=0? Many thanks in advance for your help, Thuc Bui From knepley at gmail.com Sat Oct 11 18:26:50 2008 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 11 Oct 2008 18:26:50 -0500 Subject: Does Petsc built with MPICH2 work in a single processor box? In-Reply-To: <9842D9E7CA6E49DEB3A49DE1B7DCD80F@aphrodite> References: <200810091223.00078.mafunk@nmsu.edu> <200810091658.42081.mafunk@nmsu.edu> <200810101411.02514.mafunk@nmsu.edu> <9842D9E7CA6E49DEB3A49DE1B7DCD80F@aphrodite> Message-ID: On Sat, Oct 11, 2008 at 6:03 PM, Thuc Bui wrote: > > Hi all, > > I am able to build Petsc-2.3.3-p15 with MPICH2 under Windows and make it a > DLL. It works great with my app in a dual core laptop. However, when the > same executable runs on a uniprocessor windows box, it gives me the > following errors: > > ... 
> [0] Error creating mpiexec process...2 > [0] launchMpiexecProcess failed > Fatal error in MPI_Init: Other MPI error, error stack: > MPIR_Init_thread(294): Initialization failed > MPID_Init(82)........: channel initialization failed > MPID_Init(383).......: PMI_Get_id returned 1 > ... Can you run any MPI program on this machine? Matt > Are these errors due to PatscInitialize() failing to initialize MPI on a > single processor box? > > If this is the case, is there a way in PetscInitialize or else where to turn > off MPI without having to recompile Petsc with the option --with-mpi=0? > > Many thanks in advance for your help, > Thuc Bui > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From bsmith at mcs.anl.gov Sat Oct 11 19:52:14 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 11 Oct 2008 19:52:14 -0500 Subject: Does Petsc built with MPICH2 work in a single processor box? In-Reply-To: <9842D9E7CA6E49DEB3A49DE1B7DCD80F@aphrodite> References: <200810091223.00078.mafunk@nmsu.edu> <200810091658.42081.mafunk@nmsu.edu> <200810101411.02514.mafunk@nmsu.edu> <9842D9E7CA6E49DEB3A49DE1B7DCD80F@aphrodite> Message-ID: Looks like you may not have the proper MPICH demons running on this "uniprocessor" machine? Barry On Oct 11, 2008, at 6:03 PM, Thuc Bui wrote: > > Hi all, > > I am able to build Petsc-2.3.3-p15 with MPICH2 under Windows and > make it a > DLL. It works great with my app in a dual core laptop. However, when > the > same executable runs on a uniprocessor windows box, it gives me the > following errors: > > ... > [0] Error creating mpiexec process...2 > [0] launchMpiexecProcess failed > Fatal error in MPI_Init: Other MPI error, error stack: > MPIR_Init_thread(294): Initialization failed > MPID_Init(82)........: channel initialization failed > MPID_Init(383).......: PMI_Get_id returned 1 > ... > > Are these errors due to PatscInitialize() failing to initialize MPI > on a > single processor box? > > If this is the case, is there a way in PetscInitialize or else where > to turn > off MPI without having to recompile Petsc with the option --with- > mpi=0? > > Many thanks in advance for your help, > Thuc Bui > From bui at calcreek.com Sat Oct 11 21:07:54 2008 From: bui at calcreek.com (Thuc Bui) Date: Sat, 11 Oct 2008 19:07:54 -0700 Subject: Does Petsc built with MPICH2 work in a single processor box? In-Reply-To: References: <200810091223.00078.mafunk@nmsu.edu> <200810091658.42081.mafunk@nmsu.edu> <200810101411.02514.mafunk@nmsu.edu> <9842D9E7CA6E49DEB3A49DE1B7DCD80F@aphrodite> Message-ID: <63509B0102B04D07B305BFA1EFA62A07@aphrodite> Hi Barry and Matt, Yes, I do not have proper MPI authentication to run on this single processor machine, which has MPICH2 installed. However, I do not expect the users on this type of machine needs to install MPICH2 to run my Petsc app. So, I went to another single processor PC, which has no MPICH2 installed, ran my Petsc app. It complains that mpich2mpi.dll and mpich2.dll are missing. So, I just copied these DLL's to a directory on the PATH then my Petsc app would run fine. Thank you both again for your help, Thuc -----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Barry Smith Sent: Saturday, October 11, 2008 5:52 PM To: petsc-users at mcs.anl.gov Subject: Re: Does Petsc built with MPICH2 work in a single processor box? 
Looks like you may not have the proper MPICH demons running on this "uniprocessor" machine? Barry On Oct 11, 2008, at 6:03 PM, Thuc Bui wrote: > > Hi all, > > I am able to build Petsc-2.3.3-p15 with MPICH2 under Windows and > make it a > DLL. It works great with my app in a dual core laptop. However, when > the > same executable runs on a uniprocessor windows box, it gives me the > following errors: > > ... > [0] Error creating mpiexec process...2 > [0] launchMpiexecProcess failed > Fatal error in MPI_Init: Other MPI error, error stack: > MPIR_Init_thread(294): Initialization failed > MPID_Init(82)........: channel initialization failed > MPID_Init(383).......: PMI_Get_id returned 1 > ... > > Are these errors due to PatscInitialize() failing to initialize MPI > on a > single processor box? > > If this is the case, is there a way in PetscInitialize or else where > to turn > off MPI without having to recompile Petsc with the option --with- > mpi=0? > > Many thanks in advance for your help, > Thuc Bui > From bsmith at mcs.anl.gov Sat Oct 11 21:34:31 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 11 Oct 2008 21:34:31 -0500 Subject: Does Petsc built with MPICH2 work in a single processor box? In-Reply-To: <63509B0102B04D07B305BFA1EFA62A07@aphrodite> References: <200810091223.00078.mafunk@nmsu.edu> <200810091658.42081.mafunk@nmsu.edu> <200810101411.02514.mafunk@nmsu.edu> <9842D9E7CA6E49DEB3A49DE1B7DCD80F@aphrodite> <63509B0102B04D07B305BFA1EFA62A07@aphrodite> Message-ID: If you have multiple people wanting to run your code on single processes, it is probably worth your while to build another PETSC_ARCH using --with-mpi=0 to simplify handing them the final program. Barry On Oct 11, 2008, at 9:07 PM, Thuc Bui wrote: > Hi Barry and Matt, > > Yes, I do not have proper MPI authentication to run on this single > processor > machine, which has MPICH2 installed. However, I do not expect the > users on > this type of machine needs to install MPICH2 to run my Petsc app. > So, I went > to another single processor PC, which has no MPICH2 installed, ran > my Petsc > app. It complains that mpich2mpi.dll and mpich2.dll are missing. So, > I just > copied these DLL's to a directory on the PATH then my Petsc app > would run > fine. > > Thank you both again for your help, > Thuc > > -----Original Message----- > From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov > ] > On Behalf Of Barry Smith > Sent: Saturday, October 11, 2008 5:52 PM > To: petsc-users at mcs.anl.gov > Subject: Re: Does Petsc built with MPICH2 work in a single processor > box? > > > Looks like you may not have the proper MPICH demons running on > this "uniprocessor" > machine? > > Barry > > On Oct 11, 2008, at 6:03 PM, Thuc Bui wrote: > >> >> Hi all, >> >> I am able to build Petsc-2.3.3-p15 with MPICH2 under Windows and >> make it a >> DLL. It works great with my app in a dual core laptop. However, when >> the >> same executable runs on a uniprocessor windows box, it gives me the >> following errors: >> >> ... >> [0] Error creating mpiexec process...2 >> [0] launchMpiexecProcess failed >> Fatal error in MPI_Init: Other MPI error, error stack: >> MPIR_Init_thread(294): Initialization failed >> MPID_Init(82)........: channel initialization failed >> MPID_Init(383).......: PMI_Get_id returned 1 >> ... >> >> Are these errors due to PatscInitialize() failing to initialize MPI >> on a >> single processor box? 
>> >> If this is the case, is there a way in PetscInitialize or else where >> to turn >> off MPI without having to recompile Petsc with the option --with- >> mpi=0? >> >> Many thanks in advance for your help, >> Thuc Bui >> > > From bui at calcreek.com Sat Oct 11 23:31:12 2008 From: bui at calcreek.com (Thuc Bui) Date: Sat, 11 Oct 2008 21:31:12 -0700 Subject: Does Petsc built with MPICH2 work in a single processor box? In-Reply-To: References: <200810091223.00078.mafunk@nmsu.edu> <200810091658.42081.mafunk@nmsu.edu> <200810101411.02514.mafunk@nmsu.edu> <9842D9E7CA6E49DEB3A49DE1B7DCD80F@aphrodite> <63509B0102B04D07B305BFA1EFA62A07@aphrodite> Message-ID: Hi Barry, I will probably do that, but first I will need to modify the Visual Studio project file to exclude those codes required by MPI and include those needed by non-MPI. Fortunately, I would be able to do that by looking at the build log of the MPI static library with --with-mpi=0. BTW, if anyone has the need to compile Petsc using Visual Studio IDE to DLL, please let me know. I can email you the file with the instructions how to do that. I had to make some code changes that have to do with extern and extern "C" to make VS happy. I will need to make the list of these files with these changes. The good thing with the VS project file is that I can now just automatically migrate to either VS05 or VS08. Many thanks to Chetan Jhurani for the initial VS project file for version 2.3.2. Cheers, Thuc -----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Barry Smith Sent: Saturday, October 11, 2008 7:35 PM To: petsc-users at mcs.anl.gov Subject: Re: Does Petsc built with MPICH2 work in a single processor box? If you have multiple people wanting to run your code on single processes, it is probably worth your while to build another PETSC_ARCH using --with-mpi=0 to simplify handing them the final program. Barry On Oct 11, 2008, at 9:07 PM, Thuc Bui wrote: > Hi Barry and Matt, > > Yes, I do not have proper MPI authentication to run on this single > processor > machine, which has MPICH2 installed. However, I do not expect the > users on > this type of machine needs to install MPICH2 to run my Petsc app. > So, I went > to another single processor PC, which has no MPICH2 installed, ran > my Petsc > app. It complains that mpich2mpi.dll and mpich2.dll are missing. So, > I just > copied these DLL's to a directory on the PATH then my Petsc app > would run > fine. > > Thank you both again for your help, > Thuc > > -----Original Message----- > From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov > ] > On Behalf Of Barry Smith > Sent: Saturday, October 11, 2008 5:52 PM > To: petsc-users at mcs.anl.gov > Subject: Re: Does Petsc built with MPICH2 work in a single processor > box? > > > Looks like you may not have the proper MPICH demons running on > this "uniprocessor" > machine? > > Barry > > On Oct 11, 2008, at 6:03 PM, Thuc Bui wrote: > >> >> Hi all, >> >> I am able to build Petsc-2.3.3-p15 with MPICH2 under Windows and >> make it a >> DLL. It works great with my app in a dual core laptop. However, when >> the >> same executable runs on a uniprocessor windows box, it gives me the >> following errors: >> >> ... 
>> [0] Error creating mpiexec process...2 >> [0] launchMpiexecProcess failed >> Fatal error in MPI_Init: Other MPI error, error stack: >> MPIR_Init_thread(294): Initialization failed >> MPID_Init(82)........: channel initialization failed >> MPID_Init(383).......: PMI_Get_id returned 1 >> ... >> >> Are these errors due to PatscInitialize() failing to initialize MPI >> on a >> single processor box? >> >> If this is the case, is there a way in PetscInitialize or else where >> to turn >> off MPI without having to recompile Petsc with the option --with- >> mpi=0? >> >> Many thanks in advance for your help, >> Thuc Bui >> > > From christoph.statz at ifn.et.tu-dresden.de Mon Oct 13 07:12:51 2008 From: christoph.statz at ifn.et.tu-dresden.de (Christoph Statz) Date: Mon, 13 Oct 2008 14:12:51 +0200 Subject: Performance Issues on ccNuma-System Message-ID: Dear PETSc-users, i'm trying to work with PETSc on a ccNuma-system, where i am confronted with severe performance problems. Is there anyone using PETSc on e.g. a SGI Altix System? Which are the best kernels to use on cache coherent systems? The fortran kernels produces many cache misses (in functions like fsolve and fmatmul) slowing down a 3GFLOP/s machine to about 200MFLOP/ s . Has anyone any advice to increase speed on ccNuma-system? Sincerly, Christoph Statz -- Christoph Statz Institut f?r Nachrichtentechnik Technische Universit?t Dresden 01062 Dresden Email: christoph.statz at mailbox.tu-dresden.de Phone: +49 351 463 32287 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PGP.sig Type: application/pgp-signature Size: 194 bytes Desc: This is a digitally signed message part URL: From knepley at gmail.com Mon Oct 13 08:26:58 2008 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 13 Oct 2008 08:26:58 -0500 Subject: Performance Issues on ccNuma-System In-Reply-To: References: Message-ID: On Mon, Oct 13, 2008 at 7:12 AM, Christoph Statz wrote: > Dear PETSc-users, > i'm trying to work with PETSc on a ccNuma-system, where i am confronted with > severe performance problems. > Is there anyone using PETSc on e.g. a SGI Altix System? > Which are the best kernels to use on cache coherent systems? > The fortran kernels produces many cache misses (in functions like fsolve and > fmatmul) slowing down a 3GFLOP/s machine to about 200MFLOP/s . > Has anyone any advice to increase speed on ccNuma-system? 1) With any performance question, please send the output of -log_summary 2) I think it is unlikely that cache misses are responsible for this performance. It is much more likely that bandwidth limitations are responsible. Please see the paper by Kaushik and Gropp which models sparse matvec performance (on Dinesh's website). 3) You would see better performance using a block method. Sparse matvec without blocks will never see good percentages of peak (ditto for backsolve). Matt > Sincerly, > Christoph Statz > -- > Christoph Statz > Institut f?r Nachrichtentechnik > Technische Universit?t Dresden > 01062 Dresden > Email: christoph.statz at mailbox.tu-dresden.de > Phone: +49 351 463 32287 -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From christoph.statz at ifn.et.tu-dresden.de Mon Oct 13 10:08:10 2008 From: christoph.statz at ifn.et.tu-dresden.de (Christoph Statz) Date: Mon, 13 Oct 2008 17:08:10 +0200 Subject: Performance Issues on ccNuma-System In-Reply-To: References: Message-ID: <48AF0D34-26E5-4370-8154-9973EA7AD328@ifn.et.tu-dresden.de> Hello Matt and PETSc-users, > > 1) With any performance question, please send the output of - > log_summary You'll find the output attached (But there is _really_ not much to see). > > 2) I think it is unlikely that cache misses are responsible for this > performance. It is > much more likely that bandwidth limitations are responsible. As far as I can see, there are neither bandwidth limitations nor latency problems (since there is an infiniband-interconnect). MPI-Performance (Vampirtrace + Scalasca) looks good (late senders/ receivers, barriers etcpp.). PAPI-Instrumentation says: cache misses. > > Please see the paper > by Kaushik and Gropp which models sparse matvec performance (on > Dinesh's website). > Which Paper on which website. Please send a link. > 3) You would see better performance using a block method. Sparse > matvec without > blocks will never see good percentages of peak (ditto for > backsolve). How do I use the block methods? Since I rely on the "user-level" interfaces kspsolve etcpp., I don't see how i could influence this. You'll find basic source code attached. Sincerly, Christoph -- Christoph Statz Institut f?r Nachrichtentechnik Technische Universit?t Dresden 01062 Dresden Email: christoph.statz at mailbox.tu-dresden.de Phone: +49 351 463 32287 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: fdtd.F90 Type: application/octet-stream Size: 8957 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: altix_bench_2500_petsc2.3.3-p15-real_userlocal_np_1 Type: application/octet-stream Size: 10755 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: altix_bench_2500_petsc2.3.2-p8_global_np_1 Type: application/octet-stream Size: 10782 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PGP.sig Type: application/pgp-signature Size: 194 bytes Desc: This is a digitally signed message part URL: From knepley at gmail.com Mon Oct 13 11:56:22 2008 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 13 Oct 2008 11:56:22 -0500 Subject: Performance Issues on ccNuma-System In-Reply-To: <48AF0D34-26E5-4370-8154-9973EA7AD328@ifn.et.tu-dresden.de> References: <48AF0D34-26E5-4370-8154-9973EA7AD328@ifn.et.tu-dresden.de> Message-ID: On Mon, Oct 13, 2008 at 10:08 AM, Christoph Statz wrote: > Hello Matt and PETSc-users, > > 1) With any performance question, please send the output of -log_summary > > You'll find the output attached > (But there is _really_ not much to see). I will look at it. > 2) I think it is unlikely that cache misses are responsible for this > performance. It is > much more likely that bandwidth limitations are responsible. 
> > As far as I can see, there are neither bandwidth limitations nor latency > problems (since there is an infiniband-interconnect). > MPI-Performance (Vampirtrace + Scalasca) looks good (late senders/receivers, > barriers etcpp.). > PAPI-Instrumentation says: cache misses. That stuff is rarely worth running. Without a decent model of the performance, the data is no help. I am not talking about network bandwidth, but memory bandwidth. For a sparse matvec that comes from a simple scalar PDE, you need incredible amounts of bandwidth to drive the tiny amount of flops. The equation is in the paper. > Please see the paper > by Kaushik and Gropp which models sparse matvec performance (on > Dinesh's website). > > Which Paper on which website. Please send a link. I believe you want 11 and 17 here http://www.mcs.anl.gov/~kaushik/ under the Publications link. > 3) You would see better performance using a block method. Sparse matvec > without > blocks will never see good percentages of peak (ditto for backsolve). > > How do I use the block methods? > Since I rely on the "user-level" interfaces kspsolve etcpp., I don't see how > i could influence this. You can't unless your system has block structure. If it does, you can use the BAIJ matrix types. Matt > You'll find basic source code attached. > Sincerly, > Christoph > -- > Christoph Statz > Institut f?r Nachrichtentechnik > Technische Universit?t Dresden > 01062 Dresden > Email: christoph.statz at mailbox.tu-dresden.de > Phone: +49 351 463 32287 > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From christoph.statz at ifn.et.tu-dresden.de Mon Oct 13 12:56:51 2008 From: christoph.statz at ifn.et.tu-dresden.de (Christoph Statz) Date: Mon, 13 Oct 2008 19:56:51 +0200 Subject: Performance Issues on ccNuma-System In-Reply-To: References: <48AF0D34-26E5-4370-8154-9973EA7AD328@ifn.et.tu-dresden.de> Message-ID: <59A95936-8CD9-4AEA-A3BA-8B5C59792D14@ifn.et.tu-dresden.de> Hello Matt, > > That stuff is rarely worth running. Without a decent model of the > performance, the > data is no help. I am not talking about network bandwidth, but > memory bandwidth. > For a sparse matvec that comes from a simple scalar PDE, you need > incredible > amounts of bandwidth to drive the tiny amount of flops. The equation > is in the paper. > >> Please see the paper >> by Kaushik and Gropp which models sparse matvec performance (on >> Dinesh's website). >> >> Which Paper on which website. Please send a link. > > I believe you want 11 and 17 here http://www.mcs.anl.gov/~kaushik/ > under the Publications link. Got it. I will look at it and check the memory bandwidth > > >> 3) You would see better performance using a block method. Sparse >> matvec >> without >> blocks will never see good percentages of peak (ditto for >> backsolve). >> >> How do I use the block methods? >> Since I rely on the "user-level" interfaces kspsolve etcpp., I >> don't see how >> i could influence this. > > You can't unless your system has block structure. If it does, you can > use the BAIJ > matrix types. I'll try it. Thank you for your advice that far. Sincerly, Christoph > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: PGP.sig Type: application/pgp-signature Size: 194 bytes Desc: This is a digitally signed message part URL: From bsmith at mcs.anl.gov Mon Oct 13 15:24:21 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 13 Oct 2008 15:24:21 -0500 Subject: Performance Issues on ccNuma-System In-Reply-To: References: Message-ID: <575CBAD1-EEBA-44EE-A70C-6199351C5445@mcs.anl.gov> The sparse matrix is MUCH to big for the cache so has to stream through from memory, thus the huge number of "cache misses". This same performance issue occurs on all modern systems. Barry On Oct 13, 2008, at 7:12 AM, Christoph Statz wrote: > Dear PETSc-users, > > i'm trying to work with PETSc on a ccNuma-system, where i am > confronted with severe performance problems. > Is there anyone using PETSc on e.g. a SGI Altix System? > Which are the best kernels to use on cache coherent systems? > The fortran kernels produces many cache misses (in functions like > fsolve and fmatmul) slowing down a 3GFLOP/s machine to about > 200MFLOP/s . > Has anyone any advice to increase speed on ccNuma-system? > > Sincerly, > > Christoph Statz > > -- > Christoph Statz > > Institut f?r Nachrichtentechnik > Technische Universit?t Dresden > 01062 Dresden > > Email: christoph.statz at mailbox.tu-dresden.de > Phone: +49 351 463 32287 > > > From christoph.statz at ifn.et.tu-dresden.de Tue Oct 14 02:06:06 2008 From: christoph.statz at ifn.et.tu-dresden.de (Christoph Statz) Date: Tue, 14 Oct 2008 09:06:06 +0200 Subject: Performance Issues on ccNuma-System In-Reply-To: <575CBAD1-EEBA-44EE-A70C-6199351C5445@mcs.anl.gov> References: <575CBAD1-EEBA-44EE-A70C-6199351C5445@mcs.anl.gov> Message-ID: <0DAC9B14-AB1F-4509-BCBB-8782C0373F0E@ifn.et.tu-dresden.de> > > The sparse matrix is MUCH to big for the cache so has to stream > through from memory, > thus the huge number of "cache misses". This same performance issue > occurs on > all modern systems. Yes, of course. But on most modern systems i don't have the "problem" that the cache is held coherent which causes a bunch of extra cycles penalty. I don't have performance problems on our Clusters. It is just the "big" machine. Christoph -- Christoph Statz Institut f?r Nachrichtentechnik Technische Universit?t Dresden 01062 Dresden Email: christoph.statz at mailbox.tu-dresden.de Phone: +49 351 463 32287 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PGP.sig Type: application/pgp-signature Size: 194 bytes Desc: This is a digitally signed message part URL: From tim.kroeger at cevis.uni-bremen.de Tue Oct 14 07:26:40 2008 From: tim.kroeger at cevis.uni-bremen.de (Tim Kroeger) Date: Tue, 14 Oct 2008 14:26:40 +0200 (CEST) Subject: Matrix free preconditioning Message-ID: Dear PETSc developers, Can you give me some advice on possible/recommended choices of preconditioners in the case of matrix free ksp? I create my matrix using MatCreateShell(). The matrix is not symmetric. I understand that the typical preconditioners like ILU or JACOBI will not work since they require access to the matrix entries. Is there any good precoditioner that will work? Best Regards, Tim -- Dr. Tim Kroeger Phone +49-421-218-7710 tim.kroeger at mevis.de, tim.kroeger at cevis.uni-bremen.de Fax +49-421-218-4236 MeVis Research GmbH, Universitaetsallee 29, 28359 Bremen, Germany Amtsgericht Bremen HRB 16222 Geschaeftsfuehrer: Prof. Dr. H.-O. 
Peitgen From knepley at gmail.com Tue Oct 14 08:10:42 2008 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 14 Oct 2008 08:10:42 -0500 Subject: Performance Issues on ccNuma-System In-Reply-To: <0DAC9B14-AB1F-4509-BCBB-8782C0373F0E@ifn.et.tu-dresden.de> References: <575CBAD1-EEBA-44EE-A70C-6199351C5445@mcs.anl.gov> <0DAC9B14-AB1F-4509-BCBB-8782C0373F0E@ifn.et.tu-dresden.de> Message-ID: On Tue, Oct 14, 2008 at 2:06 AM, Christoph Statz wrote: > > The sparse matrix is MUCH to big for the cache so has to stream through > from memory, > thus the huge number of "cache misses". This same performance issue occurs > on > all modern systems. > > Yes, of course. But on most modern systems i don't have the "problem" that > the cache is held coherent which causes a bunch of extra cycles penalty. > I don't have performance problems on our Clusters. It is just the "big" > machine. Write down the balance between cycles and memory bandwidth for these machines. I am guessing the clusters have better balance since they tend to have slower processors. Matt > Christoph > > -- > Christoph Statz > Institut f?r Nachrichtentechnik > Technische Universit?t Dresden > 01062 Dresden > Email: christoph.statz at mailbox.tu-dresden.de > Phone: +49 351 463 32287 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From knepley at gmail.com Tue Oct 14 08:16:20 2008 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 14 Oct 2008 08:16:20 -0500 Subject: Matrix free preconditioning In-Reply-To: References: Message-ID: The short answer is no. You msut know something more about your problem. For instance, it is common to build an auxilliary matrix from simple, analytic approximation to the operator and use that to construct a preconditioner (this is why we always take 2 matrix args). If your problem is elliptic, maybe you could get away with matrix-free solves in a MG methd (but unlikely since you usually need good smoothers). Matt On Tue, Oct 14, 2008 at 7:26 AM, Tim Kroeger wrote: > Dear PETSc developers, > > Can you give me some advice on possible/recommended choices of > preconditioners in the case of matrix free ksp? I create my matrix using > MatCreateShell(). The matrix is not symmetric. I understand that the > typical preconditioners like ILU or JACOBI will not work since they require > access to the matrix entries. Is there any good precoditioner that will > work? > > Best Regards, > > Tim > > -- > Dr. Tim Kroeger Phone > +49-421-218-7710 > tim.kroeger at mevis.de, tim.kroeger at cevis.uni-bremen.de Fax > +49-421-218-4236 > > MeVis Research GmbH, Universitaetsallee 29, 28359 Bremen, Germany > > Amtsgericht Bremen HRB 16222 > Geschaeftsfuehrer: Prof. Dr. H.-O. Peitgen > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From jed at 59A2.org Tue Oct 14 08:20:47 2008 From: jed at 59A2.org (Jed Brown) Date: Tue, 14 Oct 2008 15:20:47 +0200 Subject: Matrix free preconditioning In-Reply-To: <20081014131801.GB12864@brakk.ethz.ch> References: <20081014131801.GB12864@brakk.ethz.ch> Message-ID: <20081014132047.GC12864@brakk.ethz.ch> That should be alpha = 1/(1 + v' w) and you need only store the product alpha * w Jed -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From jed at 59A2.org Tue Oct 14 08:18:01 2008 From: jed at 59A2.org (Jed Brown) Date: Tue, 14 Oct 2008 15:18:01 +0200 Subject: Matrix free preconditioning In-Reply-To: References: Message-ID: <20081014131801.GB12864@brakk.ethz.ch> On Tue 2008-10-14 14:26, Tim Kroeger wrote: > Can you give me some advice on possible/recommended choices of > preconditioners in the case of matrix free ksp? I create my matrix > using MatCreateShell(). The matrix is not symmetric. I understand that > the typical preconditioners like ILU or JACOBI will not work since they > require access to the matrix entries. Is there any good precoditioner > that will work? The short answer is PCShell. That is, you have to know something about the matrix and can't expect any black-box approach to work. I assume your matrix comes from the rank-1 update you brought up recently on the libmesh list? That is B = A + u v' for where A is an explicit matrix and u,v are vectors. Did you try the rank-1 update I suggested? In detail, let A^ be an approximate inverse of A (i.e. an application of any standard preconditioner) and form a preconditioner for B as B^ = A^ - alpha w v' A^ where w = A^ u alpha = 1 + v' w. If A^ is an exact inverse (i.e. -pc_type lu), then B^ is an exact inverse. This shell preconditioner requires one application of the preconditioner for A in PCSetUp (you implement a setup function which stores `w' and `alpha') and one rank-1 update per application. Jed -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From tim.kroeger at cevis.uni-bremen.de Wed Oct 15 01:49:28 2008 From: tim.kroeger at cevis.uni-bremen.de (Tim Kroeger) Date: Wed, 15 Oct 2008 08:49:28 +0200 (CEST) Subject: Matrix free preconditioning In-Reply-To: <20081014131801.GB12864@brakk.ethz.ch> References: <20081014131801.GB12864@brakk.ethz.ch> Message-ID: Dear Jed and Matt and all, On Tue, 14 Oct 2008, Jed Brown wrote: > I assume your matrix comes from the rank-1 update you brought up > recently on the libmesh list? That is > > B = A + u v' > > for where A is an explicit matrix and u,v are vectors. Yes, that's it. Actually, I didn't know that anyone was reading both libmesh-users and petsc-users. > Did you try the > rank-1 update I suggested? In detail, let A^ be an approximate inverse > of A (i.e. an application of any standard preconditioner) and form a > preconditioner for B as > > B^ = A^ - alpha w v' A^ > > where > > w = A^ u > alpha = 1/(1 + v' w). Thank you very much for that suggestion. I guess this would be the best thing to do, but actually I have to admit that I am rather shy of implemeting that, in particular with a sensible interface to libMesh. Hence, I am thinking whether there might be some easier (altough certainly less efficient) possibility. I am thinking abouth the following two ideas: 1. I could do MatShellSetOperation(A,MATOP_GET_DIAGONAL,...), which is easy in my case and should enable PETSc to do at least JACOBI preconditioning for that matrix, shouldn't it? 2. Following Matt's suggestion, I could handle an auxilary matrix C for preconditioning. In my case, it would choose C(i,j)=A(i,j) whenever B(i,j)!=0, but C(i,j)=0 else. I.e., C has the same sparsity pattern as the (sparse) matrix B. This would also be easy to construct in my case. Any opinion on this is welcome. Best Regards, Tim -- Dr. 
Tim Kroeger Phone +49-421-218-7710 tim.kroeger at mevis.de, tim.kroeger at cevis.uni-bremen.de Fax +49-421-218-4236 MeVis Research GmbH, Universitaetsallee 29, 28359 Bremen, Germany Amtsgericht Bremen HRB 16222 Geschaeftsfuehrer: Prof. Dr. H.-O. Peitgen From tim.kroeger at cevis.uni-bremen.de Wed Oct 15 07:43:14 2008 From: tim.kroeger at cevis.uni-bremen.de (Tim Kroeger) Date: Wed, 15 Oct 2008 14:43:14 +0200 (CEST) Subject: Matrix free preconditioning In-Reply-To: References: <20081014131801.GB12864@brakk.ethz.ch> Message-ID: Dear Jed and Matt and all, In the meantime, I implemented idea no. 1: On Wed, 15 Oct 2008, Tim Kroeger wrote: > 1. I could do MatShellSetOperation(A,MATOP_GET_DIAGONAL,...), which is easy > in my case and should enable PETSc to do at least JACOBI preconditioning for > that matrix, shouldn't it? This seems to work fine for my example at the moment. Thank you very much again for your help. I might come back to the other suggestions/ideas if I encounter new problems when my example becomes more complicated. Best Regards, Tim -- Dr. Tim Kroeger Phone +49-421-218-7710 tim.kroeger at mevis.de, tim.kroeger at cevis.uni-bremen.de Fax +49-421-218-4236 MeVis Research GmbH, Universitaetsallee 29, 28359 Bremen, Germany Amtsgericht Bremen HRB 16222 Geschaeftsfuehrer: Prof. Dr. H.-O. Peitgen From eplanung at t-online.de Thu Oct 16 02:43:07 2008 From: eplanung at t-online.de (Franz Th. Langer) Date: Thu, 16 Oct 2008 09:43:07 +0200 Subject: cygwin/MPI-question Message-ID: <48F6F08B.1010901@t-online.de> Hi, System: Windows 2000, cygwin parallel computation with MPI (I am a newcomer to cygwin, I wrote a lot of par. progs for VC 6.0 +MPI.) compiling and linking of my par. petsc-program under cygwin is ok! I can run the program on 1 proc only! when I want to use more then 1 proc I am using Rexecshell! than the program querries about a wrong commandline? questions: -under cygwin: do I have to use something else than Rexecshell? -what has than to be installed/initiated on the other procs? Best regards Franz From balay at mcs.anl.gov Thu Oct 16 03:12:46 2008 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 16 Oct 2008 03:12:46 -0500 (CDT) Subject: cygwin/MPI-question In-Reply-To: <48F6F08B.1010901@t-online.de> References: <48F6F08B.1010901@t-online.de> Message-ID: On Thu, 16 Oct 2008, Franz Th. Langer wrote: > > Hi, > > System: Windows 2000, cygwin > parallel computation with MPI > > (I am a newcomer to cygwin, I wrote a lot > of par. progs for VC 6.0 +MPI.) > > compiling and linking of my par. petsc-program under cygwin is ok! > > I can run the program on 1 proc only! > > when I want to use more then 1 proc I am using Rexecshell! > than the program querries about a wrong commandline? > > questions: > > -under cygwin: do I have to use something else than Rexecshell? > -what has than to be installed/initiated on the other procs? Cygwin is used only to build libraries. If you built PETSc with MPI - then you have to use the MPI startup mecanism [i.e mpiexec or mpirun] to start parallel MPI jobs. Satish From eplanung at t-online.de Thu Oct 16 05:42:43 2008 From: eplanung at t-online.de (Franz Th. Langer) Date: Thu, 16 Oct 2008 12:42:43 +0200 Subject: cygwin/MPI-question In-Reply-To: References: <48F6F08B.1010901@t-online.de> Message-ID: <48F71AA3.3010507@t-online.de> Hi Satish, thanks very much for quick infos! I understand that I have to use mpirun or mpiexec. (I still dont know how the system knows which procs can be used?) 
(in rexecshell one can fill in a list with the node-names) my questions arise out of the following situation: under cygwin: 1. I downloaded Petsc and made the necc. definitions 2. make all (everything ok!) 3. make test ( (everything ok!) under the tests there are also test for parallelizations! I still dont know how Petsc was doing this tests??? I may have missed something , but I never found a call to mpirun or mpiexec? perhaps you can explain it? Best regards Franz Satish Balay wrote: >On Thu, 16 Oct 2008, Franz Th. Langer wrote: > > > >>Hi, >> >>System: Windows 2000, cygwin >>parallel computation with MPI >> >>(I am a newcomer to cygwin, I wrote a lot >>of par. progs for VC 6.0 +MPI.) >> >>compiling and linking of my par. petsc-program under cygwin is ok! >> >>I can run the program on 1 proc only! >> >>when I want to use more then 1 proc I am using Rexecshell! >>than the program querries about a wrong commandline? >> >>questions: >> >>-under cygwin: do I have to use something else than Rexecshell? >>-what has than to be installed/initiated on the other procs? >> >> > >Cygwin is used only to build libraries. If you built PETSc with MPI - >then you have to use the MPI startup mecanism [i.e mpiexec or mpirun] >to start parallel MPI jobs. > >Satish > > > > -- Mit freundlichen Gr??en Dipl.-Ing. Franz Theodor Langer (Gesch?ftsf?hrer) ----------------------------------------------------------------- E_Planung GmbH Planung + Berechnung f?r Wissenschaft und Technik im Ingenieurbau Schl?sselbergstra?e 30, 81673 M?nchen, Tel. 089/454933-0 Fax -14 Gesch?ftsnummer: HRB 90116, Gerichtsstand: M?nchen -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Oct 16 06:04:46 2008 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 16 Oct 2008 06:04:46 -0500 Subject: cygwin/MPI-question In-Reply-To: <48F71AA3.3010507@t-online.de> References: <48F6F08B.1010901@t-online.de> <48F71AA3.3010507@t-online.de> Message-ID: On Thu, Oct 16, 2008 at 5:42 AM, Franz Th. Langer wrote: > Hi Satish, > > thanks very much for quick infos! > > I understand that I have to use mpirun or mpiexec. > (I still dont know how the system knows which procs can be used?) > > (in rexecshell one can fill in a list with the node-names) > > my questions arise out of the following situation: > > under cygwin: > > 1. I downloaded Petsc and made the necc. definitions > > 2. make all (everything ok!) > > 3. make test ( (everything ok!) > > under the tests there are also test for parallelizations! > > I still dont know how Petsc was doing this tests??? I assume you are using the latest release. In bmake/$PETSC_ARCH/petscconf there is a definition of MPIRUN (or MPIEXEC) which is the location of that program and it used to run the test by make. Matt > I may have missed something , but I never found a call to mpirun or mpiexec? > > perhaps you can explain it? > > Best regards > Franz > > > > Satish Balay wrote: > > On Thu, 16 Oct 2008, Franz Th. Langer wrote: > > > > Hi, > > System: Windows 2000, cygwin > parallel computation with MPI > > (I am a newcomer to cygwin, I wrote a lot > of par. progs for VC 6.0 +MPI.) > > compiling and linking of my par. petsc-program under cygwin is ok! > > I can run the program on 1 proc only! > > when I want to use more then 1 proc I am using Rexecshell! > than the program querries about a wrong commandline? > > questions: > > -under cygwin: do I have to use something else than Rexecshell? > -what has than to be installed/initiated on the other procs? 
> > > Cygwin is used only to build libraries. If you built PETSc with MPI - > then you have to use the MPI startup mecanism [i.e mpiexec or mpirun] > to start parallel MPI jobs. > > Satish > > > > > -- > Mit freundlichen Gr??en > > Dipl.-Ing. Franz Theodor Langer (Gesch?ftsf?hrer) > ----------------------------------------------------------------- > E_Planung GmbH > Planung + Berechnung f?r Wissenschaft und Technik im Ingenieurbau > > Schl?sselbergstra?e 30, 81673 M?nchen, Tel. 089/454933-0 Fax -14 > Gesch?ftsnummer: HRB 90116, Gerichtsstand: M?nchen > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From eplanung at t-online.de Thu Oct 16 09:35:09 2008 From: eplanung at t-online.de (Franz Th. Langer) Date: Thu, 16 Oct 2008 16:35:09 +0200 Subject: cygwin/MPI-question In-Reply-To: References: <48F6F08B.1010901@t-online.de> <48F71AA3.3010507@t-online.de> Message-ID: <48F7511D.2070500@t-online.de> Hi Matt, thanks very much for help! you are right I am using the latest version. I found thedefinition of MPIEXEC where mpiexec.exe is called. when I want to run my program with more than 1 proc, where do I tell the systems which nodes I like to have? is it enough just to copy mpiexec.exe to the other nodes? (I am new to cygwin!) I was using rexecshell.exe all the time. regards Franz Matthew Knepley wrote: >On Thu, Oct 16, 2008 at 5:42 AM, Franz Th. Langer wrote: > > >>Hi Satish, >> >>thanks very much for quick infos! >> >>I understand that I have to use mpirun or mpiexec. >>(I still dont know how the system knows which procs can be used?) >> >>(in rexecshell one can fill in a list with the node-names) >> >>my questions arise out of the following situation: >> >>under cygwin: >> >>1. I downloaded Petsc and made the necc. definitions >> >>2. make all (everything ok!) >> >>3. make test ( (everything ok!) >> >>under the tests there are also test for parallelizations! >> >>I still dont know how Petsc was doing this tests??? >> >> > >I assume you are using the latest release. In bmake/$PETSC_ARCH/petscconf >there is a definition of MPIRUN (or MPIEXEC) which is the location of that >program and it used to run the test by make. > > Matt > > > >>I may have missed something , but I never found a call to mpirun or mpiexec? >> >>perhaps you can explain it? >> >>Best regards >>Franz >> >> >> >>Satish Balay wrote: >> >>On Thu, 16 Oct 2008, Franz Th. Langer wrote: >> >> >> >>Hi, >> >>System: Windows 2000, cygwin >>parallel computation with MPI >> >>(I am a newcomer to cygwin, I wrote a lot >>of par. progs for VC 6.0 +MPI.) >> >>compiling and linking of my par. petsc-program under cygwin is ok! >> >>I can run the program on 1 proc only! >> >>when I want to use more then 1 proc I am using Rexecshell! >>than the program querries about a wrong commandline? >> >>questions: >> >>-under cygwin: do I have to use something else than Rexecshell? >>-what has than to be installed/initiated on the other procs? >> >> >>Cygwin is used only to build libraries. If you built PETSc with MPI - >>then you have to use the MPI startup mecanism [i.e mpiexec or mpirun] >>to start parallel MPI jobs. >> >>Satish >> >> >> >> >>-- >>Mit freundlichen Gr??en >> >>Dipl.-Ing. Franz Theodor Langer (Gesch?ftsf?hrer) >>----------------------------------------------------------------- >>E_Planung GmbH >>Planung + Berechnung f?r Wissenschaft und Technik im Ingenieurbau >> >>Schl?sselbergstra?e 30, 81673 M?nchen, Tel. 
089/454933-0 Fax -14 >>Gesch?ftsnummer: HRB 90116, Gerichtsstand: M?nchen >> >> >> > > > > > -- Mit freundlichen Gr??en Dipl.-Ing. Franz Theodor Langer (Gesch?ftsf?hrer) ----------------------------------------------------------------- E_Planung GmbH Planung + Berechnung f?r Wissenschaft und Technik im Ingenieurbau Schl?sselbergstra?e 30, 81673 M?nchen, Tel. 089/454933-0 Fax -14 Gesch?ftsnummer: HRB 90116, Gerichtsstand: M?nchen -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Oct 16 09:47:08 2008 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 16 Oct 2008 09:47:08 -0500 Subject: cygwin/MPI-question In-Reply-To: <48F7511D.2070500@t-online.de> References: <48F6F08B.1010901@t-online.de> <48F71AA3.3010507@t-online.de> <48F7511D.2070500@t-online.de> Message-ID: On Thu, Oct 16, 2008 at 9:35 AM, Franz Th. Langer wrote: > Hi Matt, > > thanks very much for help! > you are right I am using the latest version. > > I found thedefinition of MPIEXEC where mpiexec.exe is called. > > when I want to run my program with more than > 1 proc, where do I tell the systems which nodes I like to have? > > is it enough just to copy mpiexec.exe to the other nodes? It is completely handled by the MPI implementation, not by PETSc or the MPI standard. If this is MPICH, you would specify a procgroup file, which is detailed in the MPICH documentation. Thanks, Matt > (I am new to cygwin!) > > I was using rexecshell.exe all the time. > > regards > Franz > > > Matthew Knepley wrote: > > On Thu, Oct 16, 2008 at 5:42 AM, Franz Th. Langer > wrote: > > > Hi Satish, > > thanks very much for quick infos! > > I understand that I have to use mpirun or mpiexec. > (I still dont know how the system knows which procs can be used?) > > (in rexecshell one can fill in a list with the node-names) > > my questions arise out of the following situation: > > under cygwin: > > 1. I downloaded Petsc and made the necc. definitions > > 2. make all (everything ok!) > > 3. make test ( (everything ok!) > > under the tests there are also test for parallelizations! > > I still dont know how Petsc was doing this tests??? > > > I assume you are using the latest release. In bmake/$PETSC_ARCH/petscconf > there is a definition of MPIRUN (or MPIEXEC) which is the location of that > program and it used to run the test by make. > > Matt > > > > I may have missed something , but I never found a call to mpirun or mpiexec? > > perhaps you can explain it? > > Best regards > Franz > > > > Satish Balay wrote: > > On Thu, 16 Oct 2008, Franz Th. Langer wrote: > > > > Hi, > > System: Windows 2000, cygwin > parallel computation with MPI > > (I am a newcomer to cygwin, I wrote a lot > of par. progs for VC 6.0 +MPI.) > > compiling and linking of my par. petsc-program under cygwin is ok! > > I can run the program on 1 proc only! > > when I want to use more then 1 proc I am using Rexecshell! > than the program querries about a wrong commandline? > > questions: > > -under cygwin: do I have to use something else than Rexecshell? > -what has than to be installed/initiated on the other procs? > > > Cygwin is used only to build libraries. If you built PETSc with MPI - > then you have to use the MPI startup mecanism [i.e mpiexec or mpirun] > to start parallel MPI jobs. > > Satish > > > > > -- > Mit freundlichen Gr??en > > Dipl.-Ing. 
Franz Theodor Langer (Gesch?ftsf?hrer) > ----------------------------------------------------------------- > E_Planung GmbH > Planung + Berechnung f?r Wissenschaft und Technik im Ingenieurbau > > Schl?sselbergstra?e 30, 81673 M?nchen, Tel. 089/454933-0 Fax -14 > Gesch?ftsnummer: HRB 90116, Gerichtsstand: M?nchen > > > > > > -- > Mit freundlichen Gr??en > > Dipl.-Ing. Franz Theodor Langer (Gesch?ftsf?hrer) > ----------------------------------------------------------------- > E_Planung GmbH > Planung + Berechnung f?r Wissenschaft und Technik im Ingenieurbau > > Schl?sselbergstra?e 30, 81673 M?nchen, Tel. 089/454933-0 Fax -14 > Gesch?ftsnummer: HRB 90116, Gerichtsstand: M?nchen > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From manav at vt.edu Thu Oct 16 12:42:26 2008 From: manav at vt.edu (Manav Bhatia) Date: Thu, 16 Oct 2008 13:42:26 -0400 Subject: Petsc matrices in Matlab Message-ID: Hi, Is there a way to directly read a Petsc sequential sparse matrix in Matlab using a Matlab script? I vaguely remember doing this a couple of years ago, and the script possibly was a part of the Petsc distribution. Is that still the case? I am, otherwise, aware of the methods outlined in chapter 9 of the Petsc users guide. Thanks, Manav From balay at mcs.anl.gov Thu Oct 16 13:03:20 2008 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 16 Oct 2008 13:03:20 -0500 (CDT) Subject: Petsc matrices in Matlab In-Reply-To: References: Message-ID: Check bin/matlab/PetscBinaryRead.m Satish On Thu, 16 Oct 2008, Manav Bhatia wrote: > Hi, > > Is there a way to directly read a Petsc sequential sparse matrix in Matlab > using a Matlab script? > I vaguely remember doing this a couple of years ago, and the script > possibly was a part of the Petsc distribution. Is that still the case? > I am, otherwise, aware of the methods outlined in chapter 9 of the Petsc > users guide. > > Thanks, > Manav > > From z.sheng at ewi.tudelft.nl Mon Oct 20 10:56:53 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Mon, 20 Oct 2008 17:56:53 +0200 Subject: Problem with MatMatMultTranspose In-Reply-To: References: Message-ID: <48FCAA45.6050609@ewi.tudelft.nl> Dear all I am using this MatMatMultTranspose function for complex matrices, but it seems to be doing something weird. for instance, if I have complex matrix A, and I compute A^T*A with this function, it does not generate a Hermitian matrix. I am thinking that maybe the function take the transpose of A instead of the conjugate transpose .... Do you know how I can get an A^H*A instead of A^T*A for complex matrices? Thanks a lot Best regards Zhifeng From hzhang at mcs.anl.gov Mon Oct 20 14:06:16 2008 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Mon, 20 Oct 2008 14:06:16 -0500 (CDT) Subject: Problem with MatMatMultTranspose In-Reply-To: <48FCAA45.6050609@ewi.tudelft.nl> References: <48FCAA45.6050609@ewi.tudelft.nl> Message-ID: Zhifeng, We do not have support for matrix operations on Hermitian matrix yet. Hong On Mon, 20 Oct 2008, zhifeng sheng wrote: > Dear all > > I am using this MatMatMultTranspose function for complex matrices, but it > seems to be doing something weird. > > for instance, if I have complex matrix A, and I compute A^T*A with this > function, it does not generate a Hermitian > > matrix. > > I am thinking that maybe the function take the transpose of A instead of the > conjugate transpose .... 
> > Do you know how I can get an A^H*A instead of A^T*A for complex matrices? > > Thanks a lot > Best regards > Zhifeng > > From z.sheng at ewi.tudelft.nl Wed Oct 22 02:40:11 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Wed, 22 Oct 2008 09:40:11 +0200 Subject: Problem with MatMatMultTranspose In-Reply-To: References: <48FCAA45.6050609@ewi.tudelft.nl> Message-ID: <48FED8DB.4020902@ewi.tudelft.nl> Hi, you mean the conjugate transpose for complex matrix is not supported? then how can you implement the iterative solvers for complex matrices? because, some iterative solvers need it. Thanks Best regards Hong Zhang wrote: > > Zhifeng, > > We do not have support for matrix operations on Hermitian matrix yet. > Hong > > On Mon, 20 Oct 2008, zhifeng sheng wrote: > >> Dear all >> >> I am using this MatMatMultTranspose function for complex matrices, >> but it seems to be doing something weird. >> >> for instance, if I have complex matrix A, and I compute A^T*A with >> this function, it does not generate a Hermitian >> >> matrix. >> >> I am thinking that maybe the function take the transpose of A instead >> of the conjugate transpose .... >> >> Do you know how I can get an A^H*A instead of A^T*A for complex >> matrices? >> >> Thanks a lot >> Best regards >> Zhifeng >> >> > From bsmith at mcs.anl.gov Wed Oct 22 07:23:23 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 22 Oct 2008 07:23:23 -0500 Subject: Problem with MatMatMultTranspose In-Reply-To: <48FED8DB.4020902@ewi.tudelft.nl> References: <48FCAA45.6050609@ewi.tudelft.nl> <48FED8DB.4020902@ewi.tudelft.nl> Message-ID: <2BDE918B-B508-4F69-8237-37868AA4BBC2@mcs.anl.gov> There is only support for CG with Hermitian transpose, ksp_cg_type symmetric or hermitian, KSPCGSetType() the others only support complex, no Hermitian transpose. Barry On Oct 22, 2008, at 2:40 AM, zhifeng sheng wrote: > Hi, > > you mean the conjugate transpose for complex matrix is not supported? > > then how can you implement the iterative solvers for complex > matrices? because, some iterative solvers need it. > > Thanks > Best regards > > > > Hong Zhang wrote: >> >> Zhifeng, >> >> We do not have support for matrix operations on Hermitian matrix yet. >> Hong >> >> On Mon, 20 Oct 2008, zhifeng sheng wrote: >> >>> Dear all >>> >>> I am using this MatMatMultTranspose function for complex matrices, >>> but it seems to be doing something weird. >>> >>> for instance, if I have complex matrix A, and I compute A^T*A with >>> this function, it does not generate a Hermitian >> > matrix. >>> >>> I am thinking that maybe the function take the transpose of A >>> instead of the conjugate transpose .... >>> >>> Do you know how I can get an A^H*A instead of A^T*A for complex >>> matrices? >>> >>> Thanks a lot >>> Best regards >>> Zhifeng >>> >> > >> > From z.sheng at ewi.tudelft.nl Wed Oct 22 09:15:25 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Wed, 22 Oct 2008 16:15:25 +0200 Subject: Problem with MatMatMultTranspose (con'd) In-Reply-To: <2BDE918B-B508-4F69-8237-37868AA4BBC2@mcs.anl.gov> References: <48FCAA45.6050609@ewi.tudelft.nl> <48FED8DB.4020902@ewi.tudelft.nl> <2BDE918B-B508-4F69-8237-37868AA4BBC2@mcs.anl.gov> Message-ID: <48FF357D.2020306@ewi.tudelft.nl> Dear all suppose I have a complex matrix (Hermitian positive definite) to solve, which KSP solver(s) can support solving it? PS: for this moment, I don't need to take into account that the matrix is hermitian. 
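For reference, a minimal sketch of the CG setup Barry points to above (KSPCG with KSPCGSetType), assuming a PETSc build configured with --with-scalar-type=complex and an already assembled Hermitian positive definite Mat A plus vectors b and x; error checking is omitted, and a couple of the calls changed their signatures between PETSc releases, as noted in the comments:

#include <petscksp.h>

/* inside some solve routine; A, b, x already created and assembled */
KSP ksp;
KSPCreate(PETSC_COMM_WORLD, &ksp);
KSPSetOperators(ksp, A, A);            /* older releases take an extra MatStructure argument     */
KSPSetType(ksp, KSPCG);
KSPCGSetType(ksp, KSP_CG_HERMITIAN);   /* KSP_CG_SYMMETRIC would instead assume complex symmetric */
KSPSetFromOptions(ksp);                /* equivalently: -ksp_type cg -ksp_cg_type hermitian       */
KSPSolve(ksp, b, x);
KSPDestroy(&ksp);                      /* older releases pass the KSP by value                    */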
By the way, if I really need conjugate transpose function for complex matrices, must I implement it myself? (I mean, is there any function that I can make use of?) Thanks Best regards Zhifeng Barry Smith wrote: > > There is only support for CG with Hermitian transpose, ksp_cg_type > symmetric or hermitian, KSPCGSetType() > the others only support complex, no Hermitian transpose. > > Barry > > On Oct 22, 2008, at 2:40 AM, zhifeng sheng wrote: > >> Hi, >> >> you mean the conjugate transpose for complex matrix is not supported? >> >> then how can you implement the iterative solvers for complex >> matrices? because, some iterative solvers need it. >> >> Thanks >> Best regards >> >> >> >> Hong Zhang wrote: >>> >>> Zhifeng, >>> >>> We do not have support for matrix operations on Hermitian matrix yet. >>> Hong >>> >>> On Mon, 20 Oct 2008, zhifeng sheng wrote: >>> >>>> Dear all >>>> >>>> I am using this MatMatMultTranspose function for complex matrices, >>>> but it seems to be doing something weird. >>>> >>>> for instance, if I have complex matrix A, and I compute A^T*A with >>>> this function, it does not generate a Hermitian >>>> >>>> matrix. >>>> >>>> I am thinking that maybe the function take the transpose of A >>>> instead of the conjugate transpose .... >>>> >>>> Do you know how I can get an A^H*A instead of A^T*A for complex >>>> matrices? >>>> >>>> Thanks a lot >>>> Best regards >>>> Zhifeng >>>> >>>> >>> >> > From bsmith at mcs.anl.gov Wed Oct 22 09:21:44 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 22 Oct 2008 09:21:44 -0500 Subject: Problem with MatMatMultTranspose (con'd) In-Reply-To: <48FF357D.2020306@ewi.tudelft.nl> References: <48FCAA45.6050609@ewi.tudelft.nl> <48FED8DB.4020902@ewi.tudelft.nl> <2BDE918B-B508-4F69-8237-37868AA4BBC2@mcs.anl.gov> <48FF357D.2020306@ewi.tudelft.nl> Message-ID: <0FD60B8F-3161-4F72-8230-83E74DFEE3C4@mcs.anl.gov> On Oct 22, 2008, at 9:15 AM, zhifeng sheng wrote: > Dear all > > suppose I have a complex matrix (Hermitian positive definite) to > solve, which KSP solver(s) can support solving it? > PS: for this moment, I don't need to take into account that the > matrix is hermitian. > KSPCG and use KSPCGSetType(ksp,KSP_CG_HERMITIAN ) > By the way, if I really need conjugate transpose function for > complex matrices, must I implement it myself? yes > > (I mean, is there any function that I can make use of?) > > Thanks > Best regards > Zhifeng > > Barry Smith wrote: >> >> There is only support for CG with Hermitian transpose, >> ksp_cg_type symmetric or hermitian, KSPCGSetType() >> the others only support complex, no Hermitian transpose. >> >> Barry >> >> On Oct 22, 2008, at 2:40 AM, zhifeng sheng wrote: >> >>> Hi, >>> >>> you mean the conjugate transpose for complex matrix is not >>> supported? >>> >>> then how can you implement the iterative solvers for complex >>> matrices? because, some iterative solvers need it. >>> >>> Thanks >>> Best regards >>> >>> >>> >>> Hong Zhang wrote: >>>> >>>> Zhifeng, >>>> >>>> We do not have support for matrix operations on Hermitian matrix >>>> yet. >>>> Hong >>>> >>>> On Mon, 20 Oct 2008, zhifeng sheng wrote: >>>> >>>>> Dear all >>>>> >>>>> I am using this MatMatMultTranspose function for complex >>>>> matrices, but it seems to be doing something weird. >>>>> >>>>> for instance, if I have complex matrix A, and I compute A^T*A >>>>> with this function, it does not generate a Hermitian >>>> > matrix. 
>>>>> >>>>> I am thinking that maybe the function take the transpose of A >>>>> instead of the conjugate transpose .... >>>>> >>>>> Do you know how I can get an A^H*A instead of A^T*A for complex >>>>> matrices? >>>>> >>>>> Thanks a lot >>>>> Best regards >>>>> Zhifeng >>>>> >>>> > >>>> >>> >> > From z.sheng at ewi.tudelft.nl Thu Oct 23 09:36:58 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Thu, 23 Oct 2008 16:36:58 +0200 Subject: Problem with MatMatMultTranspose (con'd) In-Reply-To: <0FD60B8F-3161-4F72-8230-83E74DFEE3C4@mcs.anl.gov> References: <48FCAA45.6050609@ewi.tudelft.nl> <48FED8DB.4020902@ewi.tudelft.nl> <2BDE918B-B508-4F69-8237-37868AA4BBC2@mcs.anl.gov> <48FF357D.2020306@ewi.tudelft.nl> <0FD60B8F-3161-4F72-8230-83E74DFEE3C4@mcs.anl.gov> Message-ID: <49008C0A.3000706@ewi.tudelft.nl> Hi I have implemented the conjugate transpose, and it totally works for complex matrices. thanks for your help. but I think if you could just overload the transpose function of complex matrices as conjugate transpose then every solver should work on complex matrices automatically. Plus, I don't think it is meaningful to compute the transpose of a complex matrix thanks Best regards Zhifeng Barry Smith wrote: > > On Oct 22, 2008, at 9:15 AM, zhifeng sheng wrote: > >> Dear all >> >> suppose I have a complex matrix (Hermitian positive definite) to >> solve, which KSP solver(s) can support solving it? >> PS: for this moment, I don't need to take into account that the >> matrix is hermitian. >> > > KSPCG and use KSPCGSetType(ksp,KSP_CG_HERMITIAN ) > >> By the way, if I really need conjugate transpose function for complex >> matrices, must I implement it myself? > yes > >> >> (I mean, is there any function that I can make use of?) >> >> Thanks >> Best regards >> Zhifeng >> >> Barry Smith wrote: >>> >>> There is only support for CG with Hermitian transpose, ksp_cg_type >>> symmetric or hermitian, KSPCGSetType() >>> the others only support complex, no Hermitian transpose. >>> >>> Barry >>> >>> On Oct 22, 2008, at 2:40 AM, zhifeng sheng wrote: >>> >>>> Hi, >>>> >>>> you mean the conjugate transpose for complex matrix is not supported? >>>> >>>> then how can you implement the iterative solvers for complex >>>> matrices? because, some iterative solvers need it. >>>> >>>> Thanks >>>> Best regards >>>> >>>> >>>> >>>> Hong Zhang wrote: >>>>> >>>>> Zhifeng, >>>>> >>>>> We do not have support for matrix operations on Hermitian matrix yet. >>>>> Hong >>>>> >>>>> On Mon, 20 Oct 2008, zhifeng sheng wrote: >>>>> >>>>>> Dear all >>>>>> >>>>>> I am using this MatMatMultTranspose function for complex >>>>>> matrices, but it seems to be doing something weird. >>>>>> >>>>>> for instance, if I have complex matrix A, and I compute A^T*A >>>>>> with this function, it does not generate a Hermitian >>>>>> >>>>>> matrix. >>>>>> >>>>>> I am thinking that maybe the function take the transpose of A >>>>>> instead of the conjugate transpose .... >>>>>> >>>>>> Do you know how I can get an A^H*A instead of A^T*A for complex >>>>>> matrices? >>>>>> >>>>>> Thanks a lot >>>>>> Best regards >>>>>> Zhifeng >>>>>> >>>>>> >>>>> >>>> >>> >> > From eplanung at t-online.de Fri Oct 24 08:07:40 2008 From: eplanung at t-online.de (Franz Th. Langer) Date: Fri, 24 Oct 2008 15:07:40 +0200 Subject: precompiled lib Message-ID: <4901C89C.3030202@t-online.de> Hi, is there a precompiled parallel Petsc-lib which I could use under Windows 2000, vc6.0 ? 
Best regards Franz From knepley at gmail.com Fri Oct 24 16:09:05 2008 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 24 Oct 2008 16:09:05 -0500 Subject: precompiled lib In-Reply-To: <4901C89C.3030202@t-online.de> References: <4901C89C.3030202@t-online.de> Message-ID: On Fri, Oct 24, 2008 at 8:07 AM, Franz Th. Langer wrote: > Hi, > > is there a precompiled parallel Petsc-lib which I could use under Windows > 2000, vc6.0 ? We do not provide precompiled libraries, only source code. The great variety of systems makes it too hard to maintain these. Thanks, Matt > Best regards > Franz -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From eplanung at t-online.de Sat Oct 25 06:53:24 2008 From: eplanung at t-online.de (Franz Th. Langer) Date: Sat, 25 Oct 2008 13:53:24 +0200 Subject: precompiled lib In-Reply-To: References: <4901C89C.3030202@t-online.de> Message-ID: <490308B4.9070902@t-online.de> Hi Matt, thanks for your infos! what source-package do you recommend to download for my case? and where do I have to compile it? Best regards Franz Matthew Knepley wrote: >On Fri, Oct 24, 2008 at 8:07 AM, Franz Th. Langer wrote: > > >>Hi, >> >>is there a precompiled parallel Petsc-lib which I could use under Windows >>2000, vc6.0 ? >> >> > >We do not provide precompiled libraries, only source code. The great variety of >systems makes it too hard to maintain these. > > Thanks, > > Matt > > > >>Best regards >>Franz >> >> -- Mit freundlichen Gr??en Dipl.-Ing. Franz Theodor Langer (Gesch?ftsf?hrer) ----------------------------------------------------------------- E_Planung GmbH Planung + Berechnung f?r Wissenschaft und Technik im Ingenieurbau Schl?sselbergstra?e 30, 81673 M?nchen, Tel. 089/454933-0 Fax -14 Gesch?ftsnummer: HRB 90116, Gerichtsstand: M?nchen -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Oct 25 10:32:28 2008 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 25 Oct 2008 10:32:28 -0500 Subject: precompiled lib In-Reply-To: <490308B4.9070902@t-online.de> References: <4901C89C.3030202@t-online.de> <490308B4.9070902@t-online.de> Message-ID: 2008/10/25 Franz Th. Langer : > Hi Matt, > > thanks for your infos! > > what source-package do you recommend to download for my case? > > and where do I have to compile it? You can download the source from our website. Installation instructions are also there. Thanks, Matt > Best regards > Franz > > Matthew Knepley wrote: > > On Fri, Oct 24, 2008 at 8:07 AM, Franz Th. Langer > wrote: > > > Hi, > > is there a precompiled parallel Petsc-lib which I could use under Windows > 2000, vc6.0 ? > > > We do not provide precompiled libraries, only source code. The great variety > of > systems makes it too hard to maintain these. > > Thanks, > > Matt > > > > Best regards > Franz > > > -- > Mit freundlichen Gr??en > > Dipl.-Ing. Franz Theodor Langer (Gesch?ftsf?hrer) > ----------------------------------------------------------------- > E_Planung GmbH > Planung + Berechnung f?r Wissenschaft und Technik im Ingenieurbau > > Schl?sselbergstra?e 30, 81673 M?nchen, Tel. 089/454933-0 Fax -14 > Gesch?ftsnummer: HRB 90116, Gerichtsstand: M?nchen > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From bui at calcreek.com Sat Oct 25 13:34:58 2008 From: bui at calcreek.com (Thuc Bui) Date: Sat, 25 Oct 2008 11:34:58 -0700 Subject: precompiled lib In-Reply-To: <490308B4.9070902@t-online.de> References: <4901C89C.3030202@t-online.de> <490308B4.9070902@t-online.de> Message-ID: <4136B96A8DF14C8689189E60A80EE0EA@aphrodite> Hi Matthew, I do have Petsc 2.3.3-p15 built under Windows XP and Vista with either VS2003 or VS2005 as a DLL. They are built with downloaded BLAS and LAPACK and MPICH2 for both Release and Debug configurations. If you want them I can email them to you as a standalone directory so that you can simply link to it. My email address is bui at calcreek.com. Cheers, Thuc _____ From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Franz Th. Langer Sent: Saturday, October 25, 2008 4:53 AM To: petsc-users at mcs.anl.gov Subject: Re: precompiled lib Hi Matt, thanks for your infos! what source-package do you recommend to download for my case? and where do I have to compile it? Best regards Franz Matthew Knepley wrote: On Fri, Oct 24, 2008 at 8:07 AM, Franz Th. Langer wrote: Hi, is there a precompiled parallel Petsc-lib which I could use under Windows 2000, vc6.0 ? We do not provide precompiled libraries, only source code. The great variety of systems makes it too hard to maintain these. Thanks, Matt Best regards Franz -- Mit freundlichen Gr??en Dipl.-Ing. Franz Theodor Langer (Gesch?ftsf?hrer) ----------------------------------------------------------------- E_Planung GmbH Planung + Berechnung f?r Wissenschaft und Technik im Ingenieurbau Schl?sselbergstra?e 30, 81673 M?nchen, Tel. 089/454933-0 Fax -14 Gesch?ftsnummer: HRB 90116, Gerichtsstand: M?nchen -------------- next part -------------- An HTML attachment was scrubbed... URL: From z.sheng at ewi.tudelft.nl Mon Oct 27 04:53:33 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Mon, 27 Oct 2008 10:53:33 +0100 Subject: linear solver for complex matrix In-Reply-To: <4901C89C.3030202@t-online.de> References: <4901C89C.3030202@t-online.de> Message-ID: <49058F9D.6000604@ewi.tudelft.nl> Dear all How can I make the other linear solvers work for complex system? I think if only I can make the transpose function a little different then they should work. but I don't know where I should start. Did anyone have similar problem with the linear solvers for complex system before (the linear solver for complex system needs conjugate transpose)? and how could you solve it? Thanks a lot Best regards Zhifeng Sheng From hzhang at mcs.anl.gov Mon Oct 27 09:03:37 2008 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Mon, 27 Oct 2008 09:03:37 -0500 (CDT) Subject: linear solver for complex matrix In-Reply-To: <49058F9D.6000604@ewi.tudelft.nl> References: <4901C89C.3030202@t-online.de> <49058F9D.6000604@ewi.tudelft.nl> Message-ID: Zhifeng, Petsc's linear solvers, including the external packages (e.g., superlu, mumps, and spooles) all support complex precision, simply configure petsc library with '--with-scalar-type=complex'. > How can I make the other linear solvers work for complex system? I think if > only I can make the transpose function a little different then they should > work. but I don't know where I should start. > > Did anyone have similar problem with the linear solvers for complex system > before (the linear solver for complex system needs conjugate transpose)? and > how could you solve it? Why do you need conjugate transpose for using petsc solver? 
Do you develop your own solver for Hermitian matrix? We do not have some basic matrix operations for Hermitian matrix yet, e.g., MatMult_Hermitian() when only half of the matrix entries are stored. You need to implement this operation for your own solver.

If you use petsc AIJ matrix format, all complex linear solvers should work. Storing half of the entries is not as efficient as storing the entire matrix when you have sufficient memory space.

Hong

> Thanks a lot
> Best regards
> Zhifeng Sheng

From z.sheng at ewi.tudelft.nl Mon Oct 27 10:11:37 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Mon, 27 Oct 2008 16:11:37 +0100 Subject: linear solver for complex matrix In-Reply-To: References: <4901C89C.3030202@t-online.de> <49058F9D.6000604@ewi.tudelft.nl> Message-ID: <4905DA29.7040302@ewi.tudelft.nl>

Dear all

It looks like something strange is going on for my Petsc (which was built with '--with-scalar-type=complex'):

1) I got a complex system of equations, which I can solve with CG+SOR. But when I try to use BICGS+SOR to solve it, it never converges.

2) I noticed that the transpose function for complex matrices was not what I was looking for. It really computes the transpose instead of the conjugate transpose, which is the "right transpose" for a complex matrix. Some linear solvers, e.g. BICG and BCGS, need to compute the transpose of a real system matrix (to multiply it with a vector), while they need the conjugate transpose for complex system matrices.

So I am wondering whether BICGS+SOR did not converge for my complex matrix because of a bad A^T x used internally in these linear solvers. Such a problem does not exist for CG+SOR; therefore, CG+SOR converged.

PS: I sent some emails about this problem before, but I guess I did not make myself clear :o

Thanks a lot
Best regards
Zhifeng Sheng

Hong Zhang wrote:
> Zhifeng,
>
> Petsc's linear solvers, including the external packages (e.g., superlu, mumps, and spooles) all support complex precision, simply configure petsc library with '--with-scalar-type=complex'.
>
>> How can I make the other linear solvers work for complex system? I think if only I can make the transpose function a little different then they should work. but I don't know where I should start.
>>
>> Did anyone have similar problem with the linear solvers for complex system before (the linear solver for complex system needs conjugate transpose)? and how could you solve it?
>
> Why do you need conjugate transpose for using petsc solver? Do you develop your own solver for Hermitian matrix? We do not have some basic matrix operations for Hermitian matrix yet, e.g., MatMult_Hermitian() when only half of the matrix entries are stored. You need to implement this operation for your own solver.
>
> If you use petsc AIJ matrix format, all complex linear solvers should work. Storing half of the entries is not as efficient as storing the entire matrix when you have sufficient memory space.
>
> Hong
>
>> Thanks a lot
>> Best regards
>> Zhifeng Sheng

From bsmith at mcs.anl.gov Mon Oct 27 11:20:52 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 27 Oct 2008 11:20:52 -0500 Subject: linear solver for complex matrix In-Reply-To: <4905DA29.7040302@ewi.tudelft.nl> References: <4901C89C.3030202@t-online.de> <49058F9D.6000604@ewi.tudelft.nl> <4905DA29.7040302@ewi.tudelft.nl> Message-ID:

The PETSc MatMultTranspose(), as you state, does not use conjugate transpose, thus the Krylov methods that require a conjugate transpose will not work.
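As an aside on this point: the action of the conjugate transpose can be pieced together from calls that do exist, using the identity A^H x = conj(A^T conj(x)). The sketch below is only illustrative (the name HermitianTransposeMult is made up, and error checking is omitted):

/* y = A^H x, built from MatMultTranspose() and VecConjugate() */
static PetscErrorCode HermitianTransposeMult(Mat A, Vec x, Vec y)
{
  Vec xc;
  VecDuplicate(x, &xc);
  VecCopy(x, xc);
  VecConjugate(xc);             /* xc = conj(x)                   */
  MatMultTranspose(A, xc, y);   /* y  = A^T conj(x)               */
  VecConjugate(y);              /* y  = conj(A^T conj(x)) = A^H x */
  VecDestroy(&xc);              /* older releases: VecDestroy(xc) */
  return 0;
}

Whether wiring such a product into the Krylov methods is sufficient also depends on the inner-product conventions Barry describes next, so this is at best a starting point.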
Unless you change the MatMultTranpose() routine. In addition, these algorithms use VecDot(), with the complex conjugate transpose the order of the arguments to this function matter (determining which one is conjugated). The coding for these methods was NOT done to use the proper conjugation (many of the original references do not include this information). In other words PETSc DOES NOT support complex conjugate for most of the Krylov methods, I think we already said this before. If your matrix is complex hermitian why not just use CG, why would you want to use a different Krylov method? Barry On Oct 27, 2008, at 10:11 AM, zhifeng sheng wrote: > Dear all > > It looks like something strange is going on for my Petsc (which was > built with '--with-scalar-type=complex'.): > > 1) I got a complex system of equation, with I can solve with CG > +SOR . But when I try to use BICGS+SOR to solve it, it never > converges. > > 2) I noticed that the transpose function for complex matrices was > not what I was looking for. It really computes the transpose > instead of the conjugate transpose which is the "right transpose" > for complex matrix. > > For some linear solvers, e.g. BICG, BCGS, need to compute the > transpose of real system matrix (to multiply it with a vector), > while they need to compute the conjugate transpose for complex > system matrices. > > So I am wondering whether BICGS+SOR did not converge for my complex > matrix because of a bad A^Tx used internally in these linear > solvers. Such problem does not exist for CG+SOR, therefore, CG+SOR > converged. > > PS: I sent some emails about this problem before, but I guess I did > not make myself clear :o > > Thanks a lot > Best regards > Zhifeng Sheng > > Hong Zhang wrote: >> Zhifeng, >> >> Petsc's linear solvers, including >> the external packages (e.g., superlu, mumps, and spooles) >> all support complex precision, >> simply configure petsc library with >> '--with-scalar-type=complex'. >> >>> How can I make the other linear solvers work for complex system? I >>> think if only I can make the transpose function a little different >>> then they should work. but I don't know where I should start. >>> >>> Did anyone have similar problem with the linear solvers for >>> complex system before (the linear solver for complex system needs >>> conjugate transpose)? and how could you solve it? >> >> Why do you need conjugate transpose for using petsc solver? >> Do you develop your onw solver for Hermitian matrix? >> We do not have some basic matrix operations for Hermitian matrix yet, >> e.g., MatMult_Hermitian() when only half of the matrix entries >> are stored. You need implement this operation for your onw >> solver. >> >> If you use petsc AIJ matrix format, all complex linear solvers >> should work. Storing half of entries is not as efficient >> as entire matrix when you have sufficient memory space. >> >> Hong >> >>> >>> Thanks a lot >>> Best regards >>> Zhifeng Sheng >>> >>> >> > From schlamp at informatik.tu-muenchen.de Tue Oct 28 10:08:04 2008 From: schlamp at informatik.tu-muenchen.de (Johann Schlamp ) Date: Tue, 28 Oct 2008 16:08:04 +0100 Subject: Matrixfree modification Message-ID: <49072AD4.40802@informatik.tu-muenchen.de> Hello folks, I have to implement some advanced matrixfree calculation. Usually, we use our own analytical method for calculating the Jacobian matrix and provide it via SNESSetJacobian(). For complex problems, the global matrix takes too much memory. 
So here's the idea: First, I set some empty method as user-provided Jacobian method. After a corresponding call, SNES hopefully thinks the Jacobian matrix got calculated right. Then it will use it in some multiplication like y=A*x. For that I created the matrix A with MatCreateShell() and set up my own MatMult method. This new method iterates over my grid, calculates local Jacobians, multiplies them with the corresponding part of the vector x and writes them into y. After this full iteration, it should look like I had the full global Jacobian and multiplied it with the full vector x. In y, the result will be the same. The matrix A and the empty Jacobian method are set through SNESSetJacobian(). I have implemented some unittests for generally proofing the idea, seems to work. But if I run a complete simulation, the KSP converges after the first iteration with a residual norm of about 1.0e-18, which is definitely not right. Now my question: does this procedure make sense at all, and if so - is it possible that just the calculation of the residual norm goes wrong due to my new matrix structure? I searched the PETSc code, but I wasn't able to find a proof or solution for that. Any help would be appreciated. Best regards, Johann Schlamp From bsmith at mcs.anl.gov Tue Oct 28 13:52:58 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 28 Oct 2008 13:52:58 -0500 Subject: Matrixfree modification In-Reply-To: <49072AD4.40802@informatik.tu-muenchen.de> References: <49072AD4.40802@informatik.tu-muenchen.de> Message-ID: Your general approach seems fine. I would put a break point in your MatMult routine for a tiny problem and verify that the input vector for u and x are what you expect (and then at the end of the function make sure the output y is what you expect). l Here is my guess: The matrix vector product is J(u)*x; when your calculation is done you need to use the correct u value. This is the vector that is passed into your "empty method as user-provided Jacobian method". In your "empty method as user-provided Jacobian method" you should make a copy of this vector so that you have it for each of the matrix vector products. At each Newton step your "empty method as user-provided Jacobian method" will be called and you will copy the new value of u over. Barry On Oct 28, 2008, at 10:08 AM, Johann Schlamp wrote: > Hello folks, > > I have to implement some advanced matrixfree calculation. > > Usually, we use our own analytical method for calculating the Jacobian > matrix and provide it via SNESSetJacobian(). For complex problems, the > global matrix takes too much memory. So here's the idea: > > First, I set some empty method as user-provided Jacobian method. > After a > corresponding call, SNES hopefully thinks the Jacobian matrix got > calculated right. Then it will use it in some multiplication like > y=A*x. > For that I created the matrix A with MatCreateShell() and set up my > own > MatMult method. This new method iterates over my grid, calculates > local > Jacobians, multiplies them with the corresponding part of the vector x > and writes them into y. After this full iteration, it should look > like I > had the full global Jacobian and multiplied it with the full vector x. > In y, the result will be the same. > The matrix A and the empty Jacobian method are set through > SNESSetJacobian(). > > I have implemented some unittests for generally proofing the idea, > seems > to work. 
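To make the setup under discussion concrete, here is a condensed sketch of the shell-Jacobian wiring, including the copy of the current iterate u that Barry recommends. Everything here is illustrative rather than taken from the thread: JacCtx, ShellJacMult and ShellJacobian are made-up names, the grid loop is only indicated by a comment, error checking is omitted, and the SNESSetJacobian callback signature shown is the older one (Mat* plus a MatStructure flag), so newer releases differ.

#include <petscsnes.h>

typedef struct {
  Vec u;   /* copy of the current Newton iterate; the shell MatMult reads it */
  /* ... plus whatever grid data is needed to form the local Jacobians ...   */
} JacCtx;

/* matrix-free product y = J(u)*x, built up from local Jacobians over the grid */
static PetscErrorCode ShellJacMult(Mat J, Vec x, Vec y)
{
  JacCtx *ctx;
  MatShellGetContext(J, (void**)&ctx);
  /* loop over the grid: form each local Jacobian from ctx->u, apply it to the
     matching piece of x, and accumulate the result into y                     */
  /* CHKMEMQ;  -- with -malloc_debug this flags memory corruption right here   */
  return 0;
}

/* "empty" Jacobian callback: nothing is assembled, it only stashes the current u */
static PetscErrorCode ShellJacobian(SNES snes, Vec u, Mat *J, Mat *P, MatStructure *flag, void *ptr)
{
  JacCtx *ctx = (JacCtx*)ptr;
  VecCopy(u, ctx->u);            /* every J(u)*x in this Newton step must use this u */
  *flag = SAME_NONZERO_PATTERN;
  return 0;
}

/* setup, assuming snes, comm, the local/global sizes n/N, and a JacCtx ctx
   (with ctx.u created by VecDuplicate) already exist                         */
Mat J;
MatCreateShell(comm, n, n, N, N, &ctx, &J);
MatShellSetOperation(J, MATOP_MULT, (void(*)(void))ShellJacMult);
SNESSetJacobian(snes, J, J, ShellJacobian, &ctx);

A quick consistency check along the lines Matt suggests further down in the thread: with the same stashed u and the same x, this shell product and the assembled-matrix product should agree to rounding error, and so should the first residual norms of the two runs.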
> > But if I run a complete simulation, the KSP converges after the first > iteration with a residual norm of about 1.0e-18, which is definitely > not > right. > > > Now my question: does this procedure make sense at all, and if so - is > it possible that just the calculation of the residual norm goes wrong > due to my new matrix structure? I searched the PETSc code, but I > wasn't > able to find a proof or solution for that. > > Any help would be appreciated. > > > Best regards, > Johann Schlamp > > From schlamp at in.tum.de Tue Oct 28 15:39:07 2008 From: schlamp at in.tum.de (Johann Schlamp) Date: Tue, 28 Oct 2008 21:39:07 +0100 Subject: Matrixfree modification In-Reply-To: References: <49072AD4.40802@informatik.tu-muenchen.de> Message-ID: <4907786B.1060400@in.tum.de> Thanks for your reply, Barry! Barry Smith wrote: > Your general approach seems fine. I would put a break point in > your MatMult routine for a tiny problem and verify that the input > vector for u and > x are what you expect (and then at the end of the function make sure the > output y is what you expect). I have already done this, everything is as expected. > Here is my guess: The matrix vector product is J(u)*x; when your > calculation > is done you need to use the correct u value. This is the vector that is > passed into your "empty method as user-provided Jacobian method". > In your "empty method as user-provided Jacobian method" you should make > a copy of this vector so that you have it for each of the matrix > vector products. > At each Newton step your "empty method as user-provided Jacobian method" > will be called and you will copy the new value of u over. It took me some time to understand what you mean. But after that I got excited trying it out. :) It definitely had some effect on the nonlinear solution, but the linear solver still finishes after one iteration with the way too small residual norm. Anyway, thanks for the anticipated bugfix! Do you have any further suggestions? Best regards, Johann > On Oct 28, 2008, at 10:08 AM, Johann Schlamp wrote: > >> Hello folks, >> >> I have to implement some advanced matrixfree calculation. >> >> Usually, we use our own analytical method for calculating the Jacobian >> matrix and provide it via SNESSetJacobian(). For complex problems, the >> global matrix takes too much memory. So here's the idea: >> >> First, I set some empty method as user-provided Jacobian method. After a >> corresponding call, SNES hopefully thinks the Jacobian matrix got >> calculated right. Then it will use it in some multiplication like y=A*x. >> For that I created the matrix A with MatCreateShell() and set up my own >> MatMult method. This new method iterates over my grid, calculates local >> Jacobians, multiplies them with the corresponding part of the vector x >> and writes them into y. After this full iteration, it should look like I >> had the full global Jacobian and multiplied it with the full vector x. >> In y, the result will be the same. >> The matrix A and the empty Jacobian method are set through >> SNESSetJacobian(). >> >> I have implemented some unittests for generally proofing the idea, seems >> to work. >> >> But if I run a complete simulation, the KSP converges after the first >> iteration with a residual norm of about 1.0e-18, which is definitely not >> right. >> >> >> Now my question: does this procedure make sense at all, and if so - is >> it possible that just the calculation of the residual norm goes wrong >> due to my new matrix structure? 
I searched the PETSc code, but I wasn't >> able to find a proof or solution for that. >> >> Any help would be appreciated. >> >> >> Best regards, >> Johann Schlamp >> >> > From bsmith at mcs.anl.gov Tue Oct 28 15:59:07 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 28 Oct 2008 15:59:07 -0500 Subject: Matrixfree modification In-Reply-To: <4907786B.1060400@in.tum.de> References: <49072AD4.40802@informatik.tu-muenchen.de> <4907786B.1060400@in.tum.de> Message-ID: <3A515CE5-6510-458A-AC11-9D27BEE0177F@mcs.anl.gov> If you have the "matrix-based" version that you can run on the same problem then look at the residuals computed in the 0th and 1st step of the Krylov method to see how they are different in the two runs. Perhaps your matrix-free is corrupting memory? Run with - malloc_debug and put a CHKMEMQ; at the end of your matrix free multiply. Or better run through valgrind,. www.valgrind.org Barry On Oct 28, 2008, at 3:39 PM, Johann Schlamp wrote: > Thanks for your reply, Barry! > > Barry Smith wrote: >> Your general approach seems fine. I would put a break point in >> your MatMult routine for a tiny problem and verify that the input >> vector for u and >> x are what you expect (and then at the end of the function make >> sure the >> output y is what you expect). > I have already done this, everything is as expected. > >> Here is my guess: The matrix vector product is J(u)*x; when your >> calculation >> is done you need to use the correct u value. This is the vector >> that is >> passed into your "empty method as user-provided Jacobian method". >> In your "empty method as user-provided Jacobian method" you should >> make >> a copy of this vector so that you have it for each of the matrix >> vector products. >> At each Newton step your "empty method as user-provided Jacobian >> method" >> will be called and you will copy the new value of u over. > It took me some time to understand what you mean. But after that I got > excited trying it out. :) > It definitely had some effect on the nonlinear solution, but the > linear > solver still finishes after one iteration with the way too small > residual norm. > > Anyway, thanks for the anticipated bugfix! > > Do you have any further suggestions? > > > Best regards, > Johann > > >> On Oct 28, 2008, at 10:08 AM, Johann Schlamp >> wrote: >> >>> Hello folks, >>> >>> I have to implement some advanced matrixfree calculation. >>> >>> Usually, we use our own analytical method for calculating the >>> Jacobian >>> matrix and provide it via SNESSetJacobian(). For complex problems, >>> the >>> global matrix takes too much memory. So here's the idea: >>> >>> First, I set some empty method as user-provided Jacobian method. >>> After a >>> corresponding call, SNES hopefully thinks the Jacobian matrix got >>> calculated right. Then it will use it in some multiplication like >>> y=A*x. >>> For that I created the matrix A with MatCreateShell() and set up >>> my own >>> MatMult method. This new method iterates over my grid, calculates >>> local >>> Jacobians, multiplies them with the corresponding part of the >>> vector x >>> and writes them into y. After this full iteration, it should look >>> like I >>> had the full global Jacobian and multiplied it with the full >>> vector x. >>> In y, the result will be the same. >>> The matrix A and the empty Jacobian method are set through >>> SNESSetJacobian(). >>> >>> I have implemented some unittests for generally proofing the idea, >>> seems >>> to work. 
>>> >>> But if I run a complete simulation, the KSP converges after the >>> first >>> iteration with a residual norm of about 1.0e-18, which is >>> definitely not >>> right. >>> >>> >>> Now my question: does this procedure make sense at all, and if so >>> - is >>> it possible that just the calculation of the residual norm goes >>> wrong >>> due to my new matrix structure? I searched the PETSc code, but I >>> wasn't >>> able to find a proof or solution for that. >>> >>> Any help would be appreciated. >>> >>> >>> Best regards, >>> Johann Schlamp >>> >>> >> > From schlamp at in.tum.de Tue Oct 28 16:35:39 2008 From: schlamp at in.tum.de (Johann Schlamp) Date: Tue, 28 Oct 2008 22:35:39 +0100 Subject: Matrixfree modification In-Reply-To: <3A515CE5-6510-458A-AC11-9D27BEE0177F@mcs.anl.gov> References: <49072AD4.40802@informatik.tu-muenchen.de> <4907786B.1060400@in.tum.de> <3A515CE5-6510-458A-AC11-9D27BEE0177F@mcs.anl.gov> Message-ID: <490785AB.6050704@in.tum.de> Barry Smith wrote: > If you have the "matrix-based" version that you can run on the same > problem then > look at the residuals computed in the 0th and 1st step of the Krylov > method > to see how they are different in the two runs. The residuals in the 0th and 1st step of the linear solver are 0.00363413 0.00189276 for the "matrix-based" version, and 0.00363413 1.27858e-17 for the matrix-free version. That's definitely smaller than epsilon, so it converges. By the way, the "matrix-based" version doesn't converge either, as I was not using a preconditioner for getting comparable results. Simply thought: the residual is in the magnitude of machine accuracy, so I would have concluded that the calculation of the residual (y-A*x) results in zero with respect to some rounding errors. Unfortunately, I don't completely understand the PETSc code for calculating the residual and therfore cannot verify it for my new matrix structure. > Perhaps your matrix-free is corrupting memory? Run with -malloc_debug > and put a CHKMEMQ; at the end of your matrix free multiply. Or better > run through > valgrind,. www.valgrind.org Interesting thought! I will check this tomorrow. Johann > On Oct 28, 2008, at 3:39 PM, Johann Schlamp wrote: > >> Thanks for your reply, Barry! >> >> Barry Smith wrote: >>> Your general approach seems fine. I would put a break point in >>> your MatMult routine for a tiny problem and verify that the input >>> vector for u and >>> x are what you expect (and then at the end of the function make sure >>> the >>> output y is what you expect). >> I have already done this, everything is as expected. >> >>> Here is my guess: The matrix vector product is J(u)*x; when your >>> calculation >>> is done you need to use the correct u value. This is the vector that is >>> passed into your "empty method as user-provided Jacobian method". >>> In your "empty method as user-provided Jacobian method" you should make >>> a copy of this vector so that you have it for each of the matrix >>> vector products. >>> At each Newton step your "empty method as user-provided Jacobian >>> method" >>> will be called and you will copy the new value of u over. >> It took me some time to understand what you mean. But after that I got >> excited trying it out. :) >> It definitely had some effect on the nonlinear solution, but the linear >> solver still finishes after one iteration with the way too small >> residual norm. >> >> Anyway, thanks for the anticipated bugfix! >> >> Do you have any further suggestions? 
>> >> >> Best regards, >> Johann >> >> >>> On Oct 28, 2008, at 10:08 AM, Johann Schlamp wrote: >>> >>>> Hello folks, >>>> >>>> I have to implement some advanced matrixfree calculation. >>>> >>>> Usually, we use our own analytical method for calculating the Jacobian >>>> matrix and provide it via SNESSetJacobian(). For complex problems, the >>>> global matrix takes too much memory. So here's the idea: >>>> >>>> First, I set some empty method as user-provided Jacobian method. >>>> After a >>>> corresponding call, SNES hopefully thinks the Jacobian matrix got >>>> calculated right. Then it will use it in some multiplication like >>>> y=A*x. >>>> For that I created the matrix A with MatCreateShell() and set up my >>>> own >>>> MatMult method. This new method iterates over my grid, calculates >>>> local >>>> Jacobians, multiplies them with the corresponding part of the vector x >>>> and writes them into y. After this full iteration, it should look >>>> like I >>>> had the full global Jacobian and multiplied it with the full vector x. >>>> In y, the result will be the same. >>>> The matrix A and the empty Jacobian method are set through >>>> SNESSetJacobian(). >>>> >>>> I have implemented some unittests for generally proofing the idea, >>>> seems >>>> to work. >>>> >>>> But if I run a complete simulation, the KSP converges after the first >>>> iteration with a residual norm of about 1.0e-18, which is >>>> definitely not >>>> right. >>>> >>>> >>>> Now my question: does this procedure make sense at all, and if so - is >>>> it possible that just the calculation of the residual norm goes wrong >>>> due to my new matrix structure? I searched the PETSc code, but I >>>> wasn't >>>> able to find a proof or solution for that. >>>> >>>> Any help would be appreciated. >>>> >>>> >>>> Best regards, >>>> Johann Schlamp >>>> >>>> >>> >> > From knepley at gmail.com Tue Oct 28 16:57:39 2008 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 28 Oct 2008 16:57:39 -0500 Subject: Matrixfree modification In-Reply-To: <490785AB.6050704@in.tum.de> References: <49072AD4.40802@informatik.tu-muenchen.de> <4907786B.1060400@in.tum.de> <3A515CE5-6510-458A-AC11-9D27BEE0177F@mcs.anl.gov> <490785AB.6050704@in.tum.de> Message-ID: On Tue, Oct 28, 2008 at 4:35 PM, Johann Schlamp wrote: > Barry Smith wrote: >> If you have the "matrix-based" version that you can run on the same >> problem then >> look at the residuals computed in the 0th and 1st step of the Krylov >> method >> to see how they are different in the two runs. > The residuals in the 0th and 1st step of the linear solver are > 0.00363413 > 0.00189276 > for the "matrix-based" version, and > 0.00363413 > 1.27858e-17 No this looks wrong. Shouldn't these be identical? It looks like you are wiping out the input vector instead. Matt > for the matrix-free version. That's definitely smaller than epsilon, so > it converges. By the way, the "matrix-based" version doesn't converge > either, as I was not using a preconditioner for getting comparable results. > > Simply thought: the residual is in the magnitude of machine accuracy, so > I would have concluded that the calculation of the residual (y-A*x) > results in zero with respect to some rounding errors. Unfortunately, I > don't completely understand the PETSc code for calculating the residual > and therfore cannot verify it for my new matrix structure. > >> Perhaps your matrix-free is corrupting memory? Run with -malloc_debug >> and put a CHKMEMQ; at the end of your matrix free multiply. 
Or better >> run through >> valgrind,. www.valgrind.org > Interesting thought! I will check this tomorrow. > > > Johann > > >> On Oct 28, 2008, at 3:39 PM, Johann Schlamp wrote: >> >>> Thanks for your reply, Barry! >>> >>> Barry Smith wrote: >>>> Your general approach seems fine. I would put a break point in >>>> your MatMult routine for a tiny problem and verify that the input >>>> vector for u and >>>> x are what you expect (and then at the end of the function make sure >>>> the >>>> output y is what you expect). >>> I have already done this, everything is as expected. >>> >>>> Here is my guess: The matrix vector product is J(u)*x; when your >>>> calculation >>>> is done you need to use the correct u value. This is the vector that is >>>> passed into your "empty method as user-provided Jacobian method". >>>> In your "empty method as user-provided Jacobian method" you should make >>>> a copy of this vector so that you have it for each of the matrix >>>> vector products. >>>> At each Newton step your "empty method as user-provided Jacobian >>>> method" >>>> will be called and you will copy the new value of u over. >>> It took me some time to understand what you mean. But after that I got >>> excited trying it out. :) >>> It definitely had some effect on the nonlinear solution, but the linear >>> solver still finishes after one iteration with the way too small >>> residual norm. >>> >>> Anyway, thanks for the anticipated bugfix! >>> >>> Do you have any further suggestions? >>> >>> >>> Best regards, >>> Johann >>> >>> >>>> On Oct 28, 2008, at 10:08 AM, Johann Schlamp wrote: >>>> >>>>> Hello folks, >>>>> >>>>> I have to implement some advanced matrixfree calculation. >>>>> >>>>> Usually, we use our own analytical method for calculating the Jacobian >>>>> matrix and provide it via SNESSetJacobian(). For complex problems, the >>>>> global matrix takes too much memory. So here's the idea: >>>>> >>>>> First, I set some empty method as user-provided Jacobian method. >>>>> After a >>>>> corresponding call, SNES hopefully thinks the Jacobian matrix got >>>>> calculated right. Then it will use it in some multiplication like >>>>> y=A*x. >>>>> For that I created the matrix A with MatCreateShell() and set up my >>>>> own >>>>> MatMult method. This new method iterates over my grid, calculates >>>>> local >>>>> Jacobians, multiplies them with the corresponding part of the vector x >>>>> and writes them into y. After this full iteration, it should look >>>>> like I >>>>> had the full global Jacobian and multiplied it with the full vector x. >>>>> In y, the result will be the same. >>>>> The matrix A and the empty Jacobian method are set through >>>>> SNESSetJacobian(). >>>>> >>>>> I have implemented some unittests for generally proofing the idea, >>>>> seems >>>>> to work. >>>>> >>>>> But if I run a complete simulation, the KSP converges after the first >>>>> iteration with a residual norm of about 1.0e-18, which is >>>>> definitely not >>>>> right. >>>>> >>>>> >>>>> Now my question: does this procedure make sense at all, and if so - is >>>>> it possible that just the calculation of the residual norm goes wrong >>>>> due to my new matrix structure? I searched the PETSc code, but I >>>>> wasn't >>>>> able to find a proof or solution for that. >>>>> >>>>> Any help would be appreciated. 
>>>>> >>>>> >>>>> Best regards, >>>>> Johann Schlamp >>>>> >>>>> >>>> >>> >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From bsmith at mcs.anl.gov Tue Oct 28 16:59:09 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 28 Oct 2008 16:59:09 -0500 Subject: request for Science advances using PETSc Message-ID: Dear PETSc users, Occasionally we get requests from our funding agency to provide examples of "major scientific advancements" that we have enabled. Providing these helps us maintain funding. If you feel you have made a scientific advance by using PETSc that you would not have achieved otherwise please send them to petsc-maint at mcs.anl.gov Note: the advance does not have to be related to work for any particular agency. Thanks Barry From schlamp at in.tum.de Tue Oct 28 17:28:43 2008 From: schlamp at in.tum.de (Johann Schlamp) Date: Tue, 28 Oct 2008 23:28:43 +0100 Subject: Matrixfree modification In-Reply-To: References: <49072AD4.40802@informatik.tu-muenchen.de> <4907786B.1060400@in.tum.de> <3A515CE5-6510-458A-AC11-9D27BEE0177F@mcs.anl.gov> <490785AB.6050704@in.tum.de> Message-ID: <4907921B.8020308@in.tum.de> Matthew Knepley wrote: > On Tue, Oct 28, 2008 at 4:35 PM, Johann Schlamp wrote: > >> Barry Smith wrote: >> >>> If you have the "matrix-based" version that you can run on the same >>> problem then >>> look at the residuals computed in the 0th and 1st step of the Krylov >>> method >>> to see how they are different in the two runs. >>> >> The residuals in the 0th and 1st step of the linear solver are >> 0.00363413 >> 0.00189276 >> for the "matrix-based" version, and >> 0.00363413 >> 1.27858e-17 >> > > No this looks wrong. Shouldn't these be identical? It looks like you > are wiping out the input vector instead. > > Matt > Yes, they should be identical. That's excactly the point. My naive interpretation was that maybe only the residual's calculation is wrong. After thinking again, I believe I have misunderstood Barry's hint on copying the 'u'. Apparently, the user-provided Jacobian calculation method gets a different input than my customized MatMult method called within KSP on my matrixfree context. But they are both needed for my approach, right? I haven't thought of that in advance, so it will take me some time to rewrite the code. I will report again tomorrow (it's 11 o'clock pm at my site). Thanks for your help! Johann >> for the matrix-free version. That's definitely smaller than epsilon, so >> it converges. By the way, the "matrix-based" version doesn't converge >> either, as I was not using a preconditioner for getting comparable results. >> >> Simply thought: the residual is in the magnitude of machine accuracy, so >> I would have concluded that the calculation of the residual (y-A*x) >> results in zero with respect to some rounding errors. Unfortunately, I >> don't completely understand the PETSc code for calculating the residual >> and therfore cannot verify it for my new matrix structure. >> >> >>> Perhaps your matrix-free is corrupting memory? Run with -malloc_debug >>> and put a CHKMEMQ; at the end of your matrix free multiply. Or better >>> run through >>> valgrind,. www.valgrind.org >>> >> Interesting thought! I will check this tomorrow. >> >> >> Johann >> >> >> >>> On Oct 28, 2008, at 3:39 PM, Johann Schlamp wrote: >>> >>> >>>> Thanks for your reply, Barry! 
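For reference, the shell-matrix setup discussed in this thread looks roughly as follows with the 2.3.3-era C interface. This is only a skeleton, not Johann's actual code: the names (MFCtx, FormJacobianShell, ShellMult, SetupMatrixFreeJacobian) and the arguments nlocal, N and U are placeholders, and the grid loop that forms and applies the local Jacobians is omitted. The two points made above are reflected here: the "empty" Jacobian routine assembles nothing and only copies the current linearization point u into the shell context, and the shell MatMult only reads x and writes y.

#include "petscsnes.h"

typedef struct {
  Vec  u;      /* copy of the state the Jacobian is "formed" at            */
  void *grid;  /* whatever the application needs to build local Jacobians  */
} MFCtx;

/* "Empty" Jacobian routine given to SNESSetJacobian(): it assembles
   nothing, it only records the current linearization point u.             */
PetscErrorCode FormJacobianShell(SNES snes,Vec u,Mat *A,Mat *B,
                                 MatStructure *flag,void *ptr)
{
  MFCtx          *ctx = (MFCtx*)ptr;
  PetscErrorCode ierr;

  ierr  = VecCopy(u,ctx->u);CHKERRQ(ierr);   /* keep u for the MatMults    */
  *flag = SAME_NONZERO_PATTERN;
  return 0;
}

/* y = J(ctx->u)*x, accumulated piece by piece over the grid.
   x is read-only; only y is written.                                      */
PetscErrorCode ShellMult(Mat A,Vec x,Vec y)
{
  MFCtx          *ctx;
  PetscErrorCode ierr;

  ierr = MatShellGetContext(A,(void**)&ctx);CHKERRQ(ierr);
  ierr = VecSet(y,0.0);CHKERRQ(ierr);
  /* loop over the grid: build each local Jacobian from ctx->u, multiply
     it with the matching piece of x, add the result into y (omitted)      */
  CHKMEMQ;  /* cheap check against memory overwrites, as suggested above   */
  return 0;
}

/* called once during setup; U is the solution vector, nlocal/N its sizes  */
PetscErrorCode SetupMatrixFreeJacobian(SNES snes,Vec U,PetscInt nlocal,
                                       PetscInt N,MFCtx *ctx,Mat *J)
{
  PetscErrorCode ierr;

  ierr = VecDuplicate(U,&ctx->u);CHKERRQ(ierr);
  ierr = MatCreateShell(PETSC_COMM_WORLD,nlocal,nlocal,N,N,ctx,J);CHKERRQ(ierr);
  ierr = MatShellSetOperation(*J,MATOP_MULT,(void(*)(void))ShellMult);CHKERRQ(ierr);
  ierr = SNESSetJacobian(snes,*J,*J,FormJacobianShell,ctx);CHKERRQ(ierr);
  return 0;
}

Since a shell matrix cannot be factored, a run set up this way also needs -pc_type none (or a user-supplied PCSHELL); the default preconditioner cannot be built from a MATSHELL.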
>>>> >>>> Barry Smith wrote: >>>> >>>>> Your general approach seems fine. I would put a break point in >>>>> your MatMult routine for a tiny problem and verify that the input >>>>> vector for u and >>>>> x are what you expect (and then at the end of the function make sure >>>>> the >>>>> output y is what you expect). >>>>> >>>> I have already done this, everything is as expected. >>>> >>>> >>>>> Here is my guess: The matrix vector product is J(u)*x; when your >>>>> calculation >>>>> is done you need to use the correct u value. This is the vector that is >>>>> passed into your "empty method as user-provided Jacobian method". >>>>> In your "empty method as user-provided Jacobian method" you should make >>>>> a copy of this vector so that you have it for each of the matrix >>>>> vector products. >>>>> At each Newton step your "empty method as user-provided Jacobian >>>>> method" >>>>> will be called and you will copy the new value of u over. >>>>> >>>> It took me some time to understand what you mean. But after that I got >>>> excited trying it out. :) >>>> It definitely had some effect on the nonlinear solution, but the linear >>>> solver still finishes after one iteration with the way too small >>>> residual norm. >>>> >>>> Anyway, thanks for the anticipated bugfix! >>>> >>>> Do you have any further suggestions? >>>> >>>> >>>> Best regards, >>>> Johann >>>> >>>> >>>> >>>>> On Oct 28, 2008, at 10:08 AM, Johann Schlamp wrote: >>>>> >>>>> >>>>>> Hello folks, >>>>>> >>>>>> I have to implement some advanced matrixfree calculation. >>>>>> >>>>>> Usually, we use our own analytical method for calculating the Jacobian >>>>>> matrix and provide it via SNESSetJacobian(). For complex problems, the >>>>>> global matrix takes too much memory. So here's the idea: >>>>>> >>>>>> First, I set some empty method as user-provided Jacobian method. >>>>>> After a >>>>>> corresponding call, SNES hopefully thinks the Jacobian matrix got >>>>>> calculated right. Then it will use it in some multiplication like >>>>>> y=A*x. >>>>>> For that I created the matrix A with MatCreateShell() and set up my >>>>>> own >>>>>> MatMult method. This new method iterates over my grid, calculates >>>>>> local >>>>>> Jacobians, multiplies them with the corresponding part of the vector x >>>>>> and writes them into y. After this full iteration, it should look >>>>>> like I >>>>>> had the full global Jacobian and multiplied it with the full vector x. >>>>>> In y, the result will be the same. >>>>>> The matrix A and the empty Jacobian method are set through >>>>>> SNESSetJacobian(). >>>>>> >>>>>> I have implemented some unittests for generally proofing the idea, >>>>>> seems >>>>>> to work. >>>>>> >>>>>> But if I run a complete simulation, the KSP converges after the first >>>>>> iteration with a residual norm of about 1.0e-18, which is >>>>>> definitely not >>>>>> right. >>>>>> >>>>>> >>>>>> Now my question: does this procedure make sense at all, and if so - is >>>>>> it possible that just the calculation of the residual norm goes wrong >>>>>> due to my new matrix structure? I searched the PETSc code, but I >>>>>> wasn't >>>>>> able to find a proof or solution for that. >>>>>> >>>>>> Any help would be appreciated. >>>>>> >>>>>> >>>>>> Best regards, >>>>>> Johann Schlamp >>>>>> >>>>>> >>>>>> >> > > > > From christophs at al.com.au Tue Oct 28 22:52:29 2008 From: christophs at al.com.au (Christoph Sprenger) Date: Wed, 29 Oct 2008 14:52:29 +1100 Subject: f2cblaslapack with single precision? 
Message-ID: <4907DDFD.5070603@al.com.au> Hello everyone, i'm new here and have a first question concerning the building of petsc. i try to build the libraries with single precision, to reduce the memory profile, but having troubles to do so. i get an undefined reference to saxpy_ and other blas bits and pieces, since the f2cblaslapack.tar.gz doesn't seem to contain any single precision files. not sure if i am missing something fundamental though. here is my config i used to build the lot: --with-cc=mpicc --with-fc=0 --with-cxx=mpicxx --download-c-blas-lapack=1 --with-mpi-dir=/home/users/christophs/mpich2 --with-clanguage=cxx --with-sudo=sudo --with-shared=1 --with-debugging=1 --with-scalar-type=real --with-precision=single it would be great to get a hint whats going on and what the best solution for this would be without going through the fortran side of things. are there other c packages than f2cblaslapack that have the individual single precision files ? apologies for the rather newbie style questions. Kind Regards, Christoph Animal Logic http://www.animallogic.com Please think of the environment before printing this email. This email and any attachments may be confidential and/or privileged. If you are not the intended recipient of this email, you must not disclose or use the information contained in it. Please notify the sender immediately and delete this document if you have received it in error. We do not guarantee this email is error or virus free. From balay at mcs.anl.gov Wed Oct 29 00:25:32 2008 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 29 Oct 2008 00:25:32 -0500 (CDT) Subject: f2cblaslapack with single precision? In-Reply-To: <4907DDFD.5070603@al.com.au> References: <4907DDFD.5070603@al.com.au> Message-ID: The old f2cblas is limited to double precision. We have updated petsc-dev to use a fuller version of f2cblas.. You can try using this new f2cblas with petsc-2.3.3. [with the following steps:] cd $PETSC_DIR mkdir externalpackages cd externalpackages wget ftp://ftp.mcs.anl.gov/pub/petsc/externalpackages/f2cblaslapack-3.1.1.tar.gz tar -xzf f2cblaslapack-3.1.1.tar.gz mv f2cblaslapack-3.1.1 f2cblaslapack config/configure.py --download-c-blas-lapack=1 ... Note: Depending upon the compilers you have - generally fblaslapack should work fine [so might not be worth the hassle of trying to avoid using it] Satish On Wed, 29 Oct 2008, Christoph Sprenger wrote: > Hello everyone, > > i'm new here and have a first question concerning the building of petsc. > i try to build the libraries with single precision, to reduce the memory > profile, but having troubles to do so. > i get an undefined reference to saxpy_ and other blas bits and pieces, since > the f2cblaslapack.tar.gz doesn't seem to contain any single precision files. > not sure if i am missing something fundamental though. > > here is my config i used to build the lot: > --with-cc=mpicc --with-fc=0 --with-cxx=mpicxx --download-c-blas-lapack=1 > --with-mpi-dir=/home/users/christophs/mpich2 --with-clanguage=cxx > --with-sudo=sudo --with-shared=1 --with-debugging=1 --with-scalar-type=real > --with-precision=single > > it would be great to get a hint whats going on and what the best solution for > this would be without going through the fortran side of things. > are there other c packages than f2cblaslapack that have the individual single > precision files ? > > apologies for the rather newbie style questions. 
> > > Kind Regards, > Christoph > > Animal Logic > http://www.animallogic.com > > Please think of the environment before printing this email. > > This email and any attachments may be confidential and/or privileged. If you > are not the intended recipient of this email, you must not disclose or use the > information contained in it. Please notify the sender immediately and delete > this document if you have received it in error. We do not guarantee this email > is error or virus free. > > > > From christophs at al.com.au Wed Oct 29 01:43:29 2008 From: christophs at al.com.au (Christoph Sprenger) Date: Wed, 29 Oct 2008 17:43:29 +1100 Subject: f2cblaslapack with single precision? In-Reply-To: References: <4907DDFD.5070603@al.com.au> Message-ID: <49080611.4000300@al.com.au> thanks a lot for pointing me in the right direction. its all working now ;) Christoph Satish Balay wrote: > The old f2cblas is limited to double precision. We have updated > petsc-dev to use a fuller version of f2cblas.. > > You can try using this new f2cblas with petsc-2.3.3. [with the following steps:] > > cd $PETSC_DIR > mkdir externalpackages > cd externalpackages > wget ftp://ftp.mcs.anl.gov/pub/petsc/externalpackages/f2cblaslapack-3.1.1.tar.gz > tar -xzf f2cblaslapack-3.1.1.tar.gz > mv f2cblaslapack-3.1.1 f2cblaslapack > config/configure.py --download-c-blas-lapack=1 ... > > Note: Depending upon the compilers you have - generally fblaslapack > should work fine [so might not be worth the hassle of trying to avoid > using it] > > Satish > > On Wed, 29 Oct 2008, Christoph Sprenger wrote: > > >> Hello everyone, >> >> i'm new here and have a first question concerning the building of petsc. >> i try to build the libraries with single precision, to reduce the memory >> profile, but having troubles to do so. >> i get an undefined reference to saxpy_ and other blas bits and pieces, since >> the f2cblaslapack.tar.gz doesn't seem to contain any single precision files. >> not sure if i am missing something fundamental though. >> >> here is my config i used to build the lot: >> --with-cc=mpicc --with-fc=0 --with-cxx=mpicxx --download-c-blas-lapack=1 >> --with-mpi-dir=/home/users/christophs/mpich2 --with-clanguage=cxx >> --with-sudo=sudo --with-shared=1 --with-debugging=1 --with-scalar-type=real >> --with-precision=single >> >> it would be great to get a hint whats going on and what the best solution for >> this would be without going through the fortran side of things. >> are there other c packages than f2cblaslapack that have the individual single >> precision files ? >> >> apologies for the rather newbie style questions. >> >> >> Kind Regards, >> Christoph >> >> Animal Logic >> http://www.animallogic.com >> >> Please think of the environment before printing this email. >> >> This email and any attachments may be confidential and/or privileged. If you >> are not the intended recipient of this email, you must not disclose or use the >> information contained in it. Please notify the sender immediately and delete >> this document if you have received it in error. We do not guarantee this email >> is error or virus free. >> >> >> >> >> > > Animal Logic http://www.animallogic.com Please think of the environment before printing this email. This email and any attachments may be confidential and/or privileged. If you are not the intended recipient of this email, you must not disclose or use the information contained in it. Please notify the sender immediately and delete this document if you have received it in error. 
We do not guarantee this email is error or virus free. From z.sheng at ewi.tudelft.nl Wed Oct 29 04:53:07 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Wed, 29 Oct 2008 10:53:07 +0100 Subject: linear solver for complex matrix In-Reply-To: References: <4901C89C.3030202@t-online.de> <49058F9D.6000604@ewi.tudelft.nl> <4905DA29.7040302@ewi.tudelft.nl> Message-ID: <49083283.7000708@ewi.tudelft.nl> Hi Thanks, now it is clear :) I may want to use other solvers, since our software generates other kind of complex matrices. Also, I remembered that left preconditioners are used in iterative solvers as default, so I am confused about what preconditioners are used for CG, left or "left and right" ?. PS: is the Petsc development team planning to support it in the next release? Thanks a lot Zhifeng Sheng Barry Smith wrote: > > The PETSc MatMultTranspose(), as you state, does not use conjugate > transpose, thus the Krylov methods that > require a conjugate transpose will not work. Unless you change the > MatMultTranpose() routine. > In addition, these algorithms use VecDot(), with the complex > conjugate transpose the order of the arguments to this function > matter (determining which one is conjugated). The coding for these > methods was NOT done to use the proper conjugation > (many of the original references do not include this information). In > other words PETSc DOES NOT support > complex conjugate for most of the Krylov methods, I think we already > said this before. > > > If your matrix is complex hermitian why not just use CG, why would you > want to use a different Krylov method? > > Barry > > > > On Oct 27, 2008, at 10:11 AM, zhifeng sheng wrote: > >> Dear all >> >> It looks like something strange is going on for my Petsc (which was >> built with '--with-scalar-type=complex'.): >> >> 1) I got a complex system of equation, with I can solve with CG+SOR . >> But when I try to use BICGS+SOR to solve it, it never converges. >> >> 2) I noticed that the transpose function for complex matrices was not >> what I was looking for. It really computes the transpose instead of >> the conjugate transpose which is the "right transpose" for complex >> matrix. >> >> For some linear solvers, e.g. BICG, BCGS, need to compute the >> transpose of real system matrix (to multiply it with a vector), while >> they need to compute the conjugate transpose for complex system >> matrices. >> >> So I am wondering whether BICGS+SOR did not converge for my complex >> matrix because of a bad A^Tx used internally in these linear solvers. >> Such problem does not exist for CG+SOR, therefore, CG+SOR converged. >> >> PS: I sent some emails about this problem before, but I guess I did >> not make myself clear :o >> >> Thanks a lot >> Best regards >> Zhifeng Sheng >> >> Hong Zhang wrote: >>> Zhifeng, >>> >>> Petsc's linear solvers, including >>> the external packages (e.g., superlu, mumps, and spooles) >>> all support complex precision, >>> simply configure petsc library with >>> '--with-scalar-type=complex'. >>> >>>> How can I make the other linear solvers work for complex system? I >>>> think if only I can make the transpose function a little different >>>> then they should work. but I don't know where I should start. >>>> >>>> Did anyone have similar problem with the linear solvers for complex >>>> system before (the linear solver for complex system needs conjugate >>>> transpose)? and how could you solve it? >>> >>> Why do you need conjugate transpose for using petsc solver? 
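A note on the conjugate-transpose point discussed above: petsc-2.3.3 has no Hermitian-transpose product, but A^H x can be obtained from the plain transpose because A^H x = conj(A^T conj(x)). The helper below is only a sketch and is not part of the PETSc API of that release; the names ConjugateVec and MatMultHermitianTransposeSketch are made up, and only MatMultTranspose(), VecGetArray()/VecRestoreArray() and PetscConj() are assumed to exist. It helps code you call yourself (for example inside a shell matrix or your own solver), but it does not change how the built-in Krylov methods use MatMultTranspose() and VecDot() internally, which is the caveat raised above.

#include "petscmat.h"

/* conjugate every entry of v in place (PetscConj() is a no-op for real builds) */
static PetscErrorCode ConjugateVec(Vec v)
{
  PetscErrorCode ierr;
  PetscScalar    *a;
  PetscInt       i,n;

  ierr = VecGetLocalSize(v,&n);CHKERRQ(ierr);
  ierr = VecGetArray(v,&a);CHKERRQ(ierr);
  for (i=0; i<n; i++) a[i] = PetscConj(a[i]);
  ierr = VecRestoreArray(v,&a);CHKERRQ(ierr);
  return 0;
}

/* y = A^H x, built from the plain transpose: A^H x = conj(A^T conj(x)) */
PetscErrorCode MatMultHermitianTransposeSketch(Mat A,Vec x,Vec y)
{
  PetscErrorCode ierr;
  Vec            w;

  ierr = VecDuplicate(x,&w);CHKERRQ(ierr);
  ierr = VecCopy(x,w);CHKERRQ(ierr);
  ierr = ConjugateVec(w);CHKERRQ(ierr);          /* w = conj(x)            */
  ierr = MatMultTranspose(A,w,y);CHKERRQ(ierr);  /* y = A^T conj(x)        */
  ierr = ConjugateVec(y);CHKERRQ(ierr);          /* y = conj(A^T conj(x))  */
  ierr = VecDestroy(w);CHKERRQ(ierr);            /* 2.3.3-era VecDestroy   */
  return 0;
}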
>>> Do you develop your onw solver for Hermitian matrix? >>> We do not have some basic matrix operations for Hermitian matrix yet, >>> e.g., MatMult_Hermitian() when only half of the matrix entries >>> are stored. You need implement this operation for your onw >>> solver. >>> >>> If you use petsc AIJ matrix format, all complex linear solvers >>> should work. Storing half of entries is not as efficient >>> as entire matrix when you have sufficient memory space. >>> >>> Hong >>> >>>> >>>> Thanks a lot >>>> Best regards >>>> Zhifeng Sheng >>>> >>>> >>> >> > From schlamp at in.tum.de Wed Oct 29 17:01:18 2008 From: schlamp at in.tum.de (Johann Schlamp) Date: Wed, 29 Oct 2008 23:01:18 +0100 Subject: Matrixfree modification In-Reply-To: <4907921B.8020308@in.tum.de> References: <49072AD4.40802@informatik.tu-muenchen.de> <4907786B.1060400@in.tum.de> <3A515CE5-6510-458A-AC11-9D27BEE0177F@mcs.anl.gov> <490785AB.6050704@in.tum.de> <4907921B.8020308@in.tum.de> Message-ID: <4908DD2E.10508@in.tum.de> Johann Schlamp wrote: > Matthew Knepley wrote: > >> On Tue, Oct 28, 2008 at 4:35 PM, Johann Schlamp wrote: >> >> >>> Barry Smith wrote: >>> >>> >>>> If you have the "matrix-based" version that you can run on the same >>>> problem then >>>> look at the residuals computed in the 0th and 1st step of the Krylov >>>> method >>>> to see how they are different in the two runs. >>>> >>>> >>> The residuals in the 0th and 1st step of the linear solver are >>> 0.00363413 >>> 0.00189276 >>> for the "matrix-based" version, and >>> 0.00363413 >>> 1.27858e-17 >>> >>> >> No this looks wrong. Shouldn't these be identical? It looks like you >> are wiping out the input vector instead. >> >> Matt >> >> After some extended testing, I think I really messed up the input vector somehow. I dare say I can take it from here. If not - I know where to get competent support. :) Thanks for all help! Johann > Yes, they should be identical. That's excactly the point. > My naive interpretation was that maybe only the residual's calculation > is wrong. > > After thinking again, I believe I have misunderstood Barry's hint on > copying the 'u'. > Apparently, the user-provided Jacobian calculation method gets a > different input than my customized MatMult method called within KSP on > my matrixfree context. But they are both needed for my approach, right? > I haven't thought of that in advance, so it will take me some time to > rewrite the code. I will report again tomorrow (it's 11 o'clock pm at my > site). > > > Thanks for your help! > > > Johann > > >>> for the matrix-free version. That's definitely smaller than epsilon, so >>> it converges. By the way, the "matrix-based" version doesn't converge >>> either, as I was not using a preconditioner for getting comparable results. >>> >>> Simply thought: the residual is in the magnitude of machine accuracy, so >>> I would have concluded that the calculation of the residual (y-A*x) >>> results in zero with respect to some rounding errors. Unfortunately, I >>> don't completely understand the PETSc code for calculating the residual >>> and therfore cannot verify it for my new matrix structure. >>> >>> >>> >>>> Perhaps your matrix-free is corrupting memory? Run with -malloc_debug >>>> and put a CHKMEMQ; at the end of your matrix free multiply. Or better >>>> run through >>>> valgrind,. www.valgrind.org >>>> >>>> >>> Interesting thought! I will check this tomorrow. 
>>> >>> >>> Johann >>> >>> >>> >>> >>>> On Oct 28, 2008, at 3:39 PM, Johann Schlamp wrote: >>>> >>>> >>>> >>>>> Thanks for your reply, Barry! >>>>> >>>>> Barry Smith wrote: >>>>> >>>>> >>>>>> Your general approach seems fine. I would put a break point in >>>>>> your MatMult routine for a tiny problem and verify that the input >>>>>> vector for u and >>>>>> x are what you expect (and then at the end of the function make sure >>>>>> the >>>>>> output y is what you expect). >>>>>> >>>>>> >>>>> I have already done this, everything is as expected. >>>>> >>>>> >>>>> >>>>>> Here is my guess: The matrix vector product is J(u)*x; when your >>>>>> calculation >>>>>> is done you need to use the correct u value. This is the vector that is >>>>>> passed into your "empty method as user-provided Jacobian method". >>>>>> In your "empty method as user-provided Jacobian method" you should make >>>>>> a copy of this vector so that you have it for each of the matrix >>>>>> vector products. >>>>>> At each Newton step your "empty method as user-provided Jacobian >>>>>> method" >>>>>> will be called and you will copy the new value of u over. >>>>>> >>>>>> >>>>> It took me some time to understand what you mean. But after that I got >>>>> excited trying it out. :) >>>>> It definitely had some effect on the nonlinear solution, but the linear >>>>> solver still finishes after one iteration with the way too small >>>>> residual norm. >>>>> >>>>> Anyway, thanks for the anticipated bugfix! >>>>> >>>>> Do you have any further suggestions? >>>>> >>>>> >>>>> Best regards, >>>>> Johann >>>>> >>>>> >>>>> >>>>> >>>>>> On Oct 28, 2008, at 10:08 AM, Johann Schlamp wrote: >>>>>> >>>>>> >>>>>> >>>>>>> Hello folks, >>>>>>> >>>>>>> I have to implement some advanced matrixfree calculation. >>>>>>> >>>>>>> Usually, we use our own analytical method for calculating the Jacobian >>>>>>> matrix and provide it via SNESSetJacobian(). For complex problems, the >>>>>>> global matrix takes too much memory. So here's the idea: >>>>>>> >>>>>>> First, I set some empty method as user-provided Jacobian method. >>>>>>> After a >>>>>>> corresponding call, SNES hopefully thinks the Jacobian matrix got >>>>>>> calculated right. Then it will use it in some multiplication like >>>>>>> y=A*x. >>>>>>> For that I created the matrix A with MatCreateShell() and set up my >>>>>>> own >>>>>>> MatMult method. This new method iterates over my grid, calculates >>>>>>> local >>>>>>> Jacobians, multiplies them with the corresponding part of the vector x >>>>>>> and writes them into y. After this full iteration, it should look >>>>>>> like I >>>>>>> had the full global Jacobian and multiplied it with the full vector x. >>>>>>> In y, the result will be the same. >>>>>>> The matrix A and the empty Jacobian method are set through >>>>>>> SNESSetJacobian(). >>>>>>> >>>>>>> I have implemented some unittests for generally proofing the idea, >>>>>>> seems >>>>>>> to work. >>>>>>> >>>>>>> But if I run a complete simulation, the KSP converges after the first >>>>>>> iteration with a residual norm of about 1.0e-18, which is >>>>>>> definitely not >>>>>>> right. >>>>>>> >>>>>>> >>>>>>> Now my question: does this procedure make sense at all, and if so - is >>>>>>> it possible that just the calculation of the residual norm goes wrong >>>>>>> due to my new matrix structure? I searched the PETSc code, but I >>>>>>> wasn't >>>>>>> able to find a proof or solution for that. >>>>>>> >>>>>>> Any help would be appreciated. 
>>>>>>> >>>>>>> >>>>>>> Best regards, >>>>>>> Johann Schlamp >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>> >>> >> >> >> > > From naromero at alcf.anl.gov Thu Oct 30 17:34:37 2008 From: naromero at alcf.anl.gov (Nichols A. Romero) Date: Thu, 30 Oct 2008 17:34:37 -0500 (CDT) Subject: python interface to PETSC In-Reply-To: <5688178.304051225406001360.JavaMail.root@zimbra> Message-ID: <18066737.304071225406077030.JavaMail.root@zimbra> Hi, I am post-doc in LCF working on scaling a real-space DFT code called GPAW. https://wiki.fysik.dtu.dk/gpaw/ This code is a mixture of Python and C. The PETSC webpage states that there is now a Python interface but I could not find a lot of documentation about it in the manual. I did download the PETSC tarball and see that there is a python directory. Right now the GPAW uses NumPy for basic array manipulation, element-wise dot products, and other simple manipulation. The time consuming part of the GPAW is spent in the solution of a sparse eigenvalue problem. H*Psi=lambda*S*Psi The Hamiltonian matrix (H) is not stored at all, only H*Psi products are computed (Psi are the eigenvectors). It would seem like PETSc could be helpful for solving this problem. Is there a python interface to all the PETSc functions? Nichols A. Romero, Ph.D. Argonne Leadership Computing Facility Argonne National Laboratory Building 360 Room L-146 9700 South Cass Avenue Argonne, IL 60490 (630) 252-3441 From knepley at gmail.com Thu Oct 30 17:59:30 2008 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 30 Oct 2008 17:59:30 -0500 Subject: python interface to PETSC In-Reply-To: <18066737.304071225406077030.JavaMail.root@zimbra> References: <5688178.304051225406001360.JavaMail.root@zimbra> <18066737.304071225406077030.JavaMail.root@zimbra> Message-ID: On Thu, Oct 30, 2008 at 5:34 PM, Nichols A. Romero wrote: > Hi, > > I am post-doc in LCF working on scaling a real-space DFT code called GPAW. > https://wiki.fysik.dtu.dk/gpaw/ > > This code is a mixture of Python and C. > > The PETSC webpage states that there is now a Python interface but I could > not find a lot of documentation about it in the manual. I did download the > PETSC tarball and see that there is a python directory. The PETSc Python interface is maintained by Lisandro Dalcin. You can have it automatically downloaded by configuring with --with-petsc4py. It has bindings for all functions. I am not sure if there is a manual, but he reads this list. > Right now the GPAW uses NumPy for basic array manipulation, element-wise > dot products, and other simple manipulation. The time consuming part of > the GPAW is spent in the solution of a sparse eigenvalue problem. > H*Psi=lambda*S*Psi You probably want SLEPc, which also has slepc4py. They use Numpy as well. I think it would not be hard to formulate your problem for SLEPc. You can use a MatShell object for your H*Psi product, and I believe use a Python function for it with Lisandro's awesome wrappers. Matt > The Hamiltonian matrix (H) is not stored at all, only H*Psi products are > computed (Psi are the eigenvectors). It would seem like PETSc could be > helpful for solving this problem. > > Is there a python interface to all the PETSc functions? > > > Nichols A. Romero, Ph.D. > Argonne Leadership Computing Facility > Argonne National Laboratory > Building 360 Room L-146 > 9700 South Cass Avenue > Argonne, IL 60490 > (630) 252-3441 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener

From dalcinl at gmail.com Thu Oct 30 18:07:50 2008
From: dalcinl at gmail.com (Lisandro Dalcin)
Date: Thu, 30 Oct 2008 20:07:50 -0300
Subject: python interface to PETSC
In-Reply-To:
References: <5688178.304051225406001360.JavaMail.root@zimbra> <18066737.304071225406077030.JavaMail.root@zimbra>
Message-ID:

Nichols,

If you do not have problems with being on the bleeding edge, you should try
to use the all-new implementations of petsc4py and slepc4py (for the latter,
I still have to clone the repo in the MCS server, but that is just a minute).
The only real problem you could have with all this is related to
documentation. But as many other users of petsc4py do, you can contact me at
any time at Gmail, even by chat :-)

If you decide to give it a try, just mail me privately, and I'll give you a
few instructions.

On Thu, Oct 30, 2008 at 7:59 PM, Matthew Knepley wrote:
> On Thu, Oct 30, 2008 at 5:34 PM, Nichols A. Romero
> wrote:
>> Hi,
>>
>> I am post-doc in LCF working on scaling a real-space DFT code called GPAW.
>> https://wiki.fysik.dtu.dk/gpaw/
>>
>> This code is a mixture of Python and C.
>>
>> The PETSC webpage states that there is now a Python interface but I could
>> not find a lot of documentation about it in the manual. I did download the
>> PETSC tarball and see that there is a python directory.
>
> The PETSc Python interface is maintained by Lisandro Dalcin. You can have it
> automatically downloaded by configuring with --with-petsc4py. It has bindings
> for all functions. I am not sure if there is a manual, but he reads this list.
>
>> Right now the GPAW uses NumPy for basic array manipulation, element-wise
>> dot products, and other simple manipulation. The time consuming part of
>> the GPAW is spent in the solution of a sparse eigenvalue problem.
>> H*Psi=lambda*S*Psi
>
> You probably want SLEPc, which also has slepc4py. They use Numpy as well.
> I think it would not be hard to formulate your problem for SLEPc. You can
> use a MatShell object for your H*Psi product, and I believe use a Python
> function for it with Lisandro's awesome wrappers.
>
>   Matt
>
>> The Hamiltonian matrix (H) is not stored at all, only H*Psi products are
>> computed (Psi are the eigenvectors). It would seem like PETSc could be
>> helpful for solving this problem.
>>
>> Is there a python interface to all the PETSc functions?
>>
>>
>> Nichols A. Romero, Ph.D.
>> Argonne Leadership Computing Facility
>> Argonne National Laboratory
>> Building 360 Room L-146
>> 9700 South Cass Avenue
>> Argonne, IL 60490
>> (630) 252-3441
>>
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
>
>

--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594

From stali at purdue.edu Fri Oct 31 15:42:46 2008
From: stali at purdue.edu (Tabrez Ali)
Date: Fri, 31 Oct 2008 16:42:46 -0400
Subject: petsc error
Message-ID: <1225485766.14963.12.camel@x61>

Hello

The attached trivial program works fine on my old setup (petsc-2.3.3-p7
built with old Intel compilers and MPICH1) but fails with the
petsc-2.3.3-p15 which I just compiled on my new machine.

Any ideas as to what is wrong?

Thanks in advance.
T -------------- next part -------------- A non-text attachment was scrubbed... Name: petsc_ex.f90 Type: text/x-fortran Size: 2787 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Makefile Type: text/x-makefile Size: 196 bytes Desc: not available URL: -------------- next part -------------- [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Read from file failed! [0]PETSC ERROR: Read past end of file! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./a.out on a linux-gnu named x61 by stali Fri Oct 31 16:20:47 2008 [0]PETSC ERROR: Libraries linked from /opt/petsc-2.3.3-p15/lib/linux-gnu-c-debug [0]PETSC ERROR: Configure run at Fri Oct 31 14:12:26 2008 [0]PETSC ERROR: Configure options --with-mpi-dir=/opt/mpich2-intel --with-blas-lapack-dir=/opt/intel/mkl/10.0.5.025 --with-shared=yes --with-blacs=1 --download-blacs=ifneeded --with-scalapack=1 --download-scalapack=ifneeded --with-mumps=1 --download-mumps=ifneeded --with-parmetis=1 --download-parmetis=ifneeded [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: PetscBinaryRead() line 194 in src/sys/fileio/sysio.c [0]PETSC ERROR: MatLoad_MPIAIJ() line 2540 in src/mat/impls/aij/mpi/mpiaij.c [0]PETSC ERROR: MatLoad() line 131 in src/mat/utils/matio.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Read from file failed! [0]PETSC ERROR: Read past end of file! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./a.out on a linux-gnu named x61 by stali Fri Oct 31 16:20:47 2008 [0]PETSC ERROR: Libraries linked from /opt/petsc-2.3.3-p15/lib/linux-gnu-c-debug [0]PETSC ERROR: Configure run at Fri Oct 31 14:12:26 2008 [0]PETSC ERROR: Configure options --with-mpi-dir=/opt/mpich2-intel --with-blas-lapack-dir=/opt/intel/mkl/10.0.5.025 --with-shared=yes --with-blacs=1 --download-blacs=ifneeded --with-scalapack=1 --download-scalapack=ifneeded --with-mumps=1 --download-mumps=ifneeded --with-parmetis=1 --download-parmetis=ifneeded [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: PetscBinaryRead() line 194 in src/sys/fileio/sysio.c [0]PETSC ERROR: VecLoad_Binary() line 263 in src/vec/vec/utils/vecio.c [0]PETSC ERROR: VecLoad() line 130 in src/vec/vec/utils/vecio.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Invalid argument! [0]PETSC ERROR: Wrong type of object: Parameter # 1! 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./a.out on a linux-gnu named x61 by stali Fri Oct 31 16:20:47 2008 [0]PETSC ERROR: Libraries linked from /opt/petsc-2.3.3-p15/lib/linux-gnu-c-debug [0]PETSC ERROR: Configure run at Fri Oct 31 14:12:26 2008 [0]PETSC ERROR: Configure options --with-mpi-dir=/opt/mpich2-intel --with-blas-lapack-dir=/opt/intel/mkl/10.0.5.025 --with-shared=yes --with-blacs=1 --download-blacs=ifneeded --with-scalapack=1 --download-scalapack=ifneeded --with-mumps=1 --download-mumps=ifneeded --with-parmetis=1 --download-parmetis=ifneeded [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: VecDuplicate() line 482 in src/vec/vec/interface/vector.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Invalid argument! [0]PETSC ERROR: Wrong type of object: Parameter # 2! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./a.out on a linux-gnu named x61 by stali Fri Oct 31 16:20:47 2008 [0]PETSC ERROR: Libraries linked from /opt/petsc-2.3.3-p15/lib/linux-gnu-c-debug [0]PETSC ERROR: Configure run at Fri Oct 31 14:12:26 2008 [0]PETSC ERROR: Configure options --with-mpi-dir=/opt/mpich2-intel --with-blas-lapack-dir=/opt/intel/mkl/10.0.5.025 --with-shared=yes --with-blacs=1 --download-blacs=ifneeded --with-scalapack=1 --download-scalapack=ifneeded --with-mumps=1 --download-mumps=ifneeded --with-parmetis=1 --download-parmetis=ifneeded [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: KSPSolve() line 307 in src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Invalid argument! [0]PETSC ERROR: Wrong type of object: Parameter # 1! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./a.out on a linux-gnu named x61 by stali Fri Oct 31 16:20:47 2008 [0]PETSC ERROR: Libraries linked from /opt/petsc-2.3.3-p15/lib/linux-gnu-c-debug [0]PETSC ERROR: Configure run at Fri Oct 31 14:12:26 2008 [0]PETSC ERROR: Configure options --with-mpi-dir=/opt/mpich2-intel --with-blas-lapack-dir=/opt/intel/mkl/10.0.5.025 --with-shared=yes --with-blacs=1 --download-blacs=ifneeded --with-scalapack=1 --download-scalapack=ifneeded --with-mumps=1 --download-mumps=ifneeded --with-parmetis=1 --download-parmetis=ifneeded [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: VecView() line 719 in src/vec/vec/interface/vector.c Number of iterations = 0 [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Null argument, when expecting valid pointer! [0]PETSC ERROR: Null Object: Parameter # 1! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./a.out on a linux-gnu named x61 by stali Fri Oct 31 16:20:47 2008 [0]PETSC ERROR: Libraries linked from /opt/petsc-2.3.3-p15/lib/linux-gnu-c-debug [0]PETSC ERROR: Configure run at Fri Oct 31 14:12:26 2008 [0]PETSC ERROR: Configure options --with-mpi-dir=/opt/mpich2-intel --with-blas-lapack-dir=/opt/intel/mkl/10.0.5.025 --with-shared=yes --with-blacs=1 --download-blacs=ifneeded --with-scalapack=1 --download-scalapack=ifneeded --with-mumps=1 --download-mumps=ifneeded --with-parmetis=1 --download-parmetis=ifneeded [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatDestroy() line 706 in src/mat/interface/matrix.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Invalid argument! [0]PETSC ERROR: Wrong type of object: Parameter # 1! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./a.out on a linux-gnu named x61 by stali Fri Oct 31 16:20:47 2008 [0]PETSC ERROR: Libraries linked from /opt/petsc-2.3.3-p15/lib/linux-gnu-c-debug [0]PETSC ERROR: Configure run at Fri Oct 31 14:12:26 2008 [0]PETSC ERROR: Configure options --with-mpi-dir=/opt/mpich2-intel --with-blas-lapack-dir=/opt/intel/mkl/10.0.5.025 --with-shared=yes --with-blacs=1 --download-blacs=ifneeded --with-scalapack=1 --download-scalapack=ifneeded --with-mumps=1 --download-mumps=ifneeded --with-parmetis=1 --download-parmetis=ifneeded [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: VecDestroy() line 509 in src/vec/vec/interface/vector.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Invalid argument! [0]PETSC ERROR: Wrong type of object: Parameter # 1! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./a.out on a linux-gnu named x61 by stali Fri Oct 31 16:20:47 2008 [0]PETSC ERROR: Libraries linked from /opt/petsc-2.3.3-p15/lib/linux-gnu-c-debug [0]PETSC ERROR: Configure run at Fri Oct 31 14:12:26 2008 [0]PETSC ERROR: Configure options --with-mpi-dir=/opt/mpich2-intel --with-blas-lapack-dir=/opt/intel/mkl/10.0.5.025 --with-shared=yes --with-blacs=1 --download-blacs=ifneeded --with-scalapack=1 --download-scalapack=ifneeded --with-mumps=1 --download-mumps=ifneeded --with-parmetis=1 --download-parmetis=ifneeded [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: VecDestroy() line 509 in src/vec/vec/interface/vector.c From knepley at gmail.com Fri Oct 31 16:05:56 2008 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 31 Oct 2008 16:05:56 -0500 Subject: petsc error In-Reply-To: <1225485766.14963.12.camel@x61> References: <1225485766.14963.12.camel@x61> Message-ID: On Fri, Oct 31, 2008 at 3:42 PM, Tabrez Ali wrote: > Hello > > The attached trivial program work fines on my old setup (petsc-2.3.3-p7 > built with old Intel compilers and MPICH1) but fails with the > petsc-2.3.3-p15 which I just compiled on my new machine. > > Any ideas as to what is wrong? You are deallocating the arrays before output. Matt > Thanks in advance. > T -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
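To make the diagnosis above concrete: the "Wrong type of object" and "Null Object" messages mean the handles passed to VecDuplicate(), KSPSolve(), VecView() and the final destroys are not (or are no longer) valid PETSc objects, either because an earlier load failed or because the object was already destroyed; the reply points at the latter, so the destroys must come after the solve and the output. The sketch below is a minimal C transliteration of the intended sequence, not the scrubbed Fortran attachment itself. The file name "system.bin" is a placeholder, and the 2.3.3-era calling sequences are assumed (MatLoad()/VecLoad() take the viewer plus a type and create the object; the destroy routines take the object itself). The earlier "Read past end of file" errors are a separate issue: the binary file must contain the matrix followed by the vector, in the same order the program reads them.

#include "petscksp.h"

int main(int argc,char **args)
{
  Mat            A;
  Vec            b,x;
  KSP            ksp;
  PetscViewer    fd;
  PetscInt       its;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&args,(char*)0,PETSC_NULL);CHKERRQ(ierr);

  /* the binary file is read sequentially: matrix first, then the rhs,
     in the order they were written                                        */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"system.bin",FILE_MODE_READ,&fd);CHKERRQ(ierr);
  ierr = MatLoad(fd,MATMPIAIJ,&A);CHKERRQ(ierr);
  ierr = VecLoad(fd,VECMPI,&b);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(fd);CHKERRQ(ierr);

  ierr = VecDuplicate(b,&x);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);

  ierr = KSPGetIterationNumber(ksp,&its);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"Number of iterations = %D\n",its);CHKERRQ(ierr);
  ierr = VecView(x,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);

  /* destroy only after the solve and the output are done */
  ierr = MatDestroy(A);CHKERRQ(ierr);
  ierr = VecDestroy(b);CHKERRQ(ierr);
  ierr = VecDestroy(x);CHKERRQ(ierr);
  ierr = KSPDestroy(ksp);CHKERRQ(ierr);
  ierr = PetscFinalize();CHKERRQ(ierr);
  return 0;
}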